Statistical Learning Theory and Pattern Recognition
We are interested in frameworks for automated learning and information
extraction from (usually) noisy data sets.
In particular, we are interested in what can be learned
and in how the performance of a learning system may depend on the
parameters of the learning problem.
Learning Models and Algorithms
Some common learning models are: neural networks; RBF networks;
support vector machines; Gaussian processes; hidden Markov models; etc.
We are interested in the application of such learning models to
real data, and in the development of new learning models together
with the algorithms that train them.
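As a hedged illustration of one model from this list, the sketch below fits a small RBF network to noisy 1-D data with NumPy; the centres, kernel width, and ridge penalty are arbitrary choices for illustration, not a description of any particular system of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy 1-D regression data: y = sin(x) + noise.
x = rng.uniform(-3.0, 3.0, size=200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)

# RBF features: Gaussian bumps at fixed centres (an arbitrary choice here).
centres = np.linspace(-3.0, 3.0, 15)
width = 0.5

def rbf_features(x):
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2.0 * width ** 2))

# Fit the output weights by ridge-regularised least squares.
Phi = rbf_features(x)
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(centres)), Phi.T @ y)

# Compare predictions on a test grid with the noise-free target.
x_test = np.linspace(-3.0, 3.0, 7)
print(np.round(rbf_features(x_test) @ w, 3))
print(np.round(np.sin(x_test), 3))
```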
Reinforcement Learning
Reinforcement learning usually models situations where the
data have an inherent time component and the feedback is
a reward or punishment. We are interested in such learning applied to
game playing and to learning in non-stationary Markov decision
processes.
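As a hedged sketch of this kind of reward-driven, temporal learning problem, the code below runs tabular Q-learning on a toy Markov decision process; the environment, learning rate, and discount factor are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy MDP: 5 states in a chain, actions 0 = left, 1 = right.
# Reaching the rightmost state yields reward 1 and ends the episode.
n_states, n_actions = 5, 2

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    done = next_state == n_states - 1
    return next_state, reward, done

# Tabular Q-learning with epsilon-greedy exploration (parameters are arbitrary).
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(500):
    state, done = 0, False
    while not done:
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        target = reward + (0.0 if done else gamma * np.max(Q[next_state]))
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state

# The greedy policy should prefer moving right in every non-terminal state.
print(np.argmax(Q, axis=1))
```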
Unsupervised Learning and Probability Density Estimation
We are interested in the
inference of structure, such as clusters and
probability densities, from unstructured data. We have investigated the use of
neural networks to estimate multi-dimensional densities and to generate
random variates from arbitrary distributions.
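As a hedged illustration of density estimation and variate generation (using a simple kernel estimator rather than the neural networks mentioned above), the sketch below fits a Gaussian kernel density estimate to samples and then draws new variates from it; the data and bandwidth are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Data drawn from an "unknown" (here: bimodal) distribution.
data = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(1.5, 1.0, 700)])

# Gaussian kernel density estimate with a fixed bandwidth (an arbitrary choice).
bandwidth = 0.3

def density(x):
    diffs = (x[:, None] - data[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs ** 2) / np.sqrt(2.0 * np.pi)
    return kernels.mean(axis=1) / bandwidth

# Generate new random variates from the estimated density:
# pick a data point at random, then perturb it by the kernel.
def sample(n):
    idx = rng.integers(len(data), size=n)
    return data[idx] + bandwidth * rng.standard_normal(n)

grid = np.linspace(-4.0, 5.0, 5)
print(np.round(density(grid), 3))
print(np.round(sample(5), 3))
```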
Optimization and Combinatorial Optimization
Many problems in combinatorial optimization are hard to
solve efficiently if one requires an exact solution. However, if
some error is tolerable, then learning approximate solutions that
are effective for the particular domain of interest
is useful.
We are interested in the development of such approaches using
supervised learning and inverse algorithm engineering.
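As a hedged, simplified illustration of trading exactness for speed (not of the supervised-learning approaches mentioned above), the sketch below compares an exact brute-force solution of a small 0/1 knapsack instance with a cheap greedy approximation; the instance data are invented.

```python
import itertools

# A small, invented 0/1 knapsack instance: (value, weight) pairs and a capacity.
items = [(60, 10), (100, 20), (120, 30)]
capacity = 50

# Exact solution by brute force: exponential in the number of items.
def exact(items, capacity):
    best = 0
    for r in range(len(items) + 1):
        for subset in itertools.combinations(items, r):
            if sum(w for _, w in subset) <= capacity:
                best = max(best, sum(v for v, _ in subset))
    return best

# Greedy approximation: take items in order of value density while they fit.
def greedy(items, capacity):
    total = 0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if weight <= capacity:
            capacity -= weight
            total += value
    return total

print("exact  :", exact(items, capacity))   # optimal value (220 here)
print("greedy :", greedy(items, capacity))  # approximate (160 here), but much cheaper
```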