
My research interests fall mainly within the fields of
Machine Learning, Data Mining and Artificial Intelligence. In
particular, I have worked on research projects in Spectral Data Analysis,
Learning Graphical Models, Causal Learning and Concept Learning in
Mobile Robots. More recently, I have been focusing on developing spectral
methods for clustering and semi-supervised learning in high-dimensional
and large-scale problems.


Graph-based Methods for Large-scale Problems
In many Machine Learning problems, graph-based methods enable us to
learn new metrics (kernels) which encode the data-driven, problem-specific
similarity between data points. However, extending these methods to
large-scale and high-dimensional problems is a major challenge, which
motivates us to develop new approximation techniques.
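As a toy illustration of the general idea (not the dual-tree or factorized approximations from the publications below), the following sketch builds a Gaussian affinity graph over the data and uses the eigenvectors of its normalized Laplacian as new, similarity-driven coordinates:

```python
# Toy sketch of graph-based metric learning via spectral embedding.
# This only illustrates the general idea; it is NOT the variational
# dual-tree or factorized diffusion-map approximation cited below,
# and its O(n^2) affinity matrix is exactly what those methods avoid.
import numpy as np

def spectral_embedding(X, n_components=2, sigma=1.0):
    """Embed points using eigenvectors of the normalized graph Laplacian."""
    # Pairwise squared distances -> Gaussian affinities (the "kernel").
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L)
    # The smallest nontrivial eigenvectors give data-driven coordinates.
    return vecs[:, 1:1 + n_components]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(20, 3)), rng.normal(size=(20, 3)) + 5.0])
Y = spectral_embedding(X)
print(Y.shape)  # (40, 2)
```

The quadratic cost of forming W and the cubic cost of the eigendecomposition are what make the large-scale setting a challenge.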

Related Publications:
S. Amizadeh, B. Thiesson and M. Hauskrecht, The Bregman Variational Dual-Tree Framework, to appear in the proc. of the 29th Conference on Uncertainty in Artificial Intelligence (UAI-13), pp: 22-31, Bellevue, WA, USA, July 2013.
S. Amizadeh, B. Thiesson and M. Hauskrecht, Variational Dual-Tree Framework for Large-Scale Transition Matrix Approximation, in the 28th Conference on Uncertainty in Artificial Intelligence (UAI-12), pp: 64-73, Catalina Island, USA, August 2012. [supplementary] [Presentation at Microsoft Research]
S. Amizadeh, H. Valizadegan and M. Hauskrecht, Factorized Diffusion Map Approximation, in JMLR W&CP 22: the 15th International Conference on Artificial Intelligence and Statistics (AISTATS-12), pp: 37-46, La Palma, Canary Islands, April 2012. [supplementary]
S. Amizadeh, S. Wang, and M. Hauskrecht, An Efficient Framework for Constructing Generalized Locally-Induced Text Metrics, in the 22nd International Joint Conference on Artificial Intelligence (IJCAI-11), pp: 1159-1164, Barcelona, Spain, 2011.
S. Amizadeh, M. Chen, D. Dash, M. Hauskrecht, W. Schneider, Low-dimensional Embedding of Large-scale Infinite-dimensional Function Spaces with Application to Human Brain Connectome, in NIPS workshop on Low-rank Methods for Large Scale Machine Learning, in conjunction with the 24th annual conference on Neural Information Processing Systems (NIPS), 2010.


Structure Learning in Graphical Models
Learning the structure of high-dimensional Graphical Models is a hard
problem, especially with a limited sample size. The key solution is
regularization, which in this case means enforcing sparsity on the
structure. However, the sparsity of the target structure may not be
uniform across the graph; that is, there might be some global shape in
the target structure. Our goal here is to develop a regularization
framework for biasing the learning algorithm toward such a global
structure using local regularizers.
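For concreteness, the standard uniform-sparsity baseline is the graphical lasso, which penalizes every entry of the precision matrix with the same L1 weight; the framework described above generalizes this to locally weighted, non-uniform penalties. A minimal sketch of the baseline, using scikit-learn on synthetic chain-structured data:

```python
# Baseline sparse structure learning with a *uniform* L1 penalty
# (graphical lasso). The data are synthetic; the research described
# above concerns non-uniform, locally weighted regularizers instead.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# A chain-structured Gaussian: x0 -> x1 -> x2 -> x3
n = 500
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)
x3 = 0.8 * x2 + rng.normal(size=n)
X = np.column_stack([x0, x1, x2, x3])

model = GraphicalLasso(alpha=0.1).fit(X)
# Zeros in the estimated precision matrix = missing edges in the graph.
edges = np.abs(model.precision_) > 1e-4
print(edges.astype(int))
```

With a well-chosen penalty, only the chain's adjacent pairs survive as edges; a non-uniform scheme would let that penalty vary across the graph to reflect a known global shape.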




Active Sampling for Model Evaluation
Model evaluation (testing) is an important phase of building classifier
models in Machine Learning. In many real problems, the high cost of
labeling examples restricts the size of the test set. The standard
strategy for picking examples to be labeled for evaluation is random
sampling. However, we need to sample more wisely when the class
distribution is highly unbalanced. The problem becomes even harder if
the to-be-evaluated models are unknown a priori.
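A hypothetical toy setup shows why plain random sampling struggles under class imbalance: with a 1% positive rate, a small random test set may contain almost no positives, while stratifying the labeling budget by class and re-weighting still yields a valid accuracy estimate. This is only an illustration of the problem, not the active-sampling method itself:

```python
# Toy illustration of evaluating a classifier on an imbalanced
# population: random vs. stratified label sampling. Hypothetical
# numbers; NOT the active-sampling strategy under research.
import random
random.seed(1)

# Population: 1% positives; the model is correct 90% of the time on
# negatives but only 50% of the time on the rare positives.
population = [("pos", random.random() < 0.5) for _ in range(100)] + \
             [("neg", random.random() < 0.9) for _ in range(9900)]

def estimate_random(budget):
    """Plain random sampling: may see almost no positives."""
    sample = random.sample(population, budget)
    return sum(correct for _, correct in sample) / budget

def estimate_stratified(budget):
    """Split the budget evenly by class, then re-weight by prevalence."""
    pos = [c for y, c in population if y == "pos"]
    neg = [c for y, c in population if y == "neg"]
    half = budget // 2
    acc_pos = sum(random.sample(pos, half)) / half
    acc_neg = sum(random.sample(neg, half)) / half
    return 0.01 * acc_pos + 0.99 * acc_neg

print(round(estimate_random(100), 3), round(estimate_stratified(100), 3))
```

The stratified estimate spends half its budget on the rare class, so per-class accuracies (and hence class-weighted metrics) are far less noisy for the same labeling cost.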


Online Causal Learning
Human causal learning, as opposed to many of its Machine Learning
counterparts, is remarkably fast: we are able to pick up cause-and-effect
relationships in our perception stream after observing only a few
examples. Moreover, we can generalize these learned relationships
quickly if provided with some notion of an is-a hierarchy over our
perceptual entities. The goal here is to devise a computational
framework capable of such fast online causal learning and generalization.





Online Concept Learning
For
obvious computational reasons, autonomous agents need to conceptualize
their continuous perceptual and motor spaces into a finite number of
abstract entities called concepts while they are learning from their
environment. We have developed a Bayesian framework which also borrows
ideas from Reinforcement Learning and Online Clustering to accomplish
the simultaneous tasks of learning and abstraction.
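To make the "learning while abstracting" idea concrete, here is a generic online clustering step: each incoming observation is either absorbed into its nearest existing concept or spawns a new one. This is a deliberately minimal sketch, not the Bayesian soft-partitioning framework from the publications below:

```python
# Minimal online concept formation: assign each new observation to its
# nearest concept, or spawn a new concept if none is close enough.
# A generic illustration only; the published framework uses a Bayesian
# soft partitioning rather than this hard, threshold-based rule.

def online_cluster(stream, radius=1.0):
    concepts = []  # each concept: [centroid, count]
    for x in stream:
        if concepts:
            c = min(concepts, key=lambda c: abs(x - c[0]))
            if abs(x - c[0]) <= radius:
                # Absorb: update the concept centroid with a running mean.
                c[1] += 1
                c[0] += (x - c[0]) / c[1]
                continue
        concepts.append([x, 1])  # abstraction: a new concept is born
    return [c[0] for c in concepts]

centroids = online_cluster([0.1, 0.2, 5.0, 5.1, 0.15])
print(len(centroids))  # 2 concepts, near 0.15 and 5.05
```

A single pass over the perceptual stream both updates existing concepts and creates new ones, which is what lets learning and abstraction happen simultaneously.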


Related Publications:
H. Firouzi, M. Nili Ahmadabadi, B. N. Araabi, S. Amizadeh, M. S. Mirian, Interactive Learning in Continuous Multimodal Space: A Bayesian Approach to Soft Partitioning and Learning, IEEE Transactions on Autonomous Mental Development, Vol. 4, No. 2, pp: 124-138, 2012.
S. Amizadeh, M. Nili Ahmadabadi, B. N. Araabi and R. Siegwart, A Bayesian Approach to Conceptualization Using Reinforcement Learning, in IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Switzerland, Sep. 2007.
S. Amizadeh, M. Nili Ahmadabadi, C. Lucas, Bayesian Continuous-State Reinforcement Learning, in Proc. of International Computer Society of Iran Computer Conference (CSICC-07), pp: 1515-1521, Tehran, Iran, Feb. 2007.
S. Amizadeh, A Bayesian Approach to Hierarchical Concept Learning, M.S. thesis, University of Tehran, Jul. 2007 [in Farsi].


I have also worked on research projects in Natural Language Processing and Genetic Algorithms in the past.
Related Publications:
H. Harkema, H. Piwowar, S. Amizadeh, J. Dowling, J. Ferraro, P. Haug, W. Chapman, A Baseline System for i2b2 Obesity Challenge, in the 2nd i2b2 Workshop on Challenges in Natural Language Processing for Clinical Data, Nov. 2008.
S. Amizadeh, F. Rastegar and C. Lucas, Incorporating Heuristics in Evolutionary Optimization, in International Journal of Information Technology and Intelligent Computing, Vol. 1, No. 2, pp: 259-270, 2006.


