Research









My research interests lie mainly in the fields of Machine Learning, Data Mining and Artificial Intelligence. In particular, I've worked on research projects in Spectral Data Analysis, Learning Graphical Models, Causal Learning and Concept Learning in Mobile Robots. More recently, I've been focusing on developing spectral methods for clustering and semi-supervised learning in high-dimensional, large-scale problems.



Graph-based Methods for Large-scale Problems

In many Machine Learning problems, graph-based methods allow us to learn new metrics (kernels) that encode a data-driven, problem-specific notion of similarity between datapoints. However, scaling these methods to large, high-dimensional problems is a major challenge, which motivates us to develop new approximation techniques.
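
As a toy illustration of the kind of graph-based similarity these methods build on (and not the approximation frameworks developed in the papers below), the sketch constructs a k-nearest-neighbor graph with a Gaussian affinity and uses multi-step random-walk transitions as a data-driven kernel; the function name, bandwidth and neighborhood size are arbitrary choices, and scikit-learn's kneighbors_graph is assumed to be available.

```python
# Naive sketch: build a kNN graph over the data, turn distances into Gaussian
# affinities, row-normalize into a random-walk transition matrix P, and use
# multi-step transitions P^t as a data-driven similarity (kernel).
# The papers below are about making this kind of construction scale;
# this dense, O(n^2)-memory version deliberately is not.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def random_walk_kernel(X, n_neighbors=10, bandwidth=1.0, steps=3):
    D = kneighbors_graph(X, n_neighbors, mode="distance", include_self=False)
    A = D.copy()
    A.data = np.exp(-(A.data ** 2) / (2.0 * bandwidth ** 2))  # Gaussian affinity
    A = 0.5 * (A + A.T)                                        # symmetrize the graph
    P = A.toarray()
    P /= P.sum(axis=1, keepdims=True)                          # transition matrix
    return np.linalg.matrix_power(P, steps)                    # t-step similarity

X = np.random.RandomState(0).randn(200, 5)
K = random_walk_kernel(X)
print(K.shape, K[0].sum())   # each row sums to (approximately) 1
```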


Related Publications:

S. Amizadeh, B. Thiesson and M. Hauskrecht, The Bregman Variational Dual-Tree Framework, to appear in the 29th Conference on Uncertainty in Artificial Intelligence (UAI-13), pp: 22-31, Bellevue, WA, USA, July 2013.

S. Amizadeh, B. Thiesson and M. Hauskrecht, Variational Dual-Tree Framework for Large-Scale Transition Matrix Approximation, in the 28th Conference on Uncertainty in Artificial Intelligence (UAI-12), pp: 64-73, Catalina Island, USA, August 2012. [supplementary] [Presentation at Microsoft Research]

S. Amizadeh, H. Valizadegan and M. Hauskrecht, Factorized Diffusion Map Approximation, in JMLR W&CP 22: the 15th International Conference on Artificial Intelligence and Statistics (AISTATS-12), pp: 37-46, La Palma, Canary Islands, April 2012. [supplementary]

S. Amizadeh, S. Wang, and M. Hauskrecht, An Efficient Framework for Constructing Generalized Locally-Induced Text Metrics, in the 22nd International Joint Conference on Artificial Intelligence (IJCAI-11), pp: 1159-1164, Barcelona, Spain, 2011.

S. Amizadeh, M. Chen, D. Dash, M. Hauskrecht, W. Schneider, Low-dimensional Embedding of Large-scale Infinite-dimensional Function Spaces with Application to Human Brain Connectome. In NIPS workshop on Low-rank Methods for Large Scale Machine Learning, in conjunction with the 24th annual conference on Neural Information Processing Systems (NIPS), 2010.


Structure Learning in Graphical Models

Learning the structure of high-dimensional Graphical Models is a hard problem, especially with a limited sample size. The key remedy is regularization, which in this case means enforcing sparsity on the structure. However, the sparsity of the target structure may not be uniform across the graph; that is, the target structure might have some global shape. Our goal here is to develop a regularization framework that biases the learning algorithm toward such a global structure using local regularizers.
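
For context, a standard baseline for sparsity-regularized structure learning is the graphical lasso, which places a uniform L1 penalty on the precision matrix of a Gaussian graphical model; the sketch below (not the latent-variable model of the paper that follows) assumes scikit-learn and uses chain-structured synthetic data with an arbitrarily chosen penalty.

```python
# Baseline example of sparsity-regularized structure learning: the graphical
# lasso puts a uniform L1 penalty on the precision matrix, so every potential
# edge is shrunk equally -- the "uniform sparsity" assumption discussed above.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.RandomState(0)
n, d = 500, 10
X = rng.randn(n, d)
for j in range(1, d):                 # chain-structured data: variable j depends on j-1
    X[:, j] = 0.6 * X[:, j - 1] + 0.8 * rng.randn(n)

model = GraphicalLasso(alpha=0.1)     # alpha = L1 strength (picked arbitrarily here)
model.fit(X)

# Nonzero off-diagonal entries of the estimated precision matrix are the edges.
edges = np.argwhere(np.triu(np.abs(model.precision_) > 1e-3, k=1))
print("recovered edges:", [tuple(map(int, e)) for e in edges])   # ideally the chain (j, j+1)
```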



Related Publications:

S. Amizadeh and M. Hauskrecht, Latent Variable Model for Learning in Pairwise Markov Networks, in AAAI-10: the 24th Conference on Artificial Intelligence, pp: 382-387, Atlanta, USA, 2010.



Active Sampling for Model Evaluation

Model evaluation (testing) is an important phase of building classifier models in Machine Learning. In many real problems, the high cost of labeling examples restricts the size of the test set. The straightforward strategy for picking examples to be labeled for evaluation is random sampling. However, we need to sample more wisely when the class distribution is highly unbalanced. The problem becomes even harder when the models to be evaluated are unknown a priori.
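
The toy experiment below illustrates the issue (it is not the estimator from the SDM-12 paper that follows): with a 1% positive rate and a labeling budget of 100, uniform random sampling yields only a handful of labeled predicted positives, so the precision estimate is far noisier than when the whole budget is spent inside the predicted-positive stratum; the simulated predictor and all rates are made up for illustration.

```python
# Why uniform random labeling is wasteful under heavy class imbalance:
# compare the spread of precision estimates over repeated trials.
import numpy as np

rng = np.random.default_rng(0)
n, pos_rate, budget, n_trials = 100_000, 0.01, 100, 500

y_true = rng.random(n) < pos_rate                 # hidden labels
y_pred = y_true ^ (rng.random(n) < 0.05)          # an imperfect, imbalanced predictor
true_precision = np.sum(y_true & y_pred) / np.sum(y_pred)

pred_pos = np.flatnonzero(y_pred)
rand_est, strat_est = [], []
for _ in range(n_trials):
    # Strategy 1: uniform random labeling; only the labeled predicted positives count.
    idx = rng.choice(n, budget, replace=False)
    hits = idx[y_pred[idx]]
    rand_est.append(np.mean(y_true[hits]) if len(hits) else np.nan)
    # Strategy 2: spend the entire budget on the predicted-positive stratum.
    idx = rng.choice(pred_pos, budget, replace=False)
    strat_est.append(np.mean(y_true[idx]))

print(f"true precision     : {true_precision:.3f}")
print(f"random sampling    : std = {np.nanstd(rand_est):.3f}")
print(f"stratified by pred : std = {np.nanstd(strat_est):.3f}")
```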

Related Publications:


H. Valizadegan, S. Amizadeh, M. Hauskrecht, Sampling Strategies to Evaluate the Performance of Unknown Predictors, in the 2012 SIAM International Conference on Data Mining (SDM-12), pp: 494-505, Anaheim, California, April 2012.


Online Causal Learning

Human causal learning, as opposed to many of its Machine Learning counterparts, is remarkably fast. We, as humans, are able to pick up the cause-and-effect relationships in our perception stream after observing only a few examples. Moreover, we can quickly generalize these learned relationships if provided with some notion of an IS-A hierarchy over our perceptual entities. The goal here is to devise a computational framework capable of such fast, online causal learning and generalization.
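
As a minimal, purely illustrative sketch of what "online" means here (and not the algorithm of the workshop paper below), the following tracker maintains contingency counts between candidate causes and an effect as observations stream in, and reports the classic Delta-P score P(E|C) - P(E|not C); the class name and the tiny example stream are made up.

```python
# Toy online causal-strength tracker: update contingency counts one
# observation at a time and score each candidate cause with Delta-P.
from collections import defaultdict

class DeltaPTracker:
    def __init__(self):
        # counts[c] = [n(C, E), n(C, not E), n(not C, E), n(not C, not E)]
        self.counts = defaultdict(lambda: [0, 0, 0, 0])

    def observe(self, causes_present, effect, all_causes):
        """Update counts from one time step of the perception stream."""
        for c in all_causes:
            present = c in causes_present
            slot = (0 if effect else 1) if present else (2 if effect else 3)
            self.counts[c][slot] += 1

    def delta_p(self, c):
        ce, cne, nce, ncne = self.counts[c]
        p_e_c = ce / (ce + cne) if ce + cne else 0.0
        p_e_nc = nce / (nce + ncne) if nce + ncne else 0.0
        return p_e_c - p_e_nc

# After only a handful of observations the true cause already stands out.
tracker = DeltaPTracker()
stream = [({"switch", "noise"}, True), ({"noise"}, False),
          ({"switch"}, True), (set(), False), ({"switch"}, True)]
for causes, effect in stream:
    tracker.observe(causes, effect, all_causes={"switch", "noise"})
print({c: round(tracker.delta_p(c), 2) for c in ("switch", "noise")})
```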



Related Publications:

S. Amizadeh and D. Dash, Efficient Causal Discovery and Abstraction in Perception Streams. In NIPS workshop on Bounded-rational analyses of human cognition: Bayesian models, approximate inference, and the brain, in conjunction with the 23rd annual conference on Neural Information Processing Systems (NIPS), 2009.



Online Concept Learning

For obvious computational reasons, autonomous agents need to conceptualize their continuous perceptual and motor spaces into a finite number of abstract entities called concepts while they are learning from their environment. We have developed a Bayesian framework that borrows ideas from Reinforcement Learning and Online Clustering to accomplish the simultaneous tasks of learning and abstraction.
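
A much-simplified sketch of the underlying idea (not the Bayesian framework of the papers below): percepts arrive one at a time, each is absorbed into the nearest existing concept prototype unless it is farther than a novelty threshold, in which case a new concept is created; the class name and threshold are illustrative.

```python
# Much-simplified online concept formation: nearest-prototype assignment
# with a novelty threshold that triggers the creation of new concepts.
import numpy as np

class OnlineConceptLearner:
    def __init__(self, novelty_threshold=2.0):
        self.prototypes = []          # running mean of each concept
        self.counts = []              # number of percepts per concept
        self.tau = novelty_threshold

    def observe(self, x):
        x = np.asarray(x, dtype=float)
        if self.prototypes:
            dists = [np.linalg.norm(x - p) for p in self.prototypes]
            k = int(np.argmin(dists))
            if dists[k] < self.tau:
                # Incremental mean update of the winning concept.
                self.counts[k] += 1
                self.prototypes[k] += (x - self.prototypes[k]) / self.counts[k]
                return k
        # Novel percept: abstract it into a brand-new concept.
        self.prototypes.append(x.copy())
        self.counts.append(1)
        return len(self.prototypes) - 1

# Percepts drawn from two well-separated regions typically end up in two concepts.
rng = np.random.default_rng(0)
learner = OnlineConceptLearner(novelty_threshold=3.0)
for _ in range(50):
    center = rng.choice([0.0, 10.0])
    learner.observe(center + rng.normal(size=2))
print("concepts formed:", len(learner.prototypes))
```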



Related Publications:

H. Firouzi, M. Nili Ahmadabadi, B. N. Araabi, S. Amizadeh, M. S. Mirian, Interactive Learning in Continuous Multimodal Space: A Bayesian Approach to Soft Partitioning and Learning, IEEE Transactions on Autonomous Mental Development, Vol. 4, No. 2, pp: 124-138. 2012.

S. Amizadeh, M. Nili Ahmadabadi, B. N. Araabi and R. Siegwart, A Bayesian Approach to Conceptualization Using Reinforcement Learning, in IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Switzerland, Sep. 2007.
   
S. Amizadeh, M. Nili Ahmadabadi, C. Lucas, Bayesian Continuous-State Reinforcement Learning, in Proc. of International Computer Society of Iran Computer Conference (CSICC07), pp: 1515-1521, Tehran, Iran, Feb. 2007.

S. Amizadeh, A Bayesian Approach to Hierarchical Concept Learning, M.S. thesis, University of Tehran, Jul. 2007 [in Farsi].


In the past, I have also worked on research projects in Natural Language Processing and Genetic Algorithms.

Related Publications:

H. Harkema, H. Piwowar, S. Amizadeh, J. Dowling, J. Ferraro, P. Haug, W. Chapman, A Baseline System for i2b2 Obesity Challenge, in the 2nd i2b2 Workshop on Challenges in Natural Language Processing for Clinical Data, Nov. 2008.

S. Amizadeh, F. Rastegar and C. Lucas, Incorporating Heuristics in Evolutionary Optimization, in International Journal of Information Technology and Intelligent Computing, Vol. 1, No. 2, pp: 259-270, 2006.

