My main area of research is Parallel Computer Architecture and its interface with Compilers and Parallel Programming.

My current primary research interest is improving the performance of scripting languages through hardware/software co-optimization. Scripting languages are gaining widespread use as programming comes to the masses, yet they are notoriously hard to compile efficiently due to their unstructured dynamism. Meanwhile, trends in hardware point to a future where processors are increasingly energy-conscious, multicore, and heterogeneous. This adds another layer of complexity, since execution must be increasingly parallelized to take advantage of the available resources. My goal is to find a cross-disciplinary solution that spans both hardware and compiler, initially targeting the JavaScript language.
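
To make "unstructured dynamism" concrete, here is a small illustrative sketch (my own example, not from the work above), written in Python since it exhibits the same dynamism as JavaScript. A compiler or JIT might want to specialize the call site `s.area()` to the squaring code, but the language allows that method to be replaced on a live object at any time, so any such specialization must remain speculative and deoptimizable.

```python
# Illustrative sketch: why dynamic method replacement defeats static
# specialization of a call site in a scripting language.

class Shape:
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side * self.side

s = Shape(3)
print(s.area())  # 9: a JIT might speculate that area() squares `side`

# Runtime mutation invalidates that speculation: the method is replaced
# on this one live object, so the same call site now runs different code.
s.area = lambda: -1
print(s.area())  # -1: the earlier specialization must be deoptimized
```

Real engines handle this with guarded speculation (e.g. inline caches that check the object's shape before taking the fast path), which is exactly the kind of check that hardware support could make cheaper.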

My past research has also focused on hardware/compiler cross-disciplinary solutions. I have leveraged hardware Transactional Memory (TM) support to enable compiler optimizations such as alias speculation and memory-ordering speculation, and hardware Bloom filters to enable optimizations such as function memoization. I have also worked on TM and Thread Level Speculation (TLS) support to make parallel programming easier and, in some cases, to auto-parallelize programs. I contributed to the release of the TM and TLS system in the IBM Blue Gene/Q supercomputer, one of the first machines to support these features in hardware.
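
The memoization idea can be sketched in software as follows. This is a minimal illustration of the concept, not the hardware design: a Bloom filter cheaply answers "have these inputs possibly been seen before?", so the common miss case skips the exact lookup, while a positive answer (possibly a false positive) is confirmed against an exact cache. All names here (`BloomFilter`, `memoized`, `expensive`) are my own illustrative choices.

```python
# A minimal software sketch of function memoization guarded by a Bloom
# filter. Hardware support plays the analogous role of the cheap
# "possibly seen before?" membership check.
import hashlib

class BloomFilter:
    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = bytearray(size)

    def _positions(self, key):
        # Derive several bit positions from independent-ish hashes of the key.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits[p] = 1

    def may_contain(self, key):
        # False means "definitely not seen"; True means "possibly seen".
        return all(self.bits[p] for p in self._positions(key))

def memoized(fn):
    seen = BloomFilter()
    cache = {}

    def wrapper(*args):
        # Fast reject: a Bloom miss proves the inputs are new, so we skip
        # the exact cache lookup entirely.
        if seen.may_contain(args) and args in cache:
            return cache[args]
        result = fn(*args)  # slow path: actually run the function
        cache[args] = result
        seen.add(args)
        return result

    return wrapper

@memoized
def expensive(n):
    return sum(i * i for i in range(n))

print(expensive(1000))  # computed on the first call
print(expensive(1000))  # served from the cache on the second call
```

Only pure functions are safe to memoize this way; identifying such functions (and bounding the input signature) is where the compiler side of the co-design comes in.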


Hardware / Compiler Interface

Support for Parallel Execution

Parallel Computer Architecture


Transactional Memory (TM) / Thread Level Speculation (TLS)