Habanero Extreme Scale Software Research Laboratory

A photo of Professor Sarkar and a few members of the Habanero Lab Group


It is widely recognized that a major disruption is underway in computer hardware as processors strive to extend, and go beyond, the end of Moore’s Law. This disruption is bringing the emergence of new heterogeneous processors, heterogeneous memories, near-memory computation structures, and even non-von Neumann computing elements.

Unlike previous generations of hardware evolution, these “extreme heterogeneity” platforms will have a profound impact on future software. The software challenges are further compounded by the need to support new workloads and application domains (e.g., data analytics, machine learning) that traditionally had low representation in past benchmarks for high performance computing.

Additionally, with the end of Dennard scaling and the increasing role of distributed memories and distributed computing, all software is now becoming parallel and distributed by default. However, the current foundations of parallel and distributed software are unstructured and non-compositional in nature, as exemplified by the use of threads, locks, barriers, and blocking communications.
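To illustrate the contrast, the sketch below shows a structured fork/join computation using only the standard Java library (not the lab's own Habanero primitives, which are assumptions beyond this page): each task's child tasks are guaranteed to complete before the task itself returns, so the parallelism is lexically scoped and composable, unlike raw threads coordinated with locks and barriers.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Structured fork/join: a parallel array sum in which every child task
// joins before its parent returns, so no parallelism "leaks" out of scope.
class SumTask extends RecursiveTask<Long> {
    private final long[] data;
    private final int lo, hi;

    SumTask(long[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= 1024) {                  // small range: compute sequentially
            long s = 0;
            for (int i = lo; i < hi; i++) s += data[i];
            return s;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(data, lo, mid);
        left.fork();                             // run the left half asynchronously
        long right = new SumTask(data, mid, hi).compute();
        return right + left.join();              // wait for the child before returning
    }
}

public class StructuredSum {
    public static void main(String[] args) {
        long[] data = new long[100_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long sum = ForkJoinPool.commonPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum);                 // prints 4999950000
    }
}
```

Because every fork is matched by a join inside the same lexical scope, such computations compose: a `SumTask` can be used inside a larger parallel program without any global coordination, which is much harder to guarantee with free-running threads and locks.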


The Habanero Extreme Scale Software Research Laboratory was founded to address the challenges of creating software for extreme-scale systems (1000+-way parallelism per socket, 1M+-way parallelism per cluster).

We strive to enable future software to be developed with programming systems that allow application developers to reuse their investments across multiple generations of extreme-scale hardware platforms.

Our research towards this goal is driven by a new foundation for parallel and distributed software based on structured and compositional execution model primitives that enable enhanced performance and verifiability relative to past models.

We do this through a combination of:

  • Programming Models
  • Compilers
  • Runtime Systems
  • Debugging & Verification tools


In addition to providing tools and resources for the HPC community, we also envision the broader impact of this research, including:

  • Updating the pedagogy of parallel computing in introductory Computer Science courses
  • Building open source testbeds to grow the ecosystem of researchers in the parallel software area
  • Using our research infrastructure as the basis for building reference implementations of future industry standards