It is widely recognized that a major disruption is under way in computer hardware as processor designs strive to extend, and go beyond, the end of Moore’s Law. This disruption will include new forms of heterogeneous processors, heterogeneous memories, near-memory computation structures, and, in some cases, non-von Neumann computing elements. Unlike previous generations of hardware evolution, these “extreme heterogeneity” platforms will have a profound impact on future software. The software challenges are further compounded by the need to support new workloads and application domains (e.g., data analytics, machine learning) that traditionally had low representation in past benchmarks for high performance computing.

The Habanero Extreme Scale Software Research Laboratory was created to address the challenges of software for extreme scale systems (systems with more than thousand-way parallelism in a socket, and more than million-way parallelism in a cluster) by developing new programming technologies — programming models, compilers, runtime systems, and debugging/verification tools — that support portable parallel abstractions for future hardware with high productivity and high performance.

With increasing levels of parallelism in all computers since the end of Dennard Scaling, as well as the increasing role of distributed memories and distributed computing, all software is now becoming parallel and distributed by default. However, the current foundations of parallel and distributed software are unstructured and non-compositional in nature, as exemplified by the use of threads, locks, barriers, and blocking communications. Our goal is to ensure that future software is developed with programming systems that enable application developers to reuse their investments across multiple generations of extreme scale hardware platforms. Our research towards this goal is driven by a new foundation for parallel, concurrent, and distributed software based on structured and compositional execution-model primitives that enable enhanced performance and verifiability relative to past models.

We also envision broader impact of this research that includes updating the pedagogy of parallel computing in introductory Computer Science courses, building open-source testbeds to grow the ecosystem of researchers in the parallel software area, and using our research infrastructure as the basis for building reference implementations of future industry standards.
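To make the contrast between unstructured and structured primitives concrete, here is a minimal sketch (in plain Python, not the Habanero APIs) of a fork/join idiom in the spirit of async/finish: every task spawned inside the scoped block is guaranteed to complete before the block exits, so no stray threads, locks, or barriers leak into the surrounding program.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

def parallel_squares(values):
    # Structured, compositional parallelism: the "with" block acts as a
    # finish scope. All tasks submitted inside it are joined before the
    # block exits, unlike raw threads with explicit locks and barriers.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(square, v) for v in values]
        return [f.result() for f in futures]

print(parallel_squares([1, 2, 3, 4]))  # → [1, 4, 9, 16]
```

Because the scope encloses both task creation and task completion, such blocks compose safely: nesting one inside another cannot deadlock on a forgotten join, which is one sense in which structured primitives aid verifiability.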