The Chapel Parallel Programming Language


Publications and Papers

Featured Publications

Development of a Knowledge-Sharing Parallel Computing Approach for Calibrating Distributed Watershed Hydrologic Models. Marjan Asgari, Wanhong Yang, John Lindsay, Hui Shao, Yongbo Liu, Rodrigo De Queiroga Miranda, and Maryam Mehri Dehnavi. Environmental Modelling & Software, volume 164, 2023.
This paper uses Chapel in a novel knowledge-sharing setting to support a general parallel framework for calibrating distributed hydrologic models. The approach is distinguished by its novel search algorithm as well as by its interoperability with C#, fault tolerance, parallelism, and reliability.
Compiler Optimization for Irregular Memory Access Patterns in PGAS Programs [slides]. Thomas B. Rolinger, Christopher D. Krieger, Alan Sussman. LCPC 2022. October 13, 2022.
This paper presents a compiler optimization that targets irregular memory access patterns in Chapel programs. Specifically, it uses static analysis to identify irregular memory accesses to distributed arrays in parallel loops and employs code transformations to generate an inspector and executor that perform selective data replication at runtime.
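As a rough illustration (not taken from the paper), the sketch below shows the kind of Chapel loop such an optimization targets: a parallel loop whose accesses to a distributed array are driven by an index array only known at run time. The array names are hypothetical, and the blockDist factory method assumes a recent Chapel release.

    use BlockDist;

    config const n = 1_000_000;

    // block-distribute both the data array and the index array across the locales
    const D = blockDist.createDomain({0..<n});
    var vals: [D] real = 1.0;
    var idx: [D] int;          // index values are only known at run time

    // vals[idx[i]] is data-dependent: each access may refer to any locale, so
    // without optimization every iteration can incur a fine-grained remote get
    var total = 0.0;
    forall i in D with (+ reduce total) do
      total += vals[idx[i]];
    writeln(total);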
Towards Chapel-based Exascale Tree Search Algorithms: dealing with multiple GPU accelerators [slides]. Tiago Carneiro, Nouredine Melab, Akihiro Hayashi, Vivek Sarkar. HPCS 2020, Outstanding Paper Award winner. March 22–27, 2021.
This paper revisits the design and implementation of tree search algorithms for multiple GPUs, with attention to scalability and productivity, using Chapel. The proposed algorithm exploits Chapel's distributed iterators, combining a partial search strategy with pre-compiled CUDA kernels to exploit intra-node parallelism more efficiently.
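For readers unfamiliar with Chapel's distributed iterators, a minimal sketch of their use follows, assuming the DistributedIters package module; the paper couples this style of load balancing with pre-compiled CUDA kernels, which the sketch omits.

    use DistributedIters;

    config const n = 100;

    // distributedDynamic() deals out chunks of the iteration space to the
    // locales on demand, which helps when subproblem costs are highly irregular
    forall i in distributedDynamic(1..n) do
      writeln("exploring subproblem ", i, " from locale ", here.id);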
Development of Parallel CFD Applications with the Chapel Programming Language [video | slides]. Matthieu Parenteau, Simon Bourgault-Côté, Frédéric Plante, Engin Kayraklioglu, Éric Laurendeau. AIAA Scitech 2021 Forum. January 13, 2021.
This paper describes a Computational Fluid Dynamics framework being developed using Chapel by a team at Polytechnique Montreal. The use of Chapel is described, and scaling results are given on up to 9k cores of a Cray XC. Comparisons are made against well-established CFD software packages.
(see also the publications at CHIUW, Chapel's annual workshop)

Recent Publications

A Local Search for Automatic Parameterization of Distributed Tree Search Algorithms. Tiago Carneiro, Loizos Koutsantonis, Nouredine Melab, Emmanuel Kieffer, Pascal Bouvry. 12th IEEE Workshop on Parallel / Distributed Combinatorics and Optimization (PDCO 2022), June 3, 2022.
This work presents a local search for automatic parameterization of ChapelBB, a distributed tree search application for solving combinatorial optimization problems written in Chapel. The main objective of the proposed heuristic is to overcome the limitations of manual parameterization, which covers only a limited portion of the feasible space.
Extending Chapel to support fabric-attached memory [slides]. Amitha C, Bradford L. Chamberlain, Clarete Riana Crasta, and Sharad Singhal. CUG 2022, Monterey CA, May 4, 2022.
This paper describes an implementation of Chapel arrays that leverages the language's support for user-defined data distributions to store array data in fabric-attached memory (FAM) rather than only in local DRAM.
A performance-oriented comparative study of the Chapel high-productivity language to conventional programming environments [slides]. Guillaume Helbecque, Jan Gmys, Tiago Carneiro, Nouredine Melab, Pascal Bouvry. 13th International Workshop on Programming Models and Applications for Multicores and Manycores (PMAM 2022), Seoul, South Korea, April 2, 2022.
This work compares the performance of a Chapel-based fractal-generation code on shared- and distributed-memory platforms against corresponding OpenMP and MPI+X implementations.

Chapel Overviews

Chapel chapter, Bradford L. Chamberlain, Programming Models for Parallel Computing, edited by Pavan Balaji, published by MIT Press, November 2015.
This is currently the best introduction to Chapel's history, motivating themes, and features. It also provides a brief summary of current and future activities at the time of writing. An early pre-print of this chapter was made available under the name A Brief Overview of Chapel.
Parallel Programmability and the Chapel Language. Bradford L. Chamberlain, David Callahan, Hans P. Zima. International Journal of High Performance Computing Applications, August 2007, 21(3): 291-312.
This is an early overview of Chapel's themes and main language concepts.

Chapel Project Updates

(see also Chapel's release notes and release notes archives)
Chapel Comes of Age: Productive Parallelism at Scale [slides (with outtakes)]. Brad Chamberlain, Elliot Ronaghan, Ben Albrecht, Lydia Duncan, Michael Ferguson, Ben Harshbarger, David Iten, David Keaton, Vassily Litvinov, Preston Sahabu, and Greg Titus. CUG 2018, Stockholm Sweden, May 22, 2018.
This paper describes the progress that has been made with Chapel since the HPCS program wrapped up.
The State of the Chapel Union [slides]. Bradford L. Chamberlain, Sung-Eun Choi, Martha Dumler, Thomas Hildebrandt, David Iten, Vassily Litvinov, Greg Titus. CUG 2013, May 2013.
This paper provides a snapshot of the Chapel project at the juncture between the end of the HPCS project and the start of the next phase in Chapel's development. It covers past successes, current status, and future directions.

Chapel Optimizations

Locality-Based Optimizations in the Chapel Compiler [slides]. Engin Kayraklioglu, Elliot Ronaghan, Michael P. Ferguson, and Bradford L. Chamberlain. LCPC 2021. October 13, 2021.
This paper describes a pair of recent compiler optimizations that leverage Chapel's high-level abstractions to reduce communication overheads: one strength-reduces array accesses that can be proven local, and the other aggregates fine-grained communications to amortize their overheads.
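As a rough sketch (not from the paper), the following loops illustrate the patterns these optimizations target; the names and the blockDist factory call are assumptions based on recent Chapel releases, and whether the optimizations fire on a given loop depends on the compiler's analysis.

    use BlockDist;

    config const n = 1_000_000;
    const D = blockDist.createDomain({1..n});
    var A, B: [D] int;

    // Because the loop iterates over A's own domain, A[i] can be proven local
    // to the task running iteration i, so the access can be strength-reduced
    // to a cheap local access instead of the general (possibly-remote) path.
    forall i in D do
      A[i] = i;

    // A copy through a reversed index produces many small remote puts; the
    // aggregation optimization batches them and issues them in bulk.
    forall i in D do
      B[n - i + 1] = A[i];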
A Machine-Learning-Based Framework for Productive Locality Exploitation. Engin Kayraklioglu, Erwan Fawry, Tarek El-Ghazawi. IEEE Transactions on Parallel and Distributed Systems (IEEE TPDS). Volume 32, Issue 6. June 2021.
This paper describes an approach for efficiently training machine-learning models that improve application execution times and scalability on distributed-memory systems. This is achieved by analyzing the application's fine-grained communication profile on small inputs, then predicting the communication patterns for more realistic inputs and coarsening the communication accordingly.
LLVM-based Communication Optimizations for PGAS Programs. Akihiro Hayashi, Jisheng Zhao, Michael Ferguson, Vivek Sarkar. 2nd Workshop on the LLVM Compiler Infrastructure in HPC (LLVM-HPC2), November 2015.
This paper describes how LLVM passes can optimize communication in PGAS languages like Chapel. In particular, by representing potentially remote addresses using a distinct address space, existing LLVM optimization passes can be used to reduce communication.
Caching Puts and Gets in a PGAS Language Runtime [slides]. Michael P. Ferguson, Daniel Buettner. 9th International Conference on Partitioned Global Address Space Programming Models (PGAS 2015), Sept 2015.
This paper describes an optimization implemented for Chapel in which the runtime library aggregates puts and gets in accordance with Chapel's memory consistency model in order to reduce the potential overhead of doing fine-grained communications.
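As an illustration (not from the paper), the sketch below shows the kind of fine-grained communication this runtime cache is designed to help with; the array and loop are hypothetical.

    var A: [1..1_000] int = 1;      // allocated on locale 0

    on Locales[numLocales - 1] {    // run on another locale when more than one exists
      var sum = 0;
      // each A[i] here is a fine-grained get back to locale 0; the runtime's
      // cache can service nearby gets from one larger transfer instead
      for i in A.domain do
        sum += A[i];
      writeln(sum);
    }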

Applications of Chapel

A Comparative Study of High-Productivity High-Performance Programming Languages for Parallel Metaheuristics [PDF]. Jan Gmys, Tiago Carneiro, Nouredine Melab, El-Ghazali Talbi, Daniel Tuyttens. Swarm and Evolutionary Computation, volume 57. September 2020.
This paper compares Chapel with Julia, Python/Numba, and C+OpenMP in terms of performance, scalability and productivity. Two parallel metaheuristics are implemented for solving the 3D Quadratic Assignment Problem (Q3AP), using thread-based parallelism on a multi-core shared-memory computer. The paper also evaluates and compares the performance of the languages for a parallel fitness evaluation loop, using four different test functions with different computational characteristics. The authors provide feedback on the implementation and parallelization process in each language.
Hypergraph Analytics of Domain Name System Relationships. Cliff A Joslyn, Sinan Aksoy, Dustin Arendt, Jesun Firoz, Louis Jenkins, Brenda Praggastis, Emilie Purvine, Marcin Zalewski. 17th Workshop on Algorithms and Models for the Web Graph (WAW 2020). September 21–24, 2020.
This paper applies hypergraph analytics to gigascale DNS data using CHGL, performing compute-intensive calculations for data reduction and segmentation. The identified portions are then sent to HNX for both exploratory analysis and knowledge discovery targeting known tactics, techniques, and procedures.
Towards ultra-scale Branch-and-Bound using a high-productivity language. Tiago Carneiro, Jan Gmys, Nouredine Melab, and Daniel Tuyttens. Future Generation Computer Systems, volume 105, pages 196-209. April 2020.
This paper uses Chapel to study the design and implementation of distributed Branch-and-Bound algorithms for solving large combinatorial optimization problems. Experiments on the proposed algorithms are performed using the Flow-shop scheduling problem as a test-case. The Chapel-based application is compared to a state-of-the-art MPI+Pthreads-based counterpart in terms of performance, scalability, and productivity.
Graph Algorithms in PGAS: Chapel and UPC++. Louis Jenkins, Jesun Sahariar Firoz, Marcin Zalewski, Cliff Joslyn, Mark Raugas. 2019 IEEE High Performance Extreme Computing Conference (HPEC ‘19), September 24–26, 2019.
This paper compares implementations of Breadth-First Search and Triangle Counting in Chapel and UPC++.

Multiresolution Chapel Features

User-Defined Parallel Zippered Iterators in Chapel [slides]. Bradford L. Chamberlain, Sung-Eun Choi, Steven J. Deitz, Angeles Navarro. PGAS 2011: Fifth Conference on Partitioned Global Address Space Programming Models, October 2011.
This paper describes how users can create parallel iterators that support zippered iteration in Chapel, demonstrating them via several examples that partition iteration spaces statically and dynamically.
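A minimal sketch of the leader-follower pattern the paper describes is shown below, written in the style of Chapel's parallel iterators primer; the iterator name and chunking policy are illustrative, and the sketch assumes current Chapel's 0-based tuple indexing.

    // serial version: defines the stream of indices the iterator yields
    iter count(n: int) {
      for i in 1..n do
        yield i;
    }

    // leader: carves a 0-based copy of the index space into one chunk per task
    iter count(param tag: iterKind, n: int)
        where tag == iterKind.leader {
      const numChunks = here.maxTaskPar;
      coforall chunk in 0..#numChunks {
        const lo = (n * chunk) / numChunks,
              hi = (n * (chunk + 1)) / numChunks - 1;
        yield (lo..hi,);              // leaders yield tuples of 0-based ranges
      }
    }

    // follower: translates the chunk it is handed back to this iterator's indices
    iter count(param tag: iterKind, followThis, n: int)
        where tag == iterKind.follower {
      for i in followThis(0) + 1 do   // shift the 0-based chunk to 1-based indices
        yield i;
    }

    // zippered use: count() drives the forall and the array A follows along
    var A: [1..10] real;
    forall (i, a) in zip(count(10), A) do
      a = i;
    writeln(A);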
Authoring User-Defined Domain Maps in Chapel [slides]. Bradford L. Chamberlain, Sung-Eun Choi, Steven J. Deitz, David Iten, Vassily Litvinov. CUG 2011, May 2011.
This paper builds on our HotPAR 2010 paper by describing the programmer's role in implementing user-defined distributions and layouts in Chapel.
User-Defined Distributions and Layouts in Chapel: Philosophy and Framework [slides]. Bradford L. Chamberlain, Steven J. Deitz, David Iten, Sung-Eun Choi. 2nd USENIX Workshop on Hot Topics in Parallelism (HotPar'10), June 2010.
This paper describes our approach and software framework for implementing user-defined distributions and memory layouts using Chapel's domain map concept.
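For context (not code from the paper), the sketch below shows a standard domain map in use; a user-defined domain map plugs into the same framework by implementing the descriptor interface the paper describes. The blockDist factory call assumes a recent Chapel release.

    use BlockDist;

    config const n = 8;

    // blockDist is one of the standard domain maps: it specifies how the
    // indices of D, and any array declared over D, are mapped to the locales
    const D = blockDist.createDomain({1..n, 1..n});
    var A: [D] int;

    // each element is assigned by the locale that owns it
    forall (i, j) in D do
      A[i, j] = here.id;

    writeln(A);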

Chapel Tools

ChplBlamer: A Data-centric and Code-centric Combined Profiler for Multi-locale Chapel Programs [slides]. Hui Zhang and Jeffrey K. Hollingsworth. In Proceedings of the 32nd ACM International Conference on Supercomputing (ICS'18), pages 252–262. June 2018.
This paper describes a tool that uses a combination of data-centric and code-centric information to relate performance profiling information back to user-level data structures and source code in Chapel programs.
APAT: an access pattern analysis tool for distributed arrays. Engin Kayraklioglu and Tarek El-Ghazawi. In Proceedings of the 15th ACM International Conference on Computing Frontiers (CF'18), pages 248–251. May 2018.
This paper proposes a high-level, data-centric profiler to analyze how distributed arrays are used by each locale.

Chapel Explorations

LAPPS: Locality-Aware Productive Prefetching Support for PGAS. Engin Kayraklioglu, Michael Ferguson, and Tarek El-Ghazawi. ACM Transactions on Architecture and Code Optimization (ACM TACO). Volume 15, Issue 3. September 2018.
This paper describes a high-level, easy-to-use prefetching feature for efficiently improving data locality in PGAS programs.
Parameterized Diamond Tiling for Stencil Computations with Chapel Parallel Iterators [slides]. Ian J. Bertolacci, Catherine Olschanowsky, Ben Harshbarger, Bradford L. Chamberlain, David G. Wonnacott, Michelle Mills Strout. ICS 2015, June 2015.
This paper explores the expression of parameterized diamond-shaped time-space tilings in Chapel, demonstrating competitive performance with C+OpenMP along with significant software engineering benefits due to Chapel's support for parallel iterators.

Chapel Historical Papers

The Cascade High Productivity Language [slides]. David Callahan, Bradford L. Chamberlain, Hans P. Zima. In 9th International Workshop on High-Level Parallel Programming Models and Supportive Environments (HIPS 2004), pages 52-60. IEEE Computer Society, April 2004.
This is the original Chapel paper, which lays out some of our motivation and foundations for exploring the language. Note that the language has evolved significantly since this paper was published, but it remains an interesting historical artifact.

Archived Publications and Papers