The Chapel Parallel Programming Language

 

Publications and Papers

Featured Publications

Towards Chapel-based Exascale Tree Search Algorithms: dealing with multiple GPU accelerators, [slides]. Tiago Carneiro, Nouredine Melab, Akihiro Hayashi, Vivek Sarkar. HPCS 2020, Outstanding Paper Award winner. March 22–27, 2021.
This paper revisits the design and implementation of tree search algorithms that use multiple GPUs, with attention to scalability and productivity, using Chapel. The proposed algorithm exploits Chapel's distributed iterators, combining a partial search strategy with pre-compiled CUDA kernels to better exploit intra-node parallelism.
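As a flavor of what driving a loop with a distributed iterator looks like in Chapel, here is a minimal sketch (not the paper's code) using the standard DistributedIters package; the loop body is just a placeholder for per-subproblem work:

    use DistributedIters;

    config const n = 1000000;
    var visited: atomic int;

    // distributedDynamic() deals out chunks of the range across the program's
    // locales; each iteration stands in for one search subproblem.
    forall i in distributedDynamic(0..#n, chunkSize=64) {
      visited.add(1);   // placeholder for the real per-subproblem work
    }

    writeln("processed ", visited.read(), " subproblems");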
Development of Parallel CFD Applications with the Chapel Programming Language, [video | slides]. Matthieu Parenteau, Simon Bourgault-Côté, Frédéric Plante, Engin Kayraklioglu, Éric Laurendeau. AIAA Scitech 2021 Forum. January 13, 2021.
This paper describes a Computational Fluid Dynamics framework being developed using Chapel by a team at Polytechnique Montreal. The use of Chapel is described, and scaling results are given on up to 9k cores of a Cray XC. Comparisons are made against well-established CFD software packages.
Chapel Comes of Age: Productive Parallelism at Scale [slides (with outtakes)]. Brad Chamberlain, Elliot Ronaghan, Ben Albrecht, Lydia Duncan, Michael Ferguson, Ben Harshbarger, David Iten, David Keaton, Vassily Litvinov, Preston Sahabu, and Greg Titus. CUG 2018, Stockholm, Sweden, May 22, 2018.
This paper describes the progress that has been made with Chapel since the HPCS program wrapped up.
(also see the publications at CHIUW, Chapel's annual workshop)

Other Recent Publications

A Machine-Learning-Based Framework for Productive Locality Exploitation. Engin Kayraklioglu, Erwan Fawry, Tarek El-Ghazawi. IEEE Transactions on Parallel and Distributed Systems (IEEE TPDS). Volume 32, Issue 6. June 2021.
This paper describes an approach for efficiently training machine learning models that improve application execution times and scalability on distributed-memory systems. The approach analyzes an application's fine-grained communication profile using small input data, then predicts the communication patterns for more realistic inputs and coarsens the communication accordingly.
A Comparative Study of High-Productivity High-Performance Programming Languages for Parallel Metaheuristics [PDF]. Jan Gmys, Tiago Carneiro, Nouredine Melab, El-Ghazali Talbi, Daniel Tuyttens. Swarm and Evolutionary Computation, volume 57. September 2020.
This paper compares Chapel with Julia, Python/Numba, and C+OpenMP in terms of performance, scalability, and productivity. Two parallel metaheuristics are implemented for solving the 3D Quadratic Assignment Problem (Q3AP), using thread-based parallelism on a multi-core shared-memory computer. The paper also evaluates and compares the performance of the languages for a parallel fitness evaluation loop, using four different test functions with different computational characteristics. The authors provide feedback on the implementation and parallelization process in each language.
Hypergraph Analytics of Domain Name System Relationships. Cliff A Joslyn, Sinan Aksoy, Dustin Arendt, Jesun Firoz, Louis Jenkins, Brenda Praggastis, Emilie Purvine, Marcin Zalewski. 17th Workshop on Algorithms and Models for the Web Graph (WAW 2020). September 21–24, 2020.
This paper applies hypergraph analytics to a gigascale DNS dataset using CHGL, performing compute-intensive calculations for data reduction and segmentation. The identified portions are then sent to HNX for exploratory analysis and knowledge discovery targeting known tactics, techniques, and procedures.

Chapel Overviews

Chapel chapter, Bradford L. Chamberlain, Programming Models for Parallel Computing, edited by Pavan Balaji, published by MIT Press, November 2015.
This is currently the best introduction to Chapel's history, motivating themes, and features. It also provides a brief summary of the project's activities and directions as of the time of writing. An early pre-print of this chapter was made available under the name A Brief Overview of Chapel.
Parallel Programmability and the Chapel Language. Bradford L. Chamberlain, David Callahan, Hans P. Zima. International Journal of High Performance Computing Applications, August 2007, 21(3): 291–312.
This is an early overview of Chapel's themes and main language concepts.

Applications of Chapel

Towards ultra-scale Branch-and-Bound using a high-productivity language. Tiago Carneiro, Jan Gmys, Nouredine Melab, and Daniel Tuyttens. Future Generation Computer Systems, volume 105, pages 196–209. April 2020.
This paper uses Chapel to study the design and implementation of distributed Branch-and-Bound algorithms for solving large combinatorial optimization problems. Experiments on the proposed algorithms are performed using the Flow-shop scheduling problem as a test-case. The Chapel-based application is compared to a state-of-the-art MPI+Pthreads-based counterpart in terms of performance, scalability, and productivity.
Chapel HyperGraph Library (CHGL) [slides]. Louis Jenkins, Tanveer Bhuiyan, Sarah Harun, Christopher Lightsey, David Mentgen, Sinan Aksoy, Timothy Stavenger, Marcin Zalewski, Hugh Medal, and Cliff Joslyn. 2018 IEEE High Performance Extreme Computing Conference (HPEC '18). September 25–27, 2018.
This paper describes the design and implementation of a HyperGraph library provided as a scalable distributed data structure.
Graph Algorithms in PGAS: Chapel and UPC++. Louis Jenkins, Jesun Sahariar Firoz, Marcin Zalewski, Cliff Joslyn, Mark Raugas. 2019 IEEE High Performance Extreme Computing Conference (HPEC '19), September 24–26, 2019.
This paper compares implementations of Breadth-First Search and Triangle Counting in Chapel and UPC++.

Multiresolution Chapel Features

User-Defined Parallel Zippered Iterators in Chapel [slides]. Bradford L. Chamberlain, Sung-Eun Choi, Steven J. Deitz, Angeles Navarro. PGAS 2011: Fifth Conference on Partitioned Global Address Space Programming Models, October 2011.
This paper describes how users can create parallel iterators that support zippered iteration in Chapel, demonstrating them via several examples that partition iteration spaces statically and dynamically.
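As a rough sketch of the leader/follower idiom the paper describes (illustrative code, not taken from the paper; the iterator name count and the array A are made up), a user-defined iterator over 1..n can provide serial, leader, and follower overloads so that it can participate in zippered forall loops:

    // serial overload, used by ordinary for loops
    iter count(n: int) {
      for i in 1..n do
        yield i;
    }

    // leader overload: partitions the work and yields tuples of
    // zero-based ranges, one chunk per task
    iter count(param tag: iterKind, n: int) where tag == iterKind.leader {
      const numTasks = here.maxTaskPar;
      coforall tid in 0..#numTasks {
        const lo = tid * n / numTasks,
              hi = (tid + 1) * n / numTasks - 1;
        yield (lo..hi,);
      }
    }

    // follower overload: translates each zero-based chunk back to 1..n
    iter count(param tag: iterKind, n: int, followThis)
        where tag == iterKind.follower {
      for i in followThis(0) + 1 do
        yield i;
    }

    var A: [1..8] int;
    forall (i, a) in zip(count(8), A) do   // count() leads, A follows
      a = i;
    writeln(A);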
Authoring User-Defined Domain Maps in Chapel [slides]. Bradford L. Chamberlain, Sung-Eun Choi, Steven J. Deitz, David Iten, Vassily Litvinov. CUG 2011, May 2011.
This paper builds on our HotPar 2010 paper by describing the programmer's role in implementing user-defined distributions and layouts in Chapel.
User-Defined Distributions and Layouts in Chapel: Philosophy and Framework [slides]. Bradford L. Chamberlain, Steven J. Deitz, David Iten, Sung-Eun Choi. 2nd USENIX Workshop on Hot Topics in Parallelism (HotPar'10), June 2010.
This paper describes our approach and software framework for implementing user-defined distributions and memory layouts using Chapel's domain map concept.
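To give a flavor of the domain map concept, here is a minimal sketch using the standard Block distribution with the dmapped syntax of the Chapel releases contemporary with these papers (newer releases spell the standard distributions slightly differently, e.g. blockDist):

    use BlockDist;

    const Space = {1..8, 1..8};

    // the dmapped clause selects the domain map: here, the standard Block
    // distribution, so D's indices are block-distributed across the locales
    const D = Space dmapped Block(boundingBox=Space);

    // arrays declared over D inherit its distribution, and forall loops
    // over D run where the corresponding elements live
    var A: [D] real;
    forall (i, j) in D do
      A[i, j] = i + j / 10.0;

    writeln(A);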

Chapel Tools

ChplBlamer: A Data-centric and Code-centric Combined Profiler for Multi-locale Chapel Programs [slides]. Hui Zhang and Jeffrey K. Hollingsworth. In Proceedings of the 32nd ACM International Conference on Supercomputing (ICS'18), pages 252–262. June 2018.
This paper describes a tool that uses a combination of data-centric and code-centric information to relate performance profiling information back to user-level data structures and source code in Chapel programs.
APAT: an access pattern analysis tool for distributed arrays. Engin Kayraklioglu and Tarek El-Ghazawi. In Proceedings of the 15th ACM International Conference on Computing Frontiers (CF'18), pages 248–251. May 2018.
This paper proposes a high-level, data-centric profiler to analyze how distributed arrays are used by each locale.

Chapel Explorations

LAPPS: Locality-Aware Productive Prefetching Support for PGAS. Engin Kayraklioglu, Michael Ferguson, and Tarek El-Ghazawi. ACM Transactions on Architecture and Code Optimization (ACM TACO). Volume 15, Issue 3. September 2018.
This paper describes a high-level, easy-to-use prefetching language feature that improves data locality in PGAS programs with little programmer effort.
Parameterized Diamond Tiling for Stencil Computations with Chapel Parallel Iterators [slides]. Ian J. Bertolacci, Catherine Olschanowsky, Ben Harshbarger, Bradford L. Chamberlain, David G. Wonnacott, Michelle Mills Strout. ICS 2015, June 2015.
This paper explores the expression of parameterized diamond-shaped time-space tilings in Chapel, demonstrating competitive performance with C+OpenMP along with significant software engineering benefits due to Chapel's support for parallel iterators.
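As a simplified illustration of the kind of abstraction involved (a toy tiled iterator, not the paper's diamond-tiling code; the name tiles is made up), a user-defined iterator can yield tiles of an iteration space and be invoked directly from a forall loop:

    // serial overload: yields fixed-size tiles of the range r
    iter tiles(r: range, tileSize: int) {
      for lo in r by tileSize do
        yield lo..min(lo + tileSize - 1, r.high);
    }

    // standalone parallel overload: one task per tile, for simplicity
    iter tiles(param tag: iterKind, r: range, tileSize: int)
        where tag == iterKind.standalone {
      coforall lo in r by tileSize do
        yield lo..min(lo + tileSize - 1, r.high);
    }

    var A: [1..100] real;
    forall tile in tiles(1..100, 10) do   // tiles run in parallel...
      for i in tile do                    // ...elements within a tile serially
        A[i] = i;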

Chapel Optimizations

LLVM-based Communication Optimizations for PGAS Programs. Akihiro Hayashi, Jisheng Zhao, Michael Ferguson, Vivek Sarkar. 2nd Workshop on the LLVM Compiler Infrastructure in HPC (LLVM-HPC2), November 2015.
This paper describes how LLVM passes can optimize communication in PGAS languages like Chapel. In particular, by representing potentially remote addresses using a distinct address space, existing LLVM optimization passes can be used to reduce communication.
Caching Puts and Gets in a PGAS Language Runtime [slides]. Michael P. Ferguson, Daniel Buettner. 9th International Conference on Partitioned Global Address Space Programming Models (PGAS 2015), September 2015.
This paper describes an optimization implemented for Chapel in which the runtime library aggregates puts and gets, in accordance with Chapel's memory consistency model, to reduce the overhead of fine-grained communication.

Archived Publications and Papers