Parallelism at Any Scale
Chapel is built from the ground up with productive and performant parallel computing in mind. Conventionally, leveraging parallelism at different scales requires different programming models, each with its own features, syntax, and interfaces. Chapel programs can express multiple types of parallelism with a single, unified set of language features:
Shared-Memory Parallelism
Effortlessly leverage your multi-core CPUs for task and data parallelism.
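As a minimal sketch of what this looks like on a single node (the problem size and the `sin` computation are just illustrative): `forall` divides iterations among the available cores, while `cobegin` runs independent statements as concurrent tasks.

```chapel
// Shared-memory parallelism on a single node:
// 'forall' for data parallelism, 'cobegin' for task parallelism.
config const n = 1000000;   // problem size; override with --n=... at run time

var A: [1..n] real;

// data parallelism: the iterations are divided among this node's cores
forall i in 1..n do
  A[i] = sin(i:real);

// task parallelism: two independent computations run as concurrent tasks
cobegin {
  writeln("max = ", max reduce A);
  writeln("sum = ", + reduce A);
}
```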
Distributed-Memory Parallelism
Chapel’s global namespace makes distributed-memory parallelism as easy as writing code for your laptop.
Parallel Programming Features that Fit Together
Conventional distributed programming requires users to write their code in terms of individual processes, manually coordinating all communication between nodes. Distributed programming doesn’t have to be this way, and Chapel’s global namespace is the perfect alternative. You can compute on distributed data structures with the same code you would use for a completely local version. Keep the performance of distributed parallelism, lose the finicky sends and receives. Or don’t! If you need to, you can distribute data and coordinate message passing manually, or mix manual control with global-view programming. The beauty of Chapel is that you can choose the level(s) of abstraction that best fit your project.
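As a minimal sketch of this global-view style (assuming a recent Chapel release in which the standard BlockDist module provides the `blockDist.createArray` factory), the same procedure scales a local array and a block-distributed one, with no explicit communication in either case:

```chapel
use BlockDist;

// one generic procedure; the compiler and runtime handle any communication
proc scaleBy2(ref A: [] real) {
  forall a in A do   // parallel across whichever cores and nodes own A
    a *= 2.0;
}

var localA: [1..8] real;                         // lives entirely on one node
var distA = blockDist.createArray(1..8, real);   // spread across all nodes
localA = 1.0;
distA = 1.0;

scaleBy2(localA);   // shared-memory parallelism only
scaleBy2(distA);    // distributed- and shared-memory parallelism

writeln(localA);
writeln(distA);
```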
Locales
First-class concepts for enumerating and reasoning about machine resources.
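For instance, a short sketch that visits every node in a job and reports what it finds there; `Locales`, `here`, `numLocales`, and `on` clauses are built into the language:

```chapel
// one task per node; each 'on' clause moves its task to that node
coforall loc in Locales do
  on loc do
    writeln("Hello from locale ", here.id, " of ", numLocales,
            " (", here.name, "), which has ", here.numPUs(), " cores");
```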
Distributed Domains and Arrays
Automatically distribute data across multiple nodes with Chapel’s built-in data distributions.
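A minimal sketch, again assuming the standard BlockDist module from a recent Chapel release: the domain’s indices are carved into contiguous blocks, one per locale, and the array’s elements follow.

```chapel
use BlockDist;

config const n = 16;

// index set 1..n distributed in contiguous blocks across the locales
const D = blockDist.createDomain({1..n});
var A: [D] int;

// each element is written by the locale that owns it
forall i in D do
  A[i] = here.id;

writeln(A);            // e.g., 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3 on four nodes
writeln(+ reduce A);   // reductions and other whole-array operations work, too
```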