Doing AI in Chapel
As a general-purpose, scalable parallel programming language, Chapel is well-suited for computations in Artificial Intelligence and Machine Learning. In practice, Chapel can be used either as a scalable driver of existing, highly tuned AI/ML library routines, or as a way to write novel distributed algorithms that do not yet have a vendor-tuned library solution. Moreover, due to its portable design and implementation, Chapel avoids lock-in to any single hardware vendor.
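To give a flavor of the style described above, here is a small illustrative sketch (not taken from any of the projects below) of Chapel's data-parallel features applied to a toy numeric kernel, a dot product; the names `n`, `x`, and `y` are invented for the example:

```chapel
// Illustrative sketch: Chapel's data-parallel style on a toy kernel.
config const n = 1_000;        // problem size, overridable at run time

var x, y: [1..n] real;

// Initialize the vectors in parallel using a forall loop.
forall i in 1..n {
  x[i] = i;
  y[i] = 2.0;
}

// Elementwise multiply (promotion) feeding a parallel reduction.
// Declared over a distributed domain, the same code would run
// across multiple locales (compute nodes) without modification.
const dot = + reduce (x * y);

writeln("dot = ", dot);
```

The same `forall`/`reduce` idioms scale from a laptop to a cluster, which is the basis for the "scalable driver" and "novel distributed algorithms" use cases above.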
Some past examples of using Chapel for AI/ML-related workloads include:
- CrayAI’s HyperParameter Optimization (HPO) module, which is described in this SC20 tutorial talk and was recently revisited in this blog article
- Student projects exploring the use of Chapel to express tensors and transformers, or to drive PyTorch, including:
  - the ChAI project, developed by interns and students from Oregon State University; this ChapelCon ‘25 talk serves as a good introduction
  - a project by a University of Tokyo summer student that compared hand-written transformers in Chapel against C++ and PyTorch
In addition to the above, there are ongoing explorations of Chapel’s use in AI that are not yet publicly documented.
Writing Chapel with AI
Beyond using Chapel to write AI computations, developers have also explored using AI to generate Chapel programs. For one developer’s experiences using Claude to write Chapel programs, see Experimenting with the Model Context Protocol and Chapel on the Chapel blog.