In this edition of our Seven Questions for Chapel Users series, we turn to Dr. Nelson Luís Dias from Brazil who is using Chapel to analyze data generated by the Amazon Tall Tower Observatory (ATTO), a project dedicated to long-term, 24/7 monitoring of greenhouse gas fluctuations. Read on to learn more about his work and use of Chapel!

The Amazon Tall Tower Observatory, photo by Sebastian Brill, MPI-C

1. Who are you?

I am Nelson Luís Dias, a professor in the Department of Environmental Engineering at the Federal University of Paraná, Brazil. My computing background is serial programming with “classical” programming languages such as Fortran, Pascal, and (mainly) C, as well as Python. I had not done any parallel programming until Chapel.

2. What do you do? What problems are you trying to solve?

In recent decades, fluxes of greenhouse gases (CO2, CH4, and some others) have been measured over all types of ecosystems around the world. These measurements are essential for understanding the effect on the climate of carbon emission and sequestration by those ecosystems, and they involve very large amounts of data: typically, wind speed components and gas concentrations sampled 10 times per second. Extracting hourly and daily fluxes from the data then requires statistical processing of those datasets, and fast (compiled), easily parallelizable languages such as Chapel help to streamline the calculation of those fluxes.
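
As a minimal sketch of what this processing involves (our illustration, not Nelson’s code, with hypothetical array names and sizes), the flux over an averaging period is essentially the covariance of the vertical wind speed w and the gas concentration c:

```chapel
// Eddy-covariance flux as the covariance of the fluctuations of
// vertical wind speed w and gas concentration c (illustrative sketch).
proc flux(w: [?D] real, c: [D] real): real {
  const wbar = (+ reduce w) / D.size,  // mean vertical wind speed
        cbar = (+ reduce c) / D.size;  // mean gas concentration
  var cov = 0.0;
  // accumulate the covariance of the fluctuations in parallel
  forall i in D with (+ reduce cov) do
    cov += (w[i] - wbar) * (c[i] - cbar);
  return cov / D.size;
}

config const n = 36_000;   // e.g., one hour of 10 Hz samples
var w, c: [1..n] real;
// ... fill w and c from the measured series ...
writeln("flux = ", flux(w, c));
```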

My research areas are Hydrology and Atmospheric Turbulence. I work with my students and postdocs analyzing turbulence data, and more recently (we are taking our first steps) with computational fluid dynamics. As an example of my research using Chapel, here is an evaporation model that I created, fully developed in Chapel: it was published in Water Resources Research. The model itself, called STAEBLE, is available on GitHub. I think it is a good example of the language’s qualities, such as ease of coding and general applicability.

3. How does Chapel help you with these problems?

Turbulence datasets are very large, and their statistical processing can be quite demanding. Chapel allows me to use the available CPU and GPU power in the computer efficiently, and it significantly reduces the time involved in most analyses. This can be done without low-level programming of message passing, data synchronization, thread management, etc. (which I have not mastered). We work mainly with desktop computers with relatively high-end CPUs and GPUs.
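
To give a flavor of how little changes between CPU and GPU code (an illustrative sketch, not Nelson’s code; hostA and the kernel body are hypothetical):

```chapel
// The same forall runs as a multicore loop on the CPU or as a kernel
// on the first GPU, with no explicit threads or transfers in user code.
config const n = 1_000_000;
var hostA: [1..n] real;

if here.gpus.size > 0 {
  on here.gpus[0] {
    var A: [1..n] real;           // allocated in GPU memory
    forall i in 1..n do           // compiled into a GPU kernel
      A[i] = sin(i:real / n);
    hostA = A;                    // copied back to host memory
  }
} else {
  forall i in 1..n do             // multicore CPU fallback
    hostA[i] = sin(i:real / n);
}
writeln(hostA[n]);
```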

An application of the Lowess smoothing filter to field data using a parallel Chapel implementation
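
For the curious, here is a rough sketch of what a parallel Lowess-style smoother can look like in Chapel (fixed window, tricube weights; this is a generic illustration, not Nelson’s routine):

```chapel
// Locally weighted linear regression (Lowess-style) smoother.
// Each output point is independent, so the outer loop is a forall.
proc lowess(x: [?D] real, y: [D] real, halfWidth: int) {
  var ys: [D] real;
  forall i in D {
    const lo = max(D.low, i - halfWidth),
          hi = min(D.high, i + halfWidth);
    const h = max(max(x[hi] - x[i], x[i] - x[lo]), 1e-12); // local bandwidth
    // weighted least-squares fit of a straight line over the window
    var sw = 0.0, swx = 0.0, swy = 0.0, swxx = 0.0, swxy = 0.0;
    for j in lo..hi {
      const t = abs(x[j] - x[i]) / h;
      const w = if t < 1.0 then (1.0 - t**3)**3 else 0.0;  // tricube weight
      sw   += w;           swx  += w*x[j];       swy += w*y[j];
      swxx += w*x[j]*x[j]; swxy += w*x[j]*y[j];
    }
    const det = sw*swxx - swx*swx;
    const b = if det != 0.0 then (sw*swxy - swx*swy)/det else 0.0;
    ys[i] = (swy - b*swx)/sw + b*x[i];   // fitted value at x[i]
  }
  return ys;
}
```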

Also very important: Chapel is very good at simpler tasks as well, such as scripting. I have by now mostly replaced traditional scripting tools such as AWK and Python with small Chapel programs. This reduces the brain RAM :-) required to cope with different syntaxes, and it streamlines my workflow.
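
As a hypothetical example of the kind of AWK job a small Chapel program can replace (illustrative, not one of Nelson’s scripts), here is the classic “sum the third column” one-liner:

```chapel
// Equivalent in spirit to: awk '{ s += $3 } END { print s }'
use IO;

var total = 0.0;
for line in stdin.lines() {
  const fields = line.split();        // split on whitespace; 0-based array
  if fields.size >= 3 then
    total += try! (fields[2]: real);  // halt on a malformed field
}
writeln(total);
```

Once compiled, it runs as a filter, e.g. ./sumcol3 < data.txt.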

4. What initially drew you to Chapel?

Two things, mostly. First, I was using Python heavily, but it is too slow for the many “for loops” that are intrinsic to my line of work. This had already sent me in the direction of alternatives such as Numba, Codon, and perhaps Julia. Second, I am interested in numerical modeling of turbulence, which calls for a compiled, parallel language. Chapel answers both needs.

5. What are your biggest successes that Chapel has helped achieve?

Probably rewriting all my data processing routines in Chapel. It is very easy to translate from C because the two languages have similar syntax, and almost the same goes for Python. Along the way, some novel routines have been written in Chapel for the first time. I am particularly fond of my Levenberg-Marquardt implementation, which is part of my nstat.chpl module (see https://nldias.github.io/software.html). The result is both faster coding and faster processing overall. Most of the processing-time improvements come from straightforward use of forall and foreach loops.
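
For readers new to Chapel, the distinction goes roughly like this (an illustrative sketch, not code from nstat.chpl): forall distributes iterations across parallel tasks, while foreach keeps a single task but declares the iterations order-independent so the compiler can vectorize them:

```chapel
config const n = 10_000_000;
var a, b: [1..n] real;

forall i in 1..n do      // iterations spread across all available cores
  a[i] = i : real;

foreach i in 1..n do     // one task, but free to use SIMD vectorization
  b[i] = 2.0*a[i] + 1.0;
```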

6. If you could improve Chapel with a finger snap, what would you do?

Two things, both non-essential:

  1. Smaller generated executables. Chapel’s executables are big, and smaller ones would save some disk space, as well as time when backing up files to cloud storage.

  2. Allowing generic arrays to be arguments of first-class procedures. But Chapel allows an elegant workaround, among many possibilities: see my solution here, and a sketch of the general idea just after this list.
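
Without presuming what Nelson’s linked solution looks like, one common shape for such a workaround is to pass a record with a generic this method in place of a procedure, so the “function” travels as a concrete value while staying generic inside (the names Scaler and apply are hypothetical):

```chapel
// A callable record: invoking a Scaler value calls its `this` method,
// which can accept arrays of any domain, unlike a first-class procedure.
record Scaler {
  var factor: real;
  proc this(x: [] real) {
    return factor * x;     // promoted scalar-times-array, any shape
  }
}

proc apply(f, A: [] real) {
  return f(A);             // f can be any callable value
}

var scale2 = new Scaler(2.0);
const A = [1.0, 2.0, 3.0];
writeln(apply(scale2, A)); // 2.0 4.0 6.0
```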

7. Anything else you’d like people to know?

Chapel is not only very good at high-performance computing, parallel computing, etc.; it is also very good for general-purpose computing. As of version 2.1, it is being made available via Linux packages (.deb, etc.), which is a big leap toward making it accessible to a wider audience. I plan to teach numerical methods with it this coming semester, and I can foresee it carving out a place in the sun as a modern, fast language for science and engineering.


We’d like to thank Nelson for his participation in this interview series! If you’d like to hear more about his experiences with Chapel, be sure to check out his fan-favorite talks at CHIUW 2022 (slides) and ChapelCon'24 (slides).

And if you have any other questions for Nelson, or comments on this series, please direct them to the 7 Questions for Chapel Users thread on Discourse.