Using Chapel on IBM Systems¶
We have not used Chapel on IBM systems in several years, so the information presented here is likely outdated. If you are interested in using Chapel on IBM systems, please let us know.
If you are using Chapel on IBM's MareNostrum, you should refer to Using Chapel on MareNostrum.
We have only limited experience using Chapel on IBM systems. This file contains notes reflecting that experience, focusing first on PowerPC-based systems and then on Blue Gene (BG) systems. If you are not familiar with Chapel, we recommend first trying the Chapel Quickstart Instructions to get started with the language.
PowerPC SMP clusters¶
We have run Chapel on clusters of Power5 and Power6 SMP nodes using the following settings:
Set your PATH and MANPATH as indicated in Setting up Your Environment for Chapel.
Set CHPL_TARGET_PLATFORM to pwr5 for a Power5 cluster or pwr6 for a Power6 cluster. For example:
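In a bash-compatible shell, that setting might look like:

```shell
# Target a Power6 cluster (use pwr5 for Power5 instead).
export CHPL_TARGET_PLATFORM=pwr6
```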
This will cause Chapel to use the IBM xlc and xlC compilers by default.
Set CHPL_COMM to gasnet. For example:
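Again assuming a bash-compatible shell:

```shell
# Use GASNet for inter-locale communication.
export CHPL_COMM=gasnet
```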
See Multilocale Chapel Execution for further information about running using multiple locales and GASNet.
Note: if you are using an installation in which your xlc/xlC compilers and ar utility do not use 64-bit object formats by default, you will need to set the OBJECT_MODE variable to 64 to use GASNet. For example:
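As a configuration fragment for a bash-compatible shell:

```shell
# Request 64-bit object formats from xlc/xlC and ar, as GASNet requires here.
export OBJECT_MODE=64
```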
Make sure you're in the top-level chapel/ directory:
Make/re-make the compiler and runtime:
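Assuming GNU make is installed as gmake (substitute make if not), those two steps might look like:

```shell
cd $CHPL_HOME   # the top-level chapel/ directory
gmake           # rebuild the compiler and runtime
```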
Set your PATH to include the directory $CHPL_HOME/bin/$CHPL_HOST_PLATFORM, which is created when you build the compiler. For example:
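In a bash-compatible shell:

```shell
# Pick up the chpl compiler built in the previous step.
export PATH="$PATH:$CHPL_HOME/bin/$CHPL_HOST_PLATFORM"
```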
Compile your Chapel program as usual. See Compiling Chapel Programs for details. For example:
chpl -o hello6-taskpar-dist $CHPL_HOME/examples/hello6-taskpar-dist.chpl
When you compile a multi-locale program, you will get a single binary by default (e.g.,
hello6-taskpar-dist). To run this program properly, you will typically need to write a LoadLeveler script that requests a number of compute nodes equal to the number of locales you specify via the
-nl option, and that launches a single copy of the binary per node (either using poe, or on some systems by simply invoking the binary directly at the bottom of the script). Parallelism within each node is generated by the binary itself using pthreads in order to utilize all of the node's cores. In our experience, the required options for LoadLeveler scripts vary greatly from one site to another, so check your site's documentation for details.
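As a rough sketch only (the LoadLeveler keywords, node count, class name, and poe invocation below are illustrative assumptions, not site-specific guidance; consult your site's documentation for the directives it actually requires), such a script might look like:

```shell
#!/bin/sh
# Hypothetical LoadLeveler script: request 4 nodes, one task per node.
#@ job_type       = parallel
#@ node           = 4
#@ tasks_per_node = 1
#@ class          = normal
#@ output         = chapel.$(jobid).out
#@ error          = chapel.$(jobid).err
#@ queue

# Launch one copy of the binary per node; -nl must match the node count above.
poe ./hello6-taskpar-dist -nl 4
```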
There is a prototype LoadLeveler launcher, which can be used by setting the
CHPL_LAUNCHER environment variable to
loadleveler. See Chapel Launchers for a general description of the role of launchers in Chapel. This launcher is not yet sufficiently portable, robust, configurable, or interactive to warrant being made the default for Power5 or Power6 machines. If you are an IBM enthusiast who would like to work with us to improve this launcher, we would greatly appreciate the help.
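In a bash-compatible shell, opting in looks like:

```shell
# Use the prototype LoadLeveler launcher instead of the default.
export CHPL_LAUNCHER=loadleveler
```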
Additional Notes for Power5 Clusters¶
Our current technique for querying the amount of memory per node is apparently not portable to the Power5 (which is to say, we get back an implausibly large value). When running the HPCC benchmarks with the default configuration constants, this will exhibit itself as a halt indicating that we can't take the log() of a non-positive integer. Work around this by setting the problem size explicitly using the --n flags. If anyone has a chance to debug this problem or can suggest a better way to query the amount of memory before we come up with a solution, please let us know.
BG Systems¶
Our current implementation of Chapel relies heavily on POSIX threads (pthreads) to implement both intra- and inter-locale parallelism. Because BG/L does not support pthreads, Chapel is not supported on that platform. If you are interested in running Chapel on BG/L, please let us know.
We have done some initial experimentation with the GASNet team to try to run Chapel on BG/P, with limited success; however, more effort is required to make this a stable and supported platform. If running Chapel on BG/P would be of interest to you, please let us know.