DistributedIters
Usage
use DistributedIters;
This module contains iterators that can be used to distribute a forall loop over a range or domain by dynamically splitting the iterations among locales.

config param debugDistributedIters: bool = false
Toggle debugging output.

config param timeDistributedIters: bool = false
Toggle per-locale performance timing and output.

config const infoDistributedIters: bool = false
Toggle invocation information output.

iter distributedDynamic(c, chunkSize: int = 1, numTasks: int = 0, parDim: int = 1, localeChunkSize: int = 0, coordinated: bool = false, workerLocales = Locales)

Arguments:
c : range(?) or domain
  The range (or domain) to iterate over. The range (domain) size must be positive.
chunkSize : int
  The chunk size to yield to each task. Must be positive. Defaults to 1.
numTasks : int
  The number of tasks to use. Must be nonnegative. If this argument is 0, the iterator uses the value indicated by dataParTasksPerLocale.
parDim : int
  If c is a domain, this specifies the dimension index to parallelize across. Must be positive and at most the rank of the domain c. Defaults to 1.
localeChunkSize : int
  Chunk size to yield to each locale. Must be nonnegative. If this argument is 0, the iterator uses an undefined heuristic in an attempt to choose a value that will perform well.
coordinated : bool
  If true (and the program is multilocale), the locale invoking the iterator only coordinates task distribution; it does not receive work itself.
workerLocales : [] locale
  An array of locales over which to distribute the work. Defaults to Locales (all available locales).
Yields: Indices in the range c.

This iterator is equivalent to a distributed version of OpenMP's dynamic scheduling policy. Given an input range (or domain) c, each locale (except the calling locale, if coordinated is true) receives chunks of size localeChunkSize from c (or the remaining iterations, if fewer than localeChunkSize remain). Each locale then distributes subchunks of size chunkSize as tasks, using the dynamic iterator from the DynamicIters module.

Available for serial and zippered contexts.
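For example, distributedDynamic can drive a forall loop over a range so that idle locales keep pulling work. This is a minimal sketch: the array A and the per-iteration work are illustrative stand-ins, not part of the module's API.

```chapel
use DistributedIters;

config const n = 1000;
var A: [1..n] real;

// Distribute the iterations of 1..n across all locales, handing
// out chunks of 4 indices at a time to each task.
forall i in distributedDynamic(1..n, chunkSize=4) {
  A[i] = i * 2.0;  // stand-in for an unevenly sized unit of work
}
```

A larger chunkSize reduces coordination overhead at the cost of coarser load balancing; chunkSize=1 (the default) balances best when individual iterations vary widely in cost.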

iter distributedGuided(c, numTasks: int = 0, parDim: int = 1, minChunkSize: int = 1, coordinated: bool = false, workerLocales = Locales)

Arguments:
c : range(?) or domain
  The range (or domain) to iterate over. The range (domain) size must be positive.
numTasks : int
  The number of tasks to use. Must be nonnegative. If this argument is 0, the iterator uses the value indicated by dataParTasksPerLocale.
parDim : int
  If c is a domain, this specifies the dimension index to parallelize across. Must be positive and at most the rank of the domain c. Defaults to 1.
minChunkSize : int
  The smallest allowable chunk size. Must be positive. Defaults to 1.
coordinated : bool
  If true (and the program is multilocale), the locale invoking the iterator only coordinates task distribution; it does not receive work itself.
workerLocales : [] locale
  An array of locales over which to distribute the work. Defaults to Locales (all available locales).
Yields: Indices in the range c.

This iterator is equivalent to a distributed version of OpenMP's guided scheduling policy. Given an input range (or domain) c, each locale (except the calling locale, if coordinated is true) receives chunks of approximately exponentially decreasing size, until the chunk size reaches the minimum value minChunkSize or no iterations remain in c. Each chunk's size is the number of unassigned iterations divided by the number of locales. Each locale then distributes subchunks as tasks, where each subchunk size is the number of unassigned local iterations divided by the number of tasks, numTasks, and decreases approximately exponentially to 1. The splitting strategy is therefore adaptive.

Available for serial and zippered contexts.
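The guided policy suits loops whose per-iteration cost varies or is unknown: large chunks early keep overhead low, and small chunks late smooth out imbalance. A minimal sketch (the array result and the work inside the loop are illustrative):

```chapel
use DistributedIters;

config const n = 1000;
var result: [1..n] real;

// Guided distribution: chunk sizes start large and shrink toward
// minChunkSize as the remaining work decreases. With
// coordinated=true, the invoking locale only hands out work and
// performs none of the iterations itself.
forall i in distributedGuided(1..n, minChunkSize=2, coordinated=true) {
  result[i] = sqrt(i: real);  // stand-in for variable-cost work
}
```

Setting coordinated=true is most useful when many worker locales would otherwise contend with the coordinator; on a small number of locales it sacrifices one locale's compute capacity.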