DistributedIters¶
Usage
use DistributedIters;
or
import DistributedIters;
This module contains iterators that can be used to distribute a forall loop for a range or domain by dynamically splitting iterations between locales.
- config param debugDistributedIters: bool = false¶
Toggle debugging output.
- config param timeDistributedIters: bool = false¶
Toggle per-locale performance timing and output.
- config const infoDistributedIters: bool = false¶
Toggle invocation information output.
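These follow standard Chapel config mechanics: the two config params are fixed at compile time via -s, while the config const can be toggled when the program is launched. A sketch, with a hypothetical program name:

chpl -sdebugDistributedIters=true -stimeDistributedIters=true myProgram.chpl
./myProgram --infoDistributedIters=true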
- iter distributedDynamic(c, chunkSize: int = 1, numTasks: int = 0, parDim: int = 0, localeChunkSize: int = 0, coordinated: bool = false, workerLocales = Locales)¶
- Arguments
c : range(?) or domain – The range (or domain) to iterate over. The range (domain) size must be positive.
chunkSize : int – The chunk size to yield to each task. Must be positive. Defaults to 1.
numTasks : int – The number of tasks to use. Must be nonnegative. If this argument has value 0, the iterator will use the value indicated by dataParTasksPerLocale.
parDim : int – If c is a domain, then this specifies the dimension index to parallelize across. Must be non-negative and less than the rank of the domain c. Defaults to 0.
localeChunkSize : int – Chunk size to yield to each locale. Must be nonnegative. If this argument has value 0, the iterator will use an undefined heuristic in an attempt to choose a value that will perform well.
coordinated : bool – If true (and multi-locale), then have the locale invoking the iterator coordinate task distribution only; that is, disallow it from receiving work.
workerLocales : [] locale – An array of locales over which to distribute the work. Defaults to Locales (all available locales).
- Yields
Indices in the range c.
This iterator is equivalent to a distributed version of the dynamic policy of OpenMP.
Given an input range (or domain) c, each locale (except the calling locale, if coordinated is true) receives chunks of size localeChunkSize from c (or the remaining iterations if there are fewer than localeChunkSize). Each locale then distributes sub-chunks of size chunkSize as tasks, using the dynamic iterator from the DynamicIters module.

Available for serial and zippered contexts.
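The following sketch shows typical use; the array, bounds, and work procedure are illustrative assumptions, not part of the module:

use DistributedIters;

config const n = 1000;
var results: [1..n] real;

// A stand-in for an iteration whose cost varies with the index (hypothetical).
proc simulatedWork(i: int): real {
  var acc = 0.0;
  for j in 1..i do acc += 1.0 / j;
  return acc;
}

// Hand out chunks of 8 indices at a time; idle tasks on each locale
// grab the next chunk, so uneven iteration costs balance dynamically.
forall i in distributedDynamic(1..n, chunkSize=8) do
  results[i] = simulatedWork(i);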
- iter distributedGuided(c, numTasks: int = 0, parDim: int = 0, minChunkSize: int = 1, coordinated: bool = false, workerLocales = Locales)¶
- Arguments
c : range(?) or domain – The range (or domain) to iterate over. The range (domain) size must be positive.
numTasks : int – The number of tasks to use. Must be nonnegative. If this argument has value 0, the iterator will use the value indicated by dataParTasksPerLocale.
parDim : int – If c is a domain, then this specifies the dimension index to parallelize across. Must be non-negative and less than the rank of the domain c. Defaults to 0.
minChunkSize : int – The smallest allowable chunk size. Must be positive. Defaults to 1.
coordinated : bool – If true (and multi-locale), then have the locale invoking the iterator coordinate task distribution only; that is, disallow it from receiving work.
workerLocales : [] locale – An array of locales over which to distribute the work. Defaults to Locales (all available locales).
- Yields
Indices in the range c.
This iterator is equivalent to a distributed version of the guided policy of OpenMP.
Given an input range (or domain) c, each locale (except the calling locale, if coordinated is true) receives chunks of approximately exponentially decreasing size, until the remaining iterations reach a minimum value, minChunkSize, or there are no remaining iterations in c. The chunk size is the number of unassigned iterations divided by the number of locales. Each locale then distributes sub-chunks as tasks, where each sub-chunk size is the number of unassigned local iterations divided by the number of tasks, numTasks, and decreases approximately exponentially to 1. The splitting strategy is therefore adaptive.

Available for serial and zippered contexts.
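A sketch of the guided variant under the same illustrative setup (the array and bounds are assumptions, not part of the module):

use DistributedIters;

config const n = 1000;
var results: [1..n] real;

// Early chunks are large (remaining iterations divided by the number of
// locales) and shrink toward minChunkSize as the iteration space drains.
// coordinated only has an effect in multi-locale runs.
forall i in distributedGuided(1..n, minChunkSize=4, coordinated=true) do
  results[i] = i:real / n;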