<!--===- docs/DoConcurrent.md

Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
See https://llvm.org/LICENSE.txt for license information.
SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception

-->

# `DO CONCURRENT` isn't necessarily concurrent

A variant form of Fortran's primary looping construct was
added to the Fortran 2008 language standard with the apparent
intent of enabling more effective automatic parallel execution of code
written in the standard language without the use of
non-standard directives.
Spelled `DO CONCURRENT`, the construct takes a rectilinear iteration
space specification like `FORALL` and allows us to write
a multidimensional loop nest construct with a single `DO CONCURRENT`
statement and a single terminating `END DO` statement.
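For example (a sketch with illustrative array and bound names), a
three-deep loop nest can be written as a single construct:
```
  DO CONCURRENT (I=1:L, J=1:M, K=1:N)
    D(I,J,K) = E(I,J,K) + F(I,J,K)
  END DO
```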
Within the body of a `DO CONCURRENT` loop the program must respect
a long list of restrictions on its use of Fortran language features.
Actions that obviously can't be executed in parallel or that
don't allow all iterations to execute are prohibited.
These include the following (a short non-conforming sketch follows the list):
* Control flow statements that would prevent the loop nest from
  executing all its iterations: `RETURN`, `EXIT`, and any
  `GOTO` or `CYCLE` that leaves the construct.
* Image control statements: `STOP`, `SYNC`, `LOCK`/`UNLOCK`, `EVENT`,
  and `ALLOCATE`/`DEALLOCATE` of a coarray.
* Calling a procedure that is not `PURE`.
* Deallocation of any polymorphic entity, as that could cause
  an impure FINAL subroutine to be called.
* Messing with the IEEE floating-point control and status flags.
* In addition, the program must accept some restrictions on data flow
  between iterations (i.e., none is permitted) and on liveness of
  modified objects after the loop.
  (The details are spelled out later.)
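
For instance, a loop like this sketch is not a conforming `DO CONCURRENT`
(here `LOG_VALUE` is a hypothetical name standing for any procedure that
is not `PURE`):
```
  DO CONCURRENT (J=1:N)
    IF (A(J) < 0.) EXIT     ! EXIT may not terminate the construct
    CALL LOG_VALUE(A(J))    ! calling a procedure that is not PURE is prohibited
  END DO
```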
In return for accepting these restrictions, a `DO CONCURRENT` might
compile into code that exploits the parallel features of the target
machine to run the iterations of the `DO CONCURRENT` construct.
No OpenACC or OpenMP directives would necessarily be required.

But it turns out that these rules, though *necessary* for safe parallel
execution, are not *sufficient*.
One may write conforming `DO CONCURRENT` constructs that cannot
be safely parallelized by a compiler; worse, one may write conforming
`DO CONCURRENT` constructs whose parallelizability a compiler cannot
determine even in principle -- forcing a conforming compiler to
assume the worst and generate sequential code.

The Fortran language standard does not actually define `DO CONCURRENT` as a
concurrent construct, or even as a construct that imposes sufficient
requirements on the programmer to allow for parallel execution.
`DO CONCURRENT` is instead defined as executing the iterations
of the loop in some arbitrary order (see subclause 11.1.7.4.3 paragraph 3).

A `DO CONCURRENT` construct cannot modify an object in one iteration
and expect to be able to read it in another, or read it in one before it gets
modified by another -- there's no way to synchronize inter-iteration
communication with critical sections or atomics.
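
For instance, a straightforward sum like the following sketch is not a
conforming `DO CONCURRENT`, since each iteration reads a value of `S`
that other iterations may have modified (`S`, `A`, and `N` are
illustrative names):
```
  S = 0.
  DO CONCURRENT (J=1:N)
    S = S + A(J)   ! nonconforming: S flows between iterations
  END DO
```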
But a conforming `DO CONCURRENT` construct *can* modify an object in
multiple iterations of the loop so long as it only reads from that
object *after* having modified it earlier in the *same* iteration.
(See 11.1.7.5 paragraph 4 for the details.)

For example:
```
  DO CONCURRENT (J=1:N)
    TMP = A(J) + B(J)
    C(J) = TMP
  END DO
  ! And TMP is undefined afterwards
```
The scalar variable `TMP` is used in this loop in a way that conforms
to the standard, as every use of `TMP` follows a definition that appears
earlier in the same iteration.

The idea, of course, is that a parallelizing compiler isn't required to
use the same word of memory to hold the value of `TMP`;
for parallel execution, `TMP` can be _localized_.
This means that the loop can be internally rewritten so that each
iteration works with its own instance of `TMP`.
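One way to picture that rewriting (a sketch; the nested `BLOCK` construct
here is just one means by which an implementation might realize the
localization) is as if the loop had been written:
```
  DO CONCURRENT (J=1:N)
    BLOCK
      REAL :: TMP
      TMP = A(J) + B(J)
      C(J) = TMP
    END BLOCK
  END DO
```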
Any risk of data flow between the iterations is thus removed.

## The identification problem

The automatic localization rules of `DO CONCURRENT` that allow
usage like `TMP` above are not limited to simple local scalar
variables.
They also apply to arbitrary variables, and thus may apply
in cases that a compiler cannot determine exactly due to
the presence of indexing, indirection, and interprocedural data flow.

Let's see why this turns out to be a problem.
Consider:
```
  DO CONCURRENT (J=1:N)
    T(IX(J)) = A(J) + B(J)
    C(J) = T(IY(J))
  END DO
```
This loop conforms to the standard language only if,
whenever `IX(J)` equals `IY(J')` for any distinct pair of iterations
`J` and `J'`,
the load in iteration `J'` is reading a value that was stored earlier in the
same iteration -- so `IX(J')==IY(J')`, and hence `IX(J)==IX(J')` too.
Otherwise, a load in one iteration might depend on a store
performed in another iteration, which is not allowed.

When all values of `IX(J)` are distinct, and the program conforms
to the restrictions of `DO CONCURRENT`, a compiler can parallelize
the construct easily without applying localization to `T(...)`.
And when some values of `IX(J)` are duplicates, a compiler can parallelize
the loop by forwarding the stored value to the load in those
iterations.
But at compilation time, there's _no way to distinguish_ these
cases in general, and a conservative implementation has to assume
the worst and run the loop's iterations serially.
(Or compare `IX(J)` with `IY(J)` at runtime and forward the
stored value conditionally, which adds overhead and quickly becomes
impractical in loops with multiple loads and stores.)
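A sketch of what that conditional forwarding might look like for this
loop (illustrative only; a compiler would apply such a rewrite
internally rather than expect the programmer to write it):
```
  DO CONCURRENT (J=1:N)
    TMP = A(J) + B(J)
    T(IX(J)) = TMP
    IF (IY(J) == IX(J)) THEN
      C(J) = TMP        ! forward the value stored by this iteration
    ELSE
      C(J) = T(IY(J))   ! element was not stored by this iteration
    END IF
  END DO
```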
Pointers make the problem worse.
In a loop like
```
  TYPE :: T
    REAL, POINTER :: P
  END TYPE
  TYPE(T) :: T1(N), T2(N)
  DO CONCURRENT (J=1:N)
    T1(J)%P = A(J) + B(J)
    C(J) = T2(J)%P
  END DO
```
we have the same kind of ambiguity from the compiler's perspective.
Are the targets of the pointers used for the stores all distinct
from the targets of the pointers used for the loads?
The programmer may know that they are, but a compiler
cannot; and there is no syntax by which one can stipulate
that they are distinct.

## The global variable localization problem

Here is another case; the loop body is an illustrative sketch in which
the global variable `T` is used as a temporary:
```
  MODULE M
    REAL :: T
  END MODULE
  ...
  USE M
  INTERFACE
    PURE REAL FUNCTION F(X)
      REAL, INTENT(IN) :: X
    END FUNCTION
  END INTERFACE
  DO CONCURRENT (J=1:N)
    T = A(J) + B(J)
    D(J) = F(A(J)) + T
  END DO
```
The variable `T` is obviously meant to be localized.
However, a compiler can't be sure that the pure function `F`
doesn't read from `T`; if it does, there wouldn't be a
practical way to convey the localized copy to it.

In summary, standard Fortran defines `DO CONCURRENT` as a serial
construct with a sheaf of constraints that we assume are intended
to enable straightforward parallelization without
all of the complexity of defining threading models or shared memory semantics,
with the addition of an automatic localization rule that provides
convenient temporary objects without requiring the use of nested
`BLOCK` or `ASSOCIATE` constructs.
But the language allows ambiguous cases in which a compiler can neither
1. prove that automatic localization *is* required for a given
   object in every iteration, nor
1. prove that automatic localization *isn't* required in any iteration.

## Locality specifiers

The Fortran 2018 standard added "locality specifiers" to the
`DO CONCURRENT` statement.
These allow one to define some variable names as being `LOCAL` or
`SHARED`, overriding the automatic localization rule so that it
applies only in the remaining cases of "unspecified" locality.

`LOCAL` variables are those that can be defined by more than one
iteration but are referenced only after having been defined
earlier in the same iteration.
`SHARED` variables are those that, if defined in
any iteration, are not defined or referenced in any other iteration.

(There is also a `LOCAL_INIT` specifier that is not relevant to the
problem at hand, and a `DEFAULT(NONE)` specifier that requires a
locality specifier be present for every variable mentioned in the
`DO CONCURRENT` construct.)
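For concreteness, a sketch of the syntax (reusing names from the earlier
examples; the loop body itself is illustrative):
```
  ! assumes the values of IX(J) are all distinct
  DO CONCURRENT (J=1:N) LOCAL(TMP) SHARED(T)
    TMP = A(J) + B(J)
    T(IX(J)) = TMP
  END DO
```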
These locality specifiers can help resolve some otherwise ambiguous
cases of localization, but they're not a complete solution to the problems
described above.

First, the specifiers allow explicit localization of objects
(like the scalar `T` in `MODULE M` above) that are not local variables
of the enclosing subprogram.
`DO CONCURRENT` still allows a pure procedure called from the loop
to reference `T`, and so explicit localization just confirms the
worst-case assumptions about interprocedural data flow
within an iteration that a compiler must make anyway.

Second, the specifiers allow arbitrary variables to be localized,
not just simple scalars.
One may localize a million-element array of derived type
with allocatable components to be created in each iteration.
(It is not clear whether localized objects are finalized;
the standard does not say.)

Third, as Fortran uses context to distinguish references to
pointers from (de)references to their targets, it's not clear
whether `LOCAL(PTR)` localizes a pointer, its target, or both.
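A sketch of the question (the names here are hypothetical):
```
  REAL, TARGET :: BUF(N)
  REAL, POINTER :: PTR
  PTR => BUF(1)
  DO CONCURRENT (J=1:N) LOCAL(PTR)
    ! Is PTR a private pointer in each iteration, a private copy of
    ! its target, or both?
    PTR => BUF(J)
    PTR = A(J)
  END DO
```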
Fourth, the specifiers can be applied only to variable _names_,
not to any designator with subscripts or component references.
One may have defined a derived type to hold a representation
of a sparse matrix, using `ALLOCATABLE` components to store its
packed data and indexing structures, but a program cannot localize
some parts of it and share the rest.
(Perhaps one may wrap `ASSOCIATE` constructs around the
`DO CONCURRENT` construct;
the interaction between locality specifiers and construct entities is
not clearly defined in the language.)

In the example above that defines `T(IX(J))` and reads from `T(IY(J))`,
the locality specifiers can't be used to share those elements of `T()`
that are modified at most once and localize the cases where
`IX(J)` is a duplicate and `IY(J)==IX(J)`.

Last, when a loop both defines and references many shared objects,
including potential references to globally accessible objects
in called procedures, one may need to name all of them in a `SHARED`
specifier.

These problems have been presented to the J3 Fortran language
standards committee, and
recent [e-mail discussions](https://mailman.j3-fortran.org/pipermail/j3/2020-July/thread.html)
did not include an intent to address them in future standards or corrigenda.
The most effective-looking response -- which was essentially "just use
`DEFAULT(SHARED)` to disable all automatic localization" -- is not a
viable option, since the language does not include such a specifier!

Programmers writing `DO CONCURRENT` loops that are safely parallelizable
need an effective means to convey to compilers that stronger
guarantees hold than the weak stipulations required by
today's `DO CONCURRENT`, without having to write verbose and
error-prone locality specifiers (even when those would suffice).
Specifically, an easy means is required to stipulate that localization
should apply at most to the obvious cases of local non-pointer
non-allocatable scalars.

In the LLVM Fortran compiler project (a/k/a "flang", "f18") we considered
several solutions to this problem.
1. Add syntax (e.g., `DO PARALLEL` or `DO CONCURRENT() DEFAULT(PARALLEL)`)
   by which one can inform the compiler that it should localize only
   the obvious cases of simple local scalars.
   Such syntax seems unlikely to ever be standardized, so its usage
   would be nonportable.
1. Add a command-line option &/or a source directive to stipulate
   the stronger guarantees. Obvious non-parallelizable usage in the construct
   would elicit a stern warning. The `DO CONCURRENT` loops in the source
   would continue to be portable to other compilers.
1. Assume that these stronger conditions hold by default, and add a command-line
   option &/or a source directive to "opt out" back to the weaker
   requirements of the standard language
   in the event that the program contains one of those inherently
   non-parallelizable `DO CONCURRENT` loops that perhaps should never have
   been possible to write in a conforming program in the first place.
   Actual parallel `DO CONCURRENT` constructs would produce parallel
   code for users who would otherwise be surprised to learn about these
   problems in the language.
   But this option could lead to non-standard behavior for codes that depend,
   accidentally or not, on non-parallelizable implicit localization.
1. Accept the standard as it exists, do the best job of automatic
   parallelization that can be done, and refer dissatisfied users to J3.
   This would be avoiding the problem.

None of these options is without a fairly obvious disadvantage.
The best option seems to be the one that assumes that users who write
`DO CONCURRENT` constructs are doing so with the intent to write parallel code.

As of August 2020, we observe that the GNU Fortran compiler (10.1) does not
yet implement the Fortran 2018 locality clauses, but will parallelize some
`DO CONCURRENT` constructs without ambiguous data dependences when the automatic
parallelization option is enabled.

The Intel Fortran compiler supports the new locality clauses and will parallelize
some `DO CONCURRENT` constructs when the automatic parallelization option is enabled.
When OpenMP is enabled, ifort reports that all `DO CONCURRENT` constructs are
parallelized, but they seem to execute in a serial fashion when data flow
between iterations is present.