1 # Ownership-based Buffer Deallocation
5 One-Shot Bufferize does not deallocate any buffers that it allocates. After
6 running One-Shot Bufferize, the resulting IR may have a number of `memref.alloc`
ops, but no `memref.dealloc` ops. Buffer deallocation is delegated to the
`-ownership-based-buffer-deallocation` pass. This pass supersedes the now
deprecated `-buffer-deallocation` pass, which does not work well with
One-Shot Bufferize.
12 On a high level, buffers are "owned" by a basic block. Ownership materializes
13 as an `i1` SSA value and can be thought of as "responsibility to deallocate". It
14 is conceptually similar to `std::unique_ptr` in C++.
There are a few additional preprocessing and postprocessing passes that should be
17 run together with the ownership-based buffer deallocation pass. The recommended
18 compilation pipeline is as follows:

    one-shot-bufferize
           |    <- it is recommended to perform all bufferization here at the
           |       latest; any allocations inserted after this point have to
           V       be handled manually
    ownership-based-buffer-deallocation
           V
    canonicalize <- mostly for scf.if simplifications
           V
    buffer-deallocation-simplification
           V    <- from this point onwards no tensor values are allowed
    lower-deallocations
           V
    canonicalize

40 The entire deallocation pipeline (excluding `-one-shot-bufferize`) is exposed
41 as `-buffer-deallocation-pipeline`.
43 The ownership-based buffer deallocation pass processes operations implementing
44 `FunctionOpInterface` one-by-one without analysing the call-graph.
45 This means that there have to be [some rules](#function-boundary-abi) on how
46 MemRefs are handled when being passed from one function to another. The rest of
47 the pass revolves heavily around the `bufferization.dealloc` operation which is
48 inserted at the end of each basic block with appropriate operands and should be
49 optimized using the Buffer Deallocation Simplification pass
50 (`--buffer-deallocation-simplification`) and the regular canonicalizer
51 (`--canonicalize`). Lowering the result of the
52 `-ownership-based-buffer-deallocation` pass directly using
53 `--convert-bufferization-to-memref` without beforehand optimization is not
54 recommended as it will lead to very inefficient code (the runtime-cost of
`bufferization.dealloc` is `O(|memref|^2 + |memref| * |retained|)`).
57 ## Function boundary ABI
59 The Buffer Deallocation pass operates on the level of operations implementing
60 the `FunctionOpInterface`. Such operations can take MemRefs as arguments, but
61 also return them. To ensure compatibility among all functions (including
62 external ones), some rules have to be enforced:
63 * When a MemRef is passed as a function argument, ownership is never acquired.
64 It is always the caller's responsibility to deallocate such MemRefs.
65 * Returning a MemRef from a function always passes ownership to the caller,
66 i.e., it is also the caller's responsibility to deallocate memrefs returned
67 from a called function.
68 * A function must not return a MemRef with the same allocated base buffer as
69 one of its arguments (in this case a copy has to be created). Note that in
this context two subviews of the same buffer that don't overlap are also
considered to alias.
73 For external functions (e.g., library functions written externally in C), the
74 externally provided implementation has to adhere to these rules and they are
75 just assumed by the buffer deallocation pass. Functions on which the
76 deallocation pass is applied and for which the implementation is accessible are
77 modified by the pass such that the ABI is respected (i.e., buffer copies are
78 inserted when necessary).
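These rules can be modeled with a small sketch (Python, purely illustrative; `apply_call_abi` and the SSA value names are hypothetical, not part of MLIR):

```python
# Hypothetical model of the function-boundary ABI from the caller's
# perspective; buffer names are illustrative SSA value names.

def apply_call_abi(caller_owned: set, args: list, results: list) -> None:
    # A result must not share its allocated base buffer with an argument;
    # the callee has to insert a copy in that case.
    assert not set(results) & set(args), "callee must return fresh buffers"
    # Ownership of arguments is never acquired by the callee, so every
    # argument the caller owned before the call it still owns afterwards.
    # Returned MemRefs always pass ownership to the caller.
    caller_owned.update(results)

owned = {"%alloc"}                      # the caller owns %alloc
apply_call_abi(owned, ["%alloc"], ["%ret"])
assert owned == {"%alloc", "%ret"}      # the caller must deallocate both
```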
80 ## Inserting `bufferization.dealloc` operations
82 `bufferization.dealloc` and ownership indicators are the main abstractions in
83 the ownership-based buffer deallocation pass. `bufferization.dealloc`
84 deallocates all given buffers if the respective ownership indicator is set and
85 there is no aliasing buffer in the retain list.
87 ![branch_example_pre_move](/includes/img/bufferization_dealloc_op.svg)
89 `bufferization.dealloc` operations are unconditionally inserted at the end of
90 each basic block (just before the terminator). The majority of the pass is about
91 finding the correct operands for this operation. There are three variadic
92 operand lists to be populated, the first contains all MemRef values that may
93 need to be deallocated, the second list contains their associated ownership
94 values (of `i1` type), and the third list contains MemRef values that are still
needed at a later point and should thus not be deallocated (e.g., yielded or
returned values).
98 `bufferization.dealloc` allows us to deal with any kind of aliasing behavior: it
99 lowers to runtime aliasing checks when not enough information can be collected
100 statically. When enough aliasing information is statically available, operands
101 or the entire op may fold away.
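As a mental model, the semantics of a single `bufferization.dealloc` can be sketched in Python (illustrative only; buffers are represented by their base pointers, and aliasing is modeled as pointer equality):

```python
# Illustrative model of `bufferization.dealloc` semantics: a buffer is freed
# iff its condition is set, it does not alias a retained value, and it was
# not already covered by an earlier entry of the same op (no double-free).

def dealloc_semantics(memrefs, conditions, retained):
    """Return (base pointers to free, new ownership per retained value)."""
    to_free = []
    ownership = [False] * len(retained)
    for i, (buf, cond) in enumerate(zip(memrefs, conditions)):
        # An aliasing retained value takes over the deallocation condition
        # as its new ownership indicator.
        for j, ret in enumerate(retained):
            if ret == buf:
                ownership[j] = ownership[j] or cond
        if cond and buf not in retained and buf not in memrefs[:i]:
            to_free.append(buf)
    return to_free, ownership

# Buffer 1 aliases a retained value, buffer 2 is freed, buffer 3's
# condition is not set.
assert dealloc_semantics([1, 2, 3], [True, True, False], [1]) == ([2], [True])
```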
To keep track of deallocation responsibility, the pass uses ownership
indicators for MemRefs: an `i1` value associated with any SSA value of
`memref` type, indicating whether the basic block in which it was materialized
has ownership of this MemRef. Ideally,
108 this is a constant `true` or `false`, but might also be a non-constant SSA
109 value. To keep track of those ownership values without immediately materializing
110 them (which might require insertion of `bufferization.clone` operations or
111 operations checking for aliasing at runtime at positions where we don't actually
112 need a materialized value), we use the `Ownership` class. This class represents
113 the ownership in three states forming a lattice on a partial order:

    forall X in SSA values. uninitialized < unique(X) < unknown
    forall X, Y in SSA values.
        unique(X) == unique(Y) iff X and Y always evaluate to the same value
        unique(X) != unique(Y) otherwise

120 Intuitively, the states have the following meaning:
* Uninitialized: the ownership is not initialized yet; this is the default
state. Once an operation has been fully processed, the ownership of its
MemRef-typed results should not be uninitialized anymore.
* Unique: there is a specific SSA value that can be queried to check ownership
without materializing any additional IR.
* Unknown: no specific SSA value is available without materializing additional
IR. Typically, this is because two ownerships in 'Unique' state would have to
be merged manually (e.g., the result of an `arith.select` has either the
ownership of the 'then' or the 'else' case depending on the condition value;
inserting another `arith.select` for the ownership values can perform the
merge and provide a 'Unique' ownership for the result). In the general case,
however, this 'Unknown' state has to be assigned.
Implied by the above partial order, the pass combines two ownerships in the
following way:
137 | Ownership 1 | Ownership 2 | Combined Ownership |
138 |:--------------|:--------------|:-------------------|
139 | uninitialized | uninitialized | uninitialized |
140 | unique(X) | uninitialized | unique(X) |
141 | unique(X) | unique(X) | unique(X) |
142 | unique(X) | unique(Y) | unknown |
143 | unknown | unique | unknown |
144 | unknown | uninitialized | unknown |

All other combinations follow by symmetry.

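The combination table amounts to a join on the lattice; a minimal sketch in Python follows (illustrative only; the pass itself implements this in the C++ `Ownership` class):

```python
# Python sketch of the ownership lattice join; state encodings are
# illustrative, not the actual C++ representation.

UNINITIALIZED = ("uninitialized",)
UNKNOWN = ("unknown",)

def unique(ssa_value):
    return ("unique", ssa_value)

def combine(a, b):
    """Least upper bound of two ownership states (commutative)."""
    if a == UNINITIALIZED:
        return b
    if b == UNINITIALIZED:
        return a
    if a == b and a[0] == "unique":
        return a            # unique(X) joined with unique(X)
    return UNKNOWN          # unique(X) with unique(Y), X != Y, or unknown

assert combine(UNINITIALIZED, unique("%a")) == unique("%a")
assert combine(unique("%a"), unique("%a")) == unique("%a")
assert combine(unique("%a"), unique("%b")) == UNKNOWN
assert combine(UNKNOWN, UNINITIALIZED) == UNKNOWN
```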
147 **Collecting the list of MemRefs that potentially need to be deallocated**
149 For a given block, the list of MemRefs that potentially need to be deallocated
150 at the end of that block is computed by keeping track of all values for which
151 the block potentially takes over ownership. This includes MemRefs provided as
152 basic block arguments, interface handlers for operations like `memref.alloc` and
153 `func.call`, but also liveness information in regions with multiple basic
blocks. More concretely, it is computed by taking the MemRefs in the 'in' set
of the liveness analysis of the current basic block B, adding the MemRef block
arguments and the set of MemRefs allocated in B itself (as determined by the
interface handlers), and then subtracting the set of MemRefs deallocated in B
(also determined by the interface handlers).
160 Note that we don't have to take the intersection of the liveness 'in' set with
161 the 'out' set of the predecessor block because a value that is in the 'in' set
162 must be defined in an ancestor block that dominates all direct predecessors and
thus the 'in' set of this block is a subset of the 'out' set of each
predecessor.

    memrefs = filter((liveIn(block) U allocated(block) U arguments(block))
                       \ deallocated(block), isMemRef)

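In executable form, the set computation reads roughly as follows (a Python sketch with illustrative value names, not an MLIR API):

```python
# Illustrative Python version of the dealloc-candidate set computation.

def dealloc_candidates(live_in, allocated, arguments, deallocated, is_memref):
    """MemRefs a block may have to deallocate at its end."""
    return {v for v in (live_in | allocated | arguments) - deallocated
            if is_memref(v)}

candidates = dealloc_candidates(
    live_in={"%0", "%idx"}, allocated={"%alloc"}, arguments={"%arg"},
    deallocated={"%0"}, is_memref=lambda v: v != "%idx")
# %0 was already deallocated and %idx is not a MemRef.
assert candidates == {"%alloc", "%arg"}
```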
171 The list of conditions for the second variadic operands list of
172 `bufferization.dealloc` is computed by querying the stored ownership value for
173 each of the MemRefs collected as described above. The ownership state is updated
174 by the interface handlers while processing the basic block.
176 **Collecting the list of MemRefs to retain**
178 Given a basic block B, the list of MemRefs that have to be retained can be
179 different for each successor block S. For the two basic blocks B and S and the
180 values passed via block arguments to the destination block S, we compute the
181 list of MemRefs that have to be retained in B by taking the MemRefs in the
182 successor operand list of the terminator and the MemRefs in the 'out' set of the
liveness analysis for B intersected with the 'in' set of the destination block
S.
186 This list of retained values makes sure that we cannot run into use-after-free
187 situations even if no aliasing information is present at compile-time.

    toRetain = filter(successorOperands + (liveOut(fromBlock) intersect
                                           liveIn(toBlock)), isMemRef)

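The same computation as a Python sketch (illustrative names only):

```python
# Illustrative Python version of the retain-list computation.

def retain_list(successor_operands, live_out, live_in_succ, is_memref):
    """MemRefs that must not be deallocated for a given successor."""
    values = successor_operands + sorted(live_out & live_in_succ)
    return [v for v in values if is_memref(v)]

retained = retain_list(
    successor_operands=["%alloc"], live_out={"%select", "%cond"},
    live_in_succ={"%select"}, is_memref=lambda v: v != "%cond")
assert retained == ["%alloc", "%select"]
```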
194 ## Supported interfaces
196 The pass uses liveness analysis and a few interfaces:
* `FunctionOpInterface`
* `BranchOpInterface`
199 * `MemoryEffectOpInterface`
200 * `RegionBranchOpInterface`
201 * `RegionBranchTerminatorOpInterface`
Due to insufficient information provided by these interfaces, it also special-cases
204 on the `cf.cond_br` operation and makes some assumptions about operations
205 implementing the `RegionBranchOpInterface` at the moment, but improving the
206 interfaces would allow us to remove those dependencies in the future.
210 The Buffer Deallocation pass has some requirements and limitations on the input
IR. These are checked at the beginning of the pass, and errors are emitted
accordingly:
213 * The set of interfaces the pass operates on must be implemented (correctly).
E.g., if an operation with a nested region does not implement the
`RegionBranchOpInterface`, an error is emitted because the
216 pass cannot know the semantics of the nested region (and does not make any
217 default assumptions on it).
218 * No explicit control-flow loops are present. Currently, only loops using
219 structural-control-flow are supported. However, this limitation could be
220 lifted in the future.
221 * Deallocation operations should not be present already. The pass should
handle them correctly already (at least in most cases), but this is not
supported yet due to insufficient testing.
224 * Terminators must implement either `RegionBranchTerminatorOpInterface` or
225 `BranchOpInterface`, but not both. Terminators with more than one successor
226 are not supported (except `cf.cond_br`). This is not a fundamental
227 limitation, but there is no use-case justifying the more complex
228 implementation at the moment.
232 The following example contains a few interesting cases:
233 * Basic block arguments are modified to also pass along the ownership
indicator, but not for entry blocks, where the function boundary ABI is
applied instead.
236 * The result of `arith.select` initially has 'Unknown' assigned as ownership,
237 but once the `bufferization.dealloc` operation is inserted it is put in the
238 'retained' list (since it has uses in a later basic block) and thus the
239 'Unknown' ownership can be replaced with a 'Unique' ownership using the
240 corresponding result of the dealloc operation.
241 * The `cf.cond_br` operation has more than one successor and thus has to
242 insert two `bufferization.dealloc` operations (one for each successor).
243 While they have the same list of MemRefs to deallocate (because they perform
244 the deallocations for the same block), it must be taken into account that
245 some MemRefs remain *live* for one branch but not the other (thus set
246 intersection is performed on the *live-out* of the current block and the
247 *live-in* of the target block). Also, `cf.cond_br` supports separate
248 forwarding operands for each successor. To make sure that no MemRef is
249 deallocated twice (because there are two `bufferization.dealloc` operations
250 with the same MemRefs to deallocate), the condition operands are adjusted to
251 take the branch condition into account. While a generic lowering for such
252 terminator operations could be implemented, a specialized implementation can
253 take all the semantics of this particular operation into account and thus
254 generate a more efficient lowering.
257 func.func @example(%memref: memref<?xi8>, %select_cond: i1, %br_cond: i1) {
258 %alloc = memref.alloc() : memref<?xi8>
259 %alloca = memref.alloca() : memref<?xi8>
260 %select = arith.select %select_cond, %alloc, %alloca : memref<?xi8>
261 cf.cond_br %br_cond, ^bb1(%alloc : memref<?xi8>), ^bb1(%memref : memref<?xi8>)
262 ^bb1(%bbarg: memref<?xi8>):
test.copy(%bbarg, %select) : (memref<?xi8>, memref<?xi8>)
return
}
268 After running `--ownership-based-buffer-deallocation`, it looks as follows:
271 // Function boundary ABI: ownership of `%memref` will never be acquired.
272 func.func @example(%memref: memref<?xi8>, %select_cond: i1, %br_cond: i1) {
273 %false = arith.constant false
274 %true = arith.constant true
276 // The ownership of a MemRef defined by the `memref.alloc` operation is always
277 // assigned to be 'true'.
278 %alloc = memref.alloc() : memref<?xi8>
280 // The ownership of a MemRef defined by the `memref.alloca` operation is
281 // always assigned to be 'false'.
282 %alloca = memref.alloca() : memref<?xi8>
284 // The ownership of %select will be the join of the ownership of %alloc and
285 // the ownership of %alloca, i.e., of %true and %false. Because the pass does
286 // not know about the semantics of the `arith.select` operation (unless a
287 // custom handler is implemented), the ownership join will be 'Unknown'. If
288 // the materialized ownership indicator of %select is needed, either a clone
289 // has to be created for which %true is assigned as ownership or the result
// of a `bufferization.dealloc` where %select is in the retain list has to be
// used.
292 %select = arith.select %select_cond, %alloc, %alloca : memref<?xi8>
294 // We use `memref.extract_strided_metadata` to get the base memref since it is
295 // not allowed to pass arbitrary memrefs to `memref.dealloc`. This property is
// already enforced for `bufferization.dealloc`.
297 %base_buffer_memref, ... = memref.extract_strided_metadata %memref
298 : memref<?xi8> -> memref<i8>, index, index, index
299 %base_buffer_alloc, ... = memref.extract_strided_metadata %alloc
300 : memref<?xi8> -> memref<i8>, index, index, index
301 %base_buffer_alloca, ... = memref.extract_strided_metadata %alloca
302 : memref<?xi8> -> memref<i8>, index, index, index
304 // The deallocation conditions need to be adjusted to incorporate the branch
305 // condition. In this example, this requires only a single negation, but might
306 // also require multiple arith.andi operations.
307 %not_br_cond = arith.xori %true, %br_cond : i1
309 // There are two dealloc operations inserted in this basic block, one per
310 // successor. Both have the same list of MemRefs to deallocate and the
311 // conditions only differ by the branch condition conjunct.
312 // Note, however, that the retained list differs. Here, both contain the
313 // %select value because it is used in both successors (since it's the same
// block), but the value passed via block argument differs (%memref vs.
// %alloc).
316 %10:2 = bufferization.dealloc
317 (%base_buffer_memref, %base_buffer_alloc, %base_buffer_alloca
318 : memref<i8>, memref<i8>, memref<i8>)
319 if (%false, %br_cond, %false)
320 retain (%alloc, %select : memref<?xi8>, memref<?xi8>)
322 %11:2 = bufferization.dealloc
323 (%base_buffer_memref, %base_buffer_alloc, %base_buffer_alloca
324 : memref<i8>, memref<i8>, memref<i8>)
325 if (%false, %not_br_cond, %false)
326 retain (%memref, %select : memref<?xi8>, memref<?xi8>)
328 // Because %select is used in ^bb1 without passing it via block argument, we
// need to update its ownership value here by merging the ownership values
// returned by the dealloc operations.
331 %new_ownership = arith.select %br_cond, %10#1, %11#1 : i1
333 // The terminator is modified to pass along the ownership indicator values
334 // with each MemRef value.
335 cf.cond_br %br_cond, ^bb1(%alloc, %10#0 : memref<?xi8>, i1),
336 ^bb1(%memref, %11#0 : memref<?xi8>, i1)
338 // All non-entry basic blocks are modified to have an additional i1 argument for
339 // each MemRef value in the argument list.
340 ^bb1(%13: memref<?xi8>, %14: i1): // 2 preds: ^bb0, ^bb0
341 test.copy(%13, %select) : (memref<?xi8>, memref<?xi8>)
343 %base_buffer_13, ... = memref.extract_strided_metadata %13
344 : memref<?xi8> -> memref<i8>, index, index, index
345 %base_buffer_select, ... = memref.extract_strided_metadata %select
346 : memref<?xi8> -> memref<i8>, index, index, index
348 // Here, we don't have a retained list, because the block has no successors
349 // and the return has no operands.
350 bufferization.dealloc (%base_buffer_13, %base_buffer_select
351 : memref<i8>, memref<i8>)
if (%14, %new_ownership)
return
}
357 ## Buffer Deallocation Simplification Pass
359 The [semantics of the `bufferization.dealloc` operation](#bufferizationdealloc-bufferizationdeallocop)
360 provide a lot of opportunities for optimizations which can be conveniently split
361 into patterns using the greedy pattern rewriter. Some of those patterns need
362 access to additional analyses such as an analysis that can determine whether two
363 MemRef values must, may, or never originate from the same buffer allocation.
364 These patterns are collected in the Buffer Deallocation Simplification pass,
365 while patterns that don't need additional analyses are registered as part of the
366 regular canonicalizer pass. This pass is best run after
367 `--ownership-based-buffer-deallocation` followed by `--canonicalize`.
369 The pass applies patterns for the following simplifications:
370 * Remove MemRefs from retain list when guaranteed to not alias with any value
in the 'memref' operand list. This avoids an additional aliasing check with
the removed value.
373 * Split off values in the 'memref' list to new `bufferization.dealloc`
374 operations only containing this value in the 'memref' list when it is
375 guaranteed to not alias with any other value in the 'memref' list. This
376 avoids at least one aliasing check at runtime and enables using a more
377 efficient lowering for this new `bufferization.dealloc` operation.
* Remove values from the 'memref' operand list when they are guaranteed to
alias with at least one value in the 'retain' list and cannot alias any
other value in the 'retain' list.
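Assuming an alias-analysis oracle that answers "must", "may", or "never" for a pair of values, the first pattern can be sketched as follows (Python, illustrative only; `simplify_retain_list` and `oracle` are hypothetical names):

```python
# Python sketch of the first simplification pattern: drop retained values
# that provably never alias any deallocated memref. Their runtime aliasing
# check is unnecessary and their ownership result folds to 'false'.

def simplify_retain_list(memrefs, retained, alias):
    """Keep only retained values that may or must alias some memref."""
    return [r for r in retained
            if any(alias(r, m) != "never" for m in memrefs)]

def oracle(a, b):
    # Toy oracle: values alias exactly when they are the same allocation.
    return "must" if a == b else "never"

# %other provably never aliases %m, so it can be dropped from the list.
assert simplify_retain_list(["%m"], ["%m", "%other"], oracle) == ["%m"]
```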
382 ## Lower Deallocations Pass
384 The `-lower-deallocations` pass transforms all `bufferization.dealloc`
385 operations to `memref.dealloc` operations and may also insert operations from
386 the `scf`, `func`, and `arith` dialects to make deallocations conditional and
387 check whether two MemRef values come from the same allocation at runtime (when
the `buffer-deallocation-simplification` pass wasn't able to determine it
statically).
391 The same lowering of the `bufferization.dealloc` operation is also part of the
392 `-convert-bufferization-to-memref` conversion pass which also lowers all the
393 other operations of the bufferization dialect.
395 We distinguish multiple cases in this lowering pass to provide an overall more
396 efficient lowering. In the general case, a library function is created to avoid
397 quadratic code size explosion (relative to the number of operands of the dealloc
398 operation). The specialized lowerings aim to avoid this library function because
399 it requires allocating auxiliary MemRefs of index values.
403 A library function is generated to avoid code-size blow-up. On a high level, the
404 base-memref of all operands is extracted as an index value and stored into
405 specifically allocated MemRefs and passed to the library function which then
406 determines whether they come from the same original allocation. This information
407 is needed to avoid double-free situations and to correctly retain the MemRef
408 values in the `retained` list.
410 **Dealloc Operation Lowering**
412 This lowering supports all features the dealloc operation has to offer. It
413 computes the base pointer of each memref (as an index), stores it in a
414 new memref helper structure and passes it to the helper function generated
415 in `buildDeallocationLibraryFunction`. The results are stored in two lists
(represented as MemRefs) of booleans passed as arguments. The first list
stores whether the corresponding MemRef should be deallocated, the
second list stores the ownership of the retained values, which can be used
to replace the result values of the `bufferization.dealloc` operation.
423 %0:2 = bufferization.dealloc (%m0, %m1 : memref<2xf32>, memref<5xf32>)
if (%cond0, %cond1)
retain (%r0, %r1 : memref<1xf32>, memref<2xf32>)
427 lowers to (simplified):
429 %c0 = arith.constant 0 : index
430 %c1 = arith.constant 1 : index
431 %dealloc_base_pointer_list = memref.alloc() : memref<2xindex>
432 %cond_list = memref.alloc() : memref<2xi1>
433 %retain_base_pointer_list = memref.alloc() : memref<2xindex>
434 %m0_base_pointer = memref.extract_aligned_pointer_as_index %m0
435 memref.store %m0_base_pointer, %dealloc_base_pointer_list[%c0]
436 %m1_base_pointer = memref.extract_aligned_pointer_as_index %m1
437 memref.store %m1_base_pointer, %dealloc_base_pointer_list[%c1]
438 memref.store %cond0, %cond_list[%c0]
439 memref.store %cond1, %cond_list[%c1]
440 %r0_base_pointer = memref.extract_aligned_pointer_as_index %r0
441 memref.store %r0_base_pointer, %retain_base_pointer_list[%c0]
442 %r1_base_pointer = memref.extract_aligned_pointer_as_index %r1
443 memref.store %r1_base_pointer, %retain_base_pointer_list[%c1]
444 %dyn_dealloc_base_pointer_list = memref.cast %dealloc_base_pointer_list :
445 memref<2xindex> to memref<?xindex>
446 %dyn_cond_list = memref.cast %cond_list : memref<2xi1> to memref<?xi1>
447 %dyn_retain_base_pointer_list = memref.cast %retain_base_pointer_list :
448 memref<2xindex> to memref<?xindex>
449 %dealloc_cond_out = memref.alloc() : memref<2xi1>
450 %ownership_out = memref.alloc() : memref<2xi1>
451 %dyn_dealloc_cond_out = memref.cast %dealloc_cond_out :
452 memref<2xi1> to memref<?xi1>
453 %dyn_ownership_out = memref.cast %ownership_out :
454 memref<2xi1> to memref<?xi1>
455 call @dealloc_helper(%dyn_dealloc_base_pointer_list,
%dyn_retain_base_pointer_list,
%dyn_cond_list,
%dyn_dealloc_cond_out,
%dyn_ownership_out) : (...)
460 %m0_dealloc_cond = memref.load %dyn_dealloc_cond_out[%c0] : memref<2xi1>
scf.if %m0_dealloc_cond {
  memref.dealloc %m0 : memref<2xf32>
}
%m1_dealloc_cond = memref.load %dyn_dealloc_cond_out[%c1] : memref<2xi1>
scf.if %m1_dealloc_cond {
  memref.dealloc %m1 : memref<5xf32>
}
468 %r0_ownership = memref.load %dyn_ownership_out[%c0] : memref<2xi1>
469 %r1_ownership = memref.load %dyn_ownership_out[%c1] : memref<2xi1>
470 memref.dealloc %dealloc_base_pointer_list : memref<2xindex>
471 memref.dealloc %retain_base_pointer_list : memref<2xindex>
472 memref.dealloc %cond_list : memref<2xi1>
473 memref.dealloc %dealloc_cond_out : memref<2xi1>
474 memref.dealloc %ownership_out : memref<2xi1>
475 // replace %0#0 with %r0_ownership
476 // replace %0#1 with %r1_ownership
481 A library function is built per compilation unit that can be called at
482 bufferization dealloc sites to determine whether two MemRefs come from the same
483 allocation and their new ownerships.
485 The generated function takes two MemRefs of indices and three MemRefs of
486 booleans as arguments:
* The first argument A should contain the result of the
extract_aligned_pointer_as_index operation applied to the MemRefs to be
deallocated.
* The second argument B should contain the result of the
extract_aligned_pointer_as_index operation applied to the MemRefs to be
retained.
493 * The third argument C should contain the conditions as passed directly
494 to the deallocation operation.
495 * The fourth argument D is used to pass results to the caller. Those
496 represent the condition under which the MemRef at the corresponding
497 position in A should be deallocated.
* The fifth argument E is used to pass results to the caller. It
provides the ownership value corresponding to the MemRef at the same
position in B.
502 This helper function is supposed to be called once for each
503 `bufferization.dealloc` operation to determine the deallocation need and
new ownership indicator for the retained values, but does not perform the
deallocation itself.
509 func.func @dealloc_helper(
510 %dyn_dealloc_base_pointer_list: memref<?xindex>,
511 %dyn_retain_base_pointer_list: memref<?xindex>,
512 %dyn_cond_list: memref<?xi1>,
513 %dyn_dealloc_cond_out: memref<?xi1>,
514 %dyn_ownership_out: memref<?xi1>) {
515 %c0 = arith.constant 0 : index
516 %c1 = arith.constant 1 : index
517 %true = arith.constant true
518 %false = arith.constant false
519 %num_dealloc_memrefs = memref.dim %dyn_dealloc_base_pointer_list, %c0
520 %num_retain_memrefs = memref.dim %dyn_retain_base_pointer_list, %c0
521 // Zero initialize result buffer.
scf.for %i = %c0 to %num_retain_memrefs step %c1 {
  memref.store %false, %dyn_ownership_out[%i] : memref<?xi1>
}
scf.for %i = %c0 to %num_dealloc_memrefs step %c1 {
526 %dealloc_bp = memref.load %dyn_dealloc_base_pointer_list[%i]
527 %cond = memref.load %dyn_cond_list[%i]
528 // Check for aliasing with retained memrefs.
529 %does_not_alias_retained = scf.for %j = %c0 to %num_retain_memrefs
530 step %c1 iter_args(%does_not_alias_aggregated = %true) -> (i1) {
531 %retain_bp = memref.load %dyn_retain_base_pointer_list[%j]
%does_alias = arith.cmpi eq, %retain_bp, %dealloc_bp : index
scf.if %does_alias {
  %curr_ownership = memref.load %dyn_ownership_out[%j]
  %updated_ownership = arith.ori %curr_ownership, %cond : i1
  memref.store %updated_ownership, %dyn_ownership_out[%j]
}
538 %does_not_alias = arith.cmpi ne, %retain_bp, %dealloc_bp : index
%updated_aggregate = arith.andi %does_not_alias_aggregated,
                                %does_not_alias : i1
scf.yield %updated_aggregate : i1
}
543 // Check for aliasing with dealloc memrefs in the list before the
544 // current one, i.e.,
545 // `fix i, forall j < i: check_aliasing(%dyn_dealloc_base_pointer[j],
546 // %dyn_dealloc_base_pointer[i])`
547 %does_not_alias_any = scf.for %j = %c0 to %i step %c1
548 iter_args(%does_not_alias_agg = %does_not_alias_retained) -> (i1) {
549 %prev_dealloc_bp = memref.load %dyn_dealloc_base_pointer_list[%j]
550 %does_not_alias = arith.cmpi ne, %prev_dealloc_bp, %dealloc_bp
551 %updated_alias_agg = arith.andi %does_not_alias_agg, %does_not_alias
scf.yield %updated_alias_agg : i1
}
554 %dealloc_cond = arith.andi %does_not_alias_any, %cond : i1
memref.store %dealloc_cond, %dyn_dealloc_cond_out[%i] : memref<?xi1>
}
return
}
561 ### Specialized Lowerings
563 Currently, there are two special lowerings for common cases to avoid the library
function and thus unnecessary memory load and store operations and function
calls.
567 **One memref, no retained**
569 Lower a simple case without any retained values and a single MemRef. Ideally,
570 static analysis can provide enough information such that the
571 `buffer-deallocation-simplification` pass is able to split the dealloc
operations up into this simple case as much as possible before running this
pass.
577 bufferization.dealloc (%arg0 : memref<2xf32>) if (%arg1)
lowers to:

scf.if %arg1 {
  memref.dealloc %arg0 : memref<2xf32>
}
586 In most cases, the branch condition is either constant 'true' or 'false' and can
587 thus be optimized away entirely by the canonicalizer pass.
589 **One memref, arbitrarily many retained**
591 A special case lowering for the deallocation operation with exactly one MemRef,
592 but an arbitrary number of retained values. The size of the code produced by
this lowering is linear in the number of retained values.
597 %0:2 = bufferization.dealloc (%m : memref<2xf32>) if (%cond)
598 retain (%r0, %r1 : memref<1xf32>, memref<2xf32>)
599 return %0#0, %0#1 : i1, i1
lowers to (simplified):

%m_base_pointer = memref.extract_aligned_pointer_as_index %m
604 %r0_base_pointer = memref.extract_aligned_pointer_as_index %r0
605 %r0_does_not_alias = arith.cmpi ne, %m_base_pointer, %r0_base_pointer
606 %r1_base_pointer = memref.extract_aligned_pointer_as_index %r1
607 %r1_does_not_alias = arith.cmpi ne, %m_base_pointer, %r1_base_pointer
608 %not_retained = arith.andi %r0_does_not_alias, %r1_does_not_alias : i1
609 %should_dealloc = arith.andi %not_retained, %cond : i1
scf.if %should_dealloc {
  memref.dealloc %m : memref<2xf32>
}
%true = arith.constant true
614 %r0_does_alias = arith.xori %r0_does_not_alias, %true : i1
615 %r0_ownership = arith.andi %r0_does_alias, %cond : i1
616 %r1_does_alias = arith.xori %r1_does_not_alias, %true : i1
617 %r1_ownership = arith.andi %r1_does_alias, %cond : i1
618 return %r0_ownership, %r1_ownership : i1, i1