//===- CGSCCPassManager.h - Call graph pass management ----------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
/// \file
///
/// This header provides classes for managing passes over SCCs of the call
/// graph. These passes form an important component of LLVM's interprocedural
/// optimizations. Because they operate on the SCCs of the call graph, and they
/// traverse the graph in post-order, they can effectively do pair-wise
/// interprocedural optimizations for all call edges in the program while
/// incrementally refining it and improving the context of these pair-wise
/// optimizations. At each call site edge, the callee has already been
/// optimized as much as is possible. This in turn allows very accurate
/// analysis of it for IPO.
///
/// A secondary, more general goal is to be able to isolate optimization on
/// unrelated parts of the IR module. This is useful to ensure our
/// optimizations are principled and don't miss opportunities where refinement
/// of one part of the module influences transformations in another part of the
/// module. But this is also useful if we want to parallelize the optimizations
/// across common large module graph shapes which tend to be very wide and have
/// large regions of unrelated cliques.
///
/// To satisfy these goals, we use the LazyCallGraph which provides two graphs
/// nested inside each other (and built lazily from the bottom-up): the call
/// graph proper, and a reference graph. The reference graph is a superset of
/// the call graph and is a conservative approximation of what could through
/// scalar or CGSCC transforms *become* the call graph. Using this allows us to
/// ensure we optimize functions prior to them being introduced into the call
/// graph by devirtualization or other techniques, and thus ensures that
/// subsequent pair-wise interprocedural optimizations observe the optimized
/// form of these functions. The (potentially transitive) reference
/// reachability used by the reference graph is a conservative approximation
/// that still allows us to have independent regions of the graph.
///
/// FIXME: There is one major drawback of the reference graph: in its naive
/// form it is quadratic because it contains a distinct edge for each
/// (potentially indirect) reference, even if they are all through some common
/// global table of function pointers. This can be fixed in a number of ways
/// that essentially preserve enough of the normalization. While it isn't
/// expected to completely preclude the usability of this, it will need to be
/// addressed.
///
/// All of these issues are made substantially more complex in the face of
/// mutations to the call graph while optimization passes are being run. When
/// mutations to the call graph occur we want to achieve two different things:
///
/// - We need to update the call graph in-flight and invalidate analyses
///   cached on entities in the graph. Because of the cache-based analysis
///   design of the pass manager, it is essential to have stable identities for
///   the elements of the IR that passes traverse, and to invalidate any
///   analyses cached on these elements as the mutations take place.
///
/// - We want to preserve the incremental and post-order traversal of the
///   graph even as it is refined and mutated. This means we want optimization
///   to observe the most refined form of the call graph and to do so in
///   post-order.
///
/// To address this, the CGSCC manager uses both worklists that can be expanded
/// by passes which transform the IR, and provides invalidation tests to skip
/// entries that become dead. This extra data is provided to every SCC pass so
/// that it can carefully update the manager's traversal as the call graph
/// mutates.
///
/// We also provide support for running function passes within the CGSCC walk,
/// and there we provide automatic update of the call graph, including of the
/// pass manager's walk, to reflect call graph changes that fall out naturally
/// as part of scalar transformations.
///
/// The patterns used to ensure the goals of post-order visitation of the fully
/// refined graph:
///
/// 1) Sink toward the "bottom" as the graph is refined. This means that any
///    iteration continues in some valid post-order sequence after the mutation
///    has altered the structure.
///
/// 2) Enqueue in post-order, including the current entity. If the current
///    entity's shape changes, it and everything after it in post-order needs
///    to be visited to observe that shape.
///
//===----------------------------------------------------------------------===//
#ifndef LLVM_ANALYSIS_CGSCCPASSMANAGER_H
#define LLVM_ANALYSIS_CGSCCPASSMANAGER_H

#include "llvm/ADT/DenseSet.h"
#include "llvm/ADT/PriorityWorklist.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/LazyCallGraph.h"
#include "llvm/IR/CallSite.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/InstIterator.h"
#include "llvm/IR/PassManager.h"
#include "llvm/IR/ValueHandle.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/raw_ostream.h"
#include <algorithm>
#include <cassert>
#include <utility>
namespace llvm {

struct CGSCCUpdateResult;
class Module;

// Allow debug logging in this inline function.
#define DEBUG_TYPE "cgscc"

/// Extern template declaration for the analysis set for this IR unit.
extern template class AllAnalysesOn<LazyCallGraph::SCC>;

extern template class AnalysisManager<LazyCallGraph::SCC, LazyCallGraph &>;

/// The CGSCC analysis manager.
///
/// See the documentation for the AnalysisManager template for detailed
/// documentation. This type serves as a convenient way to refer to this
/// construct in the adaptors and proxies used to integrate this into the
/// larger pass manager infrastructure.
using CGSCCAnalysisManager =
    AnalysisManager<LazyCallGraph::SCC, LazyCallGraph &>;
// Explicit specialization and instantiation declarations for the pass manager.
// See the comments on the definition of the specialization for details on how
// it differs from the primary template.
template <>
PreservedAnalyses
PassManager<LazyCallGraph::SCC, CGSCCAnalysisManager, LazyCallGraph &,
            CGSCCUpdateResult &>::run(LazyCallGraph::SCC &InitialC,
                                      CGSCCAnalysisManager &AM,
                                      LazyCallGraph &G, CGSCCUpdateResult &UR);

extern template class PassManager<LazyCallGraph::SCC, CGSCCAnalysisManager,
                                  LazyCallGraph &, CGSCCUpdateResult &>;
/// The CGSCC pass manager.
///
/// See the documentation for the PassManager template for details. It runs
/// a sequence of SCC passes over each SCC that the manager is run over. This
/// type serves as a convenient way to refer to this construct.
using CGSCCPassManager =
    PassManager<LazyCallGraph::SCC, CGSCCAnalysisManager, LazyCallGraph &,
                CGSCCUpdateResult &>;
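
// A minimal composition sketch: `MySCCPass` stands in for any CGSCC pass and
// is not defined in this header; the adaptor used here is declared further
// below. It shows how a CGSCC pipeline is typically nested inside a module
// pipeline:
//
//   CGSCCPassManager CGPM;
//   CGPM.addPass(MySCCPass());
//
//   ModulePassManager MPM;
//   MPM.addPass(createModuleToPostOrderCGSCCPassAdaptor(std::move(CGPM)));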
/// An explicit specialization of the require analysis template pass.
template <typename AnalysisT>
struct RequireAnalysisPass<AnalysisT, LazyCallGraph::SCC, CGSCCAnalysisManager,
                           LazyCallGraph &, CGSCCUpdateResult &>
    : PassInfoMixin<RequireAnalysisPass<AnalysisT, LazyCallGraph::SCC,
                                        CGSCCAnalysisManager, LazyCallGraph &,
                                        CGSCCUpdateResult &>> {
  PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
                        LazyCallGraph &CG, CGSCCUpdateResult &) {
    (void)AM.template getResult<AnalysisT>(C, CG);
    return PreservedAnalyses::all();
  }
};
/// A proxy from a \c CGSCCAnalysisManager to a \c Module.
using CGSCCAnalysisManagerModuleProxy =
    InnerAnalysisManagerProxy<CGSCCAnalysisManager, Module>;

/// We need a specialized result for the \c CGSCCAnalysisManagerModuleProxy so
/// it can have access to the call graph in order to walk all the SCCs when
/// invalidating things.
template <> class CGSCCAnalysisManagerModuleProxy::Result {
public:
  explicit Result(CGSCCAnalysisManager &InnerAM, LazyCallGraph &G)
      : InnerAM(&InnerAM), G(&G) {}

  /// Accessor for the analysis manager.
  CGSCCAnalysisManager &getManager() { return *InnerAM; }

  /// Handler for invalidation of the Module.
  ///
  /// If the proxy analysis itself is preserved, then we assume that the set of
  /// SCCs in the Module hasn't changed. Thus any pointers to SCCs in the
  /// CGSCCAnalysisManager are still valid, and we don't need to call \c clear
  /// on the CGSCCAnalysisManager.
  ///
  /// Regardless of whether this analysis is marked as preserved, all of the
  /// analyses in the \c CGSCCAnalysisManager are potentially invalidated based
  /// on the set of preserved analyses.
  bool invalidate(Module &M, const PreservedAnalyses &PA,
                  ModuleAnalysisManager::Invalidator &Inv);

private:
  CGSCCAnalysisManager *InnerAM;
  LazyCallGraph *G;
};
/// Provide a specialized run method for the \c CGSCCAnalysisManagerModuleProxy
/// so it can pass the lazy call graph to the result.
template <>
CGSCCAnalysisManagerModuleProxy::Result
CGSCCAnalysisManagerModuleProxy::run(Module &M, ModuleAnalysisManager &AM);

// Ensure the \c CGSCCAnalysisManagerModuleProxy is provided as an extern
// template.
extern template class InnerAnalysisManagerProxy<CGSCCAnalysisManager, Module>;

extern template class OuterAnalysisManagerProxy<
    ModuleAnalysisManager, LazyCallGraph::SCC, LazyCallGraph &>;

/// A proxy from a \c ModuleAnalysisManager to an \c SCC.
using ModuleAnalysisManagerCGSCCProxy =
    OuterAnalysisManagerProxy<ModuleAnalysisManager, LazyCallGraph::SCC,
                              LazyCallGraph &>;
/// Support structure for SCC passes to communicate updates to the call graph
/// back to the CGSCC pass manager infrastructure.
///
/// The CGSCC pass manager runs SCC passes which are allowed to update the call
/// graph and SCC structures. This means the structure the pass manager works
/// on is mutating underneath it. In order to support that, there needs to be
/// careful communication about the precise nature and ramifications of these
/// updates to the pass management infrastructure.
///
/// All SCC passes will have to accept a reference to the management layer's
/// update result struct and use it to reflect the results of any CG updates
/// performed.
///
/// Passes which do not change the call graph structure in any way can just
/// ignore this argument to their run method.
struct CGSCCUpdateResult {
  /// Worklist of the RefSCCs queued for processing.
  ///
  /// When a pass refines the graph and creates new RefSCCs or causes them to
  /// have a different shape or set of component SCCs it should add the RefSCCs
  /// to this worklist so that we visit them in the refined form.
  ///
  /// This worklist is in reverse post-order, as we pop off the back in order
  /// to observe RefSCCs in post-order. When adding RefSCCs, clients should add
  /// them in reverse post-order.
  SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> &RCWorklist;

  /// Worklist of the SCCs queued for processing.
  ///
  /// When a pass refines the graph and creates new SCCs or causes them to have
  /// a different shape or set of component functions it should add the SCCs to
  /// this worklist so that we visit them in the refined form.
  ///
  /// Note that if the SCCs are part of a RefSCC that is added to the \c
  /// RCWorklist, they don't need to be added here as visiting the RefSCC will
  /// be sufficient to re-visit the SCCs within it.
  ///
  /// This worklist is in reverse post-order, as we pop off the back in order
  /// to observe SCCs in post-order. When adding SCCs, clients should add them
  /// in reverse post-order.
  SmallPriorityWorklist<LazyCallGraph::SCC *, 1> &CWorklist;

  /// The set of invalidated RefSCCs which should be skipped if they are found
  /// in \c RCWorklist.
  ///
  /// This is used to quickly prune out RefSCCs when they get deleted and
  /// happen to already be on the worklist. We use this primarily to avoid
  /// scanning the list and removing entries from it.
  SmallPtrSetImpl<LazyCallGraph::RefSCC *> &InvalidatedRefSCCs;

  /// The set of invalidated SCCs which should be skipped if they are found
  /// in \c CWorklist.
  ///
  /// This is used to quickly prune out SCCs when they get deleted and happen
  /// to already be on the worklist. We use this primarily to avoid scanning
  /// the list and removing entries from it.
  SmallPtrSetImpl<LazyCallGraph::SCC *> &InvalidatedSCCs;

  /// If non-null, the updated current \c RefSCC being processed.
  ///
  /// This is set when a graph refinement takes place and the "current" point
  /// in the graph moves "down" or earlier in the post-order walk. This will
  /// often cause the "current" RefSCC to be a newly created RefSCC object and
  /// the old one to be added to the above worklist. When that happens, this
  /// pointer is non-null and can be used to continue processing the "top" of
  /// the post-order walk.
  LazyCallGraph::RefSCC *UpdatedRC;

  /// If non-null, the updated current \c SCC being processed.
  ///
  /// This is set when a graph refinement takes place and the "current" point
  /// in the graph moves "down" or earlier in the post-order walk. This will
  /// often cause the "current" SCC to be a newly created SCC object and the
  /// old one to be added to the above worklist. When that happens, this
  /// pointer is non-null and can be used to continue processing the "top" of
  /// the post-order walk.
  LazyCallGraph::SCC *UpdatedC;

  /// A hacky area where the inliner can retain history about inlining
  /// decisions that mutated the call graph's SCC structure in order to avoid
  /// infinite inlining. See the comments in the inliner's CG update logic.
  ///
  /// FIXME: Keeping this here seems like a big layering issue, we should look
  /// for a better technique.
  SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
      &InlinedInternalEdges;
};
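
// A minimal sketch of the contract described above; `MyInlinerLikePass` is a
// hypothetical pass and its body is illustrative only. The key point is that
// every CGSCC pass accepts a CGSCCUpdateResult and, if it changed the graph,
// is expected to reflect that through the worklists and Updated* fields
// (typically via the update helpers declared later in this header):
//
//   struct MyInlinerLikePass : PassInfoMixin<MyInlinerLikePass> {
//     PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
//                           LazyCallGraph &CG, CGSCCUpdateResult &UR) {
//       // ... mutate calls within C, updating CG and UR as edges change ...
//       return PreservedAnalyses::none();
//     }
//   };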
/// The core module pass which does a post-order walk of the SCCs and
/// runs a CGSCC pass over each one.
///
/// Designed to allow composition of a CGSCCPass(Manager) and
/// a ModulePassManager. Note that this pass must be run with a module analysis
/// manager as it uses the LazyCallGraph analysis. It will also run the
/// \c CGSCCAnalysisManagerModuleProxy analysis prior to running the CGSCC
/// pass over the module to enable a \c FunctionAnalysisManager to be used
/// within this run safely.
template <typename CGSCCPassT>
class ModuleToPostOrderCGSCCPassAdaptor
    : public PassInfoMixin<ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>> {
public:
  explicit ModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass)
      : Pass(std::move(Pass)) {}

  // We have to explicitly define all the special member functions because MSVC
  // refuses to generate them.
  ModuleToPostOrderCGSCCPassAdaptor(
      const ModuleToPostOrderCGSCCPassAdaptor &Arg)
      : Pass(Arg.Pass) {}

  ModuleToPostOrderCGSCCPassAdaptor(ModuleToPostOrderCGSCCPassAdaptor &&Arg)
      : Pass(std::move(Arg.Pass)) {}

  friend void swap(ModuleToPostOrderCGSCCPassAdaptor &LHS,
                   ModuleToPostOrderCGSCCPassAdaptor &RHS) {
    std::swap(LHS.Pass, RHS.Pass);
  }

  ModuleToPostOrderCGSCCPassAdaptor &
  operator=(ModuleToPostOrderCGSCCPassAdaptor RHS) {
    swap(*this, RHS);
    return *this;
  }
  /// Runs the CGSCC pass across every SCC in the module.
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM) {
    // Setup the CGSCC analysis manager from its proxy.
    CGSCCAnalysisManager &CGAM =
        AM.getResult<CGSCCAnalysisManagerModuleProxy>(M).getManager();

    // Get the call graph for this module.
    LazyCallGraph &CG = AM.getResult<LazyCallGraphAnalysis>(M);

    // We keep worklists to allow us to push more work onto the pass manager as
    // the passes are run.
    SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> RCWorklist;
    SmallPriorityWorklist<LazyCallGraph::SCC *, 1> CWorklist;

    // Keep sets for invalidated SCCs and RefSCCs that should be skipped when
    // iterating off the worklists.
    SmallPtrSet<LazyCallGraph::RefSCC *, 4> InvalidRefSCCSet;
    SmallPtrSet<LazyCallGraph::SCC *, 4> InvalidSCCSet;

    SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
        InlinedInternalEdges;

    CGSCCUpdateResult UR = {RCWorklist,    CWorklist, InvalidRefSCCSet,
                            InvalidSCCSet, nullptr,   nullptr,
                            InlinedInternalEdges};

    // Request PassInstrumentation from analysis manager, will use it to run
    // instrumenting callbacks for the passes later.
    PassInstrumentation PI = AM.getResult<PassInstrumentationAnalysis>(M);

    PreservedAnalyses PA = PreservedAnalyses::all();
    CG.buildRefSCCs();
    for (auto RCI = CG.postorder_ref_scc_begin(),
              RCE = CG.postorder_ref_scc_end();
         RCI != RCE;) {
      assert(RCWorklist.empty() &&
             "Should always start with an empty RefSCC worklist");
      // The postorder_ref_sccs range we are walking is lazily constructed, so
      // we only push the first one onto the worklist. The worklist allows us
      // to capture *new* RefSCCs created during transformations.
      //
      // We really want to form RefSCCs lazily because that makes them cheaper
      // to update as the program is simplified and allows us to have greater
      // cache locality as forming a RefSCC touches all the parts of all the
      // functions within that RefSCC.
      //
      // We also eagerly increment the iterator to the next position because
      // the CGSCC passes below may delete the current RefSCC.
      RCWorklist.insert(&*RCI++);

      do {
        LazyCallGraph::RefSCC *RC = RCWorklist.pop_back_val();
        if (InvalidRefSCCSet.count(RC)) {
          LLVM_DEBUG(dbgs() << "Skipping an invalid RefSCC...\n");
          continue;
        }

        assert(CWorklist.empty() &&
               "Should always start with an empty SCC worklist");

        LLVM_DEBUG(dbgs() << "Running an SCC pass across the RefSCC: " << *RC
                          << "\n");

        // Push the initial SCCs in reverse post-order as we'll pop off the
        // back and so see this in post-order.
        for (LazyCallGraph::SCC &C : llvm::reverse(*RC))
          CWorklist.insert(&C);

        do {
          LazyCallGraph::SCC *C = CWorklist.pop_back_val();
          // Due to call graph mutations, we may have invalid SCCs or SCCs from
          // other RefSCCs in the worklist. The invalid ones are dead and the
          // other RefSCCs should be queued above, so we just need to skip both
          // scenarios here.
          if (InvalidSCCSet.count(C)) {
            LLVM_DEBUG(dbgs() << "Skipping an invalid SCC...\n");
            continue;
          }
          if (&C->getOuterRefSCC() != RC) {
            LLVM_DEBUG(dbgs()
                       << "Skipping an SCC that is now part of some other "
                          "RefSCC...\n");
            continue;
          }

          do {
            // Check that we didn't miss any update scenario.
            assert(!InvalidSCCSet.count(C) && "Processing an invalid SCC!");
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
            assert(&C->getOuterRefSCC() == RC &&
                   "Processing an SCC in a different RefSCC!");

            UR.UpdatedRC = nullptr;
            UR.UpdatedC = nullptr;

            // Check the PassInstrumentation's BeforePass callbacks before
            // running the pass, skip its execution completely if asked to
            // (callback returns false).
            if (!PI.runBeforePass<LazyCallGraph::SCC>(Pass, *C))
              continue;

            PreservedAnalyses PassPA = Pass.run(*C, CGAM, CG, UR);

            if (UR.InvalidatedSCCs.count(C))
              PI.runAfterPassInvalidated<LazyCallGraph::SCC>(Pass);
            else
              PI.runAfterPass<LazyCallGraph::SCC>(Pass, *C);

            // Update the SCC and RefSCC if necessary.
            C = UR.UpdatedC ? UR.UpdatedC : C;
            RC = UR.UpdatedRC ? UR.UpdatedRC : RC;

            // If the CGSCC pass wasn't able to provide a valid updated SCC,
            // the current SCC may simply need to be skipped if invalid.
            if (UR.InvalidatedSCCs.count(C)) {
              LLVM_DEBUG(dbgs()
                         << "Skipping invalidated root or island SCC!\n");
              break;
            }
            // Check that we didn't miss any update scenario.
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");

            // We handle invalidating the CGSCC analysis manager's information
            // for the (potentially updated) SCC here. Note that any other SCCs
            // whose structure has changed should have been invalidated by
            // whatever was updating the call graph. This SCC gets invalidated
            // late as it contains the nodes that were actively being
            // processed.
            CGAM.invalidate(*C, PassPA);

            // Then intersect the preserved set so that invalidation of module
            // analyses will eventually occur when the module pass completes.
            PA.intersect(std::move(PassPA));

            // The pass may have restructured the call graph and refined the
            // current SCC and/or RefSCC. We need to update our current SCC and
            // RefSCC pointers to follow these. Also, when the current SCC is
            // refined, re-run the SCC pass over the newly refined SCC in order
            // to observe the most precise SCC model available. This inherently
            // cannot cycle excessively as it only happens when we split SCCs
            // apart, at most converging on a DAG of single nodes.
            // FIXME: If we ever start having RefSCC passes, we'll want to
            // iterate there too.
            if (UR.UpdatedC)
              LLVM_DEBUG(dbgs()
                         << "Re-running SCC passes after a refinement of the "
                            "current SCC: "
                         << *UR.UpdatedC << "\n");

            // Note that both `C` and `RC` may at this point refer to deleted,
            // invalid SCC and RefSCCs respectively. But we will short circuit
            // the processing when we check them in the loop above.
          } while (UR.UpdatedC);
        } while (!CWorklist.empty());

        // We only need to keep internal inlined edge information within
        // a RefSCC, clear it to save on space and let the next time we visit
        // any of these functions have a fresh start.
        InlinedInternalEdges.clear();
      } while (!RCWorklist.empty());
    }
    // By definition we preserve the call graph, all SCC analyses, and the
    // analysis proxies by handling them above and in any nested pass managers.
    PA.preserveSet<AllAnalysesOn<LazyCallGraph::SCC>>();
    PA.preserve<LazyCallGraphAnalysis>();
    PA.preserve<CGSCCAnalysisManagerModuleProxy>();
    PA.preserve<FunctionAnalysisManagerModuleProxy>();
    return PA;
  }

private:
  CGSCCPassT Pass;
};
/// A function to deduce a CGSCC pass type and wrap it in the
/// templated adaptor.
template <typename CGSCCPassT>
ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>
createModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass) {
  return ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>(std::move(Pass));
}
/// A proxy from a \c FunctionAnalysisManager to an \c SCC.
///
/// When a module pass runs and triggers invalidation, both the CGSCC and
/// Function analysis manager proxies on the module get an invalidation event.
/// We don't want to fully duplicate responsibility for most of the
/// invalidation logic. Instead, this layer is only responsible for SCC-local
/// invalidation events. We work with the module's FunctionAnalysisManager to
/// invalidate function analyses.
class FunctionAnalysisManagerCGSCCProxy
    : public AnalysisInfoMixin<FunctionAnalysisManagerCGSCCProxy> {
public:
  class Result {
  public:
    explicit Result(FunctionAnalysisManager &FAM) : FAM(&FAM) {}

    /// Accessor for the analysis manager.
    FunctionAnalysisManager &getManager() { return *FAM; }

    bool invalidate(LazyCallGraph::SCC &C, const PreservedAnalyses &PA,
                    CGSCCAnalysisManager::Invalidator &Inv);

  private:
    FunctionAnalysisManager *FAM;
  };

  /// Computes the \c FunctionAnalysisManager and stores it in the result
  /// proxy.
  Result run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM, LazyCallGraph &);

private:
  friend AnalysisInfoMixin<FunctionAnalysisManagerCGSCCProxy>;

  static AnalysisKey Key;
};

extern template class OuterAnalysisManagerProxy<CGSCCAnalysisManager, Function>;

/// A proxy from a \c CGSCCAnalysisManager to a \c Function.
using CGSCCAnalysisManagerFunctionProxy =
    OuterAnalysisManagerProxy<CGSCCAnalysisManager, Function>;
/// Helper to update the call graph after running a function pass.
///
/// Function passes can only mutate the call graph in specific ways. This
/// routine provides a helper that updates the call graph in those ways
/// including returning whether any changes were made and populating a CG
/// update result struct for the overall CGSCC walk.
LazyCallGraph::SCC &updateCGAndAnalysisManagerForFunctionPass(
    LazyCallGraph &G, LazyCallGraph::SCC &C, LazyCallGraph::Node &N,
    CGSCCAnalysisManager &AM, CGSCCUpdateResult &UR);
/// Adaptor that maps from an SCC to its functions.
///
/// Designed to allow composition of a FunctionPass(Manager) and
/// a CGSCCPassManager. Note that if this pass is constructed with a pointer
/// to a \c CGSCCAnalysisManager it will run the
/// \c FunctionAnalysisManagerCGSCCProxy analysis prior to running the function
/// pass over the SCC to enable a \c FunctionAnalysisManager to be used
/// within this run safely.
template <typename FunctionPassT>
class CGSCCToFunctionPassAdaptor
    : public PassInfoMixin<CGSCCToFunctionPassAdaptor<FunctionPassT>> {
public:
  explicit CGSCCToFunctionPassAdaptor(FunctionPassT Pass)
      : Pass(std::move(Pass)) {}

  // We have to explicitly define all the special member functions because MSVC
  // refuses to generate them.
  CGSCCToFunctionPassAdaptor(const CGSCCToFunctionPassAdaptor &Arg)
      : Pass(Arg.Pass) {}

  CGSCCToFunctionPassAdaptor(CGSCCToFunctionPassAdaptor &&Arg)
      : Pass(std::move(Arg.Pass)) {}

  friend void swap(CGSCCToFunctionPassAdaptor &LHS,
                   CGSCCToFunctionPassAdaptor &RHS) {
    std::swap(LHS.Pass, RHS.Pass);
  }

  CGSCCToFunctionPassAdaptor &operator=(CGSCCToFunctionPassAdaptor RHS) {
    swap(*this, RHS);
    return *this;
  }
  /// Runs the function pass across every function in the SCC.
  PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
                        LazyCallGraph &CG, CGSCCUpdateResult &UR) {
    // Setup the function analysis manager from its proxy.
    FunctionAnalysisManager &FAM =
        AM.getResult<FunctionAnalysisManagerCGSCCProxy>(C, CG).getManager();

    SmallVector<LazyCallGraph::Node *, 4> Nodes;
    for (LazyCallGraph::Node &N : C)
      Nodes.push_back(&N);

    // The SCC may get split while we are optimizing functions due to deleting
    // edges. If this happens, the current SCC can shift, so keep track of
    // a pointer we can overwrite.
    LazyCallGraph::SCC *CurrentC = &C;

    LLVM_DEBUG(dbgs() << "Running function passes across an SCC: " << C
                      << "\n");

    PreservedAnalyses PA = PreservedAnalyses::all();
    for (LazyCallGraph::Node *N : Nodes) {
      // Skip nodes from other SCCs. These may have been split out during
      // processing. We'll eventually visit those SCCs and pick up the nodes
      // there.
      if (CG.lookupSCC(*N) != CurrentC)
        continue;

      Function &F = N->getFunction();

      PassInstrumentation PI = FAM.getResult<PassInstrumentationAnalysis>(F);
      if (!PI.runBeforePass<Function>(Pass, F))
        continue;

      PreservedAnalyses PassPA = Pass.run(F, FAM);

      PI.runAfterPass<Function>(Pass, F);

      // We know that the function pass couldn't have invalidated any other
      // function's analyses (that's the contract of a function pass), so
      // directly handle the function analysis manager's invalidation here.
      FAM.invalidate(F, PassPA);

      // Then intersect the preserved set so that invalidation of module
      // analyses will eventually occur when the module pass completes.
      PA.intersect(std::move(PassPA));

      // If the call graph hasn't been preserved, update it based on this
      // function pass. This may also update the current SCC to point to
      // a smaller, more refined SCC.
      auto PAC = PA.getChecker<LazyCallGraphAnalysis>();
      if (!PAC.preserved() && !PAC.preservedSet<AllAnalysesOn<Module>>()) {
        CurrentC = &updateCGAndAnalysisManagerForFunctionPass(CG, *CurrentC, *N,
                                                              AM, UR);
        assert(
            CG.lookupSCC(*N) == CurrentC &&
            "Current SCC not updated to the SCC containing the current node!");
      }
    }

    // By definition we preserve the proxy. And we preserve all analyses on
    // Functions. This precludes *any* invalidation of function analyses by the
    // proxy, but that's OK because we've taken care to invalidate analyses in
    // the function analysis manager incrementally above.
    PA.preserveSet<AllAnalysesOn<Function>>();
    PA.preserve<FunctionAnalysisManagerCGSCCProxy>();

    // We've also ensured that we updated the call graph along the way.
    PA.preserve<LazyCallGraphAnalysis>();

    return PA;
  }

private:
  FunctionPassT Pass;
};
/// A function to deduce a function pass type and wrap it in the
/// templated adaptor.
template <typename FunctionPassT>
CGSCCToFunctionPassAdaptor<FunctionPassT>
createCGSCCToFunctionPassAdaptor(FunctionPassT Pass) {
  return CGSCCToFunctionPassAdaptor<FunctionPassT>(std::move(Pass));
}
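
// A usage sketch for the adaptor above; `MyFunctionPass` is a hypothetical
// function pass, not something declared in this header:
//
//   FunctionPassManager FPM;
//   FPM.addPass(MyFunctionPass());
//
//   CGSCCPassManager CGPM;
//   CGPM.addPass(createCGSCCToFunctionPassAdaptor(std::move(FPM)));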
/// A helper that repeats an SCC pass each time an indirect call is refined to
/// a direct call by that pass.
///
/// While the CGSCC pass manager works to re-visit SCCs and RefSCCs as they
/// change shape, we may also want to repeat an SCC pass if it simply refines
/// an indirect call to a direct call, even if doing so does not alter the
/// shape of the graph. Note that this only pertains to direct calls to
/// functions where IPO across the SCC may be able to compute more precise
/// results. For intrinsics, we assume scalar optimizations already can fully
/// reason about them.
///
/// This repetition has the potential to be very large however, as each one
/// might refine a single call site. As a consequence, in practice we use an
/// upper bound on the number of repetitions to limit things.
template <typename PassT>
class DevirtSCCRepeatedPass
    : public PassInfoMixin<DevirtSCCRepeatedPass<PassT>> {
public:
  explicit DevirtSCCRepeatedPass(PassT Pass, int MaxIterations)
      : Pass(std::move(Pass)), MaxIterations(MaxIterations) {}

  /// Runs the wrapped pass up to \c MaxIterations on the SCC, iterating
  /// whenever an indirect call is refined.
  PreservedAnalyses run(LazyCallGraph::SCC &InitialC, CGSCCAnalysisManager &AM,
                        LazyCallGraph &CG, CGSCCUpdateResult &UR) {
    PreservedAnalyses PA = PreservedAnalyses::all();
    PassInstrumentation PI =
        AM.getResult<PassInstrumentationAnalysis>(InitialC, CG);

    // The SCC may be refined while we are running passes over it, so set up
    // a pointer that we can update.
    LazyCallGraph::SCC *C = &InitialC;

    // Collect value handles for all of the indirect call sites.
    SmallVector<WeakTrackingVH, 8> CallHandles;

    // Struct to track the counts of direct and indirect calls in each function
    // of the SCC.
    struct CallCount {
      int Direct;
      int Indirect;
    };

    // Put value handles on all of the indirect calls and return the number of
    // direct calls for each function in the SCC.
    auto ScanSCC = [](LazyCallGraph::SCC &C,
                      SmallVectorImpl<WeakTrackingVH> &CallHandles) {
      assert(CallHandles.empty() && "Must start with a clear set of handles.");

      SmallVector<CallCount, 4> CallCounts;
      for (LazyCallGraph::Node &N : C) {
        CallCounts.push_back({0, 0});
        CallCount &Count = CallCounts.back();
        for (Instruction &I : instructions(N.getFunction()))
          if (auto CS = CallSite(&I)) {
            if (CS.getCalledFunction()) {
              ++Count.Direct;
            } else {
              ++Count.Indirect;
              CallHandles.push_back(WeakTrackingVH(&I));
            }
          }
      }

      return CallCounts;
    };
    // Populate the initial call handles and get the initial call counts.
    auto CallCounts = ScanSCC(*C, CallHandles);

    for (int Iteration = 0;; ++Iteration) {

      if (!PI.runBeforePass<LazyCallGraph::SCC>(Pass, *C))
        continue;

      PreservedAnalyses PassPA = Pass.run(*C, AM, CG, UR);

      if (UR.InvalidatedSCCs.count(C))
        PI.runAfterPassInvalidated<LazyCallGraph::SCC>(Pass);
      else
        PI.runAfterPass<LazyCallGraph::SCC>(Pass, *C);

      // If the SCC structure has changed, bail immediately and let the outer
      // CGSCC layer handle any iteration to reflect the refined structure.
      if (UR.UpdatedC && UR.UpdatedC != C) {
        PA.intersect(std::move(PassPA));
        break;
      }

      // Check that we didn't miss any update scenario.
      assert(!UR.InvalidatedSCCs.count(C) && "Processing an invalid SCC!");
      assert(C->begin() != C->end() && "Cannot have an empty SCC!");
      assert((int)CallCounts.size() == C->size() &&
             "Cannot have changed the size of the SCC!");

      // Check whether any of the handles were devirtualized.
      auto IsDevirtualizedHandle = [&](WeakTrackingVH &CallH) {
        if (!CallH)
          return false;
        auto CS = CallSite(CallH);
        if (!CS)
          return false;

        // If the call is still indirect, leave it alone.
        Function *F = CS.getCalledFunction();
        if (!F)
          return false;
        LLVM_DEBUG(dbgs() << "Found devirtualized call from "
                          << CS.getParent()->getParent()->getName() << " to "
                          << F->getName() << "\n");

        // We now have a direct call where previously we had an indirect call,
        // so iterate to process this devirtualization site.
        return true;
      };
      bool Devirt = llvm::any_of(CallHandles, IsDevirtualizedHandle);

      // Rescan to build up a new set of handles and count how many direct
      // calls remain. If we decide to iterate, this also sets up the input to
      // the next iteration.
      CallHandles.clear();
      auto NewCallCounts = ScanSCC(*C, CallHandles);

      // If we haven't found an explicit devirtualization already, see if we
      // have decreased the number of indirect calls and increased the number
      // of direct calls for any function in the SCC. This can be fooled by all
      // manner of transformations such as DCE and other things, but seems to
      // work well in practice.
      if (!Devirt)
        for (int i = 0, Size = C->size(); i < Size; ++i)
          if (CallCounts[i].Indirect > NewCallCounts[i].Indirect &&
              CallCounts[i].Direct < NewCallCounts[i].Direct) {
            Devirt = true;
            break;
          }

      if (!Devirt) {
        PA.intersect(std::move(PassPA));
        break;
      }

      // Otherwise, if we've already hit our max, we're done.
      if (Iteration >= MaxIterations) {
        LLVM_DEBUG(
            dbgs() << "Found another devirtualization after hitting the max "
                      "number of repetitions ("
                   << MaxIterations << ") on SCC: " << *C << "\n");
        PA.intersect(std::move(PassPA));
        break;
      }

      LLVM_DEBUG(
          dbgs()
          << "Repeating an SCC pass after finding a devirtualization in: " << *C
          << "\n");

      // Move over the new call counts in preparation for iterating.
      CallCounts = std::move(NewCallCounts);

      // Update the analysis manager with each run and intersect the total set
      // of preserved analyses so we're ready to iterate.
      AM.invalidate(*C, PassPA);
      PA.intersect(std::move(PassPA));
    }

    // Note that we don't add any preserved entries here unlike a more normal
    // "pass manager" because we only handle invalidation *between* iterations,
    // not after the last iteration.
    return PA;
  }

private:
  PassT Pass;
  int MaxIterations;
};
/// A function to deduce an SCC pass type and wrap it in the
/// templated adaptor.
template <typename PassT>
DevirtSCCRepeatedPass<PassT> createDevirtSCCRepeatedPass(PassT Pass,
                                                         int MaxIterations) {
  return DevirtSCCRepeatedPass<PassT>(std::move(Pass), MaxIterations);
}
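
// A usage sketch for the repetition helper above; `MySCCPass` and the
// iteration bound are illustrative only, not defined by this header:
//
//   CGSCCPassManager CGPM;
//   CGPM.addPass(createDevirtSCCRepeatedPass(MySCCPass(), /*MaxIterations=*/4));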
// Clear out the debug logging macro.
#undef DEBUG_TYPE

} // end namespace llvm

#endif // LLVM_ANALYSIS_CGSCCPASSMANAGER_H