/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
/* vim: set ts=8 sts=2 et sw=2 tw=80: */
/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

//
// This file implements a garbage-cycle collector based on the paper
//
//   Concurrent Cycle Collection in Reference Counted Systems
//   Bacon & Rajan (2001), ECOOP 2001 / Springer LNCS vol 2072
//
// We are not using the concurrent or acyclic cases of that paper; so
// the green, red and orange colors are not used.
//
// The collector is based on tracking pointers of four colors:
//
// Black nodes are definitely live. If we ever determine a node is
// black, it's OK to forget about it and drop it from our records.
//
// White nodes are definitely garbage cycles. Once we finish with our
// scanning, we unlink all the white nodes and expect that by
// unlinking them they will self-destruct (since a garbage cycle is
// only keeping itself alive with internal links, by definition).
//
// Snow-white is an addition to the original algorithm. A snow-white node
// has reference count zero and is just waiting for deletion.
//
// Grey nodes are being scanned. Nodes that turn grey will turn
// either black if we determine that they're live, or white if we
// determine that they're a garbage cycle. After the main collection
// algorithm there should be no grey nodes.
//
// Purple nodes are *candidates* for being scanned. They are nodes we
// haven't begun scanning yet because they're not old enough, or we're
// still partway through the algorithm.
//
// XPCOM objects participating in garbage-cycle collection are obliged
// to inform us when they ought to turn purple; that is, when their
// refcount transitions from N+1 -> N, for nonzero N. Furthermore we
// require that *after* an XPCOM object has informed us of turning
// purple, they will tell us when they either transition back to being
// black (incremented refcount) or are ultimately deleted.
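//
// As a rough sketch of one collection (illustrative pseudocode, not the
// actual code below):
//
//   for each purple candidate p:         // suspected garbage roots
//     traverse(p), marking reached nodes grey and counting, for each grey
//     node, how many references to it come from inside the graph;
//   for each grey node n:
//     if (n.refcount > n.internalRefs)   // referenced from outside the graph
//       mark n (and everything it reaches) black;
//   remaining grey nodes become white;   // pure garbage cycles
//   unlink all white nodes, letting them self-destruct.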
//
// Incremental cycle collection
//
// Beyond the simple state machine required to implement incremental
// collection, the CC needs to be able to compensate for things the browser
// is doing during the collection. There are two kinds of problems. For each
// of these, there are two cases to deal with: purple-buffered C++ objects
// and JS objects.
//
// The first problem is that an object in the CC's graph can become garbage.
// This is bad because the CC touches the objects in its graph at every
// stage of its operation.
//
// All cycle collected C++ objects that die during a cycle collection
// will end up actually getting deleted by the SnowWhiteKiller. Before
// the SWK deletes an object, it checks if an ICC is running, and if so,
// if the object is in the graph. If it is, the CC clears mPointer and
// mParticipant so it does not point to the raw object any more. Because
// objects could die any time the CC returns to the mutator, any time the CC
// accesses a PtrInfo it must perform a null check on mParticipant to
// ensure the object has not gone away.
//
// JS objects don't always run finalizers, so the CC can't remove them from
// the graph when they die. Fortunately, JS objects can only die during a GC,
// so if a GC is begun during an ICC, the browser synchronously finishes off
// the ICC, which clears the entire CC graph. If the GC and CC are scheduled
// properly, this should be rare.
//
// The second problem is that objects in the graph can be changed, say by
// being addrefed or released, or by having a field updated, after the object
// has been added to the graph. The problem is that ICC can miss a newly
// created reference to an object, and end up unlinking an object that is
// actually alive.
//
// The basic idea of the solution, from "An on-the-fly Reference Counting
// Garbage Collector for Java" by Levanoni and Petrank, is to notice if an
// object has had an additional reference to it created during the collection,
// and if so, don't collect it during the current collection. This avoids
// having to rerun the scan as in Bacon & Rajan 2001.
//
// For cycle collected C++ objects, we modify AddRef to place the object in
// the purple buffer, in addition to Release. Then, in the CC, we treat any
// objects in the purple buffer as being alive, after graph building has
// completed. Because they are in the purple buffer, they will be suspected
// in the next CC, so there's no danger of leaks. This is imprecise, because
// we will treat as live an object that has been Released but not AddRefed
// during graph building, but that's probably rare enough that the additional
// bookkeeping overhead is not worthwhile.
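//
// A minimal sketch of that AddRef-side hook (hypothetical names; the real
// logic lives in nsCycleCollectingAutoRefCnt):
//
//   MozRefCountType AddRef() {
//     ++mRefCnt;
//     SuspectInCC(this);  // hypothetical helper: re-enter the purple buffer
//                         // so an in-flight ICC treats |this| as live
//     return mRefCnt;
//   }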
//
// For JS objects, the cycle collector is only looking at gray objects. If a
// gray object is touched during ICC, it will be made black by UnmarkGray.
// Thus, if a JS object has become black during the ICC, we treat it as live.
// Merged JS zones have to be handled specially: we scan all zone globals.
// If any are black, we treat the zone as being black.
//
// Safety
//
// An XPCOM object is either scan-safe or scan-unsafe, purple-safe or
// purple-unsafe.
//
// An nsISupports object is scan-safe if:
//
//  - It can be QI'ed to |nsXPCOMCycleCollectionParticipant|, though
//    this operation loses ISupports identity (like nsIClassInfo).
//  - Additionally, the operation |traverse| on the resulting
//    nsXPCOMCycleCollectionParticipant does not cause *any* refcount
//    adjustment to occur (no AddRef / Release calls).
//
// A non-nsISupports ("native") object is scan-safe by explicitly
// providing its nsCycleCollectionParticipant.
//
// An object is purple-safe if it satisfies the following properties:
//
//  - The object is scan-safe.
//
// When we receive a pointer |ptr| via
// |nsCycleCollector::suspect(ptr)|, we assume it is purple-safe. We
// can check the scan-safety, but have no way to ensure the
// purple-safety; objects must obey, or else the entire system falls
// apart. Don't involve an object in this scheme if you can't
// guarantee its purple-safety. The easiest way to ensure that an
// object is purple-safe is to use nsCycleCollectingAutoRefCnt.
//
// When we have a scannable set of purple nodes ready, we begin
// our walks. During the walks, the nodes we |traverse| should only
// feed us more scan-safe nodes, and should not adjust the refcounts
// of those nodes.
//
// We do not |AddRef| or |Release| any objects during scanning. We
// rely on the purple-safety of the roots that call |suspect| to
// hold, such that we will clear the pointer from the purple buffer
// entry to the object before it is destroyed. The pointers that are
// merely scan-safe we hold only for the duration of scanning, and
// there should be no objects released from the scan-safe set during
// the scan.
//
// We *do* call |Root| and |Unroot| on every white object, on
// either side of the calls to |Unlink|. This keeps the set of white
// objects alive during the unlinking.
//
#if !defined(__MINGW32__)
#  ifdef WIN32
#    include <crtdbg.h>
#    include <errno.h>
#  endif
#endif

#include "base/process_util.h"

#include "mozilla/ArrayUtils.h"
#include "mozilla/AutoRestore.h"
#include "mozilla/CycleCollectedJSContext.h"
#include "mozilla/CycleCollectedJSRuntime.h"
#include "mozilla/CycleCollectorStats.h"
#include "mozilla/DebugOnly.h"
#include "mozilla/HashFunctions.h"
#include "mozilla/HashTable.h"
#include "mozilla/HoldDropJSObjects.h"
#include "mozilla/Maybe.h"
/* This must occur *after* base/process_util.h to avoid typedef conflicts. */
#include <stdint.h>
#include <stdio.h>

#include <utility>

#include "js/SliceBudget.h"
#include "mozilla/Attributes.h"
#include "mozilla/Likely.h"
#include "mozilla/LinkedList.h"
#include "mozilla/MemoryReporting.h"
#include "mozilla/MruCache.h"
#include "mozilla/PoisonIOInterposer.h"
#include "mozilla/ProfilerLabels.h"
#include "mozilla/ProfilerMarkers.h"
#include "mozilla/SegmentedVector.h"
#include "mozilla/Telemetry.h"
#include "mozilla/ThreadLocal.h"
#include "mozilla/UniquePtr.h"
#include "mozilla/Unused.h"
#include "nsCycleCollectionNoteRootCallback.h"
#include "nsCycleCollectionParticipant.h"
#include "nsCycleCollector.h"
#include "nsDeque.h"
#include "nsDumpUtils.h"
#include "nsExceptionHandler.h"
#include "nsIConsoleService.h"
#include "nsICycleCollectorListener.h"
#include "nsIFile.h"
#include "nsIMemoryReporter.h"
#include "nsISerialEventTarget.h"
#include "nsPrintfCString.h"
#include "nsTArray.h"
#include "nsThreadUtils.h"
#include "nsXULAppAPI.h"
#include "prenv.h"
#include "xpcpublic.h"
using namespace mozilla;

using JS::SliceBudget;

struct NurseryPurpleBufferEntry {
  void* mPtr;
  nsCycleCollectionParticipant* mParticipant;
  nsCycleCollectingAutoRefCnt* mRefCnt;
};

#define NURSERY_PURPLE_BUFFER_SIZE 2048
bool gNurseryPurpleBufferEnabled = true;
NurseryPurpleBufferEntry gNurseryPurpleBufferEntry[NURSERY_PURPLE_BUFFER_SIZE];
uint32_t gNurseryPurpleBufferEntryCount = 0;
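
// The nursery purple buffer is a small, main-thread-only staging array for
// Suspect() calls: entries accumulate here cheaply and are moved into the
// real purple buffer (see ClearNurseryPurpleBuffer) once the array fills up
// or the collector needs an up-to-date view of the suspects.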
void ClearNurseryPurpleBuffer();

static void SuspectUsingNurseryPurpleBuffer(
    void* aPtr, nsCycleCollectionParticipant* aCp,
    nsCycleCollectingAutoRefCnt* aRefCnt) {
  MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
  MOZ_ASSERT(gNurseryPurpleBufferEnabled);
  if (gNurseryPurpleBufferEntryCount == NURSERY_PURPLE_BUFFER_SIZE) {
    ClearNurseryPurpleBuffer();
  }

  gNurseryPurpleBufferEntry[gNurseryPurpleBufferEntryCount] = {aPtr, aCp,
                                                               aRefCnt};
  ++gNurseryPurpleBufferEntryCount;
}
// #define COLLECT_TIME_DEBUG

// Enable assertions that are useful for diagnosing errors in graph
// construction.
// #define DEBUG_CC_GRAPH

#define DEFAULT_SHUTDOWN_COLLECTIONS 5

// One to do the freeing, then another to detect there is no more work to do.
#define NORMAL_SHUTDOWN_COLLECTIONS 2

// Cycle collector environment variables
//
// MOZ_CC_LOG_ALL: If defined, always log cycle collector heaps.
//
// MOZ_CC_LOG_SHUTDOWN: If defined, log cycle collector heaps at shutdown.
//
// MOZ_CC_LOG_SHUTDOWN_SKIP: If set to a non-negative integer value n, then
// skip logging for the first n shutdown CCs. This implies MOZ_CC_LOG_SHUTDOWN.
// The first log or two are much larger than the rest, so it can be useful to
// reduce the total size of logs if you know already that the initial logs
// aren't interesting.
//
// MOZ_CC_LOG_THREAD: If set to "main", only automatically log main thread
// CCs. If set to "worker", only automatically log worker CCs. If set to "all",
// log either. The default value is "all". This must be used with either
// MOZ_CC_LOG_ALL or MOZ_CC_LOG_SHUTDOWN for it to do anything.
//
// MOZ_CC_LOG_PROCESS: If set to "main", only automatically log main process
// CCs. If set to "content", only automatically log tab CCs. If set to "all",
// log everything. The default value is "all". This must be used with either
// MOZ_CC_LOG_ALL or MOZ_CC_LOG_SHUTDOWN for it to do anything.
//
// MOZ_CC_ALL_TRACES: If set to "all", any cycle collector
// logging done will be WantAllTraces, which disables
// various cycle collector optimizations to give a fuller picture of
// the heap. If set to "shutdown", only shutdown logging will be WantAllTraces.
// The default is none.
//
// MOZ_CC_RUN_DURING_SHUTDOWN: In non-DEBUG builds, if this is set,
// run cycle collections at shutdown.
//
// MOZ_CC_LOG_DIRECTORY: The directory in which logs are placed (such as
// logs from MOZ_CC_LOG_ALL and MOZ_CC_LOG_SHUTDOWN, or other uses
// of nsICycleCollectorListener)
//
// MOZ_CC_DISABLE_GC_LOG: If defined, don't make a GC log whenever we make a
// cycle collector log. This can be useful for leaks that go away when shutdown
// gets slower, when the JS heap is not involved in the leak. The default is to
// make the GC log.
//
// Various parameters of this collector can be tuned using environment
// variables.
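//
// For example, a hypothetical invocation that logs every main-thread CC of
// the parent process into /tmp/cc-logs might look like:
//
//   mkdir -p /tmp/cc-logs
//   MOZ_CC_LOG_ALL=1 MOZ_CC_LOG_THREAD=main MOZ_CC_LOG_PROCESS=main \
//     MOZ_CC_LOG_DIRECTORY=/tmp/cc-logs ./mach run
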
struct nsCycleCollectorParams {
  bool mLogAll;
  bool mLogShutdown;
  bool mAllTracesAll;
  bool mAllTracesShutdown;
  bool mLogThisThread;
  bool mLogGC;
  int32_t mLogShutdownSkip = 0;

  nsCycleCollectorParams()
      : mLogAll(PR_GetEnv("MOZ_CC_LOG_ALL") != nullptr),
        mLogShutdown(PR_GetEnv("MOZ_CC_LOG_SHUTDOWN") != nullptr),
        mAllTracesAll(false),
        mAllTracesShutdown(false),
        mLogGC(!PR_GetEnv("MOZ_CC_DISABLE_GC_LOG")) {
    if (const char* lssEnv = PR_GetEnv("MOZ_CC_LOG_SHUTDOWN_SKIP")) {
      mLogShutdown = true;
      nsDependentCString lssString(lssEnv);
      nsresult rv;
      int32_t lss = lssString.ToInteger(&rv);
      if (NS_SUCCEEDED(rv) && lss >= 0) {
        mLogShutdownSkip = lss;
      }
    }

    const char* logThreadEnv = PR_GetEnv("MOZ_CC_LOG_THREAD");
    bool threadLogging = true;
    if (logThreadEnv && !!strcmp(logThreadEnv, "all")) {
      if (NS_IsMainThread()) {
        threadLogging = !strcmp(logThreadEnv, "main");
      } else {
        threadLogging = !strcmp(logThreadEnv, "worker");
      }
    }

    const char* logProcessEnv = PR_GetEnv("MOZ_CC_LOG_PROCESS");
    bool processLogging = true;
    if (logProcessEnv && !!strcmp(logProcessEnv, "all")) {
      switch (XRE_GetProcessType()) {
        case GeckoProcessType_Default:
          processLogging = !strcmp(logProcessEnv, "main");
          break;
        case GeckoProcessType_Content:
          processLogging = !strcmp(logProcessEnv, "content");
          break;
        default:
          processLogging = false;
          break;
      }
    }
    mLogThisThread = threadLogging && processLogging;

    const char* allTracesEnv = PR_GetEnv("MOZ_CC_ALL_TRACES");
    if (allTracesEnv) {
      if (!strcmp(allTracesEnv, "all")) {
        mAllTracesAll = true;
      } else if (!strcmp(allTracesEnv, "shutdown")) {
        mAllTracesShutdown = true;
      }
    }
  }

  // aShutdownCount is how many shutdown CCs we've started.
  // For non-shutdown CCs, we'll pass in 0.
  // For the first shutdown CC, we'll pass in 1.
  bool LogThisCC(int32_t aShutdownCount) {
    if (mLogAll) {
      return mLogThisThread;
    }
    if (aShutdownCount == 0 || !mLogShutdown) {
      return false;
    }
    if (aShutdownCount <= mLogShutdownSkip) {
      return false;
    }
    return mLogThisThread;
  }

  bool AllTracesThisCC(bool aIsShutdown) {
    return mAllTracesAll || (aIsShutdown && mAllTracesShutdown);
  }

  bool LogThisGC() const { return mLogGC; }
};
#ifdef COLLECT_TIME_DEBUG
class TimeLog {
 public:
  TimeLog() : mLastCheckpoint(TimeStamp::Now()) {}

  void Checkpoint(const char* aEvent) {
    TimeStamp now = TimeStamp::Now();
    double dur = (now - mLastCheckpoint).ToMilliseconds();
    if (dur >= 0.5) {
      printf("cc: %s took %.1fms\n", aEvent, dur);
    }
    mLastCheckpoint = now;
  }

 private:
  TimeStamp mLastCheckpoint;
};
#else
class TimeLog {
 public:
  TimeLog() = default;
  void Checkpoint(const char* aEvent) {}
};
#endif
////////////////////////////////////////////////////////////////////////
// Base types
////////////////////////////////////////////////////////////////////////

class PtrInfo;

class EdgePool {
 public:
  // EdgePool allocates arrays of void*, primarily to hold PtrInfo*.
  // However, at the end of a block, the last two pointers are a null
  // and then a void** pointing to the next block. This allows
  // EdgePool::Iterators to be a single word but still capable of crossing
  // block boundaries.
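  //
  // A rough picture of one block (see EdgeBlockSize below):
  //
  //   [ PtrInfo* | PtrInfo* | ... | PtrInfo* | nullptr  | EdgeBlock* ]
  //     ^ Start()                              ^ End()    ^ Next()
  //
  // An iterator that hits the null sentinel hops through the trailing
  // block pointer into the next block's first entry.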
  EdgePool() {
    mSentinelAndBlocks[0].block = nullptr;
    mSentinelAndBlocks[1].block = nullptr;
  }

  ~EdgePool() {
    MOZ_ASSERT(!mSentinelAndBlocks[0].block && !mSentinelAndBlocks[1].block,
               "Didn't call Clear()?");
  }

  void Clear() {
    EdgeBlock* b = EdgeBlocks();
    while (b) {
      EdgeBlock* next = b->Next();
      delete b;
      b = next;
    }

    mSentinelAndBlocks[0].block = nullptr;
    mSentinelAndBlocks[1].block = nullptr;
  }

#ifdef DEBUG
  bool IsEmpty() {
    return !mSentinelAndBlocks[0].block && !mSentinelAndBlocks[1].block;
  }
#endif

 private:
  struct EdgeBlock;
  union PtrInfoOrBlock {
    // Use a union to avoid reinterpret_cast and the ensuing
    // potential aliasing bugs.
    PtrInfo* ptrInfo;
    EdgeBlock* block;
  };
  struct EdgeBlock {
    enum { EdgeBlockSize = 16 * 1024 };

    PtrInfoOrBlock mPointers[EdgeBlockSize];
    EdgeBlock() {
      mPointers[EdgeBlockSize - 2].block = nullptr;  // sentinel
      mPointers[EdgeBlockSize - 1].block = nullptr;  // next block pointer
    }
    EdgeBlock*& Next() { return mPointers[EdgeBlockSize - 1].block; }
    PtrInfoOrBlock* Start() { return &mPointers[0]; }
    PtrInfoOrBlock* End() { return &mPointers[EdgeBlockSize - 2]; }
  };

  // Store the null sentinel so that we can have valid iterators
  // before adding any edges and without adding any blocks.
  PtrInfoOrBlock mSentinelAndBlocks[2];

  EdgeBlock*& EdgeBlocks() { return mSentinelAndBlocks[1].block; }
  EdgeBlock* EdgeBlocks() const { return mSentinelAndBlocks[1].block; }

 public:
  class Iterator {
   public:
    Iterator() : mPointer(nullptr) {}
    explicit Iterator(PtrInfoOrBlock* aPointer) : mPointer(aPointer) {}
    Iterator(const Iterator& aOther) = default;

    Iterator& operator++() {
      if (!mPointer->ptrInfo) {
        // Null pointer is a sentinel for link to the next block.
        mPointer = (mPointer + 1)->block->mPointers;
      }
      ++mPointer;
      return *this;
    }

    PtrInfo* operator*() const {
      if (!mPointer->ptrInfo) {
        // Null pointer is a sentinel for link to the next block.
        return (mPointer + 1)->block->mPointers->ptrInfo;
      }
      return mPointer->ptrInfo;
    }
    bool operator==(const Iterator& aOther) const {
      return mPointer == aOther.mPointer;
    }
    bool operator!=(const Iterator& aOther) const {
      return mPointer != aOther.mPointer;
    }

#ifdef DEBUG_CC_GRAPH
    bool Initialized() const { return mPointer != nullptr; }
#endif

   private:
    PtrInfoOrBlock* mPointer;
  };

  class Builder;
  friend class Builder;
  class Builder {
   public:
    explicit Builder(EdgePool& aPool)
        : mCurrent(&aPool.mSentinelAndBlocks[0]),
          mBlockEnd(&aPool.mSentinelAndBlocks[0]),
          mNextBlockPtr(&aPool.EdgeBlocks()) {}

    Iterator Mark() { return Iterator(mCurrent); }

    void Add(PtrInfo* aEdge) {
      if (mCurrent == mBlockEnd) {
        EdgeBlock* b = new EdgeBlock();
        *mNextBlockPtr = b;
        mCurrent = b->Start();
        mBlockEnd = b->End();
        mNextBlockPtr = &b->Next();
      }
      (mCurrent++)->ptrInfo = aEdge;
    }

   private:
    // mBlockEnd points to space for null sentinel
    PtrInfoOrBlock* mCurrent;
    PtrInfoOrBlock* mBlockEnd;
    EdgeBlock** mNextBlockPtr;
  };

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const {
    size_t n = 0;
    EdgeBlock* b = EdgeBlocks();
    while (b) {
      n += aMallocSizeOf(b);
      b = b->Next();
    }
    return n;
  }
};

#ifdef DEBUG_CC_GRAPH
#  define CC_GRAPH_ASSERT(b) MOZ_ASSERT(b)
#else
#  define CC_GRAPH_ASSERT(b)
#endif
#define CC_TELEMETRY(_name, _value)                                            \
  do {                                                                         \
    if (NS_IsMainThread()) {                                                   \
      Telemetry::Accumulate(Telemetry::CYCLE_COLLECTOR##_name, _value);        \
    } else {                                                                   \
      Telemetry::Accumulate(Telemetry::CYCLE_COLLECTOR_WORKER##_name, _value); \
    }                                                                          \
  } while (0)
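
// Usage sketch (the probe suffix here is illustrative): a call like
// CC_TELEMETRY(_FINISH_IGC, finishedIGC) accumulates into
// Telemetry::CYCLE_COLLECTOR_FINISH_IGC on the main thread and into
// Telemetry::CYCLE_COLLECTOR_WORKER_FINISH_IGC on worker threads.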
enum NodeColor { black, white, grey };

// This structure should be kept as small as possible; we may expect
// hundreds of thousands of them to be allocated and touched
// repeatedly during each cycle collection.
class PtrInfo final {
 public:
  // mParticipant knows a more concrete type.
  void* mPointer;
  nsCycleCollectionParticipant* mParticipant;
  uint32_t mColor : 2;
  uint32_t mInternalRefs : 30;
  uint32_t mRefCount;

 private:
  EdgePool::Iterator mFirstChild;

  static const uint32_t kInitialRefCount = UINT32_MAX - 1;

 public:
  PtrInfo(void* aPointer, nsCycleCollectionParticipant* aParticipant)
      : mPointer(aPointer),
        mParticipant(aParticipant),
        mColor(grey),
        mInternalRefs(0),
        mRefCount(kInitialRefCount) {
    MOZ_ASSERT(aParticipant);

    // We initialize mRefCount to a large non-zero value so
    // that it doesn't look like a JS object to the cycle collector
    // in the case where the object dies before being traversed.
    MOZ_ASSERT(!IsGrayJS() && !IsBlackJS());
  }

  // Allow NodePool::NodeBlock's constructor to compile.
  PtrInfo()
      : mPointer{nullptr},
        mParticipant{nullptr},
        mColor{0},
        mInternalRefs{0},
        mRefCount{0} {
    MOZ_ASSERT_UNREACHABLE("should never be called");
  }

  bool IsGrayJS() const { return mRefCount == 0; }

  bool IsBlackJS() const { return mRefCount == UINT32_MAX; }

  bool WasTraversed() const { return mRefCount != kInitialRefCount; }

  EdgePool::Iterator FirstChild() const {
    CC_GRAPH_ASSERT(mFirstChild.Initialized());
    return mFirstChild;
  }

  // this PtrInfo must be part of a NodePool
  EdgePool::Iterator LastChild() const {
    CC_GRAPH_ASSERT((this + 1)->mFirstChild.Initialized());
    return (this + 1)->mFirstChild;
  }

  void SetFirstChild(EdgePool::Iterator aFirstChild) {
    CC_GRAPH_ASSERT(aFirstChild.Initialized());
    mFirstChild = aFirstChild;
  }

  // this PtrInfo must be part of a NodePool
  void SetLastChild(EdgePool::Iterator aLastChild) {
    CC_GRAPH_ASSERT(aLastChild.Initialized());
    (this + 1)->mFirstChild = aLastChild;
  }

  void AnnotatedReleaseAssert(bool aCondition, const char* aMessage);
};

void PtrInfo::AnnotatedReleaseAssert(bool aCondition, const char* aMessage) {
  if (aCondition) {
    return;
  }

  const char* piName = "Unknown";
  if (mParticipant) {
    piName = mParticipant->ClassName();
  }
  nsPrintfCString msg("%s, for class %s", aMessage, piName);
  NS_WARNING(msg.get());
  CrashReporter::RecordAnnotationNSCString(
      CrashReporter::Annotation::CycleCollector, msg);

  MOZ_CRASH();
}
/**
 * A structure designed to be used like a linked list of PtrInfo, except
 * it allocates many PtrInfos at a time.
 */
class NodePool {
 private:
  // The -2 allows us to use |NodeBlockSize + 1| for |mEntries|, and fit
  // |mNext|, all without causing slop.
  enum { NodeBlockSize = 4 * 1024 - 2 };

  struct NodeBlock {
    // We create and destroy NodeBlock using moz_xmalloc/free rather than new
    // and delete to avoid calling its constructor and destructor.
    NodeBlock() : mNext{nullptr} {
      MOZ_ASSERT_UNREACHABLE("should never be called");

      // Ensure NodeBlock is the right size (see the comment on NodeBlockSize
      // above).
      static_assert(
          sizeof(NodeBlock) == 81904 ||  // 32-bit; equals 19.996 x 4 KiB pages
              sizeof(NodeBlock) == 131048,  // 64-bit; equals 31.994 x 4 KiB pages
          "ill-sized NodeBlock");
    }
    ~NodeBlock() { MOZ_ASSERT_UNREACHABLE("should never be called"); }

    NodeBlock* mNext;
    PtrInfo mEntries[NodeBlockSize + 1];  // +1 to store last child of last node
  };

 public:
  NodePool() : mBlocks(nullptr), mLast(nullptr) {}

  ~NodePool() { MOZ_ASSERT(!mBlocks, "Didn't call Clear()?"); }

  void Clear() {
    NodeBlock* b = mBlocks;
    while (b) {
      NodeBlock* n = b->mNext;
      free(b);
      b = n;
    }

    mBlocks = nullptr;
    mLast = nullptr;
  }

#ifdef DEBUG
  bool IsEmpty() { return !mBlocks && !mLast; }
#endif

  class Builder;
  friend class Builder;
  class Builder {
   public:
    explicit Builder(NodePool& aPool)
        : mNextBlock(&aPool.mBlocks), mNext(aPool.mLast), mBlockEnd(nullptr) {
      MOZ_ASSERT(!aPool.mBlocks && !aPool.mLast, "pool not empty");
    }

    PtrInfo* Add(void* aPointer, nsCycleCollectionParticipant* aParticipant) {
      if (mNext == mBlockEnd) {
        NodeBlock* block = static_cast<NodeBlock*>(malloc(sizeof(NodeBlock)));
        if (!block) {
          return nullptr;
        }

        *mNextBlock = block;
        mNext = block->mEntries;
        mBlockEnd = block->mEntries + NodeBlockSize;
        block->mNext = nullptr;
        mNextBlock = &block->mNext;
      }
      return new (mozilla::KnownNotNull, mNext++)
          PtrInfo(aPointer, aParticipant);
    }

   private:
    NodeBlock** mNextBlock;
    PtrInfo*& mNext;
    PtrInfo* mBlockEnd;
  };

  class Enumerator;
  friend class Enumerator;
  class Enumerator {
   public:
    explicit Enumerator(NodePool& aPool)
        : mFirstBlock(aPool.mBlocks),
          mCurBlock(nullptr),
          mNext(nullptr),
          mBlockEnd(nullptr),
          mLast(aPool.mLast) {}

    bool IsDone() const { return mNext == mLast; }

    bool AtBlockEnd() const { return mNext == mBlockEnd; }

    PtrInfo* GetNext() {
      MOZ_ASSERT(!IsDone(), "calling GetNext when done");
      if (mNext == mBlockEnd) {
        NodeBlock* nextBlock = mCurBlock ? mCurBlock->mNext : mFirstBlock;
        mNext = nextBlock->mEntries;
        mBlockEnd = mNext + NodeBlockSize;
        mCurBlock = nextBlock;
      }
      return mNext++;
    }

   private:
    // mFirstBlock is a reference to allow an Enumerator to be constructed
    // for an empty graph.
    NodeBlock*& mFirstBlock;
    NodeBlock* mCurBlock;
    // mNext is the next value we want to return, unless mNext == mBlockEnd
    // NB: mLast is a reference to allow enumerating while building!
    PtrInfo* mNext;
    PtrInfo* mBlockEnd;
    PtrInfo*& mLast;
  };

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const {
    // We don't measure the things pointed to by mEntries[] because those
    // pointers are non-owning.
    size_t n = 0;
    NodeBlock* b = mBlocks;
    while (b) {
      n += aMallocSizeOf(b);
      b = b->mNext;
    }
    return n;
  }

 private:
  NodeBlock* mBlocks;
  PtrInfo* mLast;
};
struct PtrToNodeHashPolicy {
  using Key = PtrInfo*;
  using Lookup = void*;

  static js::HashNumber hash(const Lookup& aLookup) {
    return mozilla::HashGeneric(aLookup);
  }

  static bool match(const Key& aKey, const Lookup& aLookup) {
    return aKey->mPointer == aLookup;
  }
};

struct WeakMapping {
  // map and key will be null if the corresponding objects are GC marked
  PtrInfo* mMap;
  PtrInfo* mKey;
  PtrInfo* mKeyDelegate;
  PtrInfo* mVal;
};
class CCGraphBuilder;

struct CCGraph {
  NodePool mNodes;
  EdgePool mEdges;
  nsTArray<WeakMapping> mWeakMaps;
  uint32_t mRootCount;

 private:
  friend CCGraphBuilder;

  mozilla::HashSet<PtrInfo*, PtrToNodeHashPolicy> mPtrInfoMap;

  bool mOutOfMemory;

  static const uint32_t kInitialMapLength = 16384;

 public:
  CCGraph()
      : mRootCount(0), mPtrInfoMap(kInitialMapLength), mOutOfMemory(false) {}

  ~CCGraph() = default;

  void Init() { MOZ_ASSERT(IsEmpty(), "Failed to call CCGraph::Clear"); }

  void Clear() {
    mNodes.Clear();
    mEdges.Clear();
    mWeakMaps.Clear();
    mRootCount = 0;
    mPtrInfoMap.clearAndCompact();
    mOutOfMemory = false;
  }

#ifdef DEBUG
  bool IsEmpty() {
    return mNodes.IsEmpty() && mEdges.IsEmpty() && mWeakMaps.IsEmpty() &&
           mRootCount == 0 && mPtrInfoMap.empty();
  }
#endif

  PtrInfo* FindNode(void* aPtr);
  void RemoveObjectFromMap(void* aObject);

  uint32_t MapCount() const { return mPtrInfoMap.count(); }

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const {
    size_t n = 0;

    n += mNodes.SizeOfExcludingThis(aMallocSizeOf);
    n += mEdges.SizeOfExcludingThis(aMallocSizeOf);

    // We don't measure what the WeakMappings point to, because the
    // pointers are non-owning.
    n += mWeakMaps.ShallowSizeOfExcludingThis(aMallocSizeOf);

    n += mPtrInfoMap.shallowSizeOfExcludingThis(aMallocSizeOf);

    return n;
  }
};

PtrInfo* CCGraph::FindNode(void* aPtr) {
  auto p = mPtrInfoMap.lookup(aPtr);
  return p ? *p : nullptr;
}

void CCGraph::RemoveObjectFromMap(void* aObj) {
  auto p = mPtrInfoMap.lookup(aObj);
  if (p) {
    PtrInfo* pinfo = *p;
    pinfo->mPointer = nullptr;
    pinfo->mParticipant = nullptr;
    mPtrInfoMap.remove(p);
  }
}

static nsISupports* CanonicalizeXPCOMParticipant(nsISupports* aIn) {
  nsISupports* out = nullptr;
  aIn->QueryInterface(NS_GET_IID(nsCycleCollectionISupports),
                      reinterpret_cast<void**>(&out));
  return out;
}
struct nsPurpleBufferEntry {
  nsPurpleBufferEntry(void* aObject, nsCycleCollectingAutoRefCnt* aRefCnt,
                      nsCycleCollectionParticipant* aParticipant)
      : mObject(aObject), mRefCnt(aRefCnt), mParticipant(aParticipant) {}

  nsPurpleBufferEntry(nsPurpleBufferEntry&& aOther)
      : mObject(nullptr), mRefCnt(nullptr), mParticipant(nullptr) {
    Swap(aOther);
  }

  void Swap(nsPurpleBufferEntry& aOther) {
    std::swap(mObject, aOther.mObject);
    std::swap(mRefCnt, aOther.mRefCnt);
    std::swap(mParticipant, aOther.mParticipant);
  }

  void Clear() {
    mRefCnt->RemoveFromPurpleBuffer();
    mRefCnt = nullptr;
    mObject = nullptr;
    mParticipant = nullptr;
  }

  ~nsPurpleBufferEntry() {
    if (mRefCnt) {
      mRefCnt->RemoveFromPurpleBuffer();
    }
  }

  void* mObject;
  nsCycleCollectingAutoRefCnt* mRefCnt;
  nsCycleCollectionParticipant* mParticipant;  // nullptr for nsISupports
};
class nsCycleCollector;

struct nsPurpleBuffer {
 private:
  uint32_t mCount;

  // Try to match the size of a jemalloc bucket, to minimize slop bytes.
  // - On 32-bit platforms sizeof(nsPurpleBufferEntry) is 12, so mEntries'
  //   Segment is 16,372 bytes.
  // - On 64-bit platforms sizeof(nsPurpleBufferEntry) is 24, so mEntries'
  //   Segment is 32,760 bytes.
  static const uint32_t kEntriesPerSegment = 1365;
  static const size_t kSegmentSize =
      sizeof(nsPurpleBufferEntry) * kEntriesPerSegment;
  typedef SegmentedVector<nsPurpleBufferEntry, kSegmentSize,
                          InfallibleAllocPolicy>
      PurpleBufferVector;
  PurpleBufferVector mEntries;

 public:
  nsPurpleBuffer() : mCount(0) {
    static_assert(
        sizeof(PurpleBufferVector::Segment) == 16372 ||     // 32-bit
            sizeof(PurpleBufferVector::Segment) == 32760 ||  // 64-bit
            sizeof(PurpleBufferVector::Segment) == 32744,    // 64-bit Windows
        "ill-sized nsPurpleBuffer::mEntries");
  }

  ~nsPurpleBuffer() = default;

  // This method compacts mEntries.
  template <class PurpleVisitor>
  void VisitEntries(PurpleVisitor& aVisitor) {
    Maybe<AutoRestore<bool>> ar;
    if (NS_IsMainThread()) {
      ar.emplace(gNurseryPurpleBufferEnabled);
      gNurseryPurpleBufferEnabled = false;
      ClearNurseryPurpleBuffer();
    }

    if (mEntries.IsEmpty()) {
      return;
    }

    uint32_t oldLength = mEntries.Length();
    uint32_t keptLength = 0;
    auto revIter = mEntries.IterFromLast();
    auto iter = mEntries.Iter();
    // After iteration this points to the first empty entry.
    auto firstEmptyIter = mEntries.Iter();
    auto iterFromLastEntry = mEntries.IterFromLast();
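    // Compaction strategy, as implemented below: walk forward with |iter|;
    // whenever an entry turns out to be empty after visiting, pull a live
    // entry back from the tail (|revIter|) into its slot. |firstEmptyIter|
    // trails behind as the boundary between kept entries and reclaimable
    // ones, so the tail can be trimmed with PopLastN at the end.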
    for (; !iter.Done(); iter.Next()) {
      nsPurpleBufferEntry& e = iter.Get();
      if (e.mObject) {
        if (!aVisitor.Visit(*this, &e)) {
          return;
        }
      }

      // Visit call above may have cleared the entry, or the entry was empty
      // already.
      if (!e.mObject) {
        // Try to find a non-empty entry from the end of the vector.
        for (; !revIter.Done(); revIter.Prev()) {
          nsPurpleBufferEntry& otherEntry = revIter.Get();
          if (&e == &otherEntry) {
            break;
          }
          if (otherEntry.mObject) {
            if (!aVisitor.Visit(*this, &otherEntry)) {
              return;
            }
            // Visit may have cleared otherEntry.
            if (otherEntry.mObject) {
              e.Swap(otherEntry);
              revIter.Prev();  // We've swapped this now empty entry.
              break;
            }
          }
        }
      }

      // Entry is non-empty even after the Visit call, ensure it is kept
      // in mEntries.
      if (e.mObject) {
        firstEmptyIter.Next();
        ++keptLength;
      }

      if (&e == &revIter.Get()) {
        break;
      }
    }

    // There were some empty entries.
    if (oldLength != keptLength) {
      // While visiting entries, some new ones were possibly added. This can
      // happen during CanSkip. Move all such new entries to be after other
      // entries. Note, we don't call Visit on newly added entries!
      if (&iterFromLastEntry.Get() != &mEntries.GetLast()) {
        iterFromLastEntry.Next();  // Now pointing to the first added entry.
        auto& iterForNewEntries = iterFromLastEntry;
        while (!iterForNewEntries.Done()) {
          MOZ_ASSERT(!firstEmptyIter.Done());
          MOZ_ASSERT(!firstEmptyIter.Get().mObject);
          firstEmptyIter.Get().Swap(iterForNewEntries.Get());
          firstEmptyIter.Next();
          iterForNewEntries.Next();
        }
      }

      mEntries.PopLastN(oldLength - keptLength);
    }
  }

  void FreeBlocks() {
    mCount = 0;
    mEntries.Clear();
  }

  void SelectPointers(CCGraphBuilder& aBuilder);

  // RemoveSkippable removes entries from the purple buffer synchronously
  // (1) if !aAsyncSnowWhiteFreeing and nsPurpleBufferEntry::mRefCnt is 0 or
  // (2) if nsXPCOMCycleCollectionParticipant::CanSkip() for the obj or
  // (3) if nsPurpleBufferEntry::mRefCnt->IsPurple() is false.
  // (4) If aRemoveChildlessNodes is true, then any nodes in the purple buffer
  //     that will have no children in the cycle collector graph will also be
  //     removed. CanSkip() may be run on these children.
  void RemoveSkippable(nsCycleCollector* aCollector, SliceBudget& aBudget,
                       bool aRemoveChildlessNodes, bool aAsyncSnowWhiteFreeing,
                       CC_ForgetSkippableCallback aCb);

  MOZ_ALWAYS_INLINE void Put(void* aObject, nsCycleCollectionParticipant* aCp,
                             nsCycleCollectingAutoRefCnt* aRefCnt) {
    nsPurpleBufferEntry entry(aObject, aRefCnt, aCp);
    Unused << mEntries.Append(std::move(entry));
    MOZ_ASSERT(!entry.mRefCnt, "Move didn't work!");
    ++mCount;
  }

  void Remove(nsPurpleBufferEntry* aEntry) {
    MOZ_ASSERT(mCount != 0, "must have entries");
    --mCount;
    aEntry->Clear();
  }

  uint32_t Count() const { return mCount; }

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const {
    return mEntries.SizeOfExcludingThis(aMallocSizeOf);
  }
};

static bool AddPurpleRoot(CCGraphBuilder& aBuilder, void* aRoot,
                          nsCycleCollectionParticipant* aParti);
struct SelectPointersVisitor {
  explicit SelectPointersVisitor(CCGraphBuilder& aBuilder)
      : mBuilder(aBuilder) {}

  bool Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry) {
    MOZ_ASSERT(aEntry->mObject, "Null object in purple buffer");
    MOZ_ASSERT(aEntry->mRefCnt->get() != 0,
               "SelectPointersVisitor: snow-white object in the purple buffer");
    if (!aEntry->mRefCnt->IsPurple() ||
        AddPurpleRoot(mBuilder, aEntry->mObject, aEntry->mParticipant)) {
      aBuffer.Remove(aEntry);
    }
    return true;
  }

 private:
  CCGraphBuilder& mBuilder;
};

void nsPurpleBuffer::SelectPointers(CCGraphBuilder& aBuilder) {
  SelectPointersVisitor visitor(aBuilder);
  VisitEntries(visitor);

  MOZ_ASSERT(mCount == 0, "AddPurpleRoot failed");
  if (mCount == 0) {
    FreeBlocks();
  }
}

enum ccPhase {
  IdlePhase,
  GraphBuildingPhase,
  ScanAndCollectWhitePhase,
  CleanupPhase
};

enum ccIsManual { CCIsNotManual = false, CCIsManual = true };
////////////////////////////////////////////////////////////////////////
// Top level structure for the cycle collector.
////////////////////////////////////////////////////////////////////////

class JSPurpleBuffer;

class nsCycleCollector : public nsIMemoryReporter {
 public:
  NS_DECL_ISUPPORTS
  NS_DECL_NSIMEMORYREPORTER

 private:
  bool mActivelyCollecting;
  bool mFreeingSnowWhite;
  // mScanInProgress should be false when we're collecting white objects.
  bool mScanInProgress;
  CycleCollectorResults mResults;
  TimeStamp mCollectionStart;

  CycleCollectedJSRuntime* mCCJSRuntime;

  ccPhase mIncrementalPhase;
  int32_t mShutdownCount = 0;
  CCGraph mGraph;
  UniquePtr<CCGraphBuilder> mBuilder;
  RefPtr<nsCycleCollectorLogger> mLogger;

#ifdef DEBUG
  nsISerialEventTarget* mEventTarget;
#endif

  nsCycleCollectorParams mParams;

  uint32_t mWhiteNodeCount;

  CC_BeforeUnlinkCallback mBeforeUnlinkCB;
  CC_ForgetSkippableCallback mForgetSkippableCB;

  nsPurpleBuffer mPurpleBuf;

  uint32_t mUnmergedNeeded;
  uint32_t mMergedInARow;

  RefPtr<JSPurpleBuffer> mJSPurpleBuffer;

 private:
  virtual ~nsCycleCollector();

 public:
  nsCycleCollector();

  void SetCCJSRuntime(CycleCollectedJSRuntime* aCCRuntime);
  void ClearCCJSRuntime();

  void SetBeforeUnlinkCallback(CC_BeforeUnlinkCallback aBeforeUnlinkCB) {
    CheckThreadSafety();
    mBeforeUnlinkCB = aBeforeUnlinkCB;
  }

  void SetForgetSkippableCallback(
      CC_ForgetSkippableCallback aForgetSkippableCB) {
    CheckThreadSafety();
    mForgetSkippableCB = aForgetSkippableCB;
  }

  void Suspect(void* aPtr, nsCycleCollectionParticipant* aCp,
               nsCycleCollectingAutoRefCnt* aRefCnt);
  void SuspectNurseryEntries();
  uint32_t SuspectedCount();
  void ForgetSkippable(SliceBudget& aBudget, bool aRemoveChildlessNodes,
                       bool aAsyncSnowWhiteFreeing);
  bool FreeSnowWhite(bool aUntilNoSWInPurpleBuffer);
  bool FreeSnowWhiteWithBudget(SliceBudget& aBudget);

  // This method assumes its argument is already canonicalized.
  void RemoveObjectFromGraph(void* aPtr);

  void PrepareForGarbageCollection();
  void FinishAnyCurrentCollection(CCReason aReason);

  bool Collect(CCReason aReason, ccIsManual aIsManual, SliceBudget& aBudget,
               nsICycleCollectorListener* aManualListener,
               bool aPreferShorterSlices = false);
  MOZ_CAN_RUN_SCRIPT
  void Shutdown(bool aDoCollect);

  bool IsIdle() const { return mIncrementalPhase == IdlePhase; }

  void SizeOfIncludingThis(mozilla::MallocSizeOf aMallocSizeOf,
                           size_t* aObjectSize, size_t* aGraphSize,
                           size_t* aPurpleBufferSize) const;

  JSPurpleBuffer* GetJSPurpleBuffer();

  CycleCollectedJSRuntime* Runtime() { return mCCJSRuntime; }

 private:
  void CheckThreadSafety();
  MOZ_CAN_RUN_SCRIPT
  void ShutdownCollect();

  void FixGrayBits(bool aIsShutdown, TimeLog& aTimeLog);
  bool IsIncrementalGCInProgress();
  void FinishAnyIncrementalGCInProgress();
  bool ShouldMergeZones(ccIsManual aIsManual);

  void BeginCollection(CCReason aReason, ccIsManual aIsManual,
                       nsICycleCollectorListener* aManualListener);
  void MarkRoots(SliceBudget& aBudget);
  void ScanRoots(bool aFullySynchGraphBuild);
  void ScanIncrementalRoots();
  void ScanWhiteNodes(bool aFullySynchGraphBuild);
  void ScanBlackNodes();
  void ScanWeakMaps();

  // returns whether anything was collected
  bool CollectWhite();

  void CleanupAfterCollection();
};
NS_IMPL_ISUPPORTS(nsCycleCollector, nsIMemoryReporter)

/**
 * GraphWalker is templatized over a Visitor class that must provide
 * the following two methods:
 *
 * bool ShouldVisitNode(PtrInfo const *pi);
 * void VisitNode(PtrInfo *pi);
 */
template <class Visitor>
class GraphWalker {
 private:
  Visitor mVisitor;

  void DoWalk(nsDeque<PtrInfo>& aQueue);

  void CheckedPush(nsDeque<PtrInfo>& aQueue, PtrInfo* aPi) {
    if (!aPi) {
      MOZ_CRASH();
    }
    if (!aQueue.Push(aPi, fallible)) {
      mVisitor.Failed();
    }
  }

 public:
  void Walk(PtrInfo* aPi);
  void WalkFromRoots(CCGraph& aGraph);
  // copy-constructing the visitor should be cheap, and less
  // indirection than using a reference
  explicit GraphWalker(const Visitor aVisitor) : mVisitor(aVisitor) {}
};
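
// For illustration, a hypothetical visitor that counts reachable nodes (not
// part of the actual collector; real visitors also provide Failed(), which
// CheckedPush calls on OOM):
//
//   struct CountingVisitor {
//     uint32_t mCount = 0;
//     bool ShouldVisitNode(PtrInfo const* pi) { return true; }
//     void VisitNode(PtrInfo* pi) { ++mCount; }
//     void Failed() { MOZ_CRASH("OOM while walking the CC graph"); }
//   };
//
//   GraphWalker<CountingVisitor>(CountingVisitor()).WalkFromRoots(graph);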
////////////////////////////////////////////////////////////////////////
// The static collector struct
////////////////////////////////////////////////////////////////////////

struct CollectorData {
  RefPtr<nsCycleCollector> mCollector;
  CycleCollectedJSContext* mContext;
  UniquePtr<mozilla::CycleCollectorStats> mStats;
};

static MOZ_THREAD_LOCAL(CollectorData*) sCollectorData;

mozilla::CycleCollectorStats* CycleCollectorStats::Get() {
  MOZ_ASSERT(sCollectorData.get());
  return sCollectorData.get()->mStats.get();
}

////////////////////////////////////////////////////////////////////////
// Profiler & ETW markers
////////////////////////////////////////////////////////////////////////
namespace geckoprofiler::markers {
struct CCIntervalMarker : public mozilla::BaseMarkerType<CCIntervalMarker> {
  static constexpr const char* Name = "CC";
  static constexpr const char* Description =
      "Summary data for the core part of a cycle collection, possibly "
      "encompassing a set of incremental slices. The thread is not "
      "blocked for the entire major CC interval, only for the individual "
      "slices.";

  using MS = mozilla::MarkerSchema;
  static constexpr MS::PayloadField PayloadFields[] = {
      {"mReason", MS::InputType::CString, "Reason", MS::Format::String,
       MS::PayloadFlags::Searchable},
      {"mMaxSliceTime", MS::InputType::TimeDuration, "Max Slice Time",
       MS::Format::Duration},
      {"mSuspected", MS::InputType::Uint32, "Suspected Objects",
       MS::Format::Integer},
      {"mSlices", MS::InputType::Uint32, "Number of Slices",
       MS::Format::Integer},
      {"mAnyManual", MS::InputType::Boolean, "Manually Triggered",
       MS::Format::Integer},
      {"mForcedGC", MS::InputType::Boolean, "GC Forced", MS::Format::Integer},
      {"mMergedZones", MS::InputType::Boolean, "Zones Merged",
       MS::Format::Integer},
      {"mForgetSkippable", MS::InputType::Uint32, "Forget Skippables",
       MS::Format::Integer},
      {"mVisitedRefCounted", MS::InputType::Uint32,
       "Refcounted Objects Visited", MS::Format::Integer},
      {"mVisitedGCed", MS::InputType::Uint32, "GC Objects Visited",
       MS::Format::Integer},
      {"mFreedRefCounted", MS::InputType::Uint32, "Refcounted Objects Freed",
       MS::Format::Integer},
      {"mFreedGCed", MS::InputType::Uint32, "GC Objects Freed",
       MS::Format::Integer},
      {"mFreedJSZones", MS::InputType::Uint32, "JS Zones Freed",
       MS::Format::Integer},
      {"mRemovedPurples", MS::InputType::Uint32,
       "Objects Removed From Purple Buffer", MS::Format::Integer}};

  static constexpr MS::Location Locations[] = {MS::Location::MarkerChart,
                                               MS::Location::MarkerTable,
                                               MS::Location::TimelineMemory};
  static constexpr MS::ETWMarkerGroup Group = MS::ETWMarkerGroup::Memory;

  static void TranslateMarkerInputToSchema(
      void* aContext, bool aIsStart,
      const mozilla::ProfilerString8View& aReason,
      uint32_t aForgetSkippableBeforeCC, uint32_t aSuspectedAtCCStart,
      uint32_t aRemovedPurples, bool aForcedGC, bool aMergedZones,
      bool aAnyManual, uint32_t aVisitedRefCounted, uint32_t aVisitedGCed,
      uint32_t aFreedRefCounted, uint32_t aFreedGCed, uint32_t aFreedJSZones,
      uint32_t aNumSlices, const mozilla::TimeDuration& aMaxSliceTime) {
    uint32_t none = 0;
    if (aIsStart) {
      ETW::OutputMarkerSchema(aContext, CCIntervalMarker{}, aReason,
                              mozilla::TimeDuration{}, aSuspectedAtCCStart,
                              none, false, false, false,
                              aForgetSkippableBeforeCC, none, none, none, none,
                              none, aRemovedPurples);
    } else {
      ETW::OutputMarkerSchema(
          aContext, CCIntervalMarker{}, mozilla::ProfilerStringView(""),
          aMaxSliceTime, none, aNumSlices, aAnyManual, aForcedGC, aMergedZones,
          none, aVisitedRefCounted, aVisitedGCed, aFreedRefCounted, aFreedGCed,
          aFreedJSZones, none);
    }
  }

  static void StreamJSONMarkerData(
      mozilla::baseprofiler::SpliceableJSONWriter& aWriter, bool aIsStart,
      const mozilla::ProfilerString8View& aReason,
      uint32_t aForgetSkippableBeforeCC, uint32_t aSuspectedAtCCStart,
      uint32_t aRemovedPurples, bool aForcedGC, bool aMergedZones,
      bool aAnyManual, uint32_t aVisitedRefCounted, uint32_t aVisitedGCed,
      uint32_t aFreedRefCounted, uint32_t aFreedGCed, uint32_t aFreedJSZones,
      uint32_t aNumSlices, mozilla::TimeDuration aMaxSliceTime) {
    if (aIsStart) {
      aWriter.StringProperty("mReason", aReason);
      aWriter.IntProperty("mSuspected", aSuspectedAtCCStart);
      aWriter.IntProperty("mForgetSkippable", aForgetSkippableBeforeCC);
      aWriter.IntProperty("mRemovedPurples", aRemovedPurples);
    } else {
      aWriter.TimeDoubleMsProperty("mMaxSliceTime",
                                   aMaxSliceTime.ToMilliseconds());
      aWriter.IntProperty("mSlices", aNumSlices);

      aWriter.BoolProperty("mAnyManual", aAnyManual);
      aWriter.BoolProperty("mForcedGC", aForcedGC);
      aWriter.BoolProperty("mMergedZones", aMergedZones);
      aWriter.IntProperty("mVisitedRefCounted", aVisitedRefCounted);
      aWriter.IntProperty("mVisitedGCed", aVisitedGCed);
      aWriter.IntProperty("mFreedRefCounted", aFreedRefCounted);
      aWriter.IntProperty("mFreedGCed", aFreedGCed);
      aWriter.IntProperty("mFreedJSZones", aFreedJSZones);
    }
  }
};
}  // namespace geckoprofiler::markers
////////////////////////////////////////////////////////////////////////
// Utility functions
////////////////////////////////////////////////////////////////////////

static inline void ToParticipant(nsISupports* aPtr,
                                 nsXPCOMCycleCollectionParticipant** aCp) {
  // We use QI to move from an nsISupports to an
  // nsXPCOMCycleCollectionParticipant, which is a per-class singleton helper
  // object that implements traversal and unlinking logic for the nsISupports
  // in question.
  *aCp = nullptr;
  CallQueryInterface(aPtr, aCp);
}

static void ToParticipant(void* aParti, nsCycleCollectionParticipant** aCp) {
  // If the participant is null, this is an nsISupports participant,
  // so we must QI to get the real participant.

  if (!*aCp) {
    nsISupports* nsparti = static_cast<nsISupports*>(aParti);
    MOZ_ASSERT(CanonicalizeXPCOMParticipant(nsparti) == nsparti);
    nsXPCOMCycleCollectionParticipant* xcp;
    ToParticipant(nsparti, &xcp);
    *aCp = xcp;
  }
}
template <class Visitor>
MOZ_NEVER_INLINE void GraphWalker<Visitor>::Walk(PtrInfo* aPi) {
  nsDeque<PtrInfo> queue;
  CheckedPush(queue, aPi);
  DoWalk(queue);
}

template <class Visitor>
MOZ_NEVER_INLINE void GraphWalker<Visitor>::WalkFromRoots(CCGraph& aGraph) {
  nsDeque<PtrInfo> queue;
  NodePool::Enumerator etor(aGraph.mNodes);
  for (uint32_t i = 0; i < aGraph.mRootCount; ++i) {
    CheckedPush(queue, etor.GetNext());
  }
  DoWalk(queue);
}
template <class Visitor>
MOZ_NEVER_INLINE void GraphWalker<Visitor>::DoWalk(nsDeque<PtrInfo>& aQueue) {
  // Use a queue to match the breadth-first traversal used when we
  // built the graph, for hopefully-better locality.
  while (aQueue.GetSize() > 0) {
    PtrInfo* pi = aQueue.PopFront();

    if (pi->WasTraversed() && mVisitor.ShouldVisitNode(pi)) {
      mVisitor.VisitNode(pi);
      for (EdgePool::Iterator child = pi->FirstChild(),
                              child_end = pi->LastChild();
           child != child_end; ++child) {
        CheckedPush(aQueue, *child);
      }
    }
  }
}
struct CCGraphDescriber : public LinkedListElement<CCGraphDescriber> {
  CCGraphDescriber() : mAddress("0x"), mCnt(0), mType(eUnknown) {}

  enum Type {
    eRefCountedObject,
    eGCedObject,
    eGCMarkedObject,
    eEdge,
    eRoot,
    eGarbage,
    eUnknown
  };

  nsCString mAddress;
  nsCString mName;
  nsCString mCompartmentOrToAddress;
  uint32_t mCnt;
  Type mType;
};

class LogStringMessageAsync : public DiscardableRunnable {
 public:
  explicit LogStringMessageAsync(const nsAString& aMsg)
      : mozilla::DiscardableRunnable("LogStringMessageAsync"), mMsg(aMsg) {}

  NS_IMETHOD Run() override {
    nsCOMPtr<nsIConsoleService> cs =
        do_GetService(NS_CONSOLESERVICE_CONTRACTID);
    if (cs) {
      cs->LogStringMessage(mMsg.get());
    }
    return NS_OK;
  }

 private:
  nsString mMsg;
};
class nsCycleCollectorLogSinkToFile final : public nsICycleCollectorLogSink {
 public:
  NS_DECL_ISUPPORTS

  explicit nsCycleCollectorLogSinkToFile(bool aLogGC)
      : mProcessIdentifier(base::GetCurrentProcId()), mCCLog("cc-edges") {
    if (aLogGC) {
      mGCLog.emplace("gc-edges");
    }
  }

  NS_IMETHOD GetFilenameIdentifier(nsAString& aIdentifier) override {
    aIdentifier = mFilenameIdentifier;
    return NS_OK;
  }

  NS_IMETHOD SetFilenameIdentifier(const nsAString& aIdentifier) override {
    mFilenameIdentifier = aIdentifier;
    return NS_OK;
  }

  NS_IMETHOD GetProcessIdentifier(int32_t* aIdentifier) override {
    *aIdentifier = mProcessIdentifier;
    return NS_OK;
  }

  NS_IMETHOD SetProcessIdentifier(int32_t aIdentifier) override {
    mProcessIdentifier = aIdentifier;
    return NS_OK;
  }

  NS_IMETHOD GetGcLog(nsIFile** aPath) override {
    if (mGCLog.isNothing()) {
      return NS_ERROR_UNEXPECTED;
    }
    NS_IF_ADDREF(*aPath = mGCLog.ref().mFile);
    return NS_OK;
  }

  NS_IMETHOD GetCcLog(nsIFile** aPath) override {
    NS_IF_ADDREF(*aPath = mCCLog.mFile);
    return NS_OK;
  }

  NS_IMETHOD Open(FILE** aGCLog, FILE** aCCLog) override {
    nsresult rv;

    if (mCCLog.mStream) {
      return NS_ERROR_UNEXPECTED;
    }

    if (mGCLog.isSome()) {
      if (mGCLog.ref().mStream) {
        return NS_ERROR_UNEXPECTED;
      }

      rv = OpenLog(&mGCLog.ref());
      NS_ENSURE_SUCCESS(rv, rv);
      *aGCLog = mGCLog.ref().mStream;
    } else {
      *aGCLog = nullptr;
    }

    rv = OpenLog(&mCCLog);
    NS_ENSURE_SUCCESS(rv, rv);
    *aCCLog = mCCLog.mStream;

    return NS_OK;
  }

  NS_IMETHOD CloseGCLog() override {
    if (mGCLog.isNothing()) {
      return NS_OK;
    }
    if (!mGCLog.ref().mStream) {
      return NS_ERROR_UNEXPECTED;
    }
    CloseLog(&mGCLog.ref(), u"Garbage"_ns);
    return NS_OK;
  }

  NS_IMETHOD CloseCCLog() override {
    if (!mCCLog.mStream) {
      return NS_ERROR_UNEXPECTED;
    }
    CloseLog(&mCCLog, u"Cycle"_ns);
    return NS_OK;
  }

 private:
  ~nsCycleCollectorLogSinkToFile() {
    if (mGCLog.isSome() && mGCLog.ref().mStream) {
      MozillaUnRegisterDebugFILE(mGCLog.ref().mStream);
      fclose(mGCLog.ref().mStream);
    }
    if (mCCLog.mStream) {
      MozillaUnRegisterDebugFILE(mCCLog.mStream);
      fclose(mCCLog.mStream);
    }
  }

  struct FileInfo {
    const char* const mPrefix;
    nsCOMPtr<nsIFile> mFile;
    FILE* mStream;

    explicit FileInfo(const char* aPrefix)
        : mPrefix(aPrefix), mStream(nullptr) {}
  };

  /**
   * Create a new file named something like aPrefix.$PID.$IDENTIFIER.log in
   * $MOZ_CC_LOG_DIRECTORY or in the system's temp directory. No existing
   * file will be overwritten; if aPrefix.$PID.$IDENTIFIER.log exists, we'll
   * try a file named something like aPrefix.$PID.$IDENTIFIER-1.log, and so
   * on.
   */
  already_AddRefed<nsIFile> CreateTempFile(const char* aPrefix) {
    nsPrintfCString filename("%s.%d%s%s.log", aPrefix, mProcessIdentifier,
                             mFilenameIdentifier.IsEmpty() ? "" : ".",
                             NS_ConvertUTF16toUTF8(mFilenameIdentifier).get());

    // Get the log directory either from $MOZ_CC_LOG_DIRECTORY or from
    // the fallback directories in OpenTempFile. We don't use an nsCOMPtr
    // here because OpenTempFile uses an in/out param and getter_AddRefs
    // wouldn't work.
    nsIFile* logFile = nullptr;
    if (char* env = PR_GetEnv("MOZ_CC_LOG_DIRECTORY")) {
      Unused << NS_WARN_IF(
          NS_FAILED(NS_NewNativeLocalFile(nsCString(env), &logFile)));
    }

    // On Android or B2G, this function will open a file named
    // aFilename under a memory-reporting-specific folder
    // (/data/local/tmp/memory-reports). Otherwise, it will open a
    // file named aFilename under "NS_OS_TEMP_DIR".
    nsresult rv =
        nsDumpUtils::OpenTempFile(filename, &logFile, "memory-reports"_ns);
    if (NS_FAILED(rv)) {
      NS_IF_RELEASE(logFile);
      return nullptr;
    }

    return dont_AddRef(logFile);
  }

  nsresult OpenLog(FileInfo* aLog) {
    // Initially create the log in a file starting with "incomplete-".
    // We'll move the file and strip off the "incomplete-" once the dump
    // completes. (We do this because we don't want scripts which poll
    // the filesystem looking for GC/CC dumps to grab a file before we're
    // finished writing to it.)
    nsAutoCString incomplete;
    incomplete += "incomplete-";
    incomplete += aLog->mPrefix;
    MOZ_ASSERT(!aLog->mFile);
    aLog->mFile = CreateTempFile(incomplete.get());
    if (NS_WARN_IF(!aLog->mFile)) {
      return NS_ERROR_UNEXPECTED;
    }

    MOZ_ASSERT(!aLog->mStream);
    nsresult rv = aLog->mFile->OpenANSIFileDesc("w", &aLog->mStream);
    if (NS_WARN_IF(NS_FAILED(rv))) {
      return NS_ERROR_UNEXPECTED;
    }
    MozillaRegisterDebugFILE(aLog->mStream);
    return NS_OK;
  }

  nsresult CloseLog(FileInfo* aLog, const nsAString& aCollectorKind) {
    MOZ_ASSERT(aLog->mStream);
    MOZ_ASSERT(aLog->mFile);

    MozillaUnRegisterDebugFILE(aLog->mStream);
    fclose(aLog->mStream);
    aLog->mStream = nullptr;

    // Strip off "incomplete-".
    nsCOMPtr<nsIFile> logFileFinalDestination = CreateTempFile(aLog->mPrefix);
    if (NS_WARN_IF(!logFileFinalDestination)) {
      return NS_ERROR_UNEXPECTED;
    }

    nsAutoString logFileFinalDestinationName;
    logFileFinalDestination->GetLeafName(logFileFinalDestinationName);
    if (NS_WARN_IF(logFileFinalDestinationName.IsEmpty())) {
      return NS_ERROR_UNEXPECTED;
    }

    aLog->mFile->MoveTo(/* directory */ nullptr, logFileFinalDestinationName);

    // Save the file path.
    aLog->mFile = logFileFinalDestination;

    // Log to the error console.
    nsAutoString logPath;
    logFileFinalDestination->GetPath(logPath);
    nsAutoString msg =
        aCollectorKind + u" Collector log dumped to "_ns + logPath;

    // We don't want any JS to run between ScanRoots and CollectWhite calls,
    // and since ScanRoots calls this method, better to log the message
    // asynchronously.
    RefPtr<LogStringMessageAsync> log = new LogStringMessageAsync(msg);
    NS_DispatchToCurrentThread(log);
    return NS_OK;
  }

  int32_t mProcessIdentifier;
  nsString mFilenameIdentifier;
  Maybe<FileInfo> mGCLog;
  FileInfo mCCLog;
};

NS_IMPL_ISUPPORTS(nsCycleCollectorLogSinkToFile, nsICycleCollectorLogSink)
class nsCycleCollectorLogger final : public nsICycleCollectorListener {
  ~nsCycleCollectorLogger() { ClearDescribers(); }

 public:
  explicit nsCycleCollectorLogger(bool aLogGC)
      : mLogSink(nsCycleCollector_createLogSink(aLogGC)),
        mWantAllTraces(false),
        mDisableLog(false),
        mWantAfterProcessing(false),
        mCCLog(nullptr) {}

  NS_DECL_ISUPPORTS

  void SetAllTraces() { mWantAllTraces = true; }

  bool IsAllTraces() { return mWantAllTraces; }

  NS_IMETHOD AllTraces(nsICycleCollectorListener** aListener) override {
    SetAllTraces();
    NS_ADDREF(*aListener = this);
    return NS_OK;
  }

  NS_IMETHOD GetWantAllTraces(bool* aAllTraces) override {
    *aAllTraces = mWantAllTraces;
    return NS_OK;
  }

  NS_IMETHOD GetDisableLog(bool* aDisableLog) override {
    *aDisableLog = mDisableLog;
    return NS_OK;
  }

  NS_IMETHOD SetDisableLog(bool aDisableLog) override {
    mDisableLog = aDisableLog;
    return NS_OK;
  }

  NS_IMETHOD GetWantAfterProcessing(bool* aWantAfterProcessing) override {
    *aWantAfterProcessing = mWantAfterProcessing;
    return NS_OK;
  }

  NS_IMETHOD SetWantAfterProcessing(bool aWantAfterProcessing) override {
    mWantAfterProcessing = aWantAfterProcessing;
    return NS_OK;
  }

  NS_IMETHOD GetLogSink(nsICycleCollectorLogSink** aLogSink) override {
    NS_ADDREF(*aLogSink = mLogSink);
    return NS_OK;
  }

  NS_IMETHOD SetLogSink(nsICycleCollectorLogSink* aLogSink) override {
    if (!aLogSink) {
      return NS_ERROR_INVALID_ARG;
    }
    mLogSink = aLogSink;
    return NS_OK;
  }

  nsresult Begin() {
    nsresult rv;

    mCurrentAddress.AssignLiteral("0x");
    ClearDescribers();
    if (mDisableLog) {
      return NS_OK;
    }

    FILE* gcLog;
    rv = mLogSink->Open(&gcLog, &mCCLog);
    NS_ENSURE_SUCCESS(rv, rv);
    // Dump the JS heap.
    if (gcLog) {
      CollectorData* data = sCollectorData.get();
      if (data && data->mContext) {
        data->mContext->Runtime()->DumpJSHeap(gcLog);
      }
      rv = mLogSink->CloseGCLog();
      NS_ENSURE_SUCCESS(rv, rv);
    }
    fprintf(mCCLog, "# WantAllTraces=%s\n", mWantAllTraces ? "true" : "false");
    return NS_OK;
  }

  void NoteRefCountedObject(uint64_t aAddress, uint32_t aRefCount,
                            const char* aObjectDescription) {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [rc=%u] %s\n", (void*)aAddress, aRefCount,
              aObjectDescription);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      mCurrentAddress.AssignLiteral("0x");
      mCurrentAddress.AppendInt(aAddress, 16);
      d->mType = CCGraphDescriber::eRefCountedObject;
      d->mAddress = mCurrentAddress;
      d->mCnt = aRefCount;
      d->mName.Append(aObjectDescription);
    }
  }

  void NoteGCedObject(uint64_t aAddress, bool aMarked,
                      const char* aObjectDescription,
                      uint64_t aCompartmentAddress) {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [gc%s] %s\n", (void*)aAddress,
              aMarked ? ".marked" : "", aObjectDescription);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      mCurrentAddress.AssignLiteral("0x");
      mCurrentAddress.AppendInt(aAddress, 16);
      d->mType = aMarked ? CCGraphDescriber::eGCMarkedObject
                         : CCGraphDescriber::eGCedObject;
      d->mAddress = mCurrentAddress;
      d->mName.Append(aObjectDescription);
      if (aCompartmentAddress) {
        d->mCompartmentOrToAddress.AssignLiteral("0x");
        d->mCompartmentOrToAddress.AppendInt(aCompartmentAddress, 16);
      } else {
        d->mCompartmentOrToAddress.SetIsVoid(true);
      }
    }
  }

  void NoteEdge(uint64_t aToAddress, const char* aEdgeName) {
    if (!mDisableLog) {
      fprintf(mCCLog, "> %p %s\n", (void*)aToAddress, aEdgeName);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      d->mType = CCGraphDescriber::eEdge;
      d->mAddress = mCurrentAddress;
      d->mCompartmentOrToAddress.AssignLiteral("0x");
      d->mCompartmentOrToAddress.AppendInt(aToAddress, 16);
      d->mName.Append(aEdgeName);
    }
  }

  void NoteWeakMapEntry(uint64_t aMap, uint64_t aKey, uint64_t aKeyDelegate,
                        uint64_t aValue) {
    if (!mDisableLog) {
      fprintf(mCCLog, "WeakMapEntry map=%p key=%p keyDelegate=%p value=%p\n",
              (void*)aMap, (void*)aKey, (void*)aKeyDelegate, (void*)aValue);
    }
    // We don't support after-processing for weak map entries.
  }

  void NoteIncrementalRoot(uint64_t aAddress) {
    if (!mDisableLog) {
      fprintf(mCCLog, "IncrementalRoot %p\n", (void*)aAddress);
    }
    // We don't support after-processing for incremental roots.
  }

  void BeginResults() {
    if (!mDisableLog) {
      fputs("==========\n", mCCLog);
    }
  }

  void DescribeRoot(uint64_t aAddress, uint32_t aKnownEdges) {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [known=%u]\n", (void*)aAddress, aKnownEdges);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      d->mType = CCGraphDescriber::eRoot;
      d->mAddress.AppendInt(aAddress, 16);
      d->mCnt = aKnownEdges;
    }
  }

  void DescribeGarbage(uint64_t aAddress) {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [garbage]\n", (void*)aAddress);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      d->mType = CCGraphDescriber::eGarbage;
      d->mAddress.AppendInt(aAddress, 16);
    }
  }

  void End() {
    if (!mDisableLog) {
      mCCLog = nullptr;
      Unused << NS_WARN_IF(NS_FAILED(mLogSink->CloseCCLog()));
    }
  }

  NS_IMETHOD ProcessNext(nsICycleCollectorHandler* aHandler,
                         bool* aCanContinue) override {
    if (NS_WARN_IF(!aHandler) || NS_WARN_IF(!mWantAfterProcessing)) {
      return NS_ERROR_UNEXPECTED;
    }
    CCGraphDescriber* d = mDescribers.popFirst();
    if (d) {
      switch (d->mType) {
        case CCGraphDescriber::eRefCountedObject:
          aHandler->NoteRefCountedObject(d->mAddress, d->mCnt, d->mName);
          break;
        case CCGraphDescriber::eGCedObject:
        case CCGraphDescriber::eGCMarkedObject:
          aHandler->NoteGCedObject(
              d->mAddress, d->mType == CCGraphDescriber::eGCMarkedObject,
              d->mName, d->mCompartmentOrToAddress);
1918 break;
1919 case CCGraphDescriber::eEdge:
1920 aHandler->NoteEdge(d->mAddress, d->mCompartmentOrToAddress, d->mName);
1921 break;
1922 case CCGraphDescriber::eRoot:
1923 aHandler->DescribeRoot(d->mAddress, d->mCnt);
1924 break;
1925 case CCGraphDescriber::eGarbage:
1926 aHandler->DescribeGarbage(d->mAddress);
1927 break;
1928 case CCGraphDescriber::eUnknown:
1929 MOZ_ASSERT_UNREACHABLE("CCGraphDescriber::eUnknown");
1930 break;
1932 delete d;
1934 if (!(*aCanContinue = !mDescribers.isEmpty())) {
1935 mCurrentAddress.AssignLiteral("0x");
1937 return NS_OK;
1939 NS_IMETHOD AsLogger(nsCycleCollectorLogger** aRetVal) override {
1940 RefPtr<nsCycleCollectorLogger> rval = this;
1941 rval.forget(aRetVal);
1942 return NS_OK;
1945 private:
1946 void ClearDescribers() {
1947 CCGraphDescriber* d;
1948 while ((d = mDescribers.popFirst())) {
1949 delete d;
1953 nsCOMPtr<nsICycleCollectorLogSink> mLogSink;
1954 bool mWantAllTraces;
1955 bool mDisableLog;
1956 bool mWantAfterProcessing;
1957 nsCString mCurrentAddress;
1958 mozilla::LinkedList<CCGraphDescriber> mDescribers;
1959 FILE* mCCLog;
1962 NS_IMPL_ISUPPORTS(nsCycleCollectorLogger, nsICycleCollectorListener)
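// For reference, a log produced by the logger above looks roughly like the
// following (addresses and class names are illustrative; the line shapes
// follow the fprintf format strings in Begin(), the Note*() methods and the
// Describe*() methods):
//
//   # WantAllTraces=false
//   0x7f3c80420100 [rc=2] nsNodeInfoManager
//   > 0x7f3c80420200 mDocument
//   0x7f3c80420200 [gc.marked] JS Object (Document)
//   ==========
//   0x7f3c80420100 [known=1]
//   0x7f3c80420300 [garbage]
//
// Everything before the "==========" separator describes the traversed
// graph; everything after it records the results of the scan.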
1964 already_AddRefed<nsICycleCollectorListener> nsCycleCollector_createLogger() {
1965 nsCOMPtr<nsICycleCollectorListener> logger =
1966 new nsCycleCollectorLogger(/* aLogGC = */ true);
1967 return logger.forget();
1970 static bool GCThingIsGrayCCThing(JS::GCCellPtr thing) {
1971 return JS::IsCCTraceKind(thing.kind()) && JS::GCThingIsMarkedGrayInCC(thing);
1974 static bool ValueIsGrayCCThing(const JS::Value& value) {
1975 return JS::IsCCTraceKind(value.traceKind()) &&
1976 JS::GCThingIsMarkedGray(value.toGCCellPtr());
1979 ////////////////////////////////////////////////////////////////////////
1980 // Bacon & Rajan's |MarkRoots| routine.
1981 ////////////////////////////////////////////////////////////////////////
1983 class CCGraphBuilder final : public nsCycleCollectionTraversalCallback,
1984 public nsCycleCollectionNoteRootCallback {
1985 private:
1986 CCGraph& mGraph;
1987 CycleCollectorResults& mResults;
1988 NodePool::Builder mNodeBuilder;
1989 EdgePool::Builder mEdgeBuilder;
1990 MOZ_INIT_OUTSIDE_CTOR PtrInfo* mCurrPi;
1991 nsCycleCollectionParticipant* mJSParticipant;
1992 nsCycleCollectionParticipant* mJSZoneParticipant;
1993 nsCString mNextEdgeName;
1994 RefPtr<nsCycleCollectorLogger> mLogger;
1995 bool mMergeZones;
1996 UniquePtr<NodePool::Enumerator> mCurrNode;
1997 uint32_t mNoteChildCount;
1999 struct PtrInfoCache : public MruCache<void*, PtrInfo*, PtrInfoCache, 491> {
2000 static HashNumber Hash(const void* aKey) { return HashGeneric(aKey); }
2001 static bool Match(const void* aKey, const PtrInfo* aVal) {
2002 return aVal->mPointer == aKey;
2006 PtrInfoCache mGraphCache;
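  // mGraphCache is a small direct-mapped cache consulted before the much
  // larger mPtrInfoMap hash table in AddNode(). A minimal sketch of the idea,
  // simplified from mozilla::MruCache (mfbt/MruCache.h):
  //
  //   template <typename Key, typename Value, size_t N>
  //   struct DirectMappedCache {
  //     std::pair<Key, Value> mSlots[N] = {};
  //     Value* Lookup(const Key& aKey) {
  //       auto& slot = mSlots[std::hash<Key>{}(aKey) % N];
  //       return slot.first == aKey ? &slot.second : nullptr;
  //     }
  //     void Put(const Key& aKey, const Value& aValue) {
  //       // A collision simply evicts the previous occupant of the slot.
  //       mSlots[std::hash<Key>{}(aKey) % N] = {aKey, aValue};
  //     }
  //   };
  //
  // A hit avoids the full hash-table lookup; a miss falls through to
  // mPtrInfoMap and then refreshes the slot via cached.Set(result).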
2008 public:
2009 CCGraphBuilder(CCGraph& aGraph, CycleCollectorResults& aResults,
2010 CycleCollectedJSRuntime* aCCRuntime,
2011 nsCycleCollectorLogger* aLogger, bool aMergeZones);
2012 virtual ~CCGraphBuilder();
2014 bool WantAllTraces() const {
2015 return nsCycleCollectionNoteRootCallback::WantAllTraces();
2018 bool AddPurpleRoot(void* aRoot, nsCycleCollectionParticipant* aParti);
2020 // This is called when all roots have been added to the graph, to prepare for
2021 // BuildGraph().
2022 void DoneAddingRoots();
2024 // Do some work traversing nodes in the graph. Returns true if this graph
2025 // building is finished.
2026 bool BuildGraph(SliceBudget& aBudget);
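  // The expected calling sequence, pieced together from the declarations
  // above and from nsCycleCollector::MarkRoots() below (sketch only):
  //
  //   CCGraphBuilder builder(graph, results, ccRuntime, logger, mergeZones);
  //   for (/* each suspected purple-buffer entry */) {
  //     builder.AddPurpleRoot(ptr, participant);
  //   }
  //   builder.DoneAddingRoots();
  //   SliceBudget budget = SliceBudget::unlimited();  // or a timed budget
  //   while (!builder.BuildGraph(budget)) {
  //     // Yield to the mutator, then resume with a fresh budget.
  //   }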
2028 void RemoveCachedEntry(void* aPtr) { mGraphCache.Remove(aPtr); }
2030 private:
2031 PtrInfo* AddNode(void* aPtr, nsCycleCollectionParticipant* aParticipant);
2032 PtrInfo* AddWeakMapNode(JS::GCCellPtr aThing);
2033 PtrInfo* AddWeakMapNode(JSObject* aObject);
2035 void SetFirstChild() { mCurrPi->SetFirstChild(mEdgeBuilder.Mark()); }
2037 void SetLastChild() { mCurrPi->SetLastChild(mEdgeBuilder.Mark()); }
2039 public:
2040 // nsCycleCollectionNoteRootCallback methods.
2041 NS_IMETHOD_(void)
2042 NoteXPCOMRoot(nsISupports* aRoot,
2043 nsCycleCollectionParticipant* aParticipant) override;
2044 NS_IMETHOD_(void) NoteJSRoot(JSObject* aRoot) override;
2045 NS_IMETHOD_(void)
2046 NoteNativeRoot(void* aRoot,
2047 nsCycleCollectionParticipant* aParticipant) override;
2048 NS_IMETHOD_(void)
2049 NoteWeakMapping(JSObject* aMap, JS::GCCellPtr aKey, JSObject* aKdelegate,
2050 JS::GCCellPtr aVal) override;
2051 // This is used to create synthetic non-refcounted references to
2052 // nsXPCWrappedJS from their wrapped JS objects. No map is needed, because
2053 // the SubjectToFinalization list is like a known-black weak map, and
2054 // no delegate is needed because the keys are all unwrapped objects.
2055 NS_IMETHOD_(void)
2056 NoteWeakMapping(JSObject* aKey, nsISupports* aVal,
2057 nsCycleCollectionParticipant* aValParticipant) override;
2059 // nsCycleCollectionTraversalCallback methods.
2060 NS_IMETHOD_(void)
2061 DescribeRefCountedNode(nsrefcnt aRefCount, const char* aObjName) override;
2062 NS_IMETHOD_(void)
2063 DescribeGCedNode(bool aIsMarked, const char* aObjName,
2064 uint64_t aCompartmentAddress) override;
2066 NS_IMETHOD_(void) NoteXPCOMChild(nsISupports* aChild) override;
2067 NS_IMETHOD_(void) NoteJSChild(JS::GCCellPtr aThing) override;
2068 NS_IMETHOD_(void)
2069 NoteNativeChild(void* aChild,
2070 nsCycleCollectionParticipant* aParticipant) override;
2071 NS_IMETHOD_(void) NoteNextEdgeName(const char* aName) override;
2073 private:
2074 NS_IMETHOD_(void)
2075 NoteRoot(void* aRoot, nsCycleCollectionParticipant* aParticipant) {
2076 MOZ_ASSERT(aRoot);
2077 MOZ_ASSERT(aParticipant);
2079 if (!aParticipant->CanSkipInCC(aRoot) || MOZ_UNLIKELY(WantAllTraces())) {
2080 AddNode(aRoot, aParticipant);
2084 NS_IMETHOD_(void)
2085 NoteChild(void* aChild, nsCycleCollectionParticipant* aCp,
2086 nsCString& aEdgeName) {
2087 PtrInfo* childPi = AddNode(aChild, aCp);
2088 if (!childPi) {
2089 return;
2091 mEdgeBuilder.Add(childPi);
2092 if (mLogger) {
2093 mLogger->NoteEdge((uint64_t)aChild, aEdgeName.get());
2095 ++childPi->mInternalRefs;
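  // When zone merging is in effect (see MergeZone() below), a JS GC thing is
  // represented in the graph by a single node for its entire Zone, via
  // mJSZoneParticipant, rather than by one node per cell, which keeps the
  // graph smaller. System-zone things are never merged, and merging is
  // disabled entirely when all traces are wanted (see the constructor).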
2098 JS::Zone* MergeZone(JS::GCCellPtr aGcthing) {
2099 if (!mMergeZones) {
2100 return nullptr;
2102 JS::Zone* zone = JS::GetTenuredGCThingZone(aGcthing);
2103 if (js::IsSystemZone(zone)) {
2104 return nullptr;
2106 return zone;
2110 CCGraphBuilder::CCGraphBuilder(CCGraph& aGraph, CycleCollectorResults& aResults,
2111 CycleCollectedJSRuntime* aCCRuntime,
2112 nsCycleCollectorLogger* aLogger,
2113 bool aMergeZones)
2114 : mGraph(aGraph),
2115 mResults(aResults),
2116 mNodeBuilder(aGraph.mNodes),
2117 mEdgeBuilder(aGraph.mEdges),
2118 mJSParticipant(nullptr),
2119 mJSZoneParticipant(nullptr),
2120 mLogger(aLogger),
2121 mMergeZones(aMergeZones),
2122 mNoteChildCount(0) {
2123 // 4096 is an allocation bucket size.
2124 static_assert(sizeof(CCGraphBuilder) <= 4096,
2125 "Don't create too large CCGraphBuilder objects");
2127 if (aCCRuntime) {
2128 mJSParticipant = aCCRuntime->GCThingParticipant();
2129 mJSZoneParticipant = aCCRuntime->ZoneParticipant();
2132 if (mLogger) {
2133 mFlags |= nsCycleCollectionTraversalCallback::WANT_DEBUG_INFO;
2134 if (mLogger->IsAllTraces()) {
2135 mFlags |= nsCycleCollectionTraversalCallback::WANT_ALL_TRACES;
2136 mWantAllTraces = true; // for nsCycleCollectionNoteRootCallback
2140 mMergeZones = mMergeZones && MOZ_LIKELY(!WantAllTraces());
2142 MOZ_ASSERT(nsCycleCollectionNoteRootCallback::WantAllTraces() ==
2143 nsCycleCollectionTraversalCallback::WantAllTraces());
2146 CCGraphBuilder::~CCGraphBuilder() = default;
2148 PtrInfo* CCGraphBuilder::AddNode(void* aPtr,
2149 nsCycleCollectionParticipant* aParticipant) {
2150 if (mGraph.mOutOfMemory) {
2151 return nullptr;
2154 PtrInfoCache::Entry cached = mGraphCache.Lookup(aPtr);
2155 if (cached) {
2156 #ifdef DEBUG
2157 if (cached.Data()->mParticipant != aParticipant) {
2158 auto* parti1 = cached.Data()->mParticipant;
2159 auto* parti2 = aParticipant;
2160 NS_WARNING(
2161 nsPrintfCString("cached participant: %s; AddNode participant: %s\n",
2162 parti1 ? parti1->ClassName() : "null",
2163 parti2 ? parti2->ClassName() : "null")
2164 .get());
2166 #endif
2167 MOZ_ASSERT(cached.Data()->mParticipant == aParticipant,
2168 "nsCycleCollectionParticipant shouldn't change!");
2169 return cached.Data();
2172 PtrInfo* result;
2173 auto p = mGraph.mPtrInfoMap.lookupForAdd(aPtr);
2174 if (!p) {
2175 // New entry
2176 result = mNodeBuilder.Add(aPtr, aParticipant);
2177 if (!result) {
2178 return nullptr;
2181 if (!mGraph.mPtrInfoMap.add(p, result)) {
2182 // `result` leaks here, but we can't free it because it's
2183 // pool-allocated within NodePool.
2184 mGraph.mOutOfMemory = true;
2185 MOZ_ASSERT(false, "OOM while building cycle collector graph");
2186 return nullptr;
2189 } else {
2190 result = *p;
2191 MOZ_ASSERT(result->mParticipant == aParticipant,
2192 "nsCycleCollectionParticipant shouldn't change!");
2195 cached.Set(result);
2197 return result;
2200 bool CCGraphBuilder::AddPurpleRoot(void* aRoot,
2201 nsCycleCollectionParticipant* aParti) {
2202 ToParticipant(aRoot, &aParti);
2204 if (WantAllTraces() || !aParti->CanSkipInCC(aRoot)) {
2205 PtrInfo* pinfo = AddNode(aRoot, aParti);
2206 if (!pinfo) {
2207 return false;
2211 return true;
2214 void CCGraphBuilder::DoneAddingRoots() {
2215 // We've finished adding roots, and everything in the graph is a root.
2216 mGraph.mRootCount = mGraph.MapCount();
2218 mCurrNode = MakeUnique<NodePool::Enumerator>(mGraph.mNodes);
2221 MOZ_NEVER_INLINE bool CCGraphBuilder::BuildGraph(SliceBudget& aBudget) {
2222 MOZ_ASSERT(mCurrNode);
2224 while (!aBudget.isOverBudget() && !mCurrNode->IsDone()) {
2225 mNoteChildCount = 0;
2227 PtrInfo* pi = mCurrNode->GetNext();
2228 if (!pi) {
2229 MOZ_CRASH();
2232 mCurrPi = pi;
2234     // We need to call SetFirstChild() even on deleted nodes, to set their
2235     // firstChild() pointer, which may be read by a prior non-deleted neighbor.
2236 SetFirstChild();
2238 if (pi->mParticipant) {
2239 nsresult rv = pi->mParticipant->TraverseNativeAndJS(pi->mPointer, *this);
2240 MOZ_RELEASE_ASSERT(!NS_FAILED(rv),
2241 "Cycle collector Traverse method failed");
2244 if (mCurrNode->AtBlockEnd()) {
2245 SetLastChild();
2248 aBudget.step(mNoteChildCount + 1);
2251 if (!mCurrNode->IsDone()) {
2252 return false;
2255 if (mGraph.mRootCount > 0) {
2256 SetLastChild();
2259 mCurrNode = nullptr;
2261 return true;
2264 NS_IMETHODIMP_(void)
2265 CCGraphBuilder::NoteXPCOMRoot(nsISupports* aRoot,
2266 nsCycleCollectionParticipant* aParticipant) {
2267 MOZ_ASSERT(aRoot == CanonicalizeXPCOMParticipant(aRoot));
2269 #ifdef DEBUG
2270 nsXPCOMCycleCollectionParticipant* cp;
2271 ToParticipant(aRoot, &cp);
2272 MOZ_ASSERT(aParticipant == cp);
2273 #endif
2275 NoteRoot(aRoot, aParticipant);
2278 NS_IMETHODIMP_(void)
2279 CCGraphBuilder::NoteJSRoot(JSObject* aRoot) {
2280 if (JS::Zone* zone = MergeZone(JS::GCCellPtr(aRoot))) {
2281 NoteRoot(zone, mJSZoneParticipant);
2282 } else {
2283 NoteRoot(aRoot, mJSParticipant);
2287 NS_IMETHODIMP_(void)
2288 CCGraphBuilder::NoteNativeRoot(void* aRoot,
2289 nsCycleCollectionParticipant* aParticipant) {
2290 NoteRoot(aRoot, aParticipant);
2293 NS_IMETHODIMP_(void)
2294 CCGraphBuilder::DescribeRefCountedNode(nsrefcnt aRefCount,
2295 const char* aObjName) {
2296 mCurrPi->AnnotatedReleaseAssert(aRefCount != 0,
2297 "CCed refcounted object has zero refcount");
2298 mCurrPi->AnnotatedReleaseAssert(
2299 aRefCount != UINT32_MAX,
2300 "CCed refcounted object has overflowing refcount");
2302 mResults.mVisitedRefCounted++;
2304 if (mLogger) {
2305 mLogger->NoteRefCountedObject((uint64_t)mCurrPi->mPointer, aRefCount,
2306 aObjName);
2309 mCurrPi->mRefCount = aRefCount;
2312 NS_IMETHODIMP_(void)
2313 CCGraphBuilder::DescribeGCedNode(bool aIsMarked, const char* aObjName,
2314 uint64_t aCompartmentAddress) {
2315 uint32_t refCount = aIsMarked ? UINT32_MAX : 0;
2316 mResults.mVisitedGCed++;
2318 if (mLogger) {
2319 mLogger->NoteGCedObject((uint64_t)mCurrPi->mPointer, aIsMarked, aObjName,
2320 aCompartmentAddress);
2323 mCurrPi->mRefCount = refCount;
2326 NS_IMETHODIMP_(void)
2327 CCGraphBuilder::NoteXPCOMChild(nsISupports* aChild) {
2328 nsCString edgeName;
2329 if (WantDebugInfo()) {
2330 edgeName.Assign(mNextEdgeName);
2331 mNextEdgeName.Truncate();
2333 if (!aChild || !(aChild = CanonicalizeXPCOMParticipant(aChild))) {
2334 return;
2337 ++mNoteChildCount;
2339 nsXPCOMCycleCollectionParticipant* cp;
2340 ToParticipant(aChild, &cp);
2341 if (cp && (!cp->CanSkipThis(aChild) || WantAllTraces())) {
2342 NoteChild(aChild, cp, edgeName);
2346 NS_IMETHODIMP_(void)
2347 CCGraphBuilder::NoteNativeChild(void* aChild,
2348 nsCycleCollectionParticipant* aParticipant) {
2349 nsCString edgeName;
2350 if (WantDebugInfo()) {
2351 edgeName.Assign(mNextEdgeName);
2352 mNextEdgeName.Truncate();
2354 if (!aChild) {
2355 return;
2358 ++mNoteChildCount;
2360 MOZ_ASSERT(aParticipant, "Need a nsCycleCollectionParticipant!");
2361 if (!aParticipant->CanSkipThis(aChild) || WantAllTraces()) {
2362 NoteChild(aChild, aParticipant, edgeName);
2366 NS_IMETHODIMP_(void)
2367 CCGraphBuilder::NoteJSChild(JS::GCCellPtr aChild) {
2368 if (!aChild) {
2369 return;
2372 ++mNoteChildCount;
2374 nsCString edgeName;
2375 if (MOZ_UNLIKELY(WantDebugInfo())) {
2376 edgeName.Assign(mNextEdgeName);
2377 mNextEdgeName.Truncate();
2380 if (GCThingIsGrayCCThing(aChild) || MOZ_UNLIKELY(WantAllTraces())) {
2381 if (JS::Zone* zone = MergeZone(aChild)) {
2382 NoteChild(zone, mJSZoneParticipant, edgeName);
2383 } else {
2384 NoteChild(aChild.asCell(), mJSParticipant, edgeName);
2389 NS_IMETHODIMP_(void)
2390 CCGraphBuilder::NoteNextEdgeName(const char* aName) {
2391 if (WantDebugInfo()) {
2392 mNextEdgeName = aName;
2396 PtrInfo* CCGraphBuilder::AddWeakMapNode(JS::GCCellPtr aNode) {
2397 MOZ_ASSERT(aNode, "Weak map node should be non-null.");
2399 if (!GCThingIsGrayCCThing(aNode) && !WantAllTraces()) {
2400 return nullptr;
2403 if (JS::Zone* zone = MergeZone(aNode)) {
2404 return AddNode(zone, mJSZoneParticipant);
2406 return AddNode(aNode.asCell(), mJSParticipant);
2409 PtrInfo* CCGraphBuilder::AddWeakMapNode(JSObject* aObject) {
2410 return AddWeakMapNode(JS::GCCellPtr(aObject));
2413 NS_IMETHODIMP_(void)
2414 CCGraphBuilder::NoteWeakMapping(JSObject* aMap, JS::GCCellPtr aKey,
2415 JSObject* aKdelegate, JS::GCCellPtr aVal) {
2416 // Don't try to optimize away the entry here, as we've already attempted to
2417 // do that in TraceWeakMapping in nsXPConnect.
2418 WeakMapping* mapping = mGraph.mWeakMaps.AppendElement();
2419 mapping->mMap = aMap ? AddWeakMapNode(aMap) : nullptr;
2420 mapping->mKey = aKey ? AddWeakMapNode(aKey) : nullptr;
2421 mapping->mKeyDelegate =
2422 aKdelegate ? AddWeakMapNode(aKdelegate) : mapping->mKey;
2423 mapping->mVal = aVal ? AddWeakMapNode(aVal) : nullptr;
2425 if (mLogger) {
2426 mLogger->NoteWeakMapEntry((uint64_t)aMap, aKey ? aKey.unsafeAsInteger() : 0,
2427 (uint64_t)aKdelegate,
2428 aVal ? aVal.unsafeAsInteger() : 0);
2432 NS_IMETHODIMP_(void)
2433 CCGraphBuilder::NoteWeakMapping(JSObject* aKey, nsISupports* aVal,
2434 nsCycleCollectionParticipant* aValParticipant) {
2435 MOZ_ASSERT(aKey, "Don't call NoteWeakMapping with a null key");
2436 MOZ_ASSERT(aVal, "Don't call NoteWeakMapping with a null value");
2437 WeakMapping* mapping = mGraph.mWeakMaps.AppendElement();
2438 mapping->mMap = nullptr;
2439 mapping->mKey = AddWeakMapNode(aKey);
2440 mapping->mKeyDelegate = mapping->mKey;
2441 MOZ_ASSERT(js::UncheckedUnwrapWithoutExpose(aKey) == aKey);
2442 mapping->mVal = AddNode(aVal, aValParticipant);
2444 if (mLogger) {
2445 mLogger->NoteWeakMapEntry(0, (uint64_t)aKey, 0, (uint64_t)aVal);
2449 static bool AddPurpleRoot(CCGraphBuilder& aBuilder, void* aRoot,
2450 nsCycleCollectionParticipant* aParti) {
2451 return aBuilder.AddPurpleRoot(aRoot, aParti);
2454 // MayHaveChild() will be false after a Traverse if the object does
2455 // not have any children the CC will visit.
2456 class ChildFinder : public nsCycleCollectionTraversalCallback {
2457 public:
2458 ChildFinder() : mMayHaveChild(false) {}
2460 // The logic of the Note*Child functions must mirror that of their
2461 // respective functions in CCGraphBuilder.
2462 NS_IMETHOD_(void) NoteXPCOMChild(nsISupports* aChild) override;
2463 NS_IMETHOD_(void)
2464 NoteNativeChild(void* aChild, nsCycleCollectionParticipant* aHelper) override;
2465 NS_IMETHOD_(void) NoteJSChild(JS::GCCellPtr aThing) override;
2467 NS_IMETHOD_(void)
2468 NoteWeakMapping(JSObject* aKey, nsISupports* aVal,
2469 nsCycleCollectionParticipant* aValParticipant) override {}
2471 NS_IMETHOD_(void)
2472 DescribeRefCountedNode(nsrefcnt aRefcount, const char* aObjname) override {}
2473 NS_IMETHOD_(void)
2474 DescribeGCedNode(bool aIsMarked, const char* aObjname,
2475 uint64_t aCompartmentAddress) override {}
2476 NS_IMETHOD_(void) NoteNextEdgeName(const char* aName) override {}
2477 bool MayHaveChild() { return mMayHaveChild; }
2479 private:
2480 bool mMayHaveChild;
2483 NS_IMETHODIMP_(void)
2484 ChildFinder::NoteXPCOMChild(nsISupports* aChild) {
2485 if (!aChild || !(aChild = CanonicalizeXPCOMParticipant(aChild))) {
2486 return;
2488 nsXPCOMCycleCollectionParticipant* cp;
2489 ToParticipant(aChild, &cp);
2490 if (cp && !cp->CanSkip(aChild, true)) {
2491 mMayHaveChild = true;
2495 NS_IMETHODIMP_(void)
2496 ChildFinder::NoteNativeChild(void* aChild,
2497 nsCycleCollectionParticipant* aHelper) {
2498 if (!aChild) {
2499 return;
2501 MOZ_ASSERT(aHelper, "Native child must have a participant");
2502 if (!aHelper->CanSkip(aChild, true)) {
2503 mMayHaveChild = true;
2507 NS_IMETHODIMP_(void)
2508 ChildFinder::NoteJSChild(JS::GCCellPtr aChild) {
2509 if (aChild && JS::GCThingIsMarkedGray(aChild)) {
2510 mMayHaveChild = true;
2514 static bool MayHaveChild(void* aObj, nsCycleCollectionParticipant* aCp) {
2515 ChildFinder cf;
2516 aCp->TraverseNativeAndJS(aObj, cf);
2517 return cf.MayHaveChild();
2520 // JSPurpleBuffer keeps references to GCThings which might affect the
2521 // next cycle collection. It is owned only by itself; during unlink its
2522 // self-reference is broken and the object ends up deleting itself.
2523 // If a GC happens before the CC, the references to GCThings and the self
2524 // reference are removed.
2525 class JSPurpleBuffer {
2526 ~JSPurpleBuffer() {
2527 MOZ_ASSERT(mValues.IsEmpty());
2528 MOZ_ASSERT(mObjects.IsEmpty());
2531 public:
2532 explicit JSPurpleBuffer(RefPtr<JSPurpleBuffer>& aReferenceToThis)
2533 : mReferenceToThis(aReferenceToThis),
2534 mValues(kSegmentSize),
2535 mObjects(kSegmentSize) {
2536 mReferenceToThis = this;
2537 mozilla::HoldJSObjects(this);
2540 void Destroy() {
2541 RefPtr<JSPurpleBuffer> referenceToThis;
2542 mReferenceToThis.swap(referenceToThis);
2543 mValues.Clear();
2544 mObjects.Clear();
2545 mozilla::DropJSObjects(this);
2548 NS_INLINE_DECL_CYCLE_COLLECTING_NATIVE_REFCOUNTING(JSPurpleBuffer)
2549 NS_DECL_CYCLE_COLLECTION_SCRIPT_HOLDER_NATIVE_CLASS(JSPurpleBuffer)
2551 RefPtr<JSPurpleBuffer>& mReferenceToThis;
2553 // These are raw pointers instead of Heap<T> because we only need Heap<T> for
2554 // pointers which may point into the nursery. The purple buffer never contains
2555 // pointers to the nursery because nursery gcthings can never be gray and only
2556 // gray things can be inserted into the purple buffer.
2557 static const size_t kSegmentSize = 512;
2558 SegmentedVector<JS::Value, kSegmentSize, InfallibleAllocPolicy> mValues;
2559 SegmentedVector<JSObject*, kSegmentSize, InfallibleAllocPolicy> mObjects;
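  // Usage sketch (illustrative): the collector holds the only external slot,
  // and the buffer keeps itself alive through mReferenceToThis until
  // Destroy() is called:
  //
  //   RefPtr<JSPurpleBuffer> slot;  // e.g. nsCycleCollector::mJSPurpleBuffer
  //   new JSPurpleBuffer(slot);     // the constructor stores itself in |slot|
  //   ...                           // gray things accumulate in mValues/mObjects
  //   slot->Destroy();              // clears the self-reference; buffer dies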
2562 NS_IMPL_CYCLE_COLLECTION_CLASS(JSPurpleBuffer)
2564 NS_IMPL_CYCLE_COLLECTION_UNLINK_BEGIN(JSPurpleBuffer)
2565 tmp->Destroy();
2566 NS_IMPL_CYCLE_COLLECTION_UNLINK_END
2568 NS_IMPL_CYCLE_COLLECTION_TRAVERSE_BEGIN(JSPurpleBuffer)
2569 CycleCollectionNoteChild(cb, tmp, "self");
2570 NS_IMPL_CYCLE_COLLECTION_TRAVERSE_END
2572 #define NS_TRACE_SEGMENTED_ARRAY(_field, _type) \
2574 for (auto iter = tmp->_field.Iter(); !iter.Done(); iter.Next()) { \
2575 js::gc::CallTraceCallbackOnNonHeap<_type, TraceCallbacks>( \
2576 &iter.Get(), aCallbacks, #_field, aClosure); \
2580 NS_IMPL_CYCLE_COLLECTION_TRACE_BEGIN(JSPurpleBuffer)
2581 NS_TRACE_SEGMENTED_ARRAY(mValues, JS::Value)
2582 NS_TRACE_SEGMENTED_ARRAY(mObjects, JSObject*)
2583 NS_IMPL_CYCLE_COLLECTION_TRACE_END
2585 class SnowWhiteKiller : public TraceCallbacks {
2586 struct SnowWhiteObject {
2587 void* mPointer;
2588 nsCycleCollectionParticipant* mParticipant;
2589 nsCycleCollectingAutoRefCnt* mRefCnt;
2592 // Segments are 4 KiB on 32-bit and 8 KiB on 64-bit.
2593 static const size_t kSegmentSize = sizeof(void*) * 1024;
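  // (sizeof(void*) is 4 on 32-bit and 8 on 64-bit, so kSegmentSize works out
  // to 4 * 1024 = 4096 bytes or 8 * 1024 = 8192 bytes, respectively.)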
2594 typedef SegmentedVector<SnowWhiteObject, kSegmentSize, InfallibleAllocPolicy>
2595 ObjectsVector;
2597 public:
2598 SnowWhiteKiller(nsCycleCollector* aCollector, SliceBudget* aBudget)
2599 : mCollector(aCollector),
2600 mObjects(kSegmentSize),
2601 mBudget(aBudget),
2602 mSawSnowWhiteObjects(false) {
2603 MOZ_ASSERT(mCollector, "Calling SnowWhiteKiller after nsCC went away");
2606 explicit SnowWhiteKiller(nsCycleCollector* aCollector)
2607 : SnowWhiteKiller(aCollector, nullptr) {}
2609 ~SnowWhiteKiller() {
2610 for (auto iter = mObjects.Iter(); !iter.Done(); iter.Next()) {
2611 SnowWhiteObject& o = iter.Get();
2612 MaybeKillObject(o);
2616 private:
2617 void MaybeKillObject(SnowWhiteObject& aObject) {
2618 if (!aObject.mRefCnt->get() && !aObject.mRefCnt->IsInPurpleBuffer()) {
2619 mCollector->RemoveObjectFromGraph(aObject.mPointer);
2620 aObject.mRefCnt->stabilizeForDeletion();
2622 JS::AutoEnterCycleCollection autocc(mCollector->Runtime()->Runtime());
2623 aObject.mParticipant->Trace(aObject.mPointer, *this, nullptr);
2625 aObject.mParticipant->DeleteCycleCollectable(aObject.mPointer);
2629 public:
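  // Visit() operates in one of two modes. With a budget (incremental
  // freeing), each snow-white entry is removed from the purple buffer and
  // killed immediately, and the walk stops once the budget is exhausted.
  // Without a budget, entries are batched into mObjects and killed together
  // in the destructor, after the purple-buffer walk has finished.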
2630 bool Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry) {
2631 if (mBudget) {
2632 if (mBudget->isOverBudget()) {
2633 return false;
2635 mBudget->step();
2638 MOZ_ASSERT(aEntry->mObject, "Null object in purple buffer");
2639 if (!aEntry->mRefCnt->get()) {
2640 mSawSnowWhiteObjects = true;
2641 void* o = aEntry->mObject;
2642 nsCycleCollectionParticipant* cp = aEntry->mParticipant;
2643 ToParticipant(o, &cp);
2644 SnowWhiteObject swo = {o, cp, aEntry->mRefCnt};
2645 if (!mBudget) {
2646 mObjects.InfallibleAppend(swo);
2648 aBuffer.Remove(aEntry);
2649 if (mBudget) {
2650 MaybeKillObject(swo);
2653 return true;
2656 bool HasSnowWhiteObjects() const { return !mObjects.IsEmpty(); }
2658 bool SawSnowWhiteObjects() const { return mSawSnowWhiteObjects; }
2660 virtual void Trace(JS::Heap<JS::Value>* aValue, const char* aName,
2661 void* aClosure) const override {
2662 const JS::Value& val = aValue->unbarrieredGet();
2663 if (val.isGCThing() && ValueIsGrayCCThing(val)) {
2664 MOZ_ASSERT(!js::gc::IsInsideNursery(val.toGCThing()));
2665 mCollector->GetJSPurpleBuffer()->mValues.InfallibleAppend(val);
2669 virtual void Trace(JS::Heap<jsid>* aId, const char* aName,
2670 void* aClosure) const override {}
2672 void AppendJSObjectToPurpleBuffer(JSObject* obj) const {
2673 if (obj && JS::ObjectIsMarkedGray(obj)) {
2674 MOZ_ASSERT(JS::ObjectIsTenured(obj));
2675 mCollector->GetJSPurpleBuffer()->mObjects.InfallibleAppend(obj);
2679 virtual void Trace(JS::Heap<JSObject*>* aObject, const char* aName,
2680 void* aClosure) const override {
2681 AppendJSObjectToPurpleBuffer(aObject->unbarrieredGet());
2684 virtual void Trace(nsWrapperCache* aWrapperCache, const char* aName,
2685 void* aClosure) const override {
2686 AppendJSObjectToPurpleBuffer(aWrapperCache->GetWrapperPreserveColor());
2689 virtual void Trace(JS::TenuredHeap<JSObject*>* aObject, const char* aName,
2690 void* aClosure) const override {
2691 AppendJSObjectToPurpleBuffer(aObject->unbarrieredGetPtr());
2694 virtual void Trace(JS::Heap<JSString*>* aString, const char* aName,
2695 void* aClosure) const override {}
2697 virtual void Trace(JS::Heap<JSScript*>* aScript, const char* aName,
2698 void* aClosure) const override {}
2700 virtual void Trace(JS::Heap<JSFunction*>* aFunction, const char* aName,
2701 void* aClosure) const override {}
2703 private:
2704 RefPtr<nsCycleCollector> mCollector;
2705 ObjectsVector mObjects;
2706 SliceBudget* mBudget;
2707 bool mSawSnowWhiteObjects;
2710 class RemoveSkippableVisitor : public SnowWhiteKiller {
2711 public:
2712 RemoveSkippableVisitor(nsCycleCollector* aCollector, SliceBudget& aBudget,
2713 bool aRemoveChildlessNodes,
2714 bool aAsyncSnowWhiteFreeing,
2715 CC_ForgetSkippableCallback aCb)
2716 : SnowWhiteKiller(aCollector),
2717 mBudget(aBudget),
2718 mRemoveChildlessNodes(aRemoveChildlessNodes),
2719 mAsyncSnowWhiteFreeing(aAsyncSnowWhiteFreeing),
2720 mDispatchedDeferredDeletion(false),
2721 mCallback(aCb) {}
2723 ~RemoveSkippableVisitor() {
2724 // Note, we must call the callback before SnowWhiteKiller calls
2725 // DeleteCycleCollectable!
2726 if (mCallback) {
2727 mCallback();
2729 if (HasSnowWhiteObjects()) {
2730 // Effectively a continuation.
2731 nsCycleCollector_dispatchDeferredDeletion(true);
2735 bool Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry) {
2736 if (mBudget.isOverBudget()) {
2737 return false;
2740 // CanSkip calls can be a bit slow, so increase the likelihood that
2741 // isOverBudget actually checks whether we're over the time budget.
2742 mBudget.step(5);
2743 MOZ_ASSERT(aEntry->mObject, "null mObject in purple buffer");
2744 if (!aEntry->mRefCnt->get()) {
2745 if (!mAsyncSnowWhiteFreeing) {
2746 SnowWhiteKiller::Visit(aBuffer, aEntry);
2747 } else if (!mDispatchedDeferredDeletion) {
2748 mDispatchedDeferredDeletion = true;
2749 nsCycleCollector_dispatchDeferredDeletion(false);
2751 return true;
2753 void* o = aEntry->mObject;
2754 nsCycleCollectionParticipant* cp = aEntry->mParticipant;
2755 ToParticipant(o, &cp);
2756 if (aEntry->mRefCnt->IsPurple() && !cp->CanSkip(o, false) &&
2757 (!mRemoveChildlessNodes || MayHaveChild(o, cp))) {
2758 return true;
2760 aBuffer.Remove(aEntry);
2761 return true;
2764 private:
2765 SliceBudget& mBudget;
2766 bool mRemoveChildlessNodes;
2767 bool mAsyncSnowWhiteFreeing;
2768 bool mDispatchedDeferredDeletion;
2769 CC_ForgetSkippableCallback mCallback;
2772 void nsPurpleBuffer::RemoveSkippable(nsCycleCollector* aCollector,
2773 SliceBudget& aBudget,
2774 bool aRemoveChildlessNodes,
2775 bool aAsyncSnowWhiteFreeing,
2776 CC_ForgetSkippableCallback aCb) {
2777 RemoveSkippableVisitor visitor(aCollector, aBudget, aRemoveChildlessNodes,
2778 aAsyncSnowWhiteFreeing, aCb);
2779 VisitEntries(visitor);
2782 bool nsCycleCollector::FreeSnowWhite(bool aUntilNoSWInPurpleBuffer) {
2783 CheckThreadSafety();
2785 if (mFreeingSnowWhite) {
2786 return false;
2789 AUTO_PROFILER_LABEL_CATEGORY_PAIR(GCCC_FreeSnowWhite);
2791 AutoRestore<bool> ar(mFreeingSnowWhite);
2792 mFreeingSnowWhite = true;
2794 bool hadSnowWhiteObjects = false;
2795 do {
2796 SnowWhiteKiller visitor(this);
2797 mPurpleBuf.VisitEntries(visitor);
2798 hadSnowWhiteObjects = hadSnowWhiteObjects || visitor.HasSnowWhiteObjects();
2799 if (!visitor.HasSnowWhiteObjects()) {
2800 break;
2802 } while (aUntilNoSWInPurpleBuffer);
2803 return hadSnowWhiteObjects;
2806 bool nsCycleCollector::FreeSnowWhiteWithBudget(SliceBudget& aBudget) {
2807 CheckThreadSafety();
2809 if (mFreeingSnowWhite) {
2810 return false;
2813 AUTO_PROFILER_LABEL_CATEGORY_PAIR(GCCC_FreeSnowWhite);
2814 AutoRestore<bool> ar(mFreeingSnowWhite);
2815 mFreeingSnowWhite = true;
2817 SnowWhiteKiller visitor(this, &aBudget);
2818 mPurpleBuf.VisitEntries(visitor);
2819 return visitor.SawSnowWhiteObjects();
2823 void nsCycleCollector::ForgetSkippable(SliceBudget& aBudget,
2824 bool aRemoveChildlessNodes,
2825 bool aAsyncSnowWhiteFreeing) {
2826 CheckThreadSafety();
2828 if (mFreeingSnowWhite) {
2829 return;
2832 // If we remove things from the purple buffer during graph building, we may
2833 // lose track of an object that was mutated during graph building.
2834 MOZ_ASSERT(IsIdle());
2836 if (mCCJSRuntime) {
2837 mCCJSRuntime->PrepareForForgetSkippable();
2839 MOZ_ASSERT(
2840 !mScanInProgress,
2841 "Don't forget skippable or free snow-white while scan is in progress.");
2842 mPurpleBuf.RemoveSkippable(this, aBudget, aRemoveChildlessNodes,
2843 aAsyncSnowWhiteFreeing, mForgetSkippableCB);
2846 MOZ_NEVER_INLINE void nsCycleCollector::MarkRoots(SliceBudget& aBudget) {
2847 JS::AutoAssertNoGC nogc;
2848 TimeLog timeLog;
2849 AutoRestore<bool> ar(mScanInProgress);
2850 MOZ_RELEASE_ASSERT(!mScanInProgress);
2851 mScanInProgress = true;
2852 MOZ_ASSERT(mIncrementalPhase == GraphBuildingPhase);
2854 AUTO_PROFILER_LABEL_CATEGORY_PAIR(GCCC_BuildGraph);
2855 JS::AutoEnterCycleCollection autocc(Runtime()->Runtime());
2856 bool doneBuilding = mBuilder->BuildGraph(aBudget);
2858 if (!doneBuilding) {
2859 timeLog.Checkpoint("MarkRoots()");
2860 return;
2863 mBuilder = nullptr;
2864 mIncrementalPhase = ScanAndCollectWhitePhase;
2865 timeLog.Checkpoint("MarkRoots()");
2868 ////////////////////////////////////////////////////////////////////////
2869 // Bacon & Rajan's |ScanRoots| routine.
2870 ////////////////////////////////////////////////////////////////////////
2872 struct ScanBlackVisitor {
2873 ScanBlackVisitor(uint32_t& aWhiteNodeCount, bool& aFailed)
2874 : mWhiteNodeCount(aWhiteNodeCount), mFailed(aFailed) {}
2876 bool ShouldVisitNode(PtrInfo const* aPi) { return aPi->mColor != black; }
2878 MOZ_NEVER_INLINE void VisitNode(PtrInfo* aPi) {
2879 if (aPi->mColor == white) {
2880 --mWhiteNodeCount;
2882 aPi->mColor = black;
2885 void Failed() { mFailed = true; }
2887 private:
2888 uint32_t& mWhiteNodeCount;
2889 bool& mFailed;
2892 static void FloodBlackNode(uint32_t& aWhiteNodeCount, bool& aFailed,
2893 PtrInfo* aPi) {
2894 GraphWalker<ScanBlackVisitor>(ScanBlackVisitor(aWhiteNodeCount, aFailed))
2895 .Walk(aPi);
2896 MOZ_ASSERT(aPi->mColor == black || !aPi->WasTraversed(),
2897 "FloodBlackNode should make aPi black");
2900 // Iterate over the WeakMaps. If we mark anything while iterating
2901 // over the WeakMaps, we must iterate over all of the WeakMaps again.
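// The marking rules below are: if a map is live (black) and a key's
// delegate is live, then the key must be marked live too; and if both the
// map and the key are live, then the value must be marked live. Entries
// that are null were already known to be black, so they default to black.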
2902 void nsCycleCollector::ScanWeakMaps() {
2903 bool anyChanged;
2904 bool failed = false;
2905 do {
2906 anyChanged = false;
2907 for (uint32_t i = 0; i < mGraph.mWeakMaps.Length(); i++) {
2908 WeakMapping* wm = &mGraph.mWeakMaps[i];
2910 // If any of these are null, the original object was marked black.
2911 uint32_t mColor = wm->mMap ? wm->mMap->mColor : black;
2912 uint32_t kColor = wm->mKey ? wm->mKey->mColor : black;
2913 uint32_t kdColor = wm->mKeyDelegate ? wm->mKeyDelegate->mColor : black;
2914 uint32_t vColor = wm->mVal ? wm->mVal->mColor : black;
2916 MOZ_ASSERT(mColor != grey, "Uncolored weak map");
2917 MOZ_ASSERT(kColor != grey, "Uncolored weak map key");
2918 MOZ_ASSERT(kdColor != grey, "Uncolored weak map key delegate");
2919 MOZ_ASSERT(vColor != grey, "Uncolored weak map value");
2921 if (mColor == black && kColor != black && kdColor == black) {
2922 FloodBlackNode(mWhiteNodeCount, failed, wm->mKey);
2923 anyChanged = true;
2926 if (mColor == black && kColor == black && vColor != black) {
2927 FloodBlackNode(mWhiteNodeCount, failed, wm->mVal);
2928 anyChanged = true;
2931 } while (anyChanged);
2933 MOZ_ASSERT(!failed, "Ran out of memory in ScanWeakMaps");
2936 // Flood black from any objects in the purple buffer that are in the CC graph.
2937 class PurpleScanBlackVisitor {
2938 public:
2939 PurpleScanBlackVisitor(CCGraph& aGraph, nsCycleCollectorLogger* aLogger,
2940 uint32_t& aCount, bool& aFailed)
2941 : mGraph(aGraph), mLogger(aLogger), mCount(aCount), mFailed(aFailed) {}
2943 bool Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry) {
2944 MOZ_ASSERT(aEntry->mObject,
2945 "Entries with null mObject shouldn't be in the purple buffer.");
2946 MOZ_ASSERT(aEntry->mRefCnt->get() != 0,
2947 "Snow-white objects shouldn't be in the purple buffer.");
2949 void* obj = aEntry->mObject;
2951 MOZ_ASSERT(
2952 aEntry->mParticipant ||
2953 CanonicalizeXPCOMParticipant(static_cast<nsISupports*>(obj)) == obj,
2954 "Suspect nsISupports pointer must be canonical");
2956 PtrInfo* pi = mGraph.FindNode(obj);
2957 if (!pi) {
2958 return true;
2960 MOZ_ASSERT(pi->mParticipant,
2961 "No dead objects should be in the purple buffer.");
2962 if (MOZ_UNLIKELY(mLogger)) {
2963 mLogger->NoteIncrementalRoot((uint64_t)pi->mPointer);
2965 if (pi->mColor == black) {
2966 return true;
2968 FloodBlackNode(mCount, mFailed, pi);
2969 return true;
2972 private:
2973 CCGraph& mGraph;
2974 RefPtr<nsCycleCollectorLogger> mLogger;
2975 uint32_t& mCount;
2976 bool& mFailed;
2979 // Objects that have been stored somewhere since the start of incremental graph
2980 // building must be treated as live for this cycle collection, because we may
2981 // not have accurate information about who holds references to them.
2982 void nsCycleCollector::ScanIncrementalRoots() {
2983 TimeLog timeLog;
2985 // Reference counted objects:
2986 // We cleared the purple buffer at the start of the current ICC, so if a
2987 // refcounted object is purple, it may have been AddRef'd during the current
2988 // ICC. (It may also have only been released.) If that is the case, we cannot
2989 // be sure that the set of things pointing to the object in the CC graph
2990 // is accurate. Therefore, for safety, we treat any purple objects as being
2991 // live during the current CC. We don't remove anything from the purple
2992 // buffer here, so these objects will be suspected and freed in the next CC
2993 // if they are garbage.
2994 bool failed = false;
2995 PurpleScanBlackVisitor purpleScanBlackVisitor(mGraph, mLogger,
2996 mWhiteNodeCount, failed);
2997 mPurpleBuf.VisitEntries(purpleScanBlackVisitor);
2998 timeLog.Checkpoint("ScanIncrementalRoots::fix purple");
3000 bool hasJSRuntime = !!mCCJSRuntime;
3001 nsCycleCollectionParticipant* jsParticipant =
3002 hasJSRuntime ? mCCJSRuntime->GCThingParticipant() : nullptr;
3003 nsCycleCollectionParticipant* zoneParticipant =
3004 hasJSRuntime ? mCCJSRuntime->ZoneParticipant() : nullptr;
3005 bool hasLogger = !!mLogger;
3007 NodePool::Enumerator etor(mGraph.mNodes);
3008 while (!etor.IsDone()) {
3009 PtrInfo* pi = etor.GetNext();
3011 // As an optimization, if an object has already been determined to be live,
3012 // don't consider it further. We can't do this if there is a listener,
3013 // because the listener wants to know the complete set of incremental roots.
3014 if (pi->mColor == black && MOZ_LIKELY(!hasLogger)) {
3015 continue;
3018 // Garbage collected objects:
3019     // If a GCed object was added to the graph with a refcount of zero and is
3020     // now marked black by the GC, it was probably gray before and was exposed
3021     // to active JS, so it may have been stored somewhere; it therefore needs
3022     // to be treated as live.
3023 if (pi->IsGrayJS() && MOZ_LIKELY(hasJSRuntime)) {
3024 // If the object is still marked gray by the GC, nothing could have gotten
3025 // hold of it, so it isn't an incremental root.
3026 if (pi->mParticipant == jsParticipant) {
3027 JS::GCCellPtr ptr(pi->mPointer, JS::GCThingTraceKind(pi->mPointer));
3028 if (GCThingIsGrayCCThing(ptr)) {
3029 continue;
3031 } else if (pi->mParticipant == zoneParticipant) {
3032 JS::Zone* zone = static_cast<JS::Zone*>(pi->mPointer);
3033 if (js::ZoneGlobalsAreAllGray(zone)) {
3034 continue;
3036 } else {
3037 MOZ_ASSERT(false, "Non-JS thing with 0 refcount? Treating as live.");
3039 } else if (!pi->mParticipant && pi->WasTraversed()) {
3040 // Dead traversed refcounted objects:
3041 // If the object was traversed, it must have been alive at the start of
3042 // the CC, and thus had a positive refcount. It is dead now, so its
3043 // refcount must have decreased at some point during the CC. Therefore,
3044 // it would be in the purple buffer if it wasn't dead, so treat it as an
3045 // incremental root.
3047       // This should not cause leaks: as the object died it should have
3048       // released everything it held onto, which adds those objects to the
3049       // purple buffer so they are considered in the next CC.
3050 } else {
3051 continue;
3054 // At this point, pi must be an incremental root.
3056 // If there's a listener, tell it about this root. We don't bother with the
3057 // optimization of skipping the Walk() if pi is black: it will just return
3058 // without doing anything and there's no need to make this case faster.
3059 if (MOZ_UNLIKELY(hasLogger) && pi->mPointer) {
3060 // Dead objects aren't logged. See bug 1031370.
3061 mLogger->NoteIncrementalRoot((uint64_t)pi->mPointer);
3064 FloodBlackNode(mWhiteNodeCount, failed, pi);
3067 timeLog.Checkpoint("ScanIncrementalRoots::fix nodes");
3068 NS_ASSERTION(!failed, "Ran out of memory in ScanIncrementalRoots");
3071 // Mark nodes white and make sure their refcounts are consistent.
3072 // No nodes are marked black during this pass, to ensure that refcount
3073 // checking is run on all nodes not marked black by ScanIncrementalRoots.
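// For example, a node with mRefCount == 2 and mInternalRefs == 2 has every
// reference accounted for by edges inside the graph, so it is marked white
// (garbage, unless a later pass re-blackens it). A node with mRefCount == 3
// and mInternalRefs == 2 has an external reference, stays grey here, and is
// flooded black by ScanBlackNodes() afterwards.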
3074 void nsCycleCollector::ScanWhiteNodes(bool aFullySynchGraphBuild) {
3075 NodePool::Enumerator nodeEnum(mGraph.mNodes);
3076 while (!nodeEnum.IsDone()) {
3077 PtrInfo* pi = nodeEnum.GetNext();
3078 if (pi->mColor == black) {
3079 // Incremental roots can be in a nonsensical state, so don't
3080 // check them. This will miss checking nodes that are merely
3081 // reachable from incremental roots.
3082 MOZ_ASSERT(!aFullySynchGraphBuild,
3083 "In a synch CC, no nodes should be marked black early on.");
3084 continue;
3086 MOZ_ASSERT(pi->mColor == grey);
3088 if (!pi->WasTraversed()) {
3089 // This node was deleted before it was traversed, so there's no reason
3090 // to look at it.
3091 MOZ_ASSERT(!pi->mParticipant,
3092 "Live nodes should all have been traversed");
3093 continue;
3096 if (pi->mInternalRefs == pi->mRefCount || pi->IsGrayJS()) {
3097 pi->mColor = white;
3098 ++mWhiteNodeCount;
3099 continue;
3102 pi->AnnotatedReleaseAssert(
3103 pi->mInternalRefs <= pi->mRefCount,
3104 "More references to an object than its refcount");
3106 // This node will get marked black in the next pass.
3110 // Any remaining grey nodes that haven't already been deleted must be alive,
3111 // so mark them and their children black. Any nodes that are black must have
3112 // already had their children marked black, so there's no need to look at them
3113 // again. This pass may turn some white nodes to black.
3114 void nsCycleCollector::ScanBlackNodes() {
3115 bool failed = false;
3116 NodePool::Enumerator nodeEnum(mGraph.mNodes);
3117 while (!nodeEnum.IsDone()) {
3118 PtrInfo* pi = nodeEnum.GetNext();
3119 if (pi->mColor == grey && pi->WasTraversed()) {
3120 FloodBlackNode(mWhiteNodeCount, failed, pi);
3123 NS_ASSERTION(!failed, "Ran out of memory in ScanBlackNodes");
3126 void nsCycleCollector::ScanRoots(bool aFullySynchGraphBuild) {
3127 JS::AutoAssertNoGC nogc;
3128 AutoRestore<bool> ar(mScanInProgress);
3129 MOZ_RELEASE_ASSERT(!mScanInProgress);
3130 mScanInProgress = true;
3131 mWhiteNodeCount = 0;
3132 MOZ_ASSERT(mIncrementalPhase == ScanAndCollectWhitePhase);
3134 JS::AutoEnterCycleCollection autocc(Runtime()->Runtime());
3136 if (!aFullySynchGraphBuild) {
3137 ScanIncrementalRoots();
3140 TimeLog timeLog;
3141 ScanWhiteNodes(aFullySynchGraphBuild);
3142 timeLog.Checkpoint("ScanRoots::ScanWhiteNodes");
3144 ScanBlackNodes();
3145 timeLog.Checkpoint("ScanRoots::ScanBlackNodes");
3147 // Scanning weak maps must be done last.
3148 ScanWeakMaps();
3149 timeLog.Checkpoint("ScanRoots::ScanWeakMaps");
3151 if (mLogger) {
3152 mLogger->BeginResults();
3154 NodePool::Enumerator etor(mGraph.mNodes);
3155 while (!etor.IsDone()) {
3156 PtrInfo* pi = etor.GetNext();
3157 if (!pi->WasTraversed()) {
3158 continue;
3160 switch (pi->mColor) {
3161 case black:
3162 if (!pi->IsGrayJS() && !pi->IsBlackJS() &&
3163 pi->mInternalRefs != pi->mRefCount) {
3164 mLogger->DescribeRoot((uint64_t)pi->mPointer, pi->mInternalRefs);
3166 break;
3167 case white:
3168 mLogger->DescribeGarbage((uint64_t)pi->mPointer);
3169 break;
3170 case grey:
3171 MOZ_ASSERT(false, "All traversed objects should be black or white");
3172 break;
3176 mLogger->End();
3177 mLogger = nullptr;
3178 timeLog.Checkpoint("ScanRoots::listener");
3182 ////////////////////////////////////////////////////////////////////////
3183 // Bacon & Rajan's |CollectWhite| routine, somewhat modified.
3184 ////////////////////////////////////////////////////////////////////////
3186 bool nsCycleCollector::CollectWhite() {
3187 // Explanation of "somewhat modified": we have no way to collect the
3188 // set of whites "all at once", we have to ask each of them to drop
3189 // their outgoing links and assume this will cause the garbage cycle
3190 // to *mostly* self-destruct (except for the reference we continue
3191 // to hold).
3193 // To do this "safely" we must make sure that the white nodes we're
3194 // operating on are stable for the duration of our operation. So we
3195 // make 3 sets of calls to language runtimes:
3197 // - Root(whites), which should pin the whites in memory.
3198 // - Unlink(whites), which drops outgoing links on each white.
3199 // - Unroot(whites), which returns the whites to normal GC.
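  // In pseudocode, the protocol implemented below is:
  //
  //   for (PtrInfo* pi : whites) pi->mParticipant->Root(pi->mPointer);
  //   if (mBeforeUnlinkCB) mBeforeUnlinkCB();
  //   for (PtrInfo* pi : whites) pi->mParticipant->Unlink(pi->mPointer);
  //   for (PtrInfo* pi : whites) pi->mParticipant->Unroot(pi->mPointer);
  //
  // Root() pins each white object so that Unlink() on one node cannot free a
  // neighbor we still have to visit; Unroot() then drops our extra hold and
  // lets the now-unlinked cycle finish destroying itself.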
3201 // Segments are 4 KiB on 32-bit and 8 KiB on 64-bit.
3202 static const size_t kSegmentSize = sizeof(void*) * 1024;
3203 SegmentedVector<PtrInfo*, kSegmentSize, InfallibleAllocPolicy> whiteNodes(
3204 kSegmentSize);
3205 TimeLog timeLog;
3207 MOZ_ASSERT(mIncrementalPhase == ScanAndCollectWhitePhase);
3209 uint32_t numWhiteNodes = 0;
3210 uint32_t numWhiteGCed = 0;
3211 uint32_t numWhiteJSZones = 0;
3214 JS::AutoAssertNoGC nogc;
3215 bool hasJSRuntime = !!mCCJSRuntime;
3216 nsCycleCollectionParticipant* zoneParticipant =
3217 hasJSRuntime ? mCCJSRuntime->ZoneParticipant() : nullptr;
3219 NodePool::Enumerator etor(mGraph.mNodes);
3220 while (!etor.IsDone()) {
3221 PtrInfo* pinfo = etor.GetNext();
3222 if (pinfo->mColor == white && pinfo->mParticipant) {
3223 if (pinfo->IsGrayJS()) {
3224 MOZ_ASSERT(mCCJSRuntime);
3225 ++numWhiteGCed;
3226 JS::Zone* zone;
3227 if (MOZ_UNLIKELY(pinfo->mParticipant == zoneParticipant)) {
3228 ++numWhiteJSZones;
3229 zone = static_cast<JS::Zone*>(pinfo->mPointer);
3230 } else {
3231 JS::GCCellPtr ptr(pinfo->mPointer,
3232 JS::GCThingTraceKind(pinfo->mPointer));
3233 zone = JS::GetTenuredGCThingZone(ptr);
3235 mCCJSRuntime->AddZoneWaitingForGC(zone);
3236 } else {
3237 whiteNodes.InfallibleAppend(pinfo);
3238 pinfo->mParticipant->Root(pinfo->mPointer);
3239 ++numWhiteNodes;
3245 mResults.mFreedRefCounted += numWhiteNodes;
3246 mResults.mFreedGCed += numWhiteGCed;
3247 mResults.mFreedJSZones += numWhiteJSZones;
3249 timeLog.Checkpoint("CollectWhite::Root");
3251 if (mBeforeUnlinkCB) {
3252 mBeforeUnlinkCB();
3253 timeLog.Checkpoint("CollectWhite::BeforeUnlinkCB");
3256 // Unlink() can trigger a GC, so do not touch any JS or anything
3257 // else not in whiteNodes after here.
3259 for (auto iter = whiteNodes.Iter(); !iter.Done(); iter.Next()) {
3260 PtrInfo* pinfo = iter.Get();
3261 MOZ_ASSERT(pinfo->mParticipant,
3262 "Unlink shouldn't see objects removed from graph.");
3263 pinfo->mParticipant->Unlink(pinfo->mPointer);
3264 #ifdef DEBUG
3265 if (mCCJSRuntime) {
3266 mCCJSRuntime->AssertNoObjectsToTrace(pinfo->mPointer);
3268 #endif
3270 timeLog.Checkpoint("CollectWhite::Unlink");
3272 JS::AutoAssertNoGC nogc;
3273 for (auto iter = whiteNodes.Iter(); !iter.Done(); iter.Next()) {
3274 PtrInfo* pinfo = iter.Get();
3275 MOZ_ASSERT(pinfo->mParticipant,
3276 "Unroot shouldn't see objects removed from graph.");
3277 pinfo->mParticipant->Unroot(pinfo->mPointer);
3279 timeLog.Checkpoint("CollectWhite::Unroot");
3281 nsCycleCollector_dispatchDeferredDeletion(false, true);
3282 timeLog.Checkpoint("CollectWhite::dispatchDeferredDeletion");
3284 mIncrementalPhase = CleanupPhase;
3286 return numWhiteNodes > 0 || numWhiteGCed > 0 || numWhiteJSZones > 0;
3289 ////////////////////////
3290 // Memory reporting
3291 ////////////////////////
3293 MOZ_DEFINE_MALLOC_SIZE_OF(CycleCollectorMallocSizeOf)
3295 NS_IMETHODIMP
3296 nsCycleCollector::CollectReports(nsIHandleReportCallback* aHandleReport,
3297 nsISupports* aData, bool aAnonymize) {
3298 size_t objectSize, graphSize, purpleBufferSize;
3299 SizeOfIncludingThis(CycleCollectorMallocSizeOf, &objectSize, &graphSize,
3300 &purpleBufferSize);
3302 if (objectSize > 0) {
3303 MOZ_COLLECT_REPORT("explicit/cycle-collector/collector-object", KIND_HEAP,
3304 UNITS_BYTES, objectSize,
3305 "Memory used for the cycle collector object itself.");
3308 if (graphSize > 0) {
3309 MOZ_COLLECT_REPORT(
3310 "explicit/cycle-collector/graph", KIND_HEAP, UNITS_BYTES, graphSize,
3311 "Memory used for the cycle collector's graph. This should be zero when "
3312 "the collector is idle.");
3315 if (purpleBufferSize > 0) {
3316 MOZ_COLLECT_REPORT("explicit/cycle-collector/purple-buffer", KIND_HEAP,
3317 UNITS_BYTES, purpleBufferSize,
3318 "Memory used for the cycle collector's purple buffer.");
3321 return NS_OK;
3324 ////////////////////////////////////////////////////////////////////////
3325 // Collector implementation
3326 ////////////////////////////////////////////////////////////////////////
3328 nsCycleCollector::nsCycleCollector()
3329 : mActivelyCollecting(false),
3330 mFreeingSnowWhite(false),
3331 mScanInProgress(false),
3332 mCCJSRuntime(nullptr),
3333 mIncrementalPhase(IdlePhase),
3334 #ifdef DEBUG
3335 mEventTarget(GetCurrentSerialEventTarget()),
3336 #endif
3337 mWhiteNodeCount(0),
3338 mBeforeUnlinkCB(nullptr),
3339 mForgetSkippableCB(nullptr),
3340 mUnmergedNeeded(0),
3341 mMergedInARow(0) {
3344 nsCycleCollector::~nsCycleCollector() {
3345 MOZ_ASSERT(!mJSPurpleBuffer, "Didn't call JSPurpleBuffer::Destroy?");
3347 UnregisterWeakMemoryReporter(this);
3350 void nsCycleCollector::SetCCJSRuntime(CycleCollectedJSRuntime* aCCRuntime) {
3351 MOZ_RELEASE_ASSERT(
3352 !mCCJSRuntime,
3353 "Multiple registrations of CycleCollectedJSRuntime in cycle collector");
3354 mCCJSRuntime = aCCRuntime;
3356 if (!NS_IsMainThread()) {
3357 return;
3360 // We can't register as a reporter in nsCycleCollector() because that runs
3361 // before the memory reporter manager is initialized. So we do it here
3362 // instead.
3363 RegisterWeakMemoryReporter(this);
3366 void nsCycleCollector::ClearCCJSRuntime() {
3367 MOZ_RELEASE_ASSERT(mCCJSRuntime,
3368 "Clearing CycleCollectedJSRuntime in cycle collector "
3369 "before a runtime was registered");
3370 mCCJSRuntime = nullptr;
3373 #ifdef DEBUG
3374 static bool HasParticipant(void* aPtr, nsCycleCollectionParticipant* aParti) {
3375 if (aParti) {
3376 return true;
3379 nsXPCOMCycleCollectionParticipant* xcp;
3380 ToParticipant(static_cast<nsISupports*>(aPtr), &xcp);
3381 return xcp != nullptr;
3383 #endif
3385 MOZ_ALWAYS_INLINE void nsCycleCollector::Suspect(
3386 void* aPtr, nsCycleCollectionParticipant* aParti,
3387 nsCycleCollectingAutoRefCnt* aRefCnt) {
3388 CheckThreadSafety();
3390   // Don't call AddRef or Release on a CCed object from within a Traverse() method.
3391 MOZ_ASSERT(!mScanInProgress,
3392 "Attempted to call Suspect() while a scan was in progress");
3394 if (MOZ_UNLIKELY(mScanInProgress)) {
3395 return;
3398 MOZ_ASSERT(aPtr, "Don't suspect null pointers");
3400 MOZ_ASSERT(HasParticipant(aPtr, aParti),
3401 "Suspected nsISupports pointer must QI to "
3402 "nsXPCOMCycleCollectionParticipant");
3404 MOZ_ASSERT(aParti || CanonicalizeXPCOMParticipant(
3405 static_cast<nsISupports*>(aPtr)) == aPtr,
3406 "Suspect nsISupports pointer must be canonical");
3408 mPurpleBuf.Put(aPtr, aParti, aRefCnt);
3411 void nsCycleCollector::SuspectNurseryEntries() {
3412 MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
3413 while (gNurseryPurpleBufferEntryCount) {
3414 NurseryPurpleBufferEntry& entry =
3415 gNurseryPurpleBufferEntry[--gNurseryPurpleBufferEntryCount];
3416 if (!entry.mRefCnt->IsPurple() && IsIdle()) {
3417 entry.mRefCnt->RemoveFromPurpleBuffer();
3418 } else {
3419 mPurpleBuf.Put(entry.mPtr, entry.mParticipant, entry.mRefCnt);
3424 void nsCycleCollector::CheckThreadSafety() {
3425 #ifdef DEBUG
3426 MOZ_ASSERT(mEventTarget->IsOnCurrentThread());
3427 #endif
3430 // The cycle collector uses the mark bitmap to discover what JS objects are
3431 // reachable only from XPConnect roots that might participate in cycles. We ask
3432 // the JS runtime whether we need to force a GC before this CC. It should only
3433 // be true when UnmarkGray has run out of stack. We also force GCs on shutdown
3434 // to collect cycles involving both DOM and JS, and in WantAllTraces CCs to
3435 // prevent hijinks from ForgetSkippable and compartmental GCs.
3436 void nsCycleCollector::FixGrayBits(bool aIsShutdown, TimeLog& aTimeLog) {
3437 CheckThreadSafety();
3439 if (!mCCJSRuntime) {
3440 return;
3443 // If we're not forcing a GC anyways due to shutdown or an all traces CC,
3444 // check to see if we still need to do one to fix the gray bits.
3445 if (!(aIsShutdown || (mLogger && mLogger->IsAllTraces()))) {
3446 mCCJSRuntime->FixWeakMappingGrayBits();
3447 aTimeLog.Checkpoint("FixWeakMappingGrayBits");
3449 bool needGC = !mCCJSRuntime->AreGCGrayBitsValid();
3450 // Only do a telemetry ping for non-shutdown CCs.
3451 CC_TELEMETRY(_NEED_GC, needGC);
3452 if (!needGC) {
3453 return;
3457 mResults.mForcedGC = true;
3459 uint32_t count = 0;
3460 do {
3461 if (aIsShutdown) {
3462 mCCJSRuntime->GarbageCollect(JS::GCOptions::Shutdown,
3463 JS::GCReason::SHUTDOWN_CC);
3464 } else {
3465 mCCJSRuntime->GarbageCollect(JS::GCOptions::Normal,
3466 JS::GCReason::CC_FORCED);
3469 mCCJSRuntime->FixWeakMappingGrayBits();
3471     // It's possible that FixWeakMappingGrayBits will hit OOM when unmarking
3472     // gray, and we will have to go around again. The second time there should
3473     // not be any weak mappings to fix up, so the loop body runs at most twice.
3474 MOZ_RELEASE_ASSERT(count < 2);
3475 count++;
3476 } while (!mCCJSRuntime->AreGCGrayBitsValid());
3478 aTimeLog.Checkpoint("FixGrayBits");
3481 bool nsCycleCollector::IsIncrementalGCInProgress() {
3482 return mCCJSRuntime && JS::IsIncrementalGCInProgress(mCCJSRuntime->Runtime());
3485 void nsCycleCollector::FinishAnyIncrementalGCInProgress() {
3486 if (IsIncrementalGCInProgress()) {
3487 NS_WARNING("Finishing incremental GC in progress during CC");
3488 JSContext* cx = CycleCollectedJSContext::Get()->Context();
3489 JS::PrepareForIncrementalGC(cx);
3490 JS::FinishIncrementalGC(cx, JS::GCReason::CC_FORCED);
3494 void nsCycleCollector::CleanupAfterCollection() {
3495 TimeLog timeLog;
3496 MOZ_ASSERT(mIncrementalPhase == CleanupPhase);
3497 MOZ_RELEASE_ASSERT(!mScanInProgress);
3498 mGraph.Clear();
3499 timeLog.Checkpoint("CleanupAfterCollection::mGraph.Clear()");
3501 FreeSnowWhite(true);
3502 timeLog.Checkpoint("Collect::FreeSnowWhite");
3504 TimeStamp endTime = TimeStamp::Now();
3505 uint32_t interval = (uint32_t)((endTime - mCollectionStart).ToMilliseconds());
3506 #ifdef COLLECT_TIME_DEBUG
3507 printf("cc: total cycle collector time was %ums in %u slices\n", interval,
3508 mResults.mNumSlices);
3509 printf(
3510 "cc: visited %u ref counted and %u GCed objects, freed %d ref counted "
3511 "and %d GCed objects",
3512 mResults.mVisitedRefCounted, mResults.mVisitedGCed,
3513 mResults.mFreedRefCounted, mResults.mFreedGCed);
3514 uint32_t numVisited = mResults.mVisitedRefCounted + mResults.mVisitedGCed;
3515 if (numVisited > 1000) {
3516 uint32_t numFreed = mResults.mFreedRefCounted + mResults.mFreedGCed;
3517 printf(" (%d%%)", 100 * numFreed / numVisited);
3519 printf(".\ncc: \n");
3520 #endif
3522 CC_TELEMETRY(, interval);
3523 CC_TELEMETRY(_VISITED_REF_COUNTED, mResults.mVisitedRefCounted);
3524 CC_TELEMETRY(_VISITED_GCED, mResults.mVisitedGCed);
3525 CC_TELEMETRY(_COLLECTED, mWhiteNodeCount);
3526 timeLog.Checkpoint("CleanupAfterCollection::telemetry");
3528 PROFILER_MARKER(
3529 "CC", GCCC, MarkerOptions(MarkerTiming::IntervalEnd(endTime)),
3530 CCIntervalMarker, /* aIsStart */ false, nullptr, 0, 0, 0,
3531 mResults.mForcedGC, mResults.mMergedZones, mResults.mAnyManual,
3532 mResults.mVisitedRefCounted, mResults.mVisitedGCed,
3533 mResults.mFreedRefCounted, mResults.mFreedGCed, mResults.mFreedJSZones,
3534 mResults.mNumSlices, sCollectorData.get()->mStats->mMaxSliceTime);
3536 if (mCCJSRuntime) {
3537 mCCJSRuntime->FinalizeDeferredThings(
3538 mResults.mAnyManual ? CycleCollectedJSRuntime::FinalizeNow
3539 : CycleCollectedJSRuntime::FinalizeIncrementally);
3540 mCCJSRuntime->EndCycleCollectionCallback(mResults);
3541 timeLog.Checkpoint("CleanupAfterCollection::EndCycleCollectionCallback()");
3543 mIncrementalPhase = IdlePhase;

void nsCycleCollector::ShutdownCollect() {
  FinishAnyIncrementalGCInProgress();
  CycleCollectedJSContext* ccJSContext = CycleCollectedJSContext::Get();
  JS::ShutdownAsyncTasks(ccJSContext->Context());

  SliceBudget unlimitedBudget = SliceBudget::unlimited();
  uint32_t i;
  bool collectedAny = true;
  for (i = 0; i < DEFAULT_SHUTDOWN_COLLECTIONS && collectedAny; ++i) {
    collectedAny = Collect(CCReason::SHUTDOWN, ccIsManual::CCIsManual,
                           unlimitedBudget, nullptr);
    // Run any remaining tasks that may have been enqueued via
    // RunInStableState or DispatchToMicroTask. These can keep CCed objects
    // alive, and we want to clear them out before we run the CC again or
    // finish shutting down.
    ccJSContext->PerformMicroTaskCheckPoint(true);
    ccJSContext->ProcessStableStateQueue();
  }

  // This warning would happen very frequently, so don't do it unless we're
  // logging this CC, in which case we might care about how many CCs there
  // are.
  NS_WARNING_ASSERTION(
      !mParams.LogThisCC(mShutdownCount) || i < NORMAL_SHUTDOWN_COLLECTIONS,
      "Extra shutdown CC");
}

static void PrintPhase(const char* aPhase) {
#ifdef DEBUG_PHASES
  printf("cc: begin %s on %s\n", aPhase,
         NS_IsMainThread() ? "mainthread" : "worker");
#endif
}

bool nsCycleCollector::Collect(CCReason aReason, ccIsManual aIsManual,
                               SliceBudget& aBudget,
                               nsICycleCollectorListener* aManualListener,
                               bool aPreferShorterSlices) {
  AUTO_PROFILER_LABEL_RELEVANT_FOR_JS("Incremental CC", GCCC);

  CheckThreadSafety();

  // This can legitimately happen in a few cases. See bug 383651.
  if (mActivelyCollecting || mFreeingSnowWhite) {
    return false;
  }
  mActivelyCollecting = true;

  MOZ_ASSERT(!IsIncrementalGCInProgress());

  bool startedIdle = IsIdle();
  bool collectedAny = false;

  // If the CC started idle, it will call BeginCollection, which
  // will do FreeSnowWhite, so it doesn't need to be done here.
  //
  // If we're in CleanupPhase, we want to clear the graph before
  // FreeSnowWhite runs, so that we don't need to remove objects from the graph
  // one by one. CleanupAfterCollection will call FreeSnowWhite.
  if (!startedIdle && mIncrementalPhase != CleanupPhase) {
    TimeLog timeLog;
    FreeSnowWhite(true);
    timeLog.Checkpoint("Collect::FreeSnowWhite");
  }

  if (aIsManual == ccIsManual::CCIsManual) {
    mResults.mAnyManual = true;
  }

  ++mResults.mNumSlices;

  bool continueSlice = aBudget.isUnlimited() || !aPreferShorterSlices;
  do {
    switch (mIncrementalPhase) {
      case IdlePhase:
        PrintPhase("BeginCollection");
        BeginCollection(aReason, aIsManual, aManualListener);
        break;
      case GraphBuildingPhase:
        PrintPhase("MarkRoots");
        MarkRoots(aBudget);

        // Only continue this slice if we're running synchronously or the
        // next phase will probably be short, to reduce the max pause for this
        // collection.
        // (There's no need to check if we've finished graph building, because
        // if we haven't, we've already exceeded our budget, and will finish
        // this slice anyway.)
        continueSlice = aBudget.isUnlimited() ||
                        (mResults.mNumSlices < 3 && !aPreferShorterSlices);
        break;
      case ScanAndCollectWhitePhase:
        // We do ScanRoots and CollectWhite in a single slice to ensure
        // that we won't unlink a live object if a weak reference is
        // promoted to a strong reference after ScanRoots has finished.
        // See bug 926533.
        {
          AUTO_PROFILER_LABEL_CATEGORY_PAIR(GCCC_ScanRoots);
          PrintPhase("ScanRoots");
          ScanRoots(startedIdle);
        }
        {
          AUTO_PROFILER_LABEL_CATEGORY_PAIR(GCCC_CollectWhite);
          PrintPhase("CollectWhite");
          collectedAny = CollectWhite();
        }
        break;
      case CleanupPhase:
        PrintPhase("CleanupAfterCollection");
        CleanupAfterCollection();
        continueSlice = false;
        break;
    }
    if (continueSlice) {
      aBudget.forceCheck();
      continueSlice = !aBudget.isOverBudget();
    }
  } while (continueSlice);

  // Clear mActivelyCollecting here to ensure that a recursive call to
  // Collect() does something.
  mActivelyCollecting = false;

  if (aIsManual && !startedIdle) {
    // We were in the middle of an incremental CC (using its own listener).
    // Somebody has forced a CC, so after having finished out the current CC,
    // run the CC again using the new listener.
    MOZ_ASSERT(IsIdle());
    if (Collect(aReason, ccIsManual::CCIsManual, aBudget, aManualListener)) {
      collectedAny = true;
    }
  }

  MOZ_ASSERT_IF(aIsManual == CCIsManual, IsIdle());

  return collectedAny;
}
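
// A slice of Collect() advances the state machine through
//   IdlePhase -> GraphBuildingPhase -> ScanAndCollectWhitePhase ->
//   CleanupPhase -> IdlePhase,
// where only GraphBuildingPhase may span multiple slices; the other phases
// each complete within the slice that enters them.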

// Any JS objects we have in the graph could die when we GC, but we
// don't want to abandon the current CC, because the graph contains
// information about purple roots. So we synchronously finish off
// the current CC.
void nsCycleCollector::PrepareForGarbageCollection() {
  if (IsIdle()) {
    MOZ_ASSERT(mGraph.IsEmpty(), "Non-empty graph when idle");
    MOZ_ASSERT(!mBuilder, "Non-null builder when idle");
    if (mJSPurpleBuffer) {
      mJSPurpleBuffer->Destroy();
    }
    return;
  }

  FinishAnyCurrentCollection(CCReason::GC_WAITING);
}

void nsCycleCollector::FinishAnyCurrentCollection(CCReason aReason) {
  if (IsIdle()) {
    return;
  }

  SliceBudget unlimitedBudget = SliceBudget::unlimited();
  PrintPhase("FinishAnyCurrentCollection");
  // Use CCIsNotManual because we only want to finish the CC in progress.
  Collect(aReason, ccIsManual::CCIsNotManual, unlimitedBudget, nullptr);

  // It is only okay for Collect() to have failed to finish the
  // current CC if we're reentering the CC at some point past
  // graph building. We need to be past the point where the CC will
  // look at JS objects so that it is safe to GC.
  MOZ_ASSERT(IsIdle() || (mActivelyCollecting &&
                          mIncrementalPhase != GraphBuildingPhase),
             "Reentered CC during graph building");
}

// Don't merge too many times in a row, and do at least a minimum
// number of unmerged CCs in a row.
static const uint32_t kMinConsecutiveUnmerged = 3;
static const uint32_t kMaxConsecutiveMerged = 3;

bool nsCycleCollector::ShouldMergeZones(ccIsManual aIsManual) {
  if (!mCCJSRuntime) {
    return false;
  }

  MOZ_ASSERT(mUnmergedNeeded <= kMinConsecutiveUnmerged);
  MOZ_ASSERT(mMergedInARow <= kMaxConsecutiveMerged);

  if (mMergedInARow == kMaxConsecutiveMerged) {
    MOZ_ASSERT(mUnmergedNeeded == 0);
    mUnmergedNeeded = kMinConsecutiveUnmerged;
  }

  if (mUnmergedNeeded > 0) {
    mUnmergedNeeded--;
    mMergedInARow = 0;
    return false;
  }

  if (aIsManual == CCIsNotManual && mCCJSRuntime->UsefulToMergeZones()) {
    mMergedInARow++;
    return true;
  } else {
    mMergedInARow = 0;
    return false;
  }
}
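
// The counters above enforce a merge cadence: assuming UsefulToMergeZones()
// stays true for non-manual CCs, the decision sequence is
//   merged, merged, merged, unmerged, unmerged, unmerged, merged, ...
// i.e. at most kMaxConsecutiveMerged merged CCs in a row, followed by at
// least kMinConsecutiveUnmerged unmerged ones.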

void nsCycleCollector::BeginCollection(
    CCReason aReason, ccIsManual aIsManual,
    nsICycleCollectorListener* aManualListener) {
  TimeLog timeLog;
  MOZ_ASSERT(IsIdle());
  MOZ_RELEASE_ASSERT(!mScanInProgress);

  mCollectionStart = TimeStamp::Now();

  if (mCCJSRuntime) {
    mCCJSRuntime->BeginCycleCollectionCallback(aReason);
    timeLog.Checkpoint("BeginCycleCollectionCallback()");
  }

  bool isShutdown = (aReason == CCReason::SHUTDOWN);
  if (isShutdown) {
    mShutdownCount += 1;
  }

  // Set up the listener for this CC.
  MOZ_ASSERT_IF(isShutdown, !aManualListener);
  MOZ_ASSERT(!mLogger, "Forgot to clear a previous listener?");

  if (aManualListener) {
    aManualListener->AsLogger(getter_AddRefs(mLogger));
  }

  aManualListener = nullptr;
  if (!mLogger && mParams.LogThisCC(mShutdownCount)) {
    mLogger = new nsCycleCollectorLogger(mParams.LogThisGC());
    if (mParams.AllTracesThisCC(isShutdown)) {
      mLogger->SetAllTraces();
    }
  }

  CycleCollectorResults ignoredResults;
  mozilla::CycleCollectorStats* stats = sCollectorData.get()->mStats.get();
  PROFILER_MARKER(
      "CC", GCCC, MarkerOptions(MarkerTiming::IntervalStart(mCollectionStart)),
      CCIntervalMarker,
      /* aIsStart */ true,
      ProfilerString8View::WrapNullTerminatedString(CCReasonToString(aReason)),
      stats->mForgetSkippableBeforeCC, stats->mSuspected,
      stats->mRemovedPurples, ignoredResults.mForcedGC,
      ignoredResults.mMergedZones, ignoredResults.mAnyManual,
      ignoredResults.mVisitedRefCounted, ignoredResults.mVisitedGCed,
      ignoredResults.mFreedRefCounted, ignoredResults.mFreedGCed,
      ignoredResults.mFreedJSZones, ignoredResults.mNumSlices, TimeDuration());

  // BeginCycleCollectionCallback() might have started an IGC, and we need
  // to finish it before we run FixGrayBits.
  FinishAnyIncrementalGCInProgress();
  timeLog.Checkpoint("Pre-FixGrayBits finish IGC");

  FixGrayBits(isShutdown, timeLog);
  if (mCCJSRuntime) {
    mCCJSRuntime->CheckGrayBits();
  }

  FreeSnowWhite(true);
  timeLog.Checkpoint("BeginCollection FreeSnowWhite");

  if (mLogger && NS_FAILED(mLogger->Begin())) {
    mLogger = nullptr;
  }

  // FreeSnowWhite could potentially have started an IGC, which we need
  // to finish before we look at any JS roots.
  FinishAnyIncrementalGCInProgress();
  timeLog.Checkpoint("Post-FreeSnowWhite finish IGC");

  // Set up the data structures for building the graph.
  JS::AutoAssertNoGC nogc;
  JS::AutoEnterCycleCollection autocc(mCCJSRuntime->Runtime());
  mGraph.Init();
  mResults.Init();
  mResults.mSuspectedAtCCStart = SuspectedCount();
  mResults.mAnyManual = aIsManual;
  bool mergeZones = ShouldMergeZones(aIsManual);
  mResults.mMergedZones = mergeZones;

  MOZ_ASSERT(!mBuilder, "Forgot to clear mBuilder");
  mBuilder = MakeUnique<CCGraphBuilder>(mGraph, mResults, mCCJSRuntime, mLogger,
                                        mergeZones);
  timeLog.Checkpoint("BeginCollection prepare graph builder");

  if (mCCJSRuntime) {
    mCCJSRuntime->TraverseRoots(*mBuilder);
    timeLog.Checkpoint("mJSContext->TraverseRoots()");
  }

  AutoRestore<bool> ar(mScanInProgress);
  MOZ_RELEASE_ASSERT(!mScanInProgress);
  mScanInProgress = true;
  mPurpleBuf.SelectPointers(*mBuilder);
  timeLog.Checkpoint("SelectPointers()");

  mBuilder->DoneAddingRoots();
  mIncrementalPhase = GraphBuildingPhase;
}
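
// At this point all roots (JS runtime roots via TraverseRoots, and suspected
// C++ objects via SelectPointers) have been handed to mBuilder; subsequent
// GraphBuildingPhase slices traverse their outgoing edges in MarkRoots().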

uint32_t nsCycleCollector::SuspectedCount() {
  CheckThreadSafety();
  if (NS_IsMainThread()) {
    return gNurseryPurpleBufferEntryCount + mPurpleBuf.Count();
  }
  return mPurpleBuf.Count();
}

void nsCycleCollector::Shutdown(bool aDoCollect) {
  CheckThreadSafety();

  if (NS_IsMainThread()) {
    gNurseryPurpleBufferEnabled = false;
  }

  // Always delete snow white objects.
  FreeSnowWhite(true);

  if (aDoCollect) {
    ShutdownCollect();
  }

  if (mJSPurpleBuffer) {
    mJSPurpleBuffer->Destroy();
  }
}

void nsCycleCollector::RemoveObjectFromGraph(void* aObj) {
  if (IsIdle()) {
    return;
  }

  mGraph.RemoveObjectFromMap(aObj);
  if (mBuilder) {
    mBuilder->RemoveCachedEntry(aObj);
  }
}

void nsCycleCollector::SizeOfIncludingThis(mozilla::MallocSizeOf aMallocSizeOf,
                                           size_t* aObjectSize,
                                           size_t* aGraphSize,
                                           size_t* aPurpleBufferSize) const {
  *aObjectSize = aMallocSizeOf(this);

  *aGraphSize = mGraph.SizeOfExcludingThis(aMallocSizeOf);

  *aPurpleBufferSize = mPurpleBuf.SizeOfExcludingThis(aMallocSizeOf);

  // These fields are deliberately not measured:
  // - mCCJSRuntime: because it's non-owning and measured by JS reporters.
  // - mParams: because it only contains scalars.
}

JSPurpleBuffer* nsCycleCollector::GetJSPurpleBuffer() {
  if (!mJSPurpleBuffer) {
    // The Release call here confuses the GC analysis.
    JS::AutoSuppressGCAnalysis nogc;
    // JSPurpleBuffer keeps itself alive, but we need to create it in such a
    // way that it ends up in the normal purple buffer. That happens when the
    // RefPtr goes out of scope and calls Release.
    RefPtr<JSPurpleBuffer> pb = new JSPurpleBuffer(mJSPurpleBuffer);
  }
  return mJSPurpleBuffer;
}

////////////////////////////////////////////////////////////////////////
// Module public API (exported in nsCycleCollector.h)
// Just functions that redirect into the singleton, once it's built.
////////////////////////////////////////////////////////////////////////

void nsCycleCollector_registerJSContext(CycleCollectedJSContext* aCx) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);
  // But we shouldn't already have a context.
  MOZ_ASSERT(!data->mContext);

  data->mContext = aCx;
  data->mCollector->SetCCJSRuntime(aCx->Runtime());
}

void nsCycleCollector_forgetJSContext() {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  // And we shouldn't have already forgotten our context.
  MOZ_ASSERT(data->mContext);

  // But the collector may have shut down already.
  if (data->mCollector) {
    data->mCollector->ClearCCJSRuntime();
    data->mContext = nullptr;
  } else {
    data->mContext = nullptr;
    delete data;
    sCollectorData.set(nullptr);
  }
}

/* static */
CycleCollectedJSContext* CycleCollectedJSContext::Get() {
  CollectorData* data = sCollectorData.get();
  if (data) {
    return data->mContext;
  }
  return nullptr;
}

MOZ_NEVER_INLINE static void SuspectAfterShutdown(
    void* aPtr, nsCycleCollectionParticipant* aCp,
    nsCycleCollectingAutoRefCnt* aRefCnt, bool* aShouldDelete) {
  if (aRefCnt->get() == 0) {
    if (!aShouldDelete) {
      // The CC is shut down, so we can't be in the middle of an ICC.
      ToParticipant(aPtr, &aCp);
      aRefCnt->stabilizeForDeletion();
      aCp->DeleteCycleCollectable(aPtr);
    } else {
      *aShouldDelete = true;
    }
  } else {
    // Make sure we'll get called again.
    aRefCnt->RemoveFromPurpleBuffer();
  }
}
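
// aShouldDelete is the contract with the caller's Release implementation:
// when the caller passes a non-null pointer it is prepared to delete the
// object itself, so we only set *aShouldDelete; when it passes null, we
// delete the object directly through its participant.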

void NS_CycleCollectorSuspect3(void* aPtr, nsCycleCollectionParticipant* aCp,
                               nsCycleCollectingAutoRefCnt* aRefCnt,
                               bool* aShouldDelete) {
  if ((
#ifdef HAVE_64BIT_BUILD
          aRefCnt->IsOnMainThread() ||
#endif
          NS_IsMainThread()) &&
      gNurseryPurpleBufferEnabled) {
    // The next time the object is passed to the purple buffer, we can do a
    // faster IsOnMainThread() check.
    aRefCnt->SetIsOnMainThread();
    SuspectUsingNurseryPurpleBuffer(aPtr, aCp, aRefCnt);
    return;
  }

  CollectorData* data = sCollectorData.get();

  // This assertion fires if you AddRef or Release a cycle collected
  // object on a thread that does not have an active cycle collector.
  // This can happen in a few situations:
  // 1. We never cycle collect on this thread. (The cycle collector is only
  //    run on the main thread and DOM worker threads.)
  // 2. The cycle collector hasn't been initialized on this thread yet.
  // 3. The cycle collector has already been shut down on this thread.
  MOZ_DIAGNOSTIC_ASSERT(
      data,
      "Cycle collected object used on a thread without a cycle collector.");

  if (MOZ_LIKELY(data->mCollector)) {
    data->mCollector->Suspect(aPtr, aCp, aRefCnt);
    return;
  }
  SuspectAfterShutdown(aPtr, aCp, aRefCnt, aShouldDelete);
}
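
// This entry point is normally reached from nsCycleCollectingAutoRefCnt
// (see nsISupportsImpl.h) when a cycle-collected object's refcount is
// decremented: the object becomes a purple candidate, recorded either in the
// nursery purple buffer (the main-thread fast path above) or in the
// collector's regular purple buffer via Suspect().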

void ClearNurseryPurpleBuffer() {
  MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
  CollectorData* data = sCollectorData.get();
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);
  data->mCollector->SuspectNurseryEntries();
}

uint32_t nsCycleCollector_suspectedCount() {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);

  if (!data->mCollector) {
    return 0;
  }

  return data->mCollector->SuspectedCount();
}

bool nsCycleCollector_init() {
#ifdef DEBUG
  static bool sInitialized;

  MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
  MOZ_ASSERT(!sInitialized, "Called twice!?");
  sInitialized = true;
#endif

  return sCollectorData.init();
}

void nsCycleCollector_startup() {
  if (sCollectorData.get()) {
    MOZ_CRASH();
  }

  CollectorData* data = new CollectorData;
  data->mCollector = new nsCycleCollector();
  data->mContext = nullptr;
  data->mStats.reset(new mozilla::CycleCollectorStats());

  sCollectorData.set(data);
}
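
// Expected initialization order on a cycle-collecting thread (a sketch;
// actual startup is driven by XPCOM and the worker runtime):
//
//   nsCycleCollector_init();                  // once, on the main thread
//   nsCycleCollector_startup();               // per cycle-collecting thread
//   nsCycleCollector_registerJSContext(aCx);  // once a JS context exists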

void nsCycleCollector_setBeforeUnlinkCallback(CC_BeforeUnlinkCallback aCB) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  data->mCollector->SetBeforeUnlinkCallback(aCB);
}

void nsCycleCollector_setForgetSkippableCallback(
    CC_ForgetSkippableCallback aCB) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  data->mCollector->SetForgetSkippableCallback(aCB);
}

void nsCycleCollector_forgetSkippable(TimeStamp aStartTime,
                                      JS::SliceBudget& aBudget, bool aInIdle,
                                      bool aRemoveChildlessNodes,
                                      bool aAsyncSnowWhiteFreeing) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  TimeLog timeLog;
  uint32_t purpleBefore = data->mCollector->SuspectedCount();
  data->mCollector->ForgetSkippable(aBudget, aRemoveChildlessNodes,
                                    aAsyncSnowWhiteFreeing);
  timeLog.Checkpoint("ForgetSkippable()");
  uint32_t purpleAfter = data->mCollector->SuspectedCount();

  data->mStats->AfterForgetSkippable(aStartTime, TimeStamp::Now(),
                                     purpleBefore - purpleAfter, aInIdle);
}

void nsCycleCollector_dispatchDeferredDeletion(bool aContinuation,
                                               bool aPurge) {
  CycleCollectedJSRuntime* rt = CycleCollectedJSRuntime::Get();
  if (rt) {
    rt->DispatchDeferredDeletion(aContinuation, aPurge);
  }
}

bool nsCycleCollector_doDeferredDeletion() {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);
  MOZ_ASSERT(data->mContext);

  return data->mCollector->FreeSnowWhite(false);
}

bool nsCycleCollector_doDeferredDeletionWithBudget(SliceBudget& aBudget) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);
  MOZ_ASSERT(data->mContext);

  return data->mCollector->FreeSnowWhiteWithBudget(aBudget);
}

already_AddRefed<nsICycleCollectorLogSink> nsCycleCollector_createLogSink(
    bool aLogGC) {
  nsCOMPtr<nsICycleCollectorLogSink> sink =
      new nsCycleCollectorLogSinkToFile(aLogGC);
  return sink.forget();
}

bool nsCycleCollector_collect(CCReason aReason,
                              nsICycleCollectorListener* aManualListener) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  AUTO_PROFILER_LABEL("nsCycleCollector_collect", GCCC);

  SliceBudget unlimitedBudget = SliceBudget::unlimited();
  return data->mCollector->Collect(aReason, ccIsManual::CCIsManual,
                                   unlimitedBudget, aManualListener);
}

void nsCycleCollector_collectSlice(SliceBudget& budget, CCReason aReason,
                                   bool aPreferShorterSlices) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  AUTO_PROFILER_LABEL("nsCycleCollector_collectSlice", GCCC);

  data->mCollector->Collect(aReason, ccIsManual::CCIsNotManual, budget, nullptr,
                            aPreferShorterSlices);
}
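
// Hypothetical caller (real slice scheduling lives in the CC/GC scheduler):
// run one budgeted incremental slice, preferring shorter slices.
//
//   SliceBudget budget = SliceBudget(TimeBudget(5));  // ~5 ms budget
//   nsCycleCollector_collectSlice(budget, CCReason::API,
//                                 /* aPreferShorterSlices */ true);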

void nsCycleCollector_prepareForGarbageCollection() {
  CollectorData* data = sCollectorData.get();

  MOZ_ASSERT(data);

  if (!data->mCollector) {
    return;
  }

  data->mCollector->PrepareForGarbageCollection();
}

void nsCycleCollector_finishAnyCurrentCollection() {
  CollectorData* data = sCollectorData.get();

  MOZ_ASSERT(data);

  if (!data->mCollector) {
    return;
  }

  data->mCollector->FinishAnyCurrentCollection(CCReason::API);
}

void nsCycleCollector_shutdown(bool aDoCollect) {
  CollectorData* data = sCollectorData.get();

  if (data) {
    MOZ_ASSERT(data->mCollector);
    AUTO_PROFILER_LABEL("nsCycleCollector_shutdown", OTHER);

    {
      RefPtr<nsCycleCollector> collector = data->mCollector;
      collector->Shutdown(aDoCollect);
      data->mCollector = nullptr;
    }

    data->mStats.reset();

    if (!data->mContext) {
      delete data;
      sCollectorData.set(nullptr);
    }
  }
}