Changed mft_h264_decoder's API to match with video_decode_engine.h. Also changed...
[chromium-blink-merge.git] / base / tracked_objects.h
blob5392b9ac518c11f702ab934723f6e13a9da6c65a
1 // Copyright (c) 2006-2008 The Chromium Authors. All rights reserved.
2 // Use of this source code is governed by a BSD-style license that can be
3 // found in the LICENSE file.
5 #ifndef BASE_TRACKED_OBJECTS_H_
6 #define BASE_TRACKED_OBJECTS_H_
7 #pragma once
9 #include <map>
10 #include <string>
11 #include <vector>
13 #include "base/lock.h"
14 #include "base/task.h"
15 #include "base/thread_local_storage.h"
16 #include "base/tracked.h"
18 // TrackedObjects provides a database of stats about objects (generally Tasks)
19 // that are tracked. Tracking means their birth, death, duration, birth thread,
20 // death thread, and birth place are recorded. This data is carefully spread
21 // across a series of objects so that the counts and times can be rapidly
22 // updated without (usually) having to lock the data, and hence there is usually
23 // very little contention caused by the tracking. The data can be viewed via
24 // the about:objects URL, with a variety of sorting and filtering choices.
26 // Theese classes serve as the basis of a profiler of sorts for the Tasks
27 // system. As a result, design decisions were made to maximize speed, by
28 // minimizing recurring allocation/deallocation, lock contention and data
29 // copying. In the "stable" state, which is reached relatively quickly, there
30 // is no separate marginal allocation cost associated with construction or
31 // destruction of tracked objects, no locks are generally employed, and probably
32 // the largest computational cost is associated with obtaining start and stop
33 // times for instances as they are created and destroyed. The introduction of
34 // worker threads had a slight impact on this approach, and required use of some
35 // locks when accessing data from the worker threads.
37 // The following describes the lifecycle of tracking an instance.
39 // First off, when the instance is created, the FROM_HERE macro is expanded
40 // to specify the birth place (file, line, function) where the instance was
41 // created. That data is used to create a transient Location instance
42 // encapsulating the above triple of information. The strings (like __FILE__)
43 // are passed around by reference, with the assumption that they are static, and
44 // will never go away. This ensures that the strings can be dealt with as atoms
45 // with great efficiency (i.e., copying of strings is never needed, and
46 // comparisons for equality can be based on pointer comparisons).
48 // Next, a Births instance is created for use ONLY on the thread where this
49 // instance was created. That Births instance records (in a base class
50 // BirthOnThread) references to the static data provided in a Location instance,
51 // as well as a pointer specifying the thread on which the birth takes place.
52 // Hence there is at most one Births instance for each Location on each thread.
53 // The derived Births class contains slots for recording statistics about all
54 // instances born at the same location. Statistics currently include only the
55 // count of instances constructed.
56 // Since the base class BirthOnThread contains only constant data, it can be
57 // freely accessed by any thread at any time (i.e., only the statistic needs to
58 // be handled carefully, and it is ONLY read or written by the birth thread).
60 // Having now either constructed or found the Births instance described above, a
61 // pointer to the Births instance is then embedded in a base class of the
62 // instance we're tracking (usually a Task). This fact alone is very useful in
63 // debugging, when there is a question of where an instance came from. In
64 // addition, the birth time is also embedded in the base class Tracked (see
65 // tracked.h), and used to later evaluate the lifetime duration.
66 // As a result of the above embedding, we can (for any tracked instance) find
67 // out its location of birth, and thread of birth, without using any locks, as
68 // all that data is constant across the life of the process.
70 // The amount of memory used in the above data structures depends on how many
71 // threads there are, and how many Locations of construction there are.
72 // Fortunately, we don't use memory that is the product of those two counts, but
73 // rather we only need one Births instance for each thread that constructs an
74 // instance at a Location. In many cases, instances (such as Tasks) are only
75 // created on one thread, so the memory utilization is actually fairly
76 // restrained.
78 // Lastly, when an instance is deleted, the final tallies of statistics are
79 // carefully accumulated. That tallying wrties into slots (members) in a
80 // collection of DeathData instances. For each birth place Location that is
81 // destroyed on a thread, there is a DeathData instance to record the additional
82 // death count, as well as accumulate the lifetime duration of the instance as
83 // it is destroyed (dies). By maintaining a single place to aggregate this
84 // addition *only* for the given thread, we avoid the need to lock such
85 // DeathData instances.
87 // With the above lifecycle description complete, the major remaining detail is
88 // explaining how each thread maintains a list of DeathData instances, and of
89 // Births instances, and is able to avoid additional (redundant/unnecessary)
90 // allocations.
92 // Each thread maintains a list of data items specific to that thread in a
93 // ThreadData instance (for that specific thread only). The two critical items
94 // are lists of DeathData and Births instances. These lists are maintained in
95 // STL maps, which are indexed by Location. As noted earlier, we can compare
96 // locations very efficiently as we consider the underlying data (file,
97 // function, line) to be atoms, and hence pointer comparison is used rather than
98 // (slow) string comparisons.
100 // To provide a mechanism for iterating over all "known threads," which means
101 // threads that have recorded a birth or a death, we create a singly linked list
102 // of ThreadData instances. Each such instance maintains a pointer to the next
103 // one. A static member of ThreadData provides a pointer to the first_ item on
104 // this global list, and access to that first_ item requires the use of a lock_.
105 // When new ThreadData instances is added to the global list, it is pre-pended,
106 // which ensures that any prior acquisition of the list is valid (i.e., the
107 // holder can iterate over it without fear of it changing, or the necessity of
108 // using an additional lock. Iterations are actually pretty rare (used
109 // primarilly for cleanup, or snapshotting data for display), so this lock has
110 // very little global performance impact.
112 // The above description tries to define the high performance (run time)
113 // portions of these classes. After gathering statistics, calls instigated
114 // by visiting about:objects will assemble and aggregate data for display. The
115 // following data structures are used for producing such displays. They are
116 // not performance critical, and their only major constraint is that they should
117 // be able to run concurrently with ongoing augmentation of the birth and death
118 // data.
120 // For a given birth location, information about births are spread across data
121 // structures that are asynchronously changing on various threads. For display
122 // purposes, we need to construct Snapshot instances for each combination of
123 // birth thread, death thread, and location, along with the count of such
124 // lifetimes. We gather such data into a Snapshot instances, so that such
125 // instances can be sorted and aggregated (and remain frozen during our
126 // processing). Snapshot instances use pointers to constant portions of the
127 // birth and death datastructures, but have local (frozen) copies of the actual
128 // statistics (birth count, durations, etc. etc.).
130 // A DataCollector is a container object that holds a set of Snapshots. A
131 // DataCollector can be passed from thread to thread, and each thread
132 // contributes to it by adding or updating Snapshot instances. DataCollector
133 // instances are thread safe containers which are passed to various threads to
134 // accumulate all Snapshot instances.
136 // After an array of Snapshots instances are colleted into a DataCollector, they
137 // need to be sorted, and possibly aggregated (example: how many threads are in
138 // a specific consecutive set of Snapshots? What was the total birth count for
139 // that set? etc.). Aggregation instances collect running sums of any set of
140 // snapshot instances, and are used to print sub-totals in an about:objects
141 // page.
143 // TODO(jar): I need to store DataCollections, and provide facilities for taking
144 // the difference between two gathered DataCollections. For now, I'm just
145 // adding a hack that Reset()'s to zero all counts and stats. This is also
146 // done in a slighly thread-unsafe fashion, as the reseting is done
147 // asynchronously relative to ongoing updates, and worse yet, some data fields
148 // are 64bit quantities, and are not atomicly accessed (reset or incremented
149 // etc.). For basic profiling, this will work "most of the time," and should be
150 // sufficient... but storing away DataCollections is the "right way" to do this.
152 class MessageLoop;
155 namespace tracked_objects {
157 //------------------------------------------------------------------------------
158 // For a specific thread, and a specific birth place, the collection of all
159 // death info (with tallies for each death thread, to prevent access conflicts).
160 class ThreadData;
161 class BirthOnThread {
162 public:
163 explicit BirthOnThread(const Location& location);
165 const Location location() const { return location_; }
166 const ThreadData* birth_thread() const { return birth_thread_; }
168 private:
169 // File/lineno of birth. This defines the essence of the type, as the context
170 // of the birth (construction) often tell what the item is for. This field
171 // is const, and hence safe to access from any thread.
172 const Location location_;
174 // The thread that records births into this object. Only this thread is
175 // allowed to access birth_count_ (which changes over time).
176 const ThreadData* birth_thread_; // The thread this birth took place on.
178 DISALLOW_COPY_AND_ASSIGN(BirthOnThread);
181 //------------------------------------------------------------------------------
182 // A class for accumulating counts of births (without bothering with a map<>).
184 class Births: public BirthOnThread {
185 public:
186 explicit Births(const Location& location);
188 int birth_count() const { return birth_count_; }
190 // When we have a birth we update the count for this BirhPLace.
191 void RecordBirth() { ++birth_count_; }
193 // When a birthplace is changed (updated), we need to decrement the counter
194 // for the old instance.
195 void ForgetBirth() { --birth_count_; } // We corrected a birth place.
197 // Hack to quickly reset all counts to zero.
198 void Clear() { birth_count_ = 0; }
200 private:
201 // The number of births on this thread for our location_.
202 int birth_count_;
204 DISALLOW_COPY_AND_ASSIGN(Births);
207 //------------------------------------------------------------------------------
208 // Basic info summarizing multiple destructions of an object with a single
209 // birthplace (fixed Location). Used both on specific threads, and also used
210 // in snapshots when integrating assembled data.
212 class DeathData {
213 public:
214 // Default initializer.
215 DeathData() : count_(0), square_duration_(0) {}
217 // When deaths have not yet taken place, and we gather data from all the
218 // threads, we create DeathData stats that tally the number of births without
219 // a corrosponding death.
220 explicit DeathData(int count) : count_(count), square_duration_(0) {}
222 void RecordDeath(const base::TimeDelta& duration);
224 // Metrics accessors.
225 int count() const { return count_; }
226 base::TimeDelta life_duration() const { return life_duration_; }
227 int64 square_duration() const { return square_duration_; }
228 int AverageMsDuration() const;
229 double StandardDeviation() const;
231 // Accumulate metrics from other into this.
232 void AddDeathData(const DeathData& other);
234 // Simple print of internal state.
235 void Write(std::string* output) const;
237 // Reset all tallies to zero.
238 void Clear();
240 private:
241 int count_; // Number of destructions.
242 base::TimeDelta life_duration_; // Sum of all lifetime durations.
243 int64 square_duration_; // Sum of squares in milliseconds.
246 //------------------------------------------------------------------------------
247 // A temporary collection of data that can be sorted and summarized. It is
248 // gathered (carefully) from many threads. Instances are held in arrays and
249 // processed, filtered, and rendered.
250 // The source of this data was collected on many threads, and is asynchronously
251 // changing. The data in this instance is not asynchronously changing.
253 class Snapshot {
254 public:
255 // When snapshotting a full life cycle set (birth-to-death), use this:
256 Snapshot(const BirthOnThread& birth_on_thread, const ThreadData& death_thread,
257 const DeathData& death_data);
259 // When snapshotting a birth, with no death yet, use this:
260 Snapshot(const BirthOnThread& birth_on_thread, int count);
263 const ThreadData* birth_thread() const { return birth_->birth_thread(); }
264 const Location location() const { return birth_->location(); }
265 const BirthOnThread& birth() const { return *birth_; }
266 const ThreadData* death_thread() const {return death_thread_; }
267 const DeathData& death_data() const { return death_data_; }
268 const std::string DeathThreadName() const;
270 int count() const { return death_data_.count(); }
271 base::TimeDelta life_duration() const { return death_data_.life_duration(); }
272 int64 square_duration() const { return death_data_.square_duration(); }
273 int AverageMsDuration() const { return death_data_.AverageMsDuration(); }
275 void Write(std::string* output) const;
277 void Add(const Snapshot& other);
279 private:
280 const BirthOnThread* birth_; // Includes Location and birth_thread.
281 const ThreadData* death_thread_;
282 DeathData death_data_;
284 //------------------------------------------------------------------------------
285 // DataCollector is a container class for Snapshot and BirthOnThread count
286 // items. It protects the gathering under locks, so that it could be called via
287 // Posttask on any threads, or passed to all the target threads in parallel.
289 class DataCollector {
290 public:
291 typedef std::vector<Snapshot> Collection;
293 // Construct with a list of how many threads should contribute. This helps us
294 // determine (in the async case) when we are done with all contributions.
295 DataCollector();
297 // Add all stats from the indicated thread into our arrays. This function is
298 // mutex protected, and *could* be called from any threads (although current
299 // implementation serialized calls to Append).
300 void Append(const ThreadData& thread_data);
302 // After the accumulation phase, the following accessor is used to process the
303 // data.
304 Collection* collection();
306 // After collection of death data is complete, we can add entries for all the
307 // remaining living objects.
308 void AddListOfLivingObjects();
310 private:
311 // This instance may be provided to several threads to contribute data. The
312 // following counter tracks how many more threads will contribute. When it is
313 // zero, then all asynchronous contributions are complete, and locked access
314 // is no longer needed.
315 int count_of_contributing_threads_;
317 // The array that we collect data into.
318 Collection collection_;
320 // The total number of births recorded at each location for which we have not
321 // seen a death count.
322 typedef std::map<const BirthOnThread*, int> BirthCount;
323 BirthCount global_birth_count_;
325 Lock accumulation_lock_; // Protects access during accumulation phase.
327 DISALLOW_COPY_AND_ASSIGN(DataCollector);
330 //------------------------------------------------------------------------------
331 // Aggregation contains summaries (totals and subtotals) of groups of Snapshot
332 // instances to provide printing of these collections on a single line.
334 class Aggregation: public DeathData {
335 public:
336 Aggregation() : birth_count_(0) {}
338 void AddDeathSnapshot(const Snapshot& snapshot);
339 void AddBirths(const Births& births);
340 void AddBirth(const BirthOnThread& birth);
341 void AddBirthPlace(const Location& location);
342 void Write(std::string* output) const;
343 void Clear();
345 private:
346 int birth_count_;
347 std::map<std::string, int> birth_files_;
348 std::map<Location, int> locations_;
349 std::map<const ThreadData*, int> birth_threads_;
350 DeathData death_data_;
351 std::map<const ThreadData*, int> death_threads_;
353 DISALLOW_COPY_AND_ASSIGN(Aggregation);
356 //------------------------------------------------------------------------------
357 // Comparator is a class that supports the comparison of Snapshot instances.
358 // An instance is actually a list of chained Comparitors, that can provide for
359 // arbitrary ordering. The path portion of an about:objects URL is translated
360 // into such a chain, which is then used to order Snapshot instances in a
361 // vector. It orders them into groups (for aggregation), and can also order
362 // instances within the groups (for detailed rendering of the instances in an
363 // aggregation).
365 class Comparator {
366 public:
367 // Selector enum is the token identifier for each parsed keyword, most of
368 // which specify a sort order.
369 // Since it is not meaningful to sort more than once on a specific key, we
370 // use bitfields to accumulate what we have sorted on so far.
371 enum Selector {
372 // Sort orders.
373 NIL = 0,
374 BIRTH_THREAD = 1,
375 DEATH_THREAD = 2,
376 BIRTH_FILE = 4,
377 BIRTH_FUNCTION = 8,
378 BIRTH_LINE = 16,
379 COUNT = 32,
380 AVERAGE_DURATION = 64,
381 TOTAL_DURATION = 128,
383 // Imediate action keywords.
384 RESET_ALL_DATA = -1,
387 explicit Comparator();
389 // Reset the comparator to a NIL selector. Clear() and recursively delete any
390 // tiebreaker_ entries. NOTE: We can't use a standard destructor, because
391 // the sort algorithm makes copies of this object, and then deletes them,
392 // which would cause problems (either we'd make expensive deep copies, or we'd
393 // do more thna one delete on a tiebreaker_.
394 void Clear();
396 // The less() operator for sorting the array via std::sort().
397 bool operator()(const Snapshot& left, const Snapshot& right) const;
399 void Sort(DataCollector::Collection* collection) const;
401 // Check to see if the items are sort equivalents (should be aggregated).
402 bool Equivalent(const Snapshot& left, const Snapshot& right) const;
404 // Check to see if all required fields are present in the given sample.
405 bool Acceptable(const Snapshot& sample) const;
407 // A comparator can be refined by specifying what to do if the selected basis
408 // for comparison is insufficient to establish an ordering. This call adds
409 // the indicated attribute as the new "least significant" basis of comparison.
410 void SetTiebreaker(Selector selector, const std::string& required);
412 // Indicate if this instance is set up to sort by the given Selector, thereby
413 // putting that information in the SortGrouping, so it is not needed in each
414 // printed line.
415 bool IsGroupedBy(Selector selector) const;
417 // Using the tiebreakers as set above, we mostly get an ordering, which
418 // equivalent groups. If those groups are displayed (rather than just being
419 // aggregated, then the following is used to order them (within the group).
420 void SetSubgroupTiebreaker(Selector selector);
422 // Translate a keyword and restriction in URL path to a selector for sorting.
423 void ParseKeyphrase(const std::string& key_phrase);
425 // Parse a query in an about:objects URL to decide on sort ordering.
426 bool ParseQuery(const std::string& query);
428 // Output a header line that can be used to indicated what items will be
429 // collected in the group. It lists all (potentially) tested attributes and
430 // their values (in the sample item).
431 bool WriteSortGrouping(const Snapshot& sample, std::string* output) const;
433 // Output a sample, with SortGroup details not displayed.
434 void WriteSnapshot(const Snapshot& sample, std::string* output) const;
436 private:
437 // The selector directs this instance to compare based on the specified
438 // members of the tested elements.
439 enum Selector selector_;
441 // For filtering into acceptable and unacceptable snapshot instance, the
442 // following is required to be a substring of the selector_ field.
443 std::string required_;
445 // If this instance can't decide on an ordering, we can consult a tie-breaker
446 // which may have a different basis of comparison.
447 Comparator* tiebreaker_;
449 // We or together all the selectors we sort on (not counting sub-group
450 // selectors), so that we can tell if we've decided to group on any given
451 // criteria.
452 int combined_selectors_;
454 // Some tiebreakrs are for subgroup ordering, and not for basic ordering (in
455 // preparation for aggregation). The subgroup tiebreakers are not consulted
456 // when deciding if two items are in equivalent groups. This flag tells us
457 // to ignore the tiebreaker when doing Equivalent() testing.
458 bool use_tiebreaker_for_sort_only_;
462 //------------------------------------------------------------------------------
463 // For each thread, we have a ThreadData that stores all tracking info generated
464 // on this thread. This prevents the need for locking as data accumulates.
466 class ThreadData {
467 public:
468 typedef std::map<Location, Births*> BirthMap;
469 typedef std::map<const Births*, DeathData> DeathMap;
471 ThreadData();
473 // Using Thread Local Store, find the current instance for collecting data.
474 // If an instance does not exist, construct one (and remember it for use on
475 // this thread.
476 // If shutdown has already started, and we don't yet have an instance, then
477 // return null.
478 static ThreadData* current();
480 // For a given about:objects URL, develop resulting HTML, and append to
481 // output.
482 static void WriteHTML(const std::string& query, std::string* output);
484 // For a given accumulated array of results, use the comparator to sort and
485 // subtotal, writing the results to the output.
486 static void WriteHTMLTotalAndSubtotals(
487 const DataCollector::Collection& match_array,
488 const Comparator& comparator, std::string* output);
490 // In this thread's data, record a new birth.
491 Births* TallyABirth(const Location& location);
493 // Find a place to record a death on this thread.
494 void TallyADeath(const Births& lifetimes, const base::TimeDelta& duration);
496 // (Thread safe) Get start of list of instances.
497 static ThreadData* first();
498 // Iterate through the null terminated list of instances.
499 ThreadData* next() const { return next_; }
501 MessageLoop* message_loop() const { return message_loop_; }
502 const std::string ThreadName() const;
504 // Using our lock, make a copy of the specified maps. These calls may arrive
505 // from non-local threads, and are used to quickly scan data from all threads
506 // in order to build an HTML page for about:objects.
507 void SnapshotBirthMap(BirthMap *output) const;
508 void SnapshotDeathMap(DeathMap *output) const;
510 // Hack: asynchronously clear all birth counts and death tallies data values
511 // in all ThreadData instances. The numerical (zeroing) part is done without
512 // use of a locks or atomics exchanges, and may (for int64 values) produce
513 // bogus counts VERY rarely.
514 static void ResetAllThreadData();
516 // Using our lock to protect the iteration, Clear all birth and death data.
517 void Reset();
519 // Using the "known list of threads" gathered during births and deaths, the
520 // following attempts to run the given function once all all such threads.
521 // Note that the function can only be run on threads which have a message
522 // loop!
523 static void RunOnAllThreads(void (*Func)());
525 // Set internal status_ to either become ACTIVE, or later, to be SHUTDOWN,
526 // based on argument being true or false respectively.
527 // IF tracking is not compiled in, this function will return false.
528 static bool StartTracking(bool status);
529 static bool IsActive();
531 #ifdef OS_WIN
532 // WARNING: ONLY call this function when all MessageLoops are still intact for
533 // all registered threads. IF you call it later, you will crash.
534 // Note: You don't need to call it at all, and you can wait till you are
535 // single threaded (again) to do the cleanup via
536 // ShutdownSingleThreadedCleanup().
537 // Start the teardown (shutdown) process in a multi-thread mode by disabling
538 // further additions to thread database on all threads. First it makes a
539 // local (locked) change to prevent any more threads from registering. Then
540 // it Posts a Task to all registered threads to be sure they are aware that no
541 // more accumulation can take place.
542 static void ShutdownMultiThreadTracking();
543 #endif
545 // WARNING: ONLY call this function when you are running single threaded
546 // (again) and all message loops and threads have terminated. Until that
547 // point some threads may still attempt to write into our data structures.
548 // Delete recursively all data structures, starting with the list of
549 // ThreadData instances.
550 static void ShutdownSingleThreadedCleanup();
552 private:
553 // Current allowable states of the tracking system. The states always
554 // proceed towards SHUTDOWN, and never go backwards.
555 enum Status {
556 UNINITIALIZED,
557 ACTIVE,
558 SHUTDOWN,
561 #if defined(OS_WIN)
562 class ThreadSafeDownCounter;
563 class RunTheStatic;
564 #endif
566 // Each registered thread is called to set status_ to SHUTDOWN.
567 // This is done redundantly on every registered thread because it is not
568 // protected by a mutex. Running on all threads guarantees we get the
569 // notification into the memory cache of all possible threads.
570 static void ShutdownDisablingFurtherTracking();
572 // We use thread local store to identify which ThreadData to interact with.
573 static TLSSlot tls_index_;
575 // Link to the most recently created instance (starts a null terminated list).
576 static ThreadData* first_;
577 // Protection for access to first_.
578 static Lock list_lock_;
580 // We set status_ to SHUTDOWN when we shut down the tracking service. This
581 // setting is redundantly established by all participating threads so that we
582 // are *guaranteed* (without locking) that all threads can "see" the status
583 // and avoid additional calls into the service.
584 static Status status_;
586 // Link to next instance (null terminated list). Used to globally track all
587 // registered instances (corresponds to all registered threads where we keep
588 // data).
589 ThreadData* next_;
591 // The message loop where tasks needing to access this instance's private data
592 // should be directed. Since some threads have no message loop, some
593 // instances have data that can't be (safely) modified externally.
594 MessageLoop* message_loop_;
596 // A map used on each thread to keep track of Births on this thread.
597 // This map should only be accessed on the thread it was constructed on.
598 // When a snapshot is needed, this structure can be locked in place for the
599 // duration of the snapshotting activity.
600 BirthMap birth_map_;
602 // Similar to birth_map_, this records informations about death of tracked
603 // instances (i.e., when a tracked instance was destroyed on this thread).
604 // It is locked before changing, and hence other threads may access it by
605 // locking before reading it.
606 DeathMap death_map_;
608 // Lock to protect *some* access to BirthMap and DeathMap. The maps are
609 // regularly read and written on this thread, but may only be read from other
610 // threads. To support this, we acquire this lock if we are writing from this
611 // thread, or reading from another thread. For reading from this thread we
612 // don't need a lock, as there is no potential for a conflict since the
613 // writing is only done from this thread.
614 mutable Lock lock_;
616 DISALLOW_COPY_AND_ASSIGN(ThreadData);
620 //------------------------------------------------------------------------------
621 // Provide simple way to to start global tracking, and to tear down tracking
622 // when done. Note that construction and destruction of this object must be
623 // done when running in threaded mode (before spawning a lot of threads
624 // for construction, and after shutting down all the threads for destruction).
626 // To prevent grabbing thread local store resources time and again if someone
627 // chooses to try to re-run the browser many times, we maintain global state and
628 // only allow the tracking system to be started up at most once, and shutdown
629 // at most once. See bug 31344 for an example.
631 class AutoTracking {
632 public:
633 AutoTracking() {
634 if (state_ != kNeverBeenRun)
635 return;
636 ThreadData::StartTracking(true);
637 state_ = kRunning;
640 ~AutoTracking() {
641 #ifndef NDEBUG
642 if (state_ != kRunning)
643 return;
644 // Don't call these in a Release build: they just waste time.
645 // The following should ONLY be called when in single threaded mode. It is
646 // unsafe to do this cleanup if other threads are still active.
647 // It is also very unnecessary, so I'm only doing this in debug to satisfy
648 // purify (if we need to!).
649 ThreadData::ShutdownSingleThreadedCleanup();
650 state_ = kTornDownAndStopped;
651 #endif
654 private:
655 enum State {
656 kNeverBeenRun,
657 kRunning,
658 kTornDownAndStopped,
660 static State state_;
662 DISALLOW_COPY_AND_ASSIGN(AutoTracking);
666 } // namespace tracked_objects
668 #endif // BASE_TRACKED_OBJECTS_H_