tracked_objects.h revision 201ade2fbba22bfb27ae029f4d23fca6ded109a0
// Copyright (c) 2006-2008 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#ifndef BASE_TRACKED_OBJECTS_H_
#define BASE_TRACKED_OBJECTS_H_
#pragma once

#include <map>
#include <string>
#include <vector>

#include "base/lock.h"
#include "base/thread_local_storage.h"
#include "base/tracked.h"

// TrackedObjects provides a database of stats about objects (generally Tasks)
// that are tracked.  Tracking means their birth, death, duration, birth thread,
// death thread, and birth place are recorded.  This data is carefully spread
// across a series of objects so that the counts and times can be rapidly
// updated without (usually) having to lock the data, and hence there is usually
// very little contention caused by the tracking.  The data can be viewed via
// the about:tasks URL, with a variety of sorting and filtering choices.
//
// These classes serve as the basis of a profiler of sorts for the Tasks system.
// As a result, design decisions were made to maximize speed, by minimizing
// recurring allocation/deallocation, lock contention and data copying.  In the
// "stable" state, which is reached relatively quickly, there is no separate
// marginal allocation cost associated with construction or destruction of
// tracked objects, no locks are generally employed, and probably the largest
// computational cost is associated with obtaining start and stop times for
// instances as they are created and destroyed.  The introduction of worker
// threads had a slight impact on this approach, and required use of some locks
// when accessing data from the worker threads.
//
// The following describes the lifecycle of tracking an instance.
//
// First off, when the instance is created, the FROM_HERE macro is expanded
// to specify the birth place (file, line, function) where the instance was
// created.
// That data is used to create a transient Location instance
// encapsulating the above triple of information.  The strings (like __FILE__)
// are passed around by reference, with the assumption that they are static and
// will never go away.  This ensures that the strings can be dealt with as atoms
// with great efficiency (i.e., copying of strings is never needed, and
// comparisons for equality can be based on pointer comparisons).
//
// Next, a Births instance is created for use ONLY on the thread where this
// instance was created.  That Births instance records (in a base class
// BirthOnThread) references to the static data provided in a Location instance,
// as well as a pointer specifying the thread on which the birth takes place.
// Hence there is at most one Births instance for each Location on each thread.
// The derived Births class contains slots for recording statistics about all
// instances born at the same location.  Statistics currently include only the
// count of instances constructed.
// Since the base class BirthOnThread contains only constant data, it can be
// freely accessed by any thread at any time (i.e., only the statistic needs to
// be handled carefully, and it is ONLY read or written by the birth thread).
//
// Having now either constructed or found the Births instance described above, a
// pointer to the Births instance is then embedded in a base class of the
// instance we're tracking (usually a Task).  This fact alone is very useful in
// debugging, when there is a question of where an instance came from.  In
// addition, the birth time is also embedded in the base class Tracked (see
// tracked.h), and used to later evaluate the lifetime duration.
// As a result of the above embedding, we can (for any tracked instance) find
// out its location of birth, and thread of birth, without using any locks, as
// all that data is constant across the life of the process.
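The "strings as atoms" scheme described above can be sketched with a minimal, hypothetical stand-in for Location (the real class lives in base/tracked.h and also records the function name): because __FILE__ expands to a static string literal, a birth place can be keyed and compared by pointer value, never by string contents.

```cpp
#include <cassert>
#include <map>

// Minimal sketch (NOT the real Location class): the file string is assumed to
// be a static literal, so ordering consults only pointer values and the line.
struct MiniLocation {
  const char* file;  // assumed to point at a static string literal
  int line;
};

inline bool operator<(const MiniLocation& a, const MiniLocation& b) {
  if (a.file != b.file)
    return a.file < b.file;  // pointer comparison, never strcmp
  return a.line < b.line;
}

// Hypothetical analogue of FROM_HERE.
#define MINI_FROM_HERE MiniLocation{__FILE__, __LINE__}

// Tally one birth at the given location; each map lookup costs only a few
// pointer/int comparisons.
int TallyBirth(std::map<MiniLocation, int>* counts, const MiniLocation& loc) {
  return ++(*counts)[loc];
}
```

Repeated tallies from the same expansion site hit the same map entry, which is the property the real code relies on for its fast per-Location maps.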
//
// The amount of memory used in the above data structures depends on how many
// threads there are, and how many Locations of construction there are.
// Fortunately, we don't use memory that is the product of those two counts;
// rather, we only need one Births instance for each thread that constructs an
// instance at a Location.  In many cases, instances (such as Tasks) are only
// created on one thread, so the memory utilization is actually fairly
// restrained.
//
// Lastly, when an instance is deleted, the final tallies of statistics are
// carefully accumulated.  That tallying writes into slots (members) in a
// collection of DeathData instances.  For each birth place Location that is
// destroyed on a thread, there is a DeathData instance to record the additional
// death count, as well as accumulate the lifetime duration of the instance as
// it is destroyed (dies).  By maintaining a single place to aggregate this
// addition *only* for the given thread, we avoid the need to lock such
// DeathData instances.
//
// With the above lifecycle description complete, the major remaining detail is
// explaining how each thread maintains a list of DeathData instances, and of
// Births instances, and is able to avoid additional (redundant/unnecessary)
// allocations.
//
// Each thread maintains a list of data items specific to that thread in a
// ThreadData instance (for that specific thread only).  The two critical items
// are lists of DeathData and Births instances.  These lists are maintained in
// STL maps, which are indexed by Location.  As noted earlier, we can compare
// locations very efficiently as we consider the underlying data (file,
// function, line) to be atoms, and hence pointer comparison is used rather than
// (slow) string comparisons.
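The per-thread bookkeeping just described can be illustrated with a hypothetical sketch (simplified names, string keys instead of Locations, and modern `thread_local` in place of the TLS-slot machinery used here): because each thread owns its own maps, updates made by the owning thread need no lock.

```cpp
#include <map>
#include <string>

// Illustrative stand-ins for DeathData / ThreadData; not the real classes.
struct MiniDeathData {
  int count = 0;           // number of destructions
  long long total_ms = 0;  // summed lifetime durations
};

struct MiniThreadData {
  std::map<std::string, int> birth_map;            // births on this thread
  std::map<std::string, MiniDeathData> death_map;  // deaths on this thread

  void TallyABirth(const std::string& place) { ++birth_map[place]; }
  void TallyADeath(const std::string& place, long long lifetime_ms) {
    MiniDeathData& data = death_map[place];
    ++data.count;
    data.total_ms += lifetime_ms;
  }
};

// One instance per thread; no other thread ever writes to it, so the owning
// thread's updates are lock-free.
thread_local MiniThreadData g_mini_thread_data;
```

The real code adds a lock only for the cross-thread cases: snapshotting another thread's maps, and walking the global list of ThreadData instances.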
//
// To provide a mechanism for iterating over all "known threads" (i.e., threads
// that have recorded a birth or a death), we create a singly linked list of
// ThreadData instances.  Each such instance maintains a pointer to the next
// one.  A static member of ThreadData provides a pointer to the first_ item on
// this global list, and access to that first_ item requires the use of lock_.
// When a new ThreadData instance is added to the global list, it is prepended,
// which ensures that any prior acquisition of the list is valid (i.e., the
// holder can iterate over it without fear of it changing, or the necessity of
// using an additional lock).  Iterations are actually pretty rare (used
// primarily for cleanup, or for snapshotting data for display), so this lock
// has very little global performance impact.
//
// The above description tries to define the high performance (run time)
// portions of these classes.  After gathering statistics, calls instigated
// by visiting about:tasks will assemble and aggregate data for display.  The
// following data structures are used for producing such displays.  They are
// not performance critical, and their only major constraint is that they should
// be able to run concurrently with ongoing augmentation of the birth and death
// data.
//
// For a given birth location, information about births is spread across data
// structures that are asynchronously changing on various threads.  For display
// purposes, we need to construct Snapshot instances for each combination of
// birth thread, death thread, and location, along with the count of such
// lifetimes.  We gather such data into Snapshot instances, so that such
// instances can be sorted and aggregated (and remain frozen during our
// processing).
// Snapshot instances use pointers to constant portions of the
// birth and death data structures, but have local (frozen) copies of the
// actual statistics (birth count, durations, etc.).
//
// A DataCollector is a container object that holds a set of Snapshots.  A
// DataCollector can be passed from thread to thread, and each thread
// contributes to it by adding or updating Snapshot instances.  DataCollector
// instances are thread-safe containers, which are passed to various threads to
// accumulate all Snapshot instances.
//
// After an array of Snapshot instances is collected into a DataCollector, it
// needs to be sorted, and possibly aggregated (for example: how many threads
// are in a specific consecutive set of Snapshots?  What was the total birth
// count for that set?).  Aggregation instances collect running sums for any
// set of Snapshot instances, and are used to print sub-totals in an
// about:tasks page.
//
// TODO(jar): I need to store DataCollections, and provide facilities for
// taking the difference between two gathered DataCollections.  For now, I'm
// just adding a hack that Reset()'s to zero all counts and stats.  This is
// also done in a slightly thread-unsafe fashion, as the resetting is done
// asynchronously relative to ongoing updates, and worse yet, some data fields
// are 64-bit quantities that are not atomically accessed (reset or incremented
// etc.).  For basic profiling, this will work "most of the time," and should
// be sufficient... but storing away DataCollections is the "right way" to do
// this.
//
class MessageLoop;


namespace tracked_objects {

//------------------------------------------------------------------------------
// For a specific thread, and a specific birth place, the collection of all
// death info (with tallies for each death thread, to prevent access conflicts).
class ThreadData;
class BirthOnThread {
 public:
  explicit BirthOnThread(const Location& location);

  const Location location() const { return location_; }
  const ThreadData* birth_thread() const { return birth_thread_; }

 private:
  // File/lineno of birth.  This defines the essence of the type, as the
  // context of the birth (construction) often tells what the item is for.
  // This field is const, and hence safe to access from any thread.
  const Location location_;

  // The thread that records births into this object.  Only this thread is
  // allowed to access birth_count_ (which changes over time).
  const ThreadData* birth_thread_;  // The thread this birth took place on.

  DISALLOW_COPY_AND_ASSIGN(BirthOnThread);
};

//------------------------------------------------------------------------------
// A class for accumulating counts of births (without bothering with a map<>).

class Births: public BirthOnThread {
 public:
  explicit Births(const Location& location);

  int birth_count() const { return birth_count_; }

  // When we have a birth, we update the count for this birth place.
  void RecordBirth() { ++birth_count_; }

  // When a birth place is changed (updated), we need to decrement the counter
  // for the old instance.
  void ForgetBirth() { --birth_count_; }  // We corrected a birth place.

  // Hack to quickly reset all counts to zero.
  void Clear() { birth_count_ = 0; }

 private:
  // The number of births on this thread for our location_.
  int birth_count_;

  DISALLOW_COPY_AND_ASSIGN(Births);
};

//------------------------------------------------------------------------------
// Basic info summarizing multiple destructions of an object with a single
// birth place (fixed Location).  Used both on specific threads, and also used
// in snapshots when integrating assembled data.

class DeathData {
 public:
  // Default initializer.
  DeathData() : count_(0), square_duration_(0) {}

  // When deaths have not yet taken place, and we gather data from all the
  // threads, we create DeathData stats that tally the number of births without
  // a corresponding death.
  explicit DeathData(int count) : count_(count), square_duration_(0) {}

  void RecordDeath(const base::TimeDelta& duration);

  // Metrics accessors.
  int count() const { return count_; }
  base::TimeDelta life_duration() const { return life_duration_; }
  int64 square_duration() const { return square_duration_; }
  int AverageMsDuration() const;
  double StandardDeviation() const;

  // Accumulate metrics from other into this.
  void AddDeathData(const DeathData& other);

  // Simple print of internal state.
  void Write(std::string* output) const;

  // Reset all tallies to zero.
  void Clear();

 private:
  int count_;                      // Number of destructions.
  base::TimeDelta life_duration_;  // Sum of all lifetime durations.
  int64 square_duration_;          // Sum of squared durations in milliseconds.
};

//------------------------------------------------------------------------------
// A temporary collection of data that can be sorted and summarized.  It is
// gathered (carefully) from many threads.  Instances are held in arrays and
// processed, filtered, and rendered.
// The source of this data was collected on many threads, and is asynchronously
// changing.  The data in this instance is not asynchronously changing.
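The DeathData tallies above (a count, a summed duration, and a summed squared duration in milliseconds) are sufficient to recover the mean and standard deviation without storing individual samples, via Var(x) = E[x^2] - E[x]^2. A standalone sketch (hypothetical free functions, not the actual AverageMsDuration()/StandardDeviation() implementations):

```cpp
#include <cmath>

// Mean lifetime in ms from a count and a summed duration.
double MeanMs(int count, long long sum_ms) {
  return count ? static_cast<double>(sum_ms) / count : 0.0;
}

// Standard deviation from running tallies only: E[x^2] - E[x]^2.
double StdDevMs(int count, long long sum_ms, long long sum_sq_ms) {
  if (count == 0)
    return 0.0;
  double mean = static_cast<double>(sum_ms) / count;
  double variance = static_cast<double>(sum_sq_ms) / count - mean * mean;
  // Guard against tiny negative values from floating-point rounding.
  return variance > 0.0 ? std::sqrt(variance) : 0.0;
}
```

For the samples 2, 4, 4, 4, 5, 5, 7, 9 the tallies are count 8, sum 40, sum of squares 232, giving a mean of 5 ms and a standard deviation of 2 ms.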

class Snapshot {
 public:
  // When snapshotting a full life cycle set (birth-to-death), use this:
  Snapshot(const BirthOnThread& birth_on_thread, const ThreadData& death_thread,
           const DeathData& death_data);

  // When snapshotting a birth, with no death yet, use this:
  Snapshot(const BirthOnThread& birth_on_thread, int count);

  const ThreadData* birth_thread() const { return birth_->birth_thread(); }
  const Location location() const { return birth_->location(); }
  const BirthOnThread& birth() const { return *birth_; }
  const ThreadData* death_thread() const { return death_thread_; }
  const DeathData& death_data() const { return death_data_; }
  const std::string DeathThreadName() const;

  int count() const { return death_data_.count(); }
  base::TimeDelta life_duration() const { return death_data_.life_duration(); }
  int64 square_duration() const { return death_data_.square_duration(); }
  int AverageMsDuration() const { return death_data_.AverageMsDuration(); }

  void Write(std::string* output) const;

  void Add(const Snapshot& other);

 private:
  const BirthOnThread* birth_;  // Includes Location and birth_thread.
  const ThreadData* death_thread_;
  DeathData death_data_;
};

//------------------------------------------------------------------------------
// DataCollector is a container class for Snapshot and BirthOnThread count
// items.  It protects the gathering under locks, so that it can be called via
// PostTask on any thread, or passed to all the target threads in parallel.

class DataCollector {
 public:
  typedef std::vector<Snapshot> Collection;

  // Construct with a list of how many threads should contribute.  This helps
  // us determine (in the async case) when we are done with all contributions.
  DataCollector();
  ~DataCollector();

  // Add all stats from the indicated thread into our arrays.
  // This function is mutex protected, and *could* be called from any thread
  // (although the current implementation serializes calls to Append).
  void Append(const ThreadData& thread_data);

  // After the accumulation phase, the following accessor is used to process
  // the data.
  Collection* collection();

  // After collection of death data is complete, we can add entries for all the
  // remaining living objects.
  void AddListOfLivingObjects();

 private:
  // This instance may be provided to several threads to contribute data.  The
  // following counter tracks how many more threads will contribute.  When it
  // is zero, then all asynchronous contributions are complete, and locked
  // access is no longer needed.
  int count_of_contributing_threads_;

  // The array that we collect data into.
  Collection collection_;

  // The total number of births recorded at each location for which we have
  // not seen a death count.
  typedef std::map<const BirthOnThread*, int> BirthCount;
  BirthCount global_birth_count_;

  Lock accumulation_lock_;  // Protects access during accumulation phase.

  DISALLOW_COPY_AND_ASSIGN(DataCollector);
};

//------------------------------------------------------------------------------
// Aggregation contains summaries (totals and subtotals) of groups of Snapshot
// instances to provide printing of these collections on a single line.
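The aggregation step amounts to an element-wise sum over one sorted, equivalent group of snapshots, so a sub-total line can be printed for the group. A hypothetical sketch with simplified stand-in types (not the real Snapshot/Aggregation classes):

```cpp
#include <vector>

// Stand-in for the per-snapshot tallies that get summed.
struct MiniSnapshot {
  int count;           // deaths recorded
  long long total_ms;  // summed lifetime duration
};

// Running sum over a group of equivalent snapshots, as an Aggregation
// instance would accumulate for an about:tasks sub-total line.
MiniSnapshot SubTotal(const std::vector<MiniSnapshot>& group) {
  MiniSnapshot sum = {0, 0};
  for (const MiniSnapshot& s : group) {
    sum.count += s.count;
    sum.total_ms += s.total_ms;
  }
  return sum;
}
```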

class Aggregation: public DeathData {
 public:
  Aggregation();
  ~Aggregation();

  void AddDeathSnapshot(const Snapshot& snapshot);
  void AddBirths(const Births& births);
  void AddBirth(const BirthOnThread& birth);
  void AddBirthPlace(const Location& location);
  void Write(std::string* output) const;
  void Clear();

 private:
  int birth_count_;
  std::map<std::string, int> birth_files_;
  std::map<Location, int> locations_;
  std::map<const ThreadData*, int> birth_threads_;
  DeathData death_data_;
  std::map<const ThreadData*, int> death_threads_;

  DISALLOW_COPY_AND_ASSIGN(Aggregation);
};

//------------------------------------------------------------------------------
// Comparator is a class that supports the comparison of Snapshot instances.
// An instance is actually a list of chained Comparators that can provide for
// arbitrary ordering.  The path portion of an about:objects URL is translated
// into such a chain, which is then used to order Snapshot instances in a
// vector.  It orders them into groups (for aggregation), and can also order
// instances within the groups (for detailed rendering of the instances in an
// aggregation).

class Comparator {
 public:
  // The Selector enum is the token identifier for each parsed keyword, most
  // of which specify a sort order.
  // Since it is not meaningful to sort more than once on a specific key, we
  // use bitfields to accumulate what we have sorted on so far.
  enum Selector {
    // Sort orders.
    NIL = 0,
    BIRTH_THREAD = 1,
    DEATH_THREAD = 2,
    BIRTH_FILE = 4,
    BIRTH_FUNCTION = 8,
    BIRTH_LINE = 16,
    COUNT = 32,
    AVERAGE_DURATION = 64,
    TOTAL_DURATION = 128,

    // Immediate action keywords.
    RESET_ALL_DATA = -1,
  };

  Comparator();

  // Reset the comparator to a NIL selector.  Clear() and recursively delete
  // any tiebreaker_ entries.
  // NOTE: We can't use a standard destructor, because
  // the sort algorithm makes copies of this object, and then deletes them,
  // which would cause problems (either we'd make expensive deep copies, or
  // we'd do more than one delete on a tiebreaker_).
  void Clear();

  // The less() operator for sorting the array via std::sort().
  bool operator()(const Snapshot& left, const Snapshot& right) const;

  void Sort(DataCollector::Collection* collection) const;

  // Check to see if the items are sort equivalents (should be aggregated).
  bool Equivalent(const Snapshot& left, const Snapshot& right) const;

  // Check to see if all required fields are present in the given sample.
  bool Acceptable(const Snapshot& sample) const;

  // A comparator can be refined by specifying what to do if the selected basis
  // for comparison is insufficient to establish an ordering.  This call adds
  // the indicated attribute as the new "least significant" basis of
  // comparison.
  void SetTiebreaker(Selector selector, const std::string& required);

  // Indicate if this instance is set up to sort by the given Selector, thereby
  // putting that information in the SortGrouping, so it is not needed in each
  // printed line.
  bool IsGroupedBy(Selector selector) const;

  // Using the tiebreakers as set above, we mostly get an ordering, with
  // equivalent groups.  If those groups are displayed (rather than just being
  // aggregated), then the following is used to order them (within the group).
  void SetSubgroupTiebreaker(Selector selector);

  // Translate a keyword and restriction in a URL path to a selector for
  // sorting.
  void ParseKeyphrase(const std::string& key_phrase);

  // Parse a query in an about:objects URL to decide on sort ordering.
  bool ParseQuery(const std::string& query);

  // Output a header line that can be used to indicate what items will be
  // collected in the group.
  // It lists all (potentially) tested attributes and
  // their values (in the sample item).
  bool WriteSortGrouping(const Snapshot& sample, std::string* output) const;

  // Output a sample, with SortGroup details not displayed.
  void WriteSnapshot(const Snapshot& sample, std::string* output) const;

 private:
  // The selector directs this instance to compare based on the specified
  // members of the tested elements.
  Selector selector_;

  // For filtering into acceptable and unacceptable snapshot instances, the
  // following is required to be a substring of the selector_ field.
  std::string required_;

  // If this instance can't decide on an ordering, we can consult a tiebreaker,
  // which may have a different basis of comparison.
  Comparator* tiebreaker_;

  // We OR together all the selectors we sort on (not counting sub-group
  // selectors), so that we can tell if we've decided to group on any given
  // criteria.
  int combined_selectors_;

  // Some tiebreakers are for subgroup ordering, and not for basic ordering (in
  // preparation for aggregation).  The subgroup tiebreakers are not consulted
  // when deciding if two items are in equivalent groups.  This flag tells us
  // to ignore the tiebreaker when doing Equivalent() testing.
  bool use_tiebreaker_for_sort_only_;
};


//------------------------------------------------------------------------------
// For each thread, we have a ThreadData that stores all tracking info generated
// on this thread.  This prevents the need for locking as data accumulates.

class ThreadData {
 public:
  typedef std::map<Location, Births*> BirthMap;
  typedef std::map<const Births*, DeathData> DeathMap;

  ThreadData();
  ~ThreadData();

  // Using Thread Local Store, find the current instance for collecting data.
  // If an instance does not exist, construct one (and remember it for use on
  // this thread).
  // If shutdown has already started, and we don't yet have an instance, then
  // return null.
  static ThreadData* current();

  // For a given about:objects URL, develop the resulting HTML, and append it
  // to output.
  static void WriteHTML(const std::string& query, std::string* output);

  // For a given accumulated array of results, use the comparator to sort and
  // subtotal, writing the results to the output.
  static void WriteHTMLTotalAndSubtotals(
      const DataCollector::Collection& match_array,
      const Comparator& comparator, std::string* output);

  // In this thread's data, record a new birth.
  Births* TallyABirth(const Location& location);

  // Find a place to record a death on this thread.
  void TallyADeath(const Births& lifetimes, const base::TimeDelta& duration);

  // (Thread safe) Get the start of the list of instances.
  static ThreadData* first();
  // Iterate through the null-terminated list of instances.
  ThreadData* next() const { return next_; }

  MessageLoop* message_loop() const { return message_loop_; }
  const std::string ThreadName() const;

  // Using our lock, make a copy of the specified maps.  These calls may arrive
  // from non-local threads, and are used to quickly scan data from all threads
  // in order to build an HTML page for about:objects.
  void SnapshotBirthMap(BirthMap* output) const;
  void SnapshotDeathMap(DeathMap* output) const;

  // Hack: asynchronously clear all birth counts and death tallies data values
  // in all ThreadData instances.  The numerical (zeroing) part is done without
  // use of locks or atomic exchanges, and may (for int64 values) produce
  // bogus counts VERY rarely.
  static void ResetAllThreadData();

  // Using our lock to protect the iteration, clear all birth and death data.
  void Reset();

  // Using the "known list of threads" gathered during births and deaths, the
  // following attempts to run the given function on all such threads.  Note
  // that the function can only be run on threads which have a message loop!
  static void RunOnAllThreads(void (*Func)());

  // Set internal status_ to either become ACTIVE, or later, to be SHUTDOWN,
  // based on the argument being true or false respectively.
  // If tracking is not compiled in, this function will return false.
  static bool StartTracking(bool status);
  static bool IsActive();

#ifdef OS_WIN
  // WARNING: ONLY call this function when all MessageLoops are still intact
  // for all registered threads.  If you call it later, you will crash.
  // Note: You don't need to call it at all, and you can wait till you are
  // single threaded (again) to do the cleanup via
  // ShutdownSingleThreadedCleanup().
  // Start the teardown (shutdown) process in multi-thread mode by disabling
  // further additions to the thread database on all threads.  First it makes
  // a local (locked) change to prevent any more threads from registering.
  // Then it posts a Task to all registered threads to be sure they are aware
  // that no more accumulation can take place.
  static void ShutdownMultiThreadTracking();
#endif

  // WARNING: ONLY call this function when you are running single threaded
  // (again) and all message loops and threads have terminated.  Until that
  // point some threads may still attempt to write into our data structures.
  // Delete recursively all data structures, starting with the list of
  // ThreadData instances.
  static void ShutdownSingleThreadedCleanup();

 private:
  // Current allowable states of the tracking system.  The states always
  // proceed towards SHUTDOWN, and never go backwards.
  enum Status {
    UNINITIALIZED,
    ACTIVE,
    SHUTDOWN,
  };

#if defined(OS_WIN)
  class ThreadSafeDownCounter;
  class RunTheStatic;
#endif

  // Each registered thread is called to set status_ to SHUTDOWN.
  // This is done redundantly on every registered thread because it is not
  // protected by a mutex.  Running on all threads guarantees we get the
  // notification into the memory cache of all possible threads.
  static void ShutdownDisablingFurtherTracking();

  // We use thread local store to identify which ThreadData to interact with.
  static TLSSlot tls_index_;

  // Link to the most recently created instance (starts a null-terminated
  // list).
  static ThreadData* first_;
  // Protection for access to first_.
  static Lock list_lock_;

  // We set status_ to SHUTDOWN when we shut down the tracking service.  This
  // setting is redundantly established by all participating threads so that
  // we are *guaranteed* (without locking) that all threads can "see" the
  // status and avoid additional calls into the service.
  static Status status_;

  // Link to the next instance (null-terminated list).  Used to globally track
  // all registered instances (corresponds to all registered threads where we
  // keep data).
  ThreadData* next_;

  // The message loop where tasks needing to access this instance's private
  // data should be directed.  Since some threads have no message loop, some
  // instances have data that can't be (safely) modified externally.
  MessageLoop* message_loop_;

  // A map used on each thread to keep track of Births on this thread.
  // This map should only be accessed on the thread it was constructed on.
  // When a snapshot is needed, this structure can be locked in place for the
  // duration of the snapshotting activity.
  BirthMap birth_map_;

  // Similar to birth_map_, this records information about the death of tracked
  // instances (i.e., when a tracked instance was destroyed on this thread).
  // It is locked before changing, and hence other threads may access it by
  // locking before reading it.
  DeathMap death_map_;

  // Lock to protect *some* access to BirthMap and DeathMap.  The maps are
  // regularly read and written on this thread, but may only be read from
  // other threads.  To support this, we acquire this lock if we are writing
  // from this thread, or reading from another thread.  For reading from this
  // thread we don't need a lock, as there is no potential for a conflict
  // since the writing is only done from this thread.
  mutable Lock lock_;

  DISALLOW_COPY_AND_ASSIGN(ThreadData);
};


//------------------------------------------------------------------------------
// Provides a simple way to start global tracking, and to tear down tracking
// when done.  Note that construction and destruction of this object must be
// done when running single-threaded (before spawning a lot of threads
// for construction, and after shutting down all the threads for destruction).

// To prevent grabbing thread local store resources time and again if someone
// chooses to try to re-run the browser many times, we maintain global state
// and only allow the tracking system to be started up at most once, and shut
// down at most once.  See bug 31344 for an example.

class AutoTracking {
 public:
  AutoTracking() {
    if (state_ != kNeverBeenRun)
      return;
    ThreadData::StartTracking(true);
    state_ = kRunning;
  }

  ~AutoTracking() {
#ifndef NDEBUG
    if (state_ != kRunning)
      return;
    // Don't call these in a Release build: they just waste time.
    // The following should ONLY be called when in single threaded mode.
    // It is unsafe to do this cleanup if other threads are still active.
    // It is also very unnecessary, so I'm only doing this in debug mode to
    // satisfy purify (if we need to!).
    ThreadData::ShutdownSingleThreadedCleanup();
    state_ = kTornDownAndStopped;
#endif
  }

 private:
  enum State {
    kNeverBeenRun,
    kRunning,
    kTornDownAndStopped,
  };
  static State state_;

  DISALLOW_COPY_AND_ASSIGN(AutoTracking);
};


}  // namespace tracked_objects

#endif  // BASE_TRACKED_OBJECTS_H_
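The "start at most once" guard that AutoTracking implements can be sketched in isolation (hypothetical OnceGuard class, with a counter standing in for the one-time ThreadData::StartTracking(true) call): static state survives repeated construction, so the setup runs only once per process no matter how many guard objects are created.

```cpp
#include <cassert>

// Standalone sketch of the run-at-most-once guard pattern; NOT AutoTracking.
class OnceGuard {
 public:
  OnceGuard() {
    if (state_ != kNeverBeenRun)
      return;  // already started (or torn down) earlier in this process
    ++starts_;  // stand-in for the one-time StartTracking(true) call
    state_ = kRunning;
  }

  static int starts() { return starts_; }

 private:
  enum State { kNeverBeenRun, kRunning, kTornDownAndStopped };
  static State state_;
  static int starts_;
};

OnceGuard::State OnceGuard::state_ = OnceGuard::kNeverBeenRun;
int OnceGuard::starts_ = 0;
```

Constructing the guard twice (as re-running browser startup would) still performs the setup exactly once, which is the property bug 31344 required.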