8b68c5bedfca08872de4e153684f880ae6b9ea31 |
|
11-May-2017 |
Nicolas Geoffray <ngeoffray@google.com> |
Revert "Revert "Use IsMarked instead of Mark for profiling info."" Bug in the original change was that we were infitely looping on the same inline cache entry, expecting null when it was actually an old pointer to a GC'ed class object. bug: 37693252 Test: test.py --jit This reverts commit 3afefba4b5558f5f726338485c1f6ddc7f107719. (cherry picked from commit 13056a1720aca64945541812a3c7602acfe4a937) Change-Id: Ia637d4a7db4394964d1de5c92370921c98a103fa
|
350cf8a406486a4fa96549114b3b21b975a5c8f8 |
|
29-Apr-2017 |
Mathieu Chartier <mathieuc@google.com> |
Add basic heap corruption detection to ConcurrentCopying::Copy Detect objects that have a null class. This also detects objects that are not in the from-space allocated area since this area is zero initialized. Test: test-art-host Bug: 37683299 Bug: 12687968 Bug: 37187694 Change-Id: Ib7b9a1913a582692ce05791104c181b75e0f7bcb
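
A minimal sketch of the kind of check this entry describes (not ART's actual code; Object and CheckNotCorrupt are hypothetical): an object whose class pointer is null is either corrupt or lies in zero-initialized from-space memory that was never allocated.

    #include <cstdio>
    #include <cstdlib>

    // Hypothetical minimal object header: the first field is the class pointer.
    struct Object {
      const void* klass;
    };

    // A null class means either a corrupted header or an "object" in
    // zero-initialized, never-allocated from-space memory.
    void CheckNotCorrupt(const Object* from_ref) {
      if (from_ref->klass == nullptr) {
        std::fprintf(stderr, "heap corruption: %p has a null class\n",
                     static_cast<const void*>(from_ref));
        std::abort();  // Crash with a diagnostic rather than propagate garbage.
      }
    }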
|
988136bf7a731710c1c0979d6f2deec6abe4574f |
|
24-Apr-2017 |
Andreas Gampe <agampe@google.com> |
ART: More header cleanup - CC Forward-declare AtomicStack in CC. Use stack_reference in atomic_stack.h. Test: mmma art Change-Id: I863ca8d4a8dfd5e83279fd68ea0e1a94c3c5df6d (cherry picked from commit 291ce17ada5a126be99f0fc069a028d2100bcf9e)
|
1ca689096b532e007dc9f8ba16db4731e6afd719 |
|
18-Apr-2017 |
Mathieu Chartier <mathieuc@google.com> |
More robust GC verification and corruption dumping Added a test for GC heap corruption dumping, added more info to the dump like adjacent bytes and card table. Added heap corruption detection in ConcurrentCopying::MarkNonMoving(). Bug: 37187694 Bug: 12687968 Test: mm test-art-host-gtest-verification_test -j20 Change-Id: I8c90e45796d0784265aa091b2f8082f0cfb62719
|
3ed8ec10be0c79e7f7bbe73a754da0daf997b994 |
|
21-Apr-2017 |
Mathieu Chartier <mathieuc@google.com> |
Store java_lang_Object_ in the flip callback There was a race where the GC thread would not have is_marking = true, and call WellKnownClasses::ToClass(WellKnownClasses::java_lang_Object). This meant that the returned class was maybe in the from-space for the no image case. The fix was to move this decoding into the flip callback since this callback is called before flipping any thread roots. Bug: 37531237 Bug: 12687968 Test: test-art-host Change-Id: I9a09249e9c6ea2b3b124e957a9e4b61017869306
|
a1467d07c4c1e838ff5c07a4ff4ec35aab6a701f |
|
22-Feb-2017 |
Mathieu Chartier <mathieuc@google.com> |
Revert "Revert "Add missing card mark verification to CC"" Add missing write barrier for AddStrongRoot on the dex cache. Test: test-art-host-run-test ART_TEST_INTERPRETER=true ART_TEST_OPTIMIZING=true ART_TEST_GC_STRESS=true Bug: 12687968 This reverts commit 50805e747cbb7e9c8d30bd3b49e27ab0741f3cf8. Change-Id: I72c6de2120d8e0ddc2512dd41010776aecfc9e2c
|
50805e747cbb7e9c8d30bd3b49e27ab0741f3cf8 |
|
22-Feb-2017 |
Nicolas Geoffray <ngeoffray@google.com> |
Revert "Add missing card mark verification to CC" Fails in read-barrier-gcstress for 944-transform-classloaders Bug: 12687968 This reverts commit 49ba69667ce70f8efbed7d68814ab184bee53486. Change-Id: Ie91eaa034cea77918235766983661efa14fb1a14
|
057d977aed600843dd4a617dca7098555d79110b |
|
18-Feb-2017 |
Hiroshi Yamauchi <yamauchi@google.com> |
Always mark reference referents in transaction mode. Fix a to-space invariant check failure in EnqueueFinalizerReferences. Reference processing can be problematic and is useless during a transaction because it is not easy to roll back what reference processing does and there are no daemon threads running (in the unstarted runtime). To avoid issues, always mark reference referents. Add a do_atomic_update parameter to MarkHeapReference. Bug: 35417063 Test: test-art-host with CC/CMS/SS. Change-Id: If32eba8fca19ef86e5d13f7925d179c8aecb9e27
|
49ba69667ce70f8efbed7d68814ab184bee53486 |
|
15-Feb-2017 |
Mathieu Chartier <mathieuc@google.com> |
Add missing card mark verification to CC Easier than adapting the code in heap.cc to do this. The verification ensures that objects on clean cards never reference objects in newly allocated regions. Revert some changes from aog/341344 that caused the verification to fail. Bug: 12687968 Test: test-art-host with CC Change-Id: Iad583644bb76633ccea0dba87cb383f30adaa80b
|
65f5f247a367af9d6b9ac63767b69ecf3ab079bc |
|
19-Dec-2016 |
Hiroshi Yamauchi <yamauchi@google.com> |
Fix race condition between DelayReferenceReferent and Reference.clear(). Rename IsMarkedHeapReference to IsNullOrMarkedHeapReference. Move the null check from the caller of IsMarkedHeapReference into IsNullOrMarkedHeapReference. Make sure that the referent is only loaded once between the null check and the IsMarked call. Use a CAS in ConcurrentCopying::IsNullOrMarkedHeapReference when called from DelayReferenceReferent. Bug: 33389022 Test: test-art-host without and with CC. Change-Id: I20edab4dac2a4bb02dbb72af0f09de77b55ac08e
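
A hedged sketch of the pattern this fix describes (identifiers are illustrative, not ART's API): the referent slot is read exactly once, the null and mark checks use that single snapshot, and the to-space pointer is installed with a CAS so a concurrent Reference.clear() is never overwritten.

    #include <atomic>
    #include <cstdint>

    // Hypothetical 32-bit compressed reference slot; a mutator may clear it
    // concurrently (Reference.clear() stores 0).
    using CompressedRef = std::atomic<uint32_t>;

    // Stand-in mark query: returns the to-space address if the object is
    // already marked, or 0 if it is not marked yet (identity stub here).
    uint32_t IsMarked(uint32_t ref) { return ref; }

    // Returns true if the referent is null or already marked, updating the
    // slot to the to-space address when needed.
    bool IsNullOrMarkedHeapReference(CompressedRef* referent_slot,
                                     bool do_atomic_update) {
      // Load the referent exactly once; re-reading it between the null check
      // and the mark check is the race this change closes.
      uint32_t old_ref = referent_slot->load(std::memory_order_relaxed);
      if (old_ref == 0) {
        return true;  // Already cleared.
      }
      uint32_t to_ref = IsMarked(old_ref);
      if (to_ref == 0) {
        return false;  // Not marked yet; the caller delays the reference.
      }
      if (to_ref != old_ref) {
        if (do_atomic_update) {
          // Fails harmlessly if a mutator cleared the slot in the meantime.
          referent_slot->compare_exchange_strong(old_ref, to_ref,
                                                 std::memory_order_relaxed);
        } else {
          referent_slot->store(to_ref, std::memory_order_relaxed);
        }
      }
      return true;
    }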
|
31e88225b2ef68e7f32f11186acf922c74ddabab |
|
15-Oct-2016 |
Mathieu Chartier <mathieuc@google.com> |
Move most mirror:: args to ObjPtr Fixed possible moving GC bugs in ClinitImageUpdate class. Bug: 31113334 Test: test-art-host Change-Id: I0bf6578553d58b944aaa17665f1350bdf5ed15ec
|
febd0cf9b5070ecc54ba433b951b65e14a54ccde |
|
15-Sep-2016 |
Hiroshi Yamauchi <yamauchi@google.com> |
Fix a deadlock in the CC collector. Fix a deadlock between CC GC disabling system weaks and thread attach. See 31500969#2 for more details. Bug: 31500969 Bug: 12687968 Test: test-art-host with CC. N9 libartd boot. Ritz EAAC. Change-Id: Ic9a8bfb1c636643a03f4580b811fe890273576b6
|
a5931185c97c7b17981a9fc5016834a0bdd9480b |
|
02-Sep-2016 |
Chih-Hung Hsieh <chh@google.com> |
Fix google-explicit-constructor warnings in art. * Add explicit keyword to conversion constructors, or NOLINT for implicit converters. Bug: 28341362 Test: build with WITH_TIDY=1 Change-Id: I1e1ee2661812944904fedadeff97b620506db47d
|
bdf7f1c3ab65ccb70f62db5ab31dba060632d458 |
|
31-Aug-2016 |
Andreas Gampe <agampe@google.com> |
ART: SHARED_REQUIRES to REQUIRES_SHARED This coincides with the actual attribute name and upstream usage. Preparation for deferring to libbase. Test: m Test: m test-art-host Change-Id: Ia8986b5dfd926ba772bf00b0a35eaf83596d8518
|
c381c36aacf977f7e314e6a91e47b31b04639f62 |
|
23-Aug-2016 |
Mathieu Chartier <mathieuc@google.com> |
Avoid CAS for marking region space bitmap for baker Only have the GC thread mark it. This occurs when popping from the mark stack. The race where an object may be pushed to the mark stack twice is handled by not scanning if it is already marked. Also avoid checking is_active when marking from the GC. EAAC: 1263 -> 1253 (average of 30 runs) GC time: 7.21s -> 6.83s (average of 18 runs) Timings on 960 mhz N6P. Bug: 12687968 Change-Id: I47e98c3e258829d2ba0babd803a219c82a36168c Test: test-art-host, debug N6P booting with baker CC.
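
A sketch of the idea under the assumptions stated in this entry (types are hypothetical): because only the GC thread ever sets bits, a plain non-atomic bitmap write suffices, and the double-push race on the mark stack is absorbed by skipping objects that are already marked.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Hypothetical non-atomic bitmap: safe only because a single (GC) thread
    // ever sets bits, which is the point of this change.
    class SimpleBitmap {
     public:
      explicit SimpleBitmap(size_t bits) : words_((bits + 63) / 64, 0) {}
      bool Test(size_t i) const {
        return ((words_[i / 64] >> (i % 64)) & 1u) != 0;
      }
      void Set(size_t i) { words_[i / 64] |= uint64_t{1} << (i % 64); }  // No CAS.
     private:
      std::vector<uint64_t> words_;
    };

    // Popping from the mark stack: an object may have been pushed twice, so
    // skip scanning it if the GC thread already marked it.
    void ProcessPopped(SimpleBitmap& mark_bitmap, size_t obj_index) {
      if (mark_bitmap.Test(obj_index)) {
        return;  // Already marked: nothing left to do for this entry.
      }
      mark_bitmap.Set(obj_index);
      // ... scan the object's reference fields here ...
    }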
|
cca44a0d65a4f43662f152d287025366a03162cb |
|
17-Aug-2016 |
Mathieu Chartier <mathieuc@google.com> |
Track cumulative objects and bytes copied for CC Also print out these values when dumping GC performance info. Bug: 12687968 Test: Test that values are reasonable after running EAAC. Change-Id: Id04fadeaf52511560fd4b261f5287ea0a5dae9d4
|
962cd7adf3d9d2a1dedf0318056a29e9390f1c38 |
|
16-Aug-2016 |
Mathieu Chartier <mathieuc@google.com> |
Always mark zygote large objects for CC Prevent needing to gray holders of zygote large objects. System wide zygote space PSS after boot: 12644 kB -> 5571 kB for CC. Also PSS reduction in zygote large objects themselves since their gray bit would have been set each GC. Overall LOS savings hard to measure, could be up to 316 * 4KB per app since there are 316 zygote large objects. Also clear mod-union tables for image spaces to prevent dirty image pages if any of the image spaces point to zygote large objects. System wide .art mmap: 37432 kB -> 34372 kB System server before (N6P): LOS shared dirty: 12888 kB Zygote space shared dirty: 700 kB Zygote space private dirty: 868 kB .art private dirty: 1696 kB After: LOS shared dirty 13672 kB Zygote space shared dirty: 1072 kB Zygote space private dirty: 496 kB .art private dirty: 1432 kB Bug: 29516968 Test: test-art-host with baker CC, debug N6P phone booting Change-Id: Ia37ce2c11217cf56885bd1d1dc084332fcbb7843
|
36a270ae4f288e49493432b7128f899ad579849e |
|
29-Jul-2016 |
Mathieu Chartier <mathieuc@google.com> |
Change one read barrier bit to mark bit Optimization to help slow path performance. When the GC marks an object through the read barrier slow path, it sets the mark bit in the lock word of that reference. This bit is checked from the assembly entrypoint; the common case is that it is set. If the bit is set, the read barrier knows the object is already marked and there is no work to do. To prevent dirty pages in zygote and image, the bit is set by the image writer and zygote space creation. EAAC score (lower is better): N9: 777 -> 700 (average of 31 runs) N6P (960000 mhz): 1737.48 -> 1442.31 (average of 25 runs) Bug: 30162165 Bug: 12687968 Test: N9, N6P booting, test-art-host, test-art-target all with CC Change-Id: Iae0cacfae221e33151d3c0ab65338d1c822ab63d
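
A hedged sketch of the fast path this entry describes (bit position and names are made up): reserving a mark bit in the lock word lets the read barrier slow path return immediately for already-marked objects, and the same single-bit test is cheap to perform from the assembly entrypoints.

    #include <atomic>
    #include <cstdint>

    // Hypothetical lock word layout with one bit reserved as the mark bit.
    constexpr uint32_t kMarkBit = 1u << 31;

    struct Object {
      std::atomic<uint32_t> lock_word{0};
    };

    Object* MarkSlowPath(Object* obj) { return obj; }  // Stand-in for real marking.

    // Read barrier slow path: if the mark bit is already set, the object is
    // known to be marked and there is no work to do (the common case).
    Object* ReadBarrierMark(Object* obj) {
      if (obj->lock_word.load(std::memory_order_relaxed) & kMarkBit) {
        return obj;
      }
      return MarkSlowPath(obj);
    }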
|
d6636d3440efc68e1fa43f437ffbe77581096399 |
|
28-Jul-2016 |
Mathieu Chartier <mathieuc@google.com> |
Avoid read barrier for IntArray::GetArrayClass Changed the code to use Mark instead of a read barrier; this showed an existing lock violation and possible deadlock which was fixed. Prevent DCHECK failure from the assert. Bug: 30469265 Test: test-art-host with CC Change-Id: I275f953f06f6d13262043fc62eb88dca0356465a
|
21328a15e15005815efc843e774ac6974e94d4d8 |
|
22-Jul-2016 |
Mathieu Chartier <mathieuc@google.com> |
Improve CC handling for immune objects Currently we reduce ram for immune objects by racing against the mutators to try and finish processing them before the mutators change many objects to gray. However there is still a window of time where the mutator can dirty immune pages by changing the lock words to gray. These pages remain dirty for the lifetime of the app. This CL uses the FlipCallback pause to gray all of the immune objects that have a dirty card. Once these objects are all gray we don't need to gray any more objects in the immune spaces since these objects are the only ones that may reference non-immune objects. Also only scan objects that are gray when scanning immune spaces to reduce scanning time. System wide PSS after boot on N9, before: 61668 kB: .art mmap 11249 kB: .Zygote After: 36013 kB: .art mmap 12251 kB: .Zygote Results are better than demonstrated since there are more apps running after. Maps PSS / Private Dirty, before: .art mmap 3703 3116 .Zygote 577 480 After: .art mmap 1655 1092 .Zygote 476 392 System server before: .art mmap 4453 3956 .Zygote 849 780 After: .art mmap 2326 1748 .Zygote 640 564 EAAC: Before: ScanImmuneSpaces takes 669.434ms GC time Scores: 718, 761, 753 average 744 GC time: 4.2s, 4.35s, 4.3s average 4.28s After: ScanImmuneSpaces takes 138.328ms GC time Scores: 731, 730, 704 average 722 GC time: 3.92s, 3.83s, 3.85s average 3.87s Additional GC pause time is 285us on Maps on N9. TODO: Reduce this pause time. Test: N9 booting, test-art-host, EAAC all run with CC Bug: 29516968 Bug: 12687968 Change-Id: I584b10d017547b321f33eb23fb5d64372af6f69c
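
A hedged sketch of the flip-pause step described here (card values and object layout are illustrative): only immune-space objects sitting on dirty cards get grayed, so mutators never have to gray immune objects afterwards.

    #include <cstdint>
    #include <vector>

    constexpr uint8_t kCardDirty = 0x70;   // Illustrative dirty-card value.

    struct Object {
      uint32_t rb_state = 0;               // 0 = white, 1 = gray (illustrative).
    };

    // One card covers a fixed-size chunk of an immune space; here a card is
    // simply the list of objects that start on it.
    struct Card {
      uint8_t value = 0;
      std::vector<Object*> objects;
    };

    // During the flip pause, gray only immune-space objects on dirty cards;
    // clean-card objects stay white since only dirty-card objects may
    // reference non-immune objects.
    void GrayDirtyImmuneObjects(std::vector<Card>& immune_cards) {
      for (Card& card : immune_cards) {
        if (card.value != kCardDirty) {
          continue;
        }
        for (Object* obj : card.objects) {
          obj->rb_state = 1;               // Gray: will be scanned later.
        }
      }
    }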
|
56fe25895e91d34a0a017429468829a20bdd5ae4 |
|
14-Jul-2016 |
Mathieu Chartier <mathieuc@google.com> |
Add a way to measure read barrier slow paths If enabled, this option counts the number of slow paths, measures the total slow path time per GC, and records the info in a histogram. Also added support for systrace to see which threads are performing slow paths. Added runtime option -Xgc:measure to enable. The info is dumped for SIGQUIT. Test: Volantis boot with CC, test-art-host with CC, run EAAC with CC and -Xgc:measure Bug: 30162165 Change-Id: I3c2bdb4156065249c45695f13c77c0579bc8e57a
|
d8db5a22d8a2cffc75f4b080cbafef5e04800244 |
|
28-Jun-2016 |
Hiroshi Yamauchi <yamauchi@google.com> |
Revert "Revert "Gray only immune objects mutators access."" To reduce image/zygote dirty pages. GC doesn't gray immune space objects except when visiting the thread GC roots of suspended threads during the thread flip and in FillWithDummyObject(). GC updates the fields of immune space objects without pushing/popping them through the mark stack. GC sets a bool flag after updating the fields of immune space objects. After this point, mutators don't need gray to immune space objects. Removed the mark bitmaps for immune spaces. This reverts commit ddeb172eeedb58ab96e074a55a0d1578b5df4110. Bug: 29516465 Bug: 12687968 Test: art tests, libartd device boot, ritzperf, jdwp test. Change-Id: If272373aea3d41b2719e40a6a41f44d9299ba309
|
ddeb172eeedb58ab96e074a55a0d1578b5df4110 |
|
28-Jun-2016 |
Nicolas Geoffray <ngeoffray@google.com> |
Revert "Gray only immune objects mutators access." Fails with: art F 11338 11338 art/runtime/gc/collector/concurrent_copying-inl.h:83] Check failed: !kGrayImmuneObject || updated_all_immune_objects_.LoadRelaxed() || gc_grays_immune_objects_ Bug: 29516465 Bug: 12687968 This reverts commit 16292fcc98f03690576d0739b2e5fb04b375933c. Change-Id: I1d2d988b7707e03cc94f019cf8bef5b9a9099060
|
16292fcc98f03690576d0739b2e5fb04b375933c |
|
21-Jun-2016 |
Hiroshi Yamauchi <yamauchi@google.com> |
Gray only immune objects mutators access. To reduce image/zygote dirty pages. GC doesn't gray immune space objects except when visiting the thread GC roots of suspended threads during the thread flip. GC updates the fields of immune space objects without pushing/popping them through the mark stack. GC sets a bool flag after updating the fields of immune space objects. After this point, mutators don't need to gray immune space objects. Removed the mark bitmaps for immune spaces. Bug: 29516465 Bug: 12687968 Test: art tests, libartd device boot, ritzperf. Change-Id: Idfcffbdb94dfc8bfc89c30d6aff8888f04990a56
|
8016bdee7ca1a066221a5d2fe5e60890de950a5b |
|
16-Jun-2016 |
Mathieu Chartier <mathieuc@google.com> |
Use collector specific helper classes Changed to use inner classes. Also changed some visitors to lambdas. Bug: 29413717 Bug: 19534862 (cherry picked from commit a07f55913824ab4215a9a4f827fa9c043c0d44d9) Change-Id: I631c8bfe5f795eda4623c5bb4f357f2dd12358e2
|
a07f55913824ab4215a9a4f827fa9c043c0d44d9 |
|
16-Jun-2016 |
Mathieu Chartier <mathieuc@google.com> |
Use collector specific helper classes Changed to use inner classes. Also changed some visitors to lambdas. Bug: 29413717 Bug: 19534862 Change-Id: I631c8bfe5f795eda4623c5bb4f357f2dd12358e2
|
daf61a19177e23beed4bff0134a825d7e5a9207b |
|
10-Jun-2016 |
Hiroshi Yamauchi <yamauchi@google.com> |
Disable the CC collector / read barrier checks in non-debug build. Bug: 12687968 Change-Id: Ia8295354b705018ffa864eb8101aa5c09528af13
|
8e67465aa57ee58425be8812c8dba2f7f59cdc2e |
|
22-Dec-2015 |
Hiroshi Yamauchi <yamauchi@google.com> |
Avoid the need for the black color for the baker-style read barrier. We used to set marked-through non-moving objects to black to distinguish between an unmarked object and a marked-through object (both would be white without black). This was to avoid a rare case where a marked-through (white) object would be incorrectly set to gray for a second time (and left gray) after it's marked through (white/unmarked -> gray/marked -> white/marked-through -> gray/incorrect). If an object is left gray, the invariant would be broken that all objects are white when GC isn't running. Also, we needed to have an extra pass over non-moving objects to change them from black to white after the marking phase. To avoid the need for the black color, we use a 'false gray' stack to detect such rare cases and register affected objects on it and change the objects to white at the end of the marking phase. This saves some GC time because we can avoid the gray-to-black CAS per non-moving object as well as the extra pass over non-moving objects. Ritzperf EAAC (N6): Avg GC time: 232 -> 183 ms (-21%) Total GC time: 15.3 -> 14.1 s (-7.7%) Bug: 12687968 Change-Id: Idb29c3dcb745b094bcf6abc4db646dac9cbd1f71
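
A hedged sketch of the "false gray" bookkeeping described above (two-color encoding and names are illustrative): when an already-marked-through object gets grayed a second time, it is recorded on a side stack and whitened at the end of marking, so neither the black color nor an extra pass over non-moving objects is needed.

    #include <vector>

    enum class Color { kWhite, kGray };     // Illustrative two-color encoding.

    struct Object {
      Color color = Color::kWhite;
      bool marked_through = false;          // Set once the object's fields are scanned.
    };

    // Objects that were grayed again after already being marked through.
    static std::vector<Object*> false_gray_stack;

    // Marking path: if an object that was already marked through is grayed
    // again, remember it so it can be turned white at the end of marking.
    void GrayForMarking(Object* obj) {
      if (obj->color == Color::kGray) {
        return;
      }
      obj->color = Color::kGray;
      if (obj->marked_through) {
        false_gray_stack.push_back(obj);    // The rare "false gray" case.
      }
    }

    // End of the marking phase: restore the all-white invariant without a
    // full pass over non-moving objects and without a gray-to-black CAS.
    void ProcessFalseGrayStack() {
      for (Object* obj : false_gray_stack) {
        obj->color = Color::kWhite;
      }
      false_gray_stack.clear();
    }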
|
3c448933bea6af7362fb8eb48c24292bb5fba2ef |
|
23-Jan-2016 |
Hiroshi Yamauchi <yamauchi@google.com> |
Disable the CC collector verbose log. To reduce the amount of buildbot libcore test logs. Bug: 12687968 Change-Id: Idadca6622c745ef12629ccb26dcbdc9ebc379a00
|
763a31ed7a2bfad22a9cb07f5301a71c0f97ca49 |
|
17-Nov-2015 |
Mathieu Chartier <mathieuc@google.com> |
Add immune spaces abstraction ImmuneSpaces is a set of spaces which are not reclaimable by the GC in the current collection. This set of spaces does not have requirements about space adjacency like the old ImmuneRegion. ImmuneSpaces generates the largest immune region for the GC. Since there is no requirement on adjacency, it is possible to have multiple non-adjacent application image files. For image spaces, we also look at the oat code which is normally after the application image. In this case, we add the code as part of the immune region. This is required to have both the boot image and the zygote space be in the same immune region (for performance reasons). Bug: 22858531 Change-Id: I5103b31c0e39ad63c594f5557fc848a3b288b43e
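
A hedged sketch of "generates the largest immune region" (Interval and the function are illustrative): sort the immune space intervals, merge runs of adjacent intervals (an image space's interval may already extend over its trailing oat code), and keep the largest run, which the GC can then treat as immune with a single range check.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Interval {
      uintptr_t begin;
      uintptr_t end;  // Exclusive; for an image space this can extend over the
                      // oat code that follows it, as described above.
    };

    // Largest contiguous run of adjacent immune intervals.
    Interval ComputeLargestImmuneRegion(std::vector<Interval> spaces) {
      std::sort(spaces.begin(), spaces.end(),
                [](const Interval& a, const Interval& b) { return a.begin < b.begin; });
      Interval best{0, 0};
      Interval current{0, 0};
      for (const Interval& s : spaces) {
        if (current.end == s.begin) {
          current.end = s.end;              // Adjacent: extend the current run.
        } else {
          current = s;                      // Gap: start a new run.
        }
        if (current.end - current.begin > best.end - best.begin) {
          best = current;
        }
      }
      return best;
    }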
|
723e6cee35671d2dd9aeb884dd11f6994307c01f |
|
29-Oct-2015 |
Hiroshi Yamauchi <yamauchi@google.com> |
Minor improvements for the CC collector. - Split Mark() and inline its first part. - Make sure some other routines are inlined. - Add some UNLIKELY's. - Use VisitConcurrentRoots(). Ritz EAAC GC time decreased from 28.9 -> 27.6s (-4.5%) on N5. Bug: 12687968 Change-Id: I7bd13f162e7daa2a5853000fb22c5fefc318994f
|
19eab409b3efab3889885b71db708fbe56594088 |
|
24-Oct-2015 |
Hiroshi Yamauchi <yamauchi@google.com> |
Make the mark stack expandable for the CC collector. Bug: 12687968 Change-Id: I5d05df5524f54c6adb964901e5a963eb042cb2e1
|
00370827646cc21cb370c3e7e93f9c0cff4c30c2 |
|
18-Aug-2015 |
Hiroshi Yamauchi <yamauchi@google.com> |
Use thread-local is_gc_marking flags for the CC collector. The currently global is_marking flag is used to check if the read barrier slow path needs to be taken for GC roots access. Changing it to a thread-local flag simplifies the fast path check and makes it easier to do it in assembly code. It also solves the issue that we need to avoid accessing the global flag during startup before the heap or the collector object is allocated and initialized. Bug: 12687968 Change-Id: Ibf0dca12f400bf3490188b12dfe96c7de30583e0
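
A hedged sketch of what the thread-local flag buys (thread_local here stands in for a field on ART's Thread object): the GC-root read barrier fast path becomes a single per-thread load, which is equally simple to test from assembly and touches no global heap/collector state during startup.

    // Per-thread flag, mirrored from the collector when marking starts/stops;
    // thread_local stands in for a field at a fixed Thread offset.
    thread_local bool tls_is_gc_marking = false;

    template <typename T>
    T* MarkRootSlowPath(T* root) { return root; }  // Stand-in for real marking.

    // GC-root read barrier: one thread-local load decides whether any slow
    // path work is needed at all.
    template <typename T>
    inline T* ReadBarrierForRoot(T* root) {
      if (!tls_is_gc_marking) {
        return root;          // Common case: the GC is not marking.
      }
      return MarkRootSlowPath(root);
    }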
|
a4f6af9b1e6380b31674d7ac645b1732c846ac06 |
|
12-Aug-2015 |
Mathieu Chartier <mathieuc@google.com> |
Some heap cleanup Bug: 19534862 Change-Id: Ia63f489d26ec8813a263ce877bdbbc8c4e8fe5f4
|
da7c650022a974be10e2f00fa07d5109e3d8826f |
|
24-Jul-2015 |
Mathieu Chartier <mathieuc@google.com> |
Visit class native roots from VisitReferences Visit class roots when we call Class::VisitReferences instead of in the class linker. This makes it easier to implement class unloading since unmarked classes won't have their roots visited by the class linker. Bug: 22181835 Change-Id: I63f31e5ebef7b2a0b764b3ba3cb038b3f561b379
|
90443477f9a0061581c420775ce3b7eeae7468bc |
|
17-Jul-2015 |
Mathieu Chartier <mathieuc@google.com> |
Move to newer clang annotations Also enable -Wthread-safety-negative. Changes: Switch to capabilities and negative capabilities. Future work: Use capabilities to implement uninterruptible annotations to work with AssertNoThreadSuspension. Bug: 20072211 Change-Id: I42fcbe0300d98a831c89d1eff3ecd5a7e99ebf33
|
8118781ebc9659f806716c451bdb3fe9b77ae32b |
|
15-Jul-2015 |
Mathieu Chartier <mathieuc@google.com> |
Address some GC comments Follow-up from: https://android-review.googlesource.com/#/c/159650/ Change-Id: Id14f29b4ce5b70b63fcb3e74f8503ae60a3ae444
|
97509954404d031594b2ecbda607314d169d512e |
|
13-Jul-2015 |
Mathieu Chartier <mathieuc@google.com> |
Clean up GC callbacks to be virtual methods Change-Id: Ia08034a4e5931c4fcb329c3bd3c4b1f301135735
|
0b71357fb52be9bb06d35396a3042b4381b01041 |
|
17-Jun-2015 |
Hiroshi Yamauchi <yamauchi@google.com> |
Thread-local mark stacks for the CC collector. Thread-local mark stacks are assigned to mutators where they push references in read barriers to reduce the (CAS) synchronization cost in a global mark stack/queue. We step through three mark stack modes (thread-local, shared, GC-exclusive) and use per-thread flags to disable/enable system weak accesses (only for the CC collector) instead of the existing global one to safely perform the marking phase. The reasons are 1) thread-local mark stacks for mutators need to be revoked using a checkpoint to avoid races (incorrectly leaving a reference on mark stacks) when terminating marking, and 2) we can’t use a checkpoint while system weak accesses are disabled (or a deadlock would happen). More details are described in the code comments. Performance improvements in Ritzperf EAAC: a ~2.8% improvement (13290->12918) in run time and a ~23% improvement (51.6s->39.8s) in the total GC time on N5. Bug: 12687968 Change-Id: I5d234d7e48bf115cd773d38bdb62ad24ce9116c7
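
A hedged sketch of the mutator-side push described here (names and the mode variable are illustrative): in the thread-local mode a mutator's read barrier pushes into its own stack with no synchronization; a shared, locked stack is the fallback once thread-local stacks have been revoked by a checkpoint, and in the GC-exclusive mode only the GC thread touches the mark stack.

    #include <mutex>
    #include <vector>

    struct Object;

    enum class MarkStackMode { kThreadLocal, kShared, kGcExclusive };
    static MarkStackMode mark_stack_mode = MarkStackMode::kThreadLocal;

    // Per-thread mark stack: only its owning mutator pushes, so the common
    // case needs no CAS and no lock.
    thread_local std::vector<Object*> tls_mark_stack;

    // Shared stack used after the thread-local stacks are revoked.
    static std::vector<Object*> shared_mark_stack;
    static std::mutex shared_mark_stack_lock;

    // Read-barrier push: the synchronization cost depends on the current mode
    // (kGcExclusive is not shown; there only the GC thread pushes).
    void PushOntoMarkStack(Object* obj) {
      if (mark_stack_mode == MarkStackMode::kThreadLocal) {
        tls_mark_stack.push_back(obj);            // No atomics, no locks.
      } else {
        std::lock_guard<std::mutex> lock(shared_mark_stack_lock);
        shared_mark_stack.push_back(obj);         // Slower shared fallback.
      }
    }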
|
3f64f25151780fdea3511be62b4fe50775f86541 |
|
13-Jun-2015 |
Hiroshi Yamauchi <yamauchi@google.com> |
Print more diagnosis info on to-space invariant violation. Pass the method/field (in GcRootSource) to the read barrier to print more info when a to-space invariant violation is detected on a method/field GC root access. Refactor ConcurrentCopying::AssertToSpaceInvariant(). Bug: 12687968 Bug: 21564728 Change-Id: I3a5fde1f41969349b0fee6cd9217b948d5241a7c
|
414369a2e3f23e1408fc1cbf4f623014bd95cb8f |
|
04-May-2015 |
Mathieu Chartier <mathieuc@google.com> |
Add some more DISALLOW_COPY_AND_ASSIGN May help prevent bugs maybe. (cherry picked from commit 3130cdf29eb203be0c38d1107a65d920ec39c106) Change-Id: Ie73d469dfcd078492ecb3aa28682b42707221202
|
3130cdf29eb203be0c38d1107a65d920ec39c106 |
|
04-May-2015 |
Mathieu Chartier <mathieuc@google.com> |
Add some more DISALLOW_COPY_AND_ASSIGN May help prevent bugs maybe. Change-Id: Ie73d469dfcd078492ecb3aa28682b42707221202
|
65b798ea10dd716c1bb3dda029f9bf255435af72 |
|
06-Apr-2015 |
Andreas Gampe <agampe@google.com> |
ART: Enable more Clang warnings Change-Id: Ie6aba02f4223b1de02530e1515c63505f37e184c
|
bb87e0f1a52de656bc77cb01cb887e51a0e5198b |
|
03-Apr-2015 |
Mathieu Chartier <mathieuc@google.com> |
Refactor and improve GC root handling Changed GcRoot to use compressed references. Changed root visiting to use virtual functions instead of function pointers. Changed the root visiting interface to be an array of roots instead of a single root at a time. Added buffered root marking helper to avoid dispatch overhead. Root marking seems a bit faster on EvaluateAndApplyChanges due to batch marking. Pause times unaffected. Mips64 is untested but might work, maybe. Before: MarkConcurrentRoots: Sum: 67.678ms 99% C.I. 2us-664.999us Avg: 161.138us Max: 671us After: MarkConcurrentRoots: Sum: 54.806ms 99% C.I. 2us-499.986us Avg: 136.333us Max: 602us Bug: 19264997 Change-Id: I0a71ebb5928f205b9b3f7945b25db6489d5657ca
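
A hedged sketch of "buffered root marking" (the interface shown is illustrative, not the actual ART API): root slots are accumulated into a small array and handed to the visitor in batches, so the virtual-dispatch cost is paid once per batch rather than once per root.

    #include <array>
    #include <cstddef>

    struct Object;

    // Visitor that receives an array of root slots per call.
    class RootVisitor {
     public:
      virtual ~RootVisitor() {}
      virtual void VisitRoots(Object*** roots, size_t count) = 0;
    };

    // Buffers root slots and flushes them to the visitor in batches.
    class BufferedRootVisitor {
     public:
      explicit BufferedRootVisitor(RootVisitor* visitor) : visitor_(visitor) {}
      ~BufferedRootVisitor() { Flush(); }

      void VisitRoot(Object** root) {
        buffer_[count_++] = root;
        if (count_ == buffer_.size()) {
          Flush();
        }
      }

      void Flush() {
        if (count_ != 0) {
          visitor_->VisitRoots(buffer_.data(), count_);
          count_ = 0;
        }
      }

     private:
      RootVisitor* visitor_;
      std::array<Object**, 64> buffer_{};   // Batch size chosen arbitrarily here.
      size_t count_ = 0;
    };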
|
e15ea086439b41a805d164d2beb07b4ba96aaa97 |
|
10-Feb-2015 |
Hiroshi Yamauchi <yamauchi@google.com> |
Reserve bits in the lock word for read barriers. This prepares for the CC collector to use the standard object header model by storing the read barrier state in the lock word. Bug: 19355854 Bug: 12687968 Change-Id: Ia7585662dd2cebf0479a3e74f734afe5059fb70f
|
cb535da36915f9d10bec3880b46f1de1f7a69f22 |
|
23-Jan-2015 |
Mathieu Chartier <mathieuc@google.com> |
Change AtomicStack to use StackReference Previously used Object*, using StackReference saves memory on 64 bit devices. Bug: 12935052 Bug: 17643507 Change-Id: I035878690054eeeb24d655a900b8f26c837703ff
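
A hedged sketch of why StackReference saves memory on 64-bit targets (the class shown is illustrative and assumes heap addresses fit in 32 bits): each mark-stack slot holds a 4-byte compressed reference instead of an 8-byte Object*.

    #include <cstdint>

    struct Object;

    // Illustrative compressed stack slot; assumes the heap is 32-bit addressable.
    template <typename T>
    class StackReference {
     public:
      void Assign(T* ptr) {
        ref_ = static_cast<uint32_t>(reinterpret_cast<uintptr_t>(ptr));
      }
      T* AsPtr() const {
        return reinterpret_cast<T*>(static_cast<uintptr_t>(ref_));
      }
     private:
      uint32_t ref_ = 0;  // 4 bytes per slot instead of 8 on a 64-bit device.
    };

    static_assert(sizeof(StackReference<Object>) == 4,
                  "half the size of a raw Object* on 64-bit targets");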
|
6c08a453c7fac58388bcf4cd521b4075ef5840d9 |
|
24-Jan-2015 |
Andreas Gampe <agampe@google.com> |
ART: Fix new[] / delete mismatch The type of the unique_ptr in MarkQueue should be an array type, as it is holding an array, actually. Change-Id: If1d05a1d52cd58a373f240f7156fc69b70324298
|
2cd334ae2d4287216523882f0d298cf3901b7ab1 |
|
09-Jan-2015 |
Hiroshi Yamauchi <yamauchi@google.com> |
More of the concurrent copying collector. Bug: 12687968 Change-Id: I62f70274d47df6d6cab714df95c518b750ce3105
|
6a3c1fcb4ba42ad4d5d142c17a3712a6ddd3866f |
|
31-Oct-2014 |
Ian Rogers <irogers@google.com> |
Remove -Wno-unused-parameter and -Wno-sign-promo from base cflags. Fix associated errors about unused parameters and implicit sign conversions. For sign conversion this was largely in the area of enums, so add ostream operators for the affected enums and fix tools/generate-operator-out.py. Tidy arena allocation code and arena allocated data types, rather than fixing new and delete operators. Remove dead code. Change-Id: I5b433e722d2f75baacfacae4d32aef4a828bfe1b
|
6f365cc033654a5a3b45eaa1379d4b5f156b0cee |
|
23-Apr-2014 |
Mathieu Chartier <mathieuc@google.com> |
Enable concurrent sweeping for non-concurrent GC. Refactored the GarbageCollector to let all of the phases be run by the collector's RunPhases virtual method. This lets the GC decide which phases should be concurrent and reduces how much baked in GC logic resides in GarbageCollector. Enabled concurrent sweeping in the semi space and non concurrent mark sweep GCs. Changed the semi-space collector to have a swap semi spaces boolean which can be changed with a setter. Fixed tests to pass with GSS collector, there was an error related to the large object space limit. Before (EvaluateAndApplyChanges): GSS paused GC time 7.81s/7.81s, score: 3920 After (EvaluateAndApplyChanges): GSS paused GC time 6.94s/7.71s, score: 3900 Benchmark score doesn't go up since the GC happens in the allocating thread. There is a slight reduction in pause times experienced by other threads (0.8s total). Added options for pre sweeping GC heap verification and pre sweeping rosalloc verification. Bug: 14226004 Bug: 14250892 Bug: 14386356 Change-Id: Ib557d0590c1ed82a639d0f0281ba67cf8cae938c
|
d5307ec41c8344be0c32273ec4f574064036187d |
|
28-Mar-2014 |
Hiroshi Yamauchi <yamauchi@google.com> |
An empty collector skeleton for a read barrier-based collector. Bug: 12687968 Change-Id: Ic2a3a7b9943ca64e7f60f4d6ed552a316ea4a6f3
|