History log of /art/runtime/gc/heap-inl.h
Revision Date Author Comments
f1c4d0e3a27e9b39916750147ecdea1418fcc231 02-Dec-2014 Mathieu Chartier <mathieuc@google.com> Try normal allocation if large object allocation fails

If a large object allocation fails, we now try the normal allocators.

Bug: 18504942
Change-Id: I18b9759d6af885556941542c57fec584f18197f1
27f5ae830c5418fa92094608a6e9f693ea88bb69 21-Aug-2014 Mathieu Chartier <mathieuc@google.com> Check pending exception result in AllocObjectWithAllocator.

Possible previous bug:
Allocation fails due to OOM and the collector transitions.
This caused us to incorrectly retry the allocation with a pending
exception. We now return null if there is a pending exception.

Bug: 17164348
Change-Id: I22eab472afb2fdea6e800963ccb35ec0755ba0e6
59fe711f88191cd8ca1a386c4fa0d2f9e484af50 14-Jul-2014 Mathieu Chartier <mathieuc@google.com> Fix infinite loop when calling SetStatus after OOM.

There was a problem where we would call SetStatus when we had an OOM
error. This resulted in attempting to find the ExceptionInInitializer
class which, if not loaded, triggers more allocations, resulting in an
infinite loop.

Also some cleanup addressing other comments.

Bug: 16082350

(cherry picked from commit fd22d5bada15d95b5ea8ab5a4dda39077e1a54ee)

Change-Id: Ie291eb0f52ba9c63f24591fae691dd9f393e6ccb
fd22d5bada15d95b5ea8ab5a4dda39077e1a54ee 14-Jul-2014 Mathieu Chartier <mathieuc@google.com> Fix infinite loop when calling SetStatus after OOM.

There was a problem where we would call SetStatus when we had an OOM
error. This resulted in attempting to find the ExceptionInInitializer
class which, if not loaded, triggers more allocations, resulting in an
infinite loop.

Also some cleanup addressing other comments.

Bug: 16082350
Change-Id: I5c1e638a03ddf700ab4e9cad9a3077d2b1b26c43
14cc9be4adc652071979395d337d1380763844fa 11-Jul-2014 Mathieu Chartier <mathieuc@google.com> Faster TLAB allocator.

The new TLAB allocator doesn't increment the bytes-allocated counter until
we allocate a new TLAB. This improves allocation performance by avoiding a CAS.

MemAllocTest:
Before GSS TLAB: 3400ms.
After GSS TLAB: 2750ms.

Bug: 9986565

Change-Id: I1673c27555330ee90d353b98498fa0e67bd57fad
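
For context, a minimal sketch (assumed names, not ART's actual code) of why
a TLAB avoids the CAS: the common path is a plain pointer bump in the
thread's buffer, and the shared bytes-allocated counter is only updated once
per TLAB:

  #include <atomic>
  #include <cstddef>
  #include <cstdint>

  // Hypothetical sketch of TLAB-style allocation; names are illustrative.
  struct Tlab {
    uint8_t* cur = nullptr;
    uint8_t* end = nullptr;
  };

  std::atomic<size_t> g_bytes_allocated{0};  // shared heap counter

  // Assumes bytes <= tlab_size; acquire_tlab carves a buffer out of the heap.
  void* TlabAlloc(Tlab& tlab, size_t bytes, size_t tlab_size,
                  uint8_t* (*acquire_tlab)(size_t)) {
    if (static_cast<size_t>(tlab.end - tlab.cur) < bytes) {
      // Slow path: charge a whole new TLAB to the shared counter once,
      // instead of doing an atomic update per object.
      uint8_t* buf = acquire_tlab(tlab_size);
      if (buf == nullptr) {
        return nullptr;
      }
      g_bytes_allocated.fetch_add(tlab_size, std::memory_order_relaxed);
      tlab.cur = buf;
      tlab.end = buf + tlab_size;
    }
    void* result = tlab.cur;  // fast path: plain pointer bump, no CAS
    tlab.cur += bytes;
    return result;
  }
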
ffddfdf6fec0b9d98a692e27242eecb15af5ead2 03-Jun-2014 Tim Murray <timmurray@google.com> DO NOT MERGE

Merge ART from AOSP to lmp-preview-dev.

Change-Id: I0f578733a4b8756fd780d4a052ad69b746f687a9
c179016fe188bef09487e777aa0fd861f5cdf067 23-May-2014 Mathieu Chartier <mathieuc@google.com> Add reserve area to allocation stacks.

This fixes an issue with heap verification that occurred when
the allocation stack overflowed. The overflow resulted in heap verification
failures since we were storing the newly allocated object in a
handle scope without it being live in either the live bitmap
or the allocation stack. We now push the object into the reserve area
before we do a GC due to allocation stack overflow.

Change-Id: I83b42c4b3250d7eaab1b49e53066e21c8656a740
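
As an illustration (assumed interface, not ART's AtomicStack), the reserve
area is simply extra capacity past the normal growth limit that only the
overflow path may use:

  #include <atomic>
  #include <cstddef>

  // Illustrative sketch only; not the ART AtomicStack interface.
  template <typename T>
  struct AllocStack {
    T** slots = nullptr;
    std::atomic<size_t> back{0};
    size_t growth_limit = 0;  // normal capacity
    size_t capacity = 0;      // growth_limit plus the reserve area

    bool AtomicPush(T* obj, size_t limit) {
      size_t index = back.fetch_add(1, std::memory_order_relaxed);
      if (index >= limit) {
        back.fetch_sub(1, std::memory_order_relaxed);
        return false;
      }
      slots[index] = obj;
      return true;
    }

    // Normal push; returns false when the stack must be flushed by a GC.
    bool Push(T* obj) { return AtomicPush(obj, growth_limit); }
    // Used for the object that triggered the overflow, so it stays visible
    // to heap verification until the GC flushes the stack.
    bool PushIntoReserve(T* obj) { return AtomicPush(obj, capacity); }
  };
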
3e5cf305db800b2989ad57b7cde8fb3cc9fa1b9e 21-May-2014 Ian Rogers <irogers@google.com> Begin migration of art::Atomic to std::atomic.

Change-Id: I4858d9cbed95e5ca560956b9dabd976cebe68333
eb8167a4f4d27fce0530f6724ab8032610cd146b 08-May-2014 Mathieu Chartier <mathieuc@google.com> Add Handle/HandleScope and delete SirtRef.

Deleted SirtRef and replaced it with Handle. Handles are value types
which wrap around a StackReference*.

Renamed StackIndirectReferenceTable to HandleScope.

Added a scoped handle wrapper which wraps around an Object** and
restores it in its destructor.

Renamed Handle::get -> Get.

Bug: 8473721

Change-Id: Idbfebd4f35af629f0f43931b7c5184b334822c7a
4cd662e54440f76fc920cb2c67acab3bba8b33dd 04-Apr-2014 Hiroshi Yamauchi <yamauchi@google.com> Fix Object::Clone()'s pre-fence barrier.

Pass in a pre-fence barrier object that sets the array length,
instead of setting it after returning from AllocObject().

Fix another potential bug due to the wrong default pre-fence barrier
parameter value. Since this appears error-prone, removed the default
parameter value and made it an explicit parameter.

Fix another potential moving GC bug due to a lack of a SirtRef.

Bug: 13097759
Change-Id: I466aa0e50f9e1a5dbf20be5a195edee619c7514e
624468cd401cc1ac0dd70c746301e0788a597759 01-Apr-2014 Hiroshi Yamauchi <yamauchi@google.com> Make the support code for read barriers a bit more general.

Add an option for Baker in addition to Brooks.

Bug: 12687968
Change-Id: I8a31db817ff6686c72951b6534f588228e270b11
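
A rough sketch of the two barrier styles, with made-up field and function
names, for readers unfamiliar with the distinction:

  // Illustrative pseudo-structures; the field and function names are
  // assumptions, not ART's mirror::Object layout.
  struct Obj {
    Obj* brooks_forwarding;  // Brooks: always points to the current copy
    bool is_gray;            // Baker: does the collector still need to process it?
  };

  // Brooks-style: every reference load indirects through the forwarding pointer.
  inline Obj* BrooksReadBarrier(Obj* ref) {
    return ref == nullptr ? nullptr : ref->brooks_forwarding;
  }

  // Baker-style: only "gray" objects take a slow path that marks/copies them.
  inline Obj* BakerReadBarrier(Obj* ref, Obj* (*slow_path)(Obj*)) {
    if (ref != nullptr && ref->is_gray) {
      ref = slow_path(ref);  // e.g. copy or mark, then return the new address
    }
    return ref;
  }
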
3e41780cb3bcade3b724908e00443a9caf6977ef 20-Mar-2014 Hiroshi Yamauchi <yamauchi@google.com> Refactor the garbage collector driver (GarbageCollector::Run).

Bug: 12687968

Change-Id: Ifc9ee86249f7938f51495ea1498cf0f7853a27e8
38e68e9978236db87c9008bbe47db80525d2fa16 07-Mar-2014 Hiroshi Yamauchi <yamauchi@google.com> Use the card table to speed up the GSS collector.

Scan only dirty cards, as opposed to the whole space, to find
references from the non-moving spaces to the bump pointer spaces during
bump-pointer-space-only collections.

With this change, the Ritz MemAllocTest speeds up by 8-10% on host and
2-3% on N4. The Ritz EvaluateFibonacci speeds up by 8% and its average
pause time is reduced by 43% on N4.

Bug: 11650816
Change-Id: I1eefe75776bc37e24673b301ffa65a25f9bd4cde
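
For illustration, a minimal dirty-card scan under assumed constants (the
card size and dirty value here are assumptions, not taken from the ART
sources):

  #include <cstddef>
  #include <cstdint>

  // Illustrative card-table scan.
  constexpr size_t kCardShift = 7;      // 128-byte cards (assumed)
  constexpr uint8_t kCardDirty = 0x70;  // assumed dirty value

  // Visit only the regions covered by dirty cards instead of the whole space.
  void ScanDirtyCards(const uint8_t* card_table, uint8_t* space_begin,
                      uint8_t* space_end,
                      void (*visit_card)(uint8_t* begin, size_t size)) {
    size_t num_cards =
        static_cast<size_t>(space_end - space_begin) >> kCardShift;
    for (size_t i = 0; i < num_cards; ++i) {
      if (card_table[i] == kCardDirty) {
        visit_card(space_begin + (i << kCardShift), size_t{1} << kCardShift);
      }
    }
  }
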
5ccd498d4aa450b0381344724b072a932709a59a 11-Mar-2014 Hiroshi Yamauchi <yamauchi@google.com> Put the post zygote non-moving space next to the malloc space.

This change fixes the following problem with the GSS collector: if the
post-zygote non-moving space happens to be located after the bump
pointer spaces, the bump pointer space is wrongly treated as an
immune space. The reasons are that 1) the immune space is represented by a
simple address range, and 2) the GSS collector treats the post-zygote
non-moving space as an immune space during a bump-pointer-space-only
collection.

In addition, this change ensures that all the non-moving
spaces are adjacent, which we tend to assume in the code (e.g. the
notion of the immune space represented by a simple address range).
This should help avoid potential confusion in the future.

Fix a DCHECK failure where usable_size isn't set properly when
-XX:UseTLAB is used.

Change-Id: I585920e433744a390f87e9a25bef6114b2a4623f
c645f1ddb7c40bea6a38eda4b3f83f6b6dec405b 07-Mar-2014 Mathieu Chartier <mathieuc@google.com> Add more VerifyObject calls.

Added verify object calls to SirtRef, IndirectReferenceTable,
ReferenceTable.

Removed an unneeded verify-object call in ScopedObjectAccess / DecodeJObject
since object sources are handled.

Bug: 12934910
Change-Id: I55a46a8ea61fed2a77526eda27fd2cce97a9b125
f517f1a994fab72ba484bbbac6911e315f59f6cd 07-Mar-2014 Mathieu Chartier <mathieuc@google.com> Restore obj after RequestConcurrentGC.

RequestConcurrentGC can cause thread suspension, which means that
another thread could transition the heap or cause a moving GC.

Bug: 12934910

Change-Id: I5c07161e2e849d7acbdf939f1c24e1ba361a1d6a
bd0a65339a08dc28c6b56d2673f1f13b6bddd7aa 27-Feb-2014 Mathieu Chartier <mathieuc@google.com> Enable large object space for command line runs.

Added a dynamic large_object_threshold_ variable which is set to the
maximum integer value when the large object space is disabled.

No longer clear timing logger timings after zygote creation since
it is not required and removes the boot semispace timings.

Change-Id: I693865f4699cc32381199377239854c6ec42f37e
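
A small sketch of the trick, with assumed values: an effectively infinite
threshold disables the large object path without adding an extra branch to
the allocation fast path:

  #include <cstddef>
  #include <limits>

  // Illustrative; the threshold value is an assumption, not ART's default.
  bool los_enabled = true;
  size_t large_object_threshold =
      los_enabled ? 3 * 4096 : std::numeric_limits<size_t>::max();

  // A single comparison covers both "object too small for LOS" and
  // "LOS disabled" (no real allocation reaches SIZE_MAX).
  bool ShouldAllocLargeObject(size_t byte_count) {
    return byte_count >= large_object_threshold;
  }
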
a55cf41c9d1da7ee8b2f63974dedfb484042dd03 27-Feb-2014 Ian Rogers <irogers@google.com> Ensure usable space data is zeroed in arrays.

Address other outstanding review comments.

Change-Id: Iaffe04de080772a0d0c5fd973bcac0e23c8c3e25
6fac447555dc94a935b78198479cce645c837b89 26-Feb-2014 Ian Rogers <irogers@google.com> Make allocations report usable size.

Work-in-progress to allow arrays to fill usable size. Bug: 13028925.
Use C++11's override keyword on GCC >= 4.7 to ensure that we override GC and
allocator methods.
Move the initial mirror::Class setup into a Functor so that all allocated
objects have non-zero sizes. Use this property to assert that objects are
never larger than their usable size.
Other bits of GC-related clean-up: missing initialization, missing use of
const, hot methods in .cc files, "unimplemented" functions that fail at
runtime in header files, reducing header file includes, moving valgrind's
space into its own files, and reducing the number of array allocation
routines.

Change-Id: Id5760041a2d7f94dcaf17ec760f6095ec75dadaa
9d04a20bde1b1855cefc64aebc1a44e253b1a13b 31-Jan-2014 Hiroshi Yamauchi <yamauchi@google.com> (Experimental) Add Brooks pointers.

This feature is disabled by default.

Verified that the Brooks pointers are installed correctly by using the
CMS/SS collectors.

Change-Id: Ia9be9814ab6e29169ac85edc4792ce8c81d552a9
4e30541a92381fb280cd0be9a1763b713ee4d64c 19-Feb-2014 Mathieu Chartier <mathieuc@google.com> Fix and optimize verify object.

VerifyObject no longer resides in heap. You can now enable
VerifyObject for non-debug builds. VerifyStack is still slow, so it
is now guarded by its own flag.

Fixed the image writer to not use verification at places where
verification fails due to invalid reads.

Fixed RosAlloc to use SizeOf which doesn't call verify object.

Added a flag parameter to some of the mirror getters / setters to
be able to selectively disable VerifyObject on certain calls.

Optimized the GC to not verify each object multiple times during
object scanning if verify object is enabled.

Added 3 verification options: verify reads, verify this, and verify
writes so that you can select how much verification you want for
mirror getters and setters.

Removed some useless DCHECKs which would slow debug builds without
providing any benefits.

TODO: RosAlloc verification doesn't currently work with verify
objects.

Bug: 12934910
Bug: 12879358

Change-Id: Ic61033104dfc334543f89b0fc0ad8cd4f4015d69
f5b0e20b5b31f5f5465784adcf2a204dcd69c7fd 12-Feb-2014 Hiroshi Yamauchi <yamauchi@google.com> Thread-local allocation stack.

With this change, Ritz MemAllocTest gets ~14% faster on N4.

Bug: 9986565
Change-Id: I2fb7d6f7c5daa63dd4fc73ba739e6ae4ed820617
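
A minimal sketch of the idea, with assumed names: each thread pushes into
its own segment without atomics and only takes a slow path when the segment
is exhausted:

  #include <cstddef>

  // Sketch of a per-thread segment of the allocation stack; illustrative only.
  struct ThreadLocalAllocStack {
    void** start = nullptr;
    void** pos = nullptr;
    void** end = nullptr;

    // Fast path: a plain, non-atomic push into this thread's own segment,
    // avoiding the contended atomic push on the shared stack.
    bool Push(void* obj) {
      if (pos == end) {
        return false;  // segment exhausted: grab a new segment (slow path)
      }
      *pos++ = obj;
      return true;
    }
  };
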
e6da9af8dfe0a3e3fbc2be700554f6478380e7b9 16-Dec-2013 Mathieu Chartier <mathieuc@google.com> Background compaction support.

When the process state changes to a state which does not perceive
jank, we copy from the main free-list-backed allocation space to
the bump pointer space and enable the semispace allocator.

When we transition back to foreground, we copy back to a free-list
backed space.

Create a separate non-moving space which only holds non-movable
objects. This enables us to quickly wipe the current alloc space
(DlMalloc / RosAlloc) when we transition to background.

Added multiple alloc space support to the sticky mark sweep GC.

Added a -XX:BackgroundGC option which lets you specify
which GC to use for background apps. Passing in
-XX:BackgroundGC=SS makes the heap compact for apps which
do not perceive jank.

Results:
Simple background foreground test:
0. Reboot phone, unlock.
1. Open browser, click on home.
2. Open calculator, click on home.
3. Open calendar, click on home.
4. Open camera, click on home.
5. Open clock, click on home.
6. adb shell dumpsys meminfo

PSS Normal ART:
Sample 1:
88468 kB: Dalvik
3188 kB: Dalvik Other
Sample 2:
81125 kB: Dalvik
3080 kB: Dalvik Other

PSS Dalvik:
Total PSS by category:
Sample 1:
81033 kB: Dalvik
27787 kB: Dalvik Other
Sample 2:
81901 kB: Dalvik
28869 kB: Dalvik Other

PSS ART + Background Compaction:
Sample 1:
71014 kB: Dalvik
1412 kB: Dalvik Other
Sample 2:
73859 kB: Dalvik
1400 kB: Dalvik Other

The Dalvik Other reduction can be explained by shallower allocation
stacks, fewer live bitmaps, and fewer dirty cards.

TODO improvements: Recycle mem-maps which are unused in the current
state. Do not hardcode the 64 MB capacity of the non-movable space (avoid
returning linear alloc nightmares). Figure out ways to deal with low
virtual address memory problems.

Bug: 8981901

Change-Id: Ib235d03f45548ffc08a06b8ae57bf5bada49d6f3
b122a4bbed34ab22b4c1541ee25e5cf22f12a926 20-Nov-2013 Ian Rogers <irogers@google.com> Tidy up memory barriers.

Change-Id: I937ea93e6df1835ecfe2d4bb7d84c24fe7fc097b
692fafd9778141fa6ef0048c9569abd7ee0253bf 30-Nov-2013 Mathieu Chartier <mathieuc@google.com> Thread local bump pointer allocator.

Added a thread-local allocator to the heap; each thread has three
pointers which specify the thread-local buffer: start, cur, and
end. When the remaining space in the thread-local buffer isn't large
enough for the allocation, the allocator allocates a new thread-local
buffer using the bump pointer allocator.

The bump pointer space had to be modified to accommodate thread-local
buffers. These buffers are called "blocks", where a block
is a buffer which contains a set of adjacent objects. Blocks
aren't necessarily full and may have wasted memory towards the
end. Blocks have an 8 byte header which specifies their size and is
required for traversing bump pointer spaces.

Memory usage is in between full bump pointer and ROSAlloc since
madvised memory limits wasted ram to an average of 1/2 page per
block.

Added a runtime option -XX:UseTLAB which specifies whether or
not to use the thread-local allocator. It is a no-op if the garbage
collector is not the semispace collector.

TODO: Smarter block accounting to prevent us from reading objects until
we either hit the end of the block or GetClass() == null, which
signifies that the block isn't 100% full. This would provide a
slight speedup to BumpPointerSpace::Walk.

Timings: -XX:HeapMinFree=4m -XX:HeapMaxFree=8m -Xmx48m
ritzperf memalloc:
Dalvik -Xgc:concurrent: 11678
Dalvik -Xgc:noconcurrent: 6697
-Xgc:MS: 5978
-Xgc:SS: 4271
-Xgc:CMS: 4150
-Xgc:SS -XX:UseTLAB: 3255

Bug: 9986565
Bug: 12042213

Change-Id: Ib7e1d4b199a8199f3b1de94b0a7b6e1730689cad
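
A sketch of how a block header makes the space walkable, using an assumed
layout and assumed callbacks rather than ART's actual ones:

  #include <cstddef>
  #include <cstdint>

  // Illustrative block layout for walking a bump pointer space that
  // contains thread-local buffers.
  struct BlockHeader {
    uint64_t block_size;  // bytes of object data following this header
  };

  void WalkBlocks(uint8_t* begin, uint8_t* end,
                  bool (*has_class)(const uint8_t*),      // assumed callbacks
                  size_t (*object_size)(const uint8_t*),
                  void (*visit)(uint8_t*)) {
    uint8_t* pos = begin;
    while (pos < end) {
      const BlockHeader* header = reinterpret_cast<const BlockHeader*>(pos);
      uint8_t* obj = pos + sizeof(BlockHeader);
      uint8_t* block_end = obj + header->block_size;
      // Objects in a block are adjacent; a null class marks the unused tail.
      while (obj < block_end && has_class(obj)) {
        visit(obj);
        obj += object_size(obj);
      }
      pos = block_end;  // skip any wasted space at the end of the block
    }
  }
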
c528dba35b5faece51ca658fc008b688f8b690ad 26-Nov-2013 Mathieu Chartier <mathieuc@google.com> Enable moving classes.

Slight reduction in Zygote size; memory savings are in the noise.
Before: Zygote size: 8739224
After: Zygote size: 8733568

Fixed a bug where we didn't set the concurrent start bytes after
switching the allocator from bump pointer to ROSAlloc in the
zygote. This caused excessive memory usage.

Added the method verifiers as roots to fix an issue caused by
RegTypes holding a Class*.

Added logic to clear the card table in the SemiSpace collector; this
reduces DalvikOther from ~2400k -> ~1760k when using the SemiSpace
collector.

Added a missing lock to the timing loggers; the missing lock caused a
rare one-time crash in std::set.

Bug: 11771255
Bug: 8499494
Bug: 10802951

Change-Id: I99d2b528cd51c1c5ed7012e3220b3aefded680ae
7bf82af01ec250a4ed2cee03a0e51d179fa820f9 07-Dec-2013 Mathieu Chartier <mathieuc@google.com> Fix memory usage regression and clean up collector changing code.

Memory usage regressed since we didn't properly update
concurrent_start_bytes_ when changing collectors.

Bug: 12034247

Change-Id: I1c69e71cd2919e0d3bf75485a4ac0b0aeca59278
e4e23c0b0cfebd01f17732b6686a74f9e437e978 06-Dec-2013 Hiroshi Yamauchi <yamauchi@google.com> Fix valgrind-test-art-host-gtest-object_test.

This had been failing with the error message:

object_test F 2360 2360 art/runtime/gc/heap-inl.h:139] Check failed: !running_on_valgrind_

Then, this failing DCHECK was removed in a refactoring in
cbb2d20bea2861f244da2e2318d8c088300a3710, I believe.

This change adds back a DCHECK that's equivalent to the old one and
fixes ObjectTest.CheckAndAllocArrayFromCode by replacing a
CheckAndAllocArrayFromCode call with a
CheckAndAllocArrayFromCodeInstrumented call.

Bug: 11670287
Change-Id: Ica2f707b9a9ff48ef973b9e326a4d9786c1781f8
95a659fca9cb89d5cf592ad5808d38f0117a583e 22-Nov-2013 Hiroshi Yamauchi <yamauchi@google.com> Fix a libartd.so boot crash when kMovingCollector is true.

Bug: 11828452
Change-Id: I208a99d7ee7cb6af37046387e97156e3b240cda6
1febddf359ae500ef1bb01ab4883b076fcb56440 20-Nov-2013 Mathieu Chartier <mathieuc@google.com> Set array length before fence in allocation code path.

Could not delete SetLength since it is required by
space_test.

Bug: 11747779

Change-Id: Icf1ead216b6ff1b519240ab0d0ca30d68429d5b6
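
A minimal sketch of the ordering requirement, assuming a simple publication
pattern rather than ART's actual allocation path: the array length must be
written before the release fence that publishes the object, or a concurrent
reader (e.g. the GC) could observe a zero length.

  #include <atomic>
  #include <cstdint>

  // Illustrative types and names.
  struct Array {
    int32_t length;
    // element data follows
  };

  std::atomic<Array*> g_published{nullptr};

  void InitAndPublish(Array* storage, int32_t len) {
    storage->length = len;  // pre-fence: visible before the object escapes
    std::atomic_thread_fence(std::memory_order_release);
    g_published.store(storage, std::memory_order_relaxed);
  }
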
cbb2d20bea2861f244da2e2318d8c088300a3710 15-Nov-2013 Mathieu Chartier <mathieuc@google.com> Refactor allocation entrypoints.

Adds support for switching entrypoints at runtime. Enables
the addition of new allocators without requiring significant copy-paste.
Slight speedup on ritzperf, probably due to more inlining.

TODO: Ensure that the entire allocation path is inlined so
that the switch statement in the allocation code is optimized
out.

Rosalloc measurements:
4583
4453
4439
4434
4751

After change:
4184
4287
4131
4335
4097

Change-Id: I1352a3cbcdf6dae93921582726324d91312df5c9
cf58d4adf461eb9b8e84baa8019054c88cd8acc6 26-Sep-2013 Hiroshi Yamauchi <yamauchi@google.com> A custom 'runs-of-slots' memory allocator.

Bug: 9986565
Change-Id: I0eb73b9458752113f519483616536d219d5f798b
590fee9e8972f872301c2d16a575d579ee564bee 13-Sep-2013 Mathieu Chartier <mathieuc@google.com> Compacting collector.

The compacting collector is currently similar to semispace. It works by
copying objects back and forth between two bump pointer spaces. There
are types of objects which are "non-movable" due to current runtime
limitations. These are Classes, Methods, and Fields.

Bump pointer spaces are a new type of continuous alloc space which has
no lock in the allocation code path. When you allocate from one, it uses
atomic operations to increase an index. Traversing the objects in the bump
pointer space relies on Object::SizeOf matching the allocated size exactly.

Runtime changes:
JNI::GetArrayElements returns copies of objects if you attempt to get the
backing data of a movable array. For GetArrayElementsCritical, we return
direct backing storage for any type of array, but temporarily disable
the GC until the critical region is completed.

Added a new runtime call called VisitObjects; this is used in place of
the old pattern of flushing the allocation stack and walking
the bitmaps.

Changed image writer to be compaction safe and use object monitor word
for forwarding addresses.

Added a bunch of SIRTs to ClassLinker, MethodLinker, etc.

TODO: Enable switching allocators, compacting on background, etc..

Bug: 8981901

Change-Id: I3c886fd322a6eef2b99388d19a765042ec26ab99
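
As a rough illustration (assumed names, not the ART BumpPointerSpace code),
lock-free allocation in a bump pointer space amounts to an atomic
compare-and-swap on the current end pointer:

  #include <atomic>
  #include <cstddef>
  #include <cstdint>

  // Illustrative lock-free bump pointer allocation.
  struct BumpPointerSpace {
    uint8_t* limit = nullptr;
    std::atomic<uint8_t*> end_{nullptr};  // current allocation position

    void* Alloc(size_t num_bytes) {
      uint8_t* old_end = end_.load(std::memory_order_relaxed);
      uint8_t* new_end = nullptr;
      do {
        new_end = old_end + num_bytes;
        if (new_end > limit) {
          return nullptr;  // space exhausted; caller falls back or triggers GC
        }
      } while (!end_.compare_exchange_weak(old_end, new_end,
                                           std::memory_order_relaxed));
      return old_end;  // no lock: only an atomic compare-and-swap
    }
  };
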
dfb325e0ddd746cd8f7c2e3723b3a573eb7cc111 30-Oct-2013 Ian Rogers <irogers@google.com> Don't use UTF16 length as length for MUTF8.

Bug 11367555.

Change-Id: Ia0b07072a1a49d435c3b71ed9a668b316b7ff5d8
3b4c18933c24b8a33f38573c2ebcdb9aa16efeb5 13-Sep-2013 Hiroshi Yamauchi <yamauchi@google.com> Split the allocation path into 'instrumented' and 'uninstrumented'
ones.

The instrumented path is equivalent to the existing allocation path
that checks for three instrumentation mechanisms (the debugger
allocation tracking, the runtime allocation stats collection, and
valgrind) for every allocation. The uninstrumented path does not
perform these checks. We use the uninstrumented path by default and
enable the instrumented path only when any of the three mechanisms is
enabled. The uninstrumented version of Heap::AllocObject() is inlined.

This change improves the Ritz MemAllocTest by ~4% on Nexus 4 and ~3%
on Host/x86.

Bug: 9986565
Change-Id: I3e68dfff6789d77bbdcea98457b694e1b5fcef5f
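
A minimal sketch of the general shape, under assumed names: the
instrumentation checks are compiled into one instantiation and compiled out
of the other, and the runtime picks which one callers reach.

  #include <cstddef>
  #include <cstdlib>

  // Illustrative only; the names and structure are assumptions, not ART's
  // actual entrypoint machinery.
  bool g_debugger_tracking = false;
  bool g_alloc_stats = false;
  bool g_running_on_valgrind = false;
  size_t g_total_allocated = 0;

  template <bool kInstrumented>
  void* AllocObject(size_t bytes) {
    void* obj = std::malloc(bytes);  // stand-in for the real fast path
    if (kInstrumented) {             // compiled out in the <false> instantiation
      if (g_alloc_stats) {
        g_total_allocated += bytes;
      }
      // Debugger allocation tracking and valgrind hooks would go here.
    }
    return obj;
  }

  // The instrumented entrypoint is installed only when one of the three
  // mechanisms is enabled; everyone else takes the check-free path.
  using AllocFn = void* (*)(size_t);
  AllocFn SelectAllocEntrypoint() {
    bool instrumented =
        g_debugger_tracking || g_alloc_stats || g_running_on_valgrind;
    return instrumented ? &AllocObject<true> : &AllocObject<false>;
  }
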