History log of /art/runtime/gc/space/bump_pointer_space-inl.h
Revision Date Author Comments
4557b3858a66aa20e42bce937e1f0620aad880a2 03-Jan-2018 Orion Hodson <oth@google.com> ART: Rename Atomic::CompareExchange methods

Renames Atomic::CompareExchange methods to Atomic::CompareAndSet
equivalents. These methods return a boolean and do not get the witness
value. This makes space for Atomic::CompareAndExchange methods in a
later commit that will return a boolean and get the witness value.

This is pre-work for VarHandle accessors which require both forms.

Bug: 65872996
Test: art/test.py --host -j32
Change-Id: I9c691250e5556cbfde7811381b06d2920247f1a1
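
As a sketch of the two shapes this rename separates, written over plain
std::atomic (the wrapper names and signatures here are illustrative
assumptions, not the actual art::Atomic API):

    #include <atomic>

    // CompareAndSet shape: reports only success or failure; the witness
    // value (what memory actually held) is discarded.
    template <typename T>
    bool CompareAndSetSketch(std::atomic<T>& target, T expected, T desired) {
      return target.compare_exchange_strong(expected, desired);
    }

    // CompareAndExchange shape: reports success and hands the witness value
    // back through 'expected', which compare_exchange_strong updates when
    // the comparison fails.
    template <typename T>
    bool CompareAndExchangeSketch(std::atomic<T>& target, T* expected,
                                  T desired) {
      return target.compare_exchange_strong(*expected, desired);
    }
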
d49012909625c3bf87bf51138fe79315ce1b1bdc 31-May-2017 Andreas Gampe <agampe@google.com> ART: Clean up heap headers

Use more forward declarations for accounting structures and spaces.
Factor out structs to reduce header surface. Remove heap include where
unnecessary. Fix up transitive users. Move some debug-only code out
of line.

Test: m test-art-host
Change-Id: I16db4aaa803f39e155ce6e1b0778b7e393dcbb17
bdf7f1c3ab65ccb70f62db5ab31dba060632d458 31-Aug-2016 Andreas Gampe <agampe@google.com> ART: SHARED_REQUIRES to REQUIRES_SHARED

This coincides with the actual attribute name and upstream usage.
Preparation for deferring to libbase.

Test: m
Test: m test-art-host
Change-Id: Ia8986b5dfd926ba772bf00b0a35eaf83596d8518
90443477f9a0061581c420775ce3b7eeae7468bc 17-Jul-2015 Mathieu Chartier <mathieuc@google.com> Move to newer clang annotations

Also enable -Wthread-safety-negative.

Changes:
Switch to capabilities and negative capabilities.

Future work:
Use capabilities to implement uninterruptible annotations to work
with AssertNoThreadSuspension.

Bug: 20072211

Change-Id: I42fcbe0300d98a831c89d1eff3ecd5a7e99ebf33
14d90579f013b374638b599361970557ed4b3f09 16-Jul-2015 Roland Levillain <rpl@google.com> Use (D)CHECK_ALIGNED more.

Change-Id: I9d740f6a88d01e028d4ddc3e4e62b0a73ea050af
41b175aba41c9365a1c53b8a1afbd17129c87c14 19-May-2015 Vladimir Marko <vmarko@google.com> ART: Clean up arm64 kNumberOfXRegisters usage.

Avoid undefined behavior for arm64 stemming from 1u << 32 in
loops with upper bound kNumberOfXRegisters.

Create iterators for enumerating bits in an integer either
from high to low or from low to high and use them for
<arch>Context::FillCalleeSaves() on all architectures.

Refactor runtime/utils.{h,cc} by moving all bit-fiddling
functions to runtime/base/bit_utils.{h,cc} (together with
the new bit iterators) and all time-related functions to
runtime/base/time_utils.{h,cc}. Improve test coverage and
fix some corner cases for the bit-fiddling functions.

Bug: 13925192

(cherry picked from commit 80afd02024d20e60b197d3adfbb43cc303cf29e0)

Change-Id: I905257a21de90b5860ebe1e39563758f721eab82
80afd02024d20e60b197d3adfbb43cc303cf29e0 19-May-2015 Vladimir Marko <vmarko@google.com> ART: Clean up arm64 kNumberOfXRegisters usage.

Avoid undefined behavior for arm64 stemming from 1u << 32 in
loops with upper bound kNumberOfXRegisters.

Create iterators for enumerating bits in an integer either
from high to low or from low to high and use them for
<arch>Context::FillCalleeSaves() on all architectures.

Refactor runtime/utils.{h,cc} by moving all bit-fiddling
functions to runtime/base/bit_utils.{h,cc} (together with
the new bit iterators) and all time-related functions to
runtime/base/time_utils.{h,cc}. Improve test coverage and
fix some corner cases for the bit-fiddling functions.

Bug: 13925192
Change-Id: I704884dab15b41ecf7a1c47d397ab1c3fc7ee0f7
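
For the two entries above, a sketch of a low-to-high bit enumerator in the
same spirit (illustrative only, not the runtime/base/bit_utils.h API); the
point is that clearing the lowest set bit never shifts by the operand width,
so the 1u << 32 undefined behavior cannot arise:

    #include <cstdint>

    template <typename Visitor>
    void ForEachBitLowToHigh(uint32_t bits, Visitor&& visit) {
      while (bits != 0u) {
        visit(__builtin_ctz(bits));  // index of the lowest set bit
        bits &= bits - 1u;           // clear that bit; no 1u << 32 needed
      }
    }

A FillCalleeSaves-style caller would pass the callee-save register mask and
handle one register index per visit.
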
4460a84be92b5a94ecfb5c650aef4945ab849c93 09-Mar-2015 Hiroshi Yamauchi <yamauchi@google.com> Rosalloc thread local allocation path without a cas.

Speedup on N4:
MemAllocTest 3044 -> 2396 (~21% reduction)
BinaryTrees 4101 -> 2929 (~29% reduction)

Bug: 9986565
Change-Id: Ia1d1a37b9e001f903c3c056e8ec68fc8c623a78b
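
Roughly, the CAS disappears because the fast path pops from a free list that
only the owning thread touches; a sketch under assumed names (Run,
free_list_head), not rosalloc's actual layout:

    struct Run {             // hypothetical thread-local run of equal slots
      void* free_list_head;  // intrusive singly linked free list
    };

    // No CAS needed: the run belongs to the calling thread alone. A nullptr
    // return sends the caller to the locked slow path to refill the run.
    inline void* AllocFromThreadLocalRun(Run* run) {
      void* slot = run->free_list_head;
      if (slot != nullptr) {
        run->free_list_head = *static_cast<void**>(slot);
      }
      return slot;
    }
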
13735955f39b3b304c37d2b2840663c131262c18 08-Oct-2014 Ian Rogers <irogers@google.com> stdint types all the way!

Change-Id: I4e4ef3a2002fc59ebd9097087f150eaf3f2a7e08
be2a1df15a31a5223ee9af3015a00c31d2ad2e10 10-Jul-2014 Ian Rogers <irogers@google.com> Fix GC to use art::Atomic rather than compiler intrinsics.

Changes to SpaceBitmap::AtomicTestAndSet and Space::end_. Space::end_ is made
atomic rather than volatile to fully capture all its uses, whether
multi-threaded or not.

Change-Id: I3058964b8ad90a8c253b3d7f75585f63ca2fb5e3
3e5cf305db800b2989ad57b7cde8fb3cc9fa1b9e 21-May-2014 Ian Rogers <irogers@google.com> Begin migration of art::Atomic to std::atomic.

Change-Id: I4858d9cbed95e5ca560956b9dabd976cebe68333
0651d41e41341fb2e9ef3ee41dc1f1bfc832dbbb 29-Apr-2014 Mathieu Chartier <mathieuc@google.com> Add thread unsafe allocation methods to spaces.

Used by SS/GSS collectors since these run with mutators suspended and
only allocate from a single thread. Added AllocThreadUnsafe to
BumpPointerSpace and RosAllocSpace; the RosAllocSpace variant uses the
current runs as thread local runs for a thread unsafe allocation.
Added code to revoke current runs which have the same index as the
thread local runs.

Changed:
The number of thread local runs in each thread is now the number
of thread local runs in RosAlloc instead of the number of size
brackets.

Total GC time / time on EvaluateAndApplyChanges.
TLAB SS:
Before: 36.7s / 7254
After: 16.1s / 4837

TLAB GSS:
Before: 6.9s / 3973
After: 5.7s / 3778

Bug: 8981901

Change-Id: Id1d264ade3799f431bf7ebbdcca6146aefbeb632
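
The bump pointer half of this is straightforward to picture: with mutators
suspended and a single allocating thread, the usual CAS loop degenerates to
a plain load and store. A sketch with assumed parameters, not the actual
BumpPointerSpace::AllocThreadUnsafe signature:

    #include <cstddef>
    #include <cstdint>

    // Valid only while mutators are suspended and one thread allocates,
    // as in the SS/GSS collectors.
    inline uint8_t* AllocThreadUnsafeSketch(uint8_t** end, uint8_t* limit,
                                            size_t num_bytes) {
      uint8_t* old_end = *end;
      if (old_end + num_bytes > limit) {
        return nullptr;             // space exhausted
      }
      *end = old_end + num_bytes;   // non-atomic bump
      return old_end;
    }
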
6fac447555dc94a935b78198479cce645c837b89 26-Feb-2014 Ian Rogers <irogers@google.com> Make allocations report usable size.

Work-in-progress to allow arrays to fill usable size. Bug: 13028925.
Use C++11's override keyword on GCC >= 4.7 to ensure that we override GC and
allocator methods.
Move initial mirror::Class set up into a Functor so that all allocated objects
have non-zero sizes. Use this property to assert that all objects are never
larger than their usable size.
Other bits of GC-related clean-up: missing initialization, missing use of
const, hot methods in .cc files, "unimplemented" functions that fail at
runtime in header files, reduced header file includes, valgrind's space
moved into its own files, and a reduced number of array allocation routines.

Change-Id: Id5760041a2d7f94dcaf17ec760f6095ec75dadaa
55b2764c23776a7eba91d6514a4ad2cbc95e0fd0 24-Jan-2014 Ian Rogers <irogers@google.com> 64bit friendly GC CAS operations.

Change-Id: I54dfa5fdaf0d057ef74c84efb10a609e01923c9e
b122a4bbed34ab22b4c1541ee25e5cf22f12a926 20-Nov-2013 Ian Rogers <irogers@google.com> Tidy up memory barriers.

Change-Id: I937ea93e6df1835ecfe2d4bb7d84c24fe7fc097b
692fafd9778141fa6ef0048c9569abd7ee0253bf 30-Nov-2013 Mathieu Chartier <mathieuc@google.com> Thread local bump pointer allocator.

Added a thread local allocator to the heap, each thread has three
pointers which specify the thread local buffer: start, cur, and
end. When the remaining space in the thread local buffer isn't large
enough for the allocation, the allocator allocates a new thread
local buffer using the bump pointer allocator.

The bump pointer space had to be modified to accommodate thread
local buffers. These buffers are called "blocks", where a block
is a buffer which contains a set of adjacent objects. Blocks
aren't necessarily full and may have wasted memory towards the
end. Blocks have an 8 byte header which specifies their size and is
required for traversing bump pointer spaces.

Memory usage is in between full bump pointer and ROSAlloc since
madvising unused memory limits wasted RAM to an average of 1/2 page
per block.

Added a runtime option -XX:UseTLAB which specifies whether or
not to use the thread local allocator. It's a NOP if the garbage
collector is not the semispace collector.

TODO: Smarter block accounting so that we do not have to read objects
until we either hit the end of the block or find GetClass() == null,
which signifies that the block isn't 100% full. This would provide a
slight speedup to BumpPointerSpace::Walk.

Timings: -XX:HeapMinFree=4m -XX:HeapMaxFree=8m -Xmx48m
ritzperf memalloc:
Dalvik -Xgc:concurrent: 11678
Dalvik -Xgc:noconcurrent: 6697
-Xgc:MS: 5978
-Xgc:SS: 4271
-Xgc:CMS: 4150
-Xgc:SS -XX:UseTLAB: 3255

Bug: 9986565
Bug: 12042213

Change-Id: Ib7e1d4b199a8199f3b1de94b0a7b6e1730689cad
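
In outline, each thread bumps a private cursor between its buffer's start
and end pointers, and only falls back to the shared atomic path to carve out
a new block; a sketch with assumed names, not the actual Thread or
BumpPointerSpace API:

    #include <cstddef>
    #include <cstdint>

    struct Tlab {        // hypothetical per-thread buffer (start, cur, end)
      uint8_t* start;
      uint8_t* cur;
      uint8_t* end;
    };

    // Fast path: no atomics at all. A nullptr return means the remaining
    // space is too small and a fresh block (with its 8-byte size header)
    // must be allocated from the shared bump pointer space.
    inline uint8_t* TlabAlloc(Tlab* tlab, size_t num_bytes) {
      uint8_t* result = tlab->cur;
      if (result + num_bytes > tlab->end) {
        return nullptr;
      }
      tlab->cur = result + num_bytes;
      return result;
    }
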
590fee9e8972f872301c2d16a575d579ee564bee 13-Sep-2013 Mathieu Chartier <mathieuc@google.com> Compacting collector.

The compacting collector is currently similar to semispace. It works by
copying objects back and forth between two bump pointer spaces. There
are types of objects which are "non-movable" due to current runtime
limitations. These are Classes, Methods, and Fields.

Bump pointer spaces are a new type of continuous alloc space which have
no lock in the allocation code path. Allocating from one of these spaces
uses atomic operations to increase an index. Traversing the objects in the bump
pointer space relies on Object::SizeOf matching the allocated size exactly.

Runtime changes:
JNI::GetArrayElements now returns a copy if you attempt to get the
backing data of a movable array. For GetArrayElementsCritical, we return
direct backing storage for any types of arrays, but temporarily disable
the GC until the critical region is completed.

Added a new runtime call, VisitObjects; it is used in place of
the old pattern of flushing the allocation stack and walking
the bitmaps.

Changed image writer to be compaction safe and use object monitor word
for forwarding addresses.

Added a number of SIRTs to ClassLinker, MethodLinker, etc.

TODO: Enable switching allocators, compacting in the background, etc.

Bug: 8981901

Change-Id: I3c886fd322a6eef2b99388d19a765042ec26ab99
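
The lock-free allocation path described above comes down to a CAS loop on
the space's end pointer; a minimal sketch (names assumed, loosely modeled on
what bump_pointer_space-inl.h does):

    #include <atomic>
    #include <cstddef>
    #include <cstdint>

    // Threads race on the end pointer instead of taking a lock. On failure,
    // compare_exchange_weak reloads old_end with the winning thread's value.
    inline uint8_t* BumpAllocSketch(std::atomic<uint8_t*>& end,
                                    uint8_t* limit, size_t num_bytes) {
      uint8_t* old_end = end.load(std::memory_order_relaxed);
      uint8_t* new_end;
      do {
        new_end = old_end + num_bytes;
        if (new_end > limit) {
          return nullptr;  // space exhausted
        }
      } while (!end.compare_exchange_weak(old_end, new_end,
                                          std::memory_order_relaxed));
      return old_end;      // the new object occupies [old_end, new_end)
    }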