History log of /art/runtime/lock_word.h
Revision Date Author Comments
2ae376f5af8953d3524cd8ed915ebdacf505625c 30-Jan-2018 Roland Levillain <rpl@google.com> Stylistic and aesthetic changes.

Test: art/test.py
Change-Id: Ic41aa80430d16af748994c80f049c5b479fd9980
57943810cfc789da890d73621741729da5feaaf8 07-Dec-2017 Andreas Gampe <agampe@google.com> ART: Replace base/logging with android-base/logging

Replace wherever possible. ART's base/logging is now mainly VLOG
and initialization code that is unnecessary to pull in and makes
changes to verbose logging more painful than they have to be.

Test: m test-art-host
Change-Id: I3e3a4672ba5b621e57590a526c7d1c8b749e4f6e
2ffb703bf431d74326c88266b4ddaf225eb3c6ad 08-Nov-2017 Igor Murashkin <iam@google.com> cpplint: Cleanup errors

Cleanup errors from upstream cpplint in preparation
for moving art's cpplint fork to upstream tip-of-tree cpplint.

Test: cd art && mm
Bug: 68951293
Change-Id: I15faed4594cbcb8399850f8bdee39d42c0c5b956
8cf9cb386cd9286d67e879f1ee501ec00d72a4e1 19-Jul-2017 Andreas Gampe <agampe@google.com> ART: Include cleanup

Let clang-format reorder the header includes.

Derived with:

* .clang-format:
    BasedOnStyle: Google
    IncludeIsMainRegex: '(_test|-inl)?$'

* Steps:
    find . -name '*.cc' -o -name '*.h' | xargs sed -i.bak -e 's/^#include/ #include/' ; git commit -a -m 'ART: Include cleanup'
    git-clang-format -style=file HEAD^
    manual inspection
    git commit -a --amend

Test: mmma art
Change-Id: Ia963a8ce3ce5f96b5e78acd587e26908c7a70d02
ba650a4d5a0a82c6c88d6546b6111013c2ee8072 06-Mar-2017 Roland Levillain <rpl@google.com> Revert "Revert "Use the "GC is marking" information in compiler read barriers (ARM, ARM64).""

This reverts commit 35345a555bd7928582a7ffa6369b374b3ddc379d.

In compiler-generated code, when deciding whether to mark
a heap reference or not in a read barrier, check whether
the GC is currently marking, instead of checking the gray
bit in the reference's holder's lock word.

This change is only for ARM and ARM64, as it does not
benefit x86 nor x86-64.

Change-Id: Id3d2758c600115b2f07d345442cfa87edfc2792c
Test: Run ART tests in Baker read barrier configuration.
Test: Boot a device in Baker read barrier configuration.
Bug: 35780827
Bug: 29516974
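
A minimal C++ sketch of the fast-path contrast described in this change; the
real checks are compiler-generated assembly, and the flag and bit names here
are assumptions rather than ART's actual identifiers:

    #include <atomic>
    #include <cstdint>

    struct Object;                            // placeholder heap reference type
    std::atomic<bool> g_is_marking{false};    // hypothetical "GC is marking" flag
    Object* SlowPathMark(Object* ref) { return ref; }  // stub mark routine

    // Old scheme: test the gray bit in the holder's lock word, costing a
    // load from the holder's object header on every barrier.
    Object* BarrierViaGrayBit(uint32_t holder_lock_word, Object* ref) {
      constexpr uint32_t kGrayBit = 1u << 28;  // illustrative bit position
      return (holder_lock_word & kGrayBit) ? SlowPathMark(ref) : ref;
    }

    // New scheme (this CL, ARM/ARM64): test a "GC is marking" flag instead,
    // avoiding the holder lock word load.
    Object* BarrierViaIsMarking(Object* ref) {
      return g_is_marking.load(std::memory_order_relaxed) ? SlowPathMark(ref)
                                                          : ref;
    }
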
35345a555bd7928582a7ffa6369b374b3ddc379d 27-Feb-2017 Roland Levillain <rpl@google.com> Revert "Use the "GC is marking" information in compiler read barriers (ARM, ARM64)."

This reverts commit 1372c9f40df1e47bf775f1466bbb96f472b6b9ed.

This change (along with https://android-review.googlesource.com/#/c/342429/)
creates null pointer dereferences.

Bug: 35780827
Bug: 29516974
Change-Id: I2a9c4d0ad8d2ab870c2e0ddbff32152933c77abe
1372c9f40df1e47bf775f1466bbb96f472b6b9ed 13-Jan-2017 Roland Levillain <rpl@google.com> Use the "GC is marking" information in compiler read barriers (ARM, ARM64).

In compiler-generated code, when deciding whether to mark
a heap reference or not in a read barrier, check whether
the GC is currently marking, instead of checking the gray
bit in the reference's holder's lock word.

This change is only for ARM and ARM64, as it does not
benefit x86 nor x86-64.

Test: Run ART tests in Baker read barrier configuration.
Test: Boot a device in Baker read barrier configuration.
Bug: 29516974
Change-Id: Ia5d90286bb9f753f3bbcb3a6254eb166523a2ff5
6f198e3fde6fe0009c1f333c283c6d1cb4fa9b55 03-Nov-2016 Mathieu Chartier <mathieuc@google.com> Add forwarding address checks for X86, arm, arm64

Added to READ_BARRIER_MARK_REG.

Bug: 30162165

Test: test-art-host, test-art-target

Change-Id: I15cf0d51ed3d22fa401e80ffac3877d61593527c
12b58b23de974232e991c650405f929f8b0dcc9f 01-Nov-2016 Hiroshi Yamauchi <yamauchi@google.com> Clean up the runtime read barrier and fix fake address dependency.

- Rename GetReadBarrierPointer to GetReadBarrierState.
- Change its return type to uint32_t.
- Fix the runtime fake address dependency for arm/arm64 using inline
asm.
- Drop ReadBarrier::black_ptr_ and some brooks code.

Bug: 12687968
Test: test-art with CC, Ritz EAAC, libartd boot on N9.
Change-Id: I595970db825db5be2e98ee1fcbd7696d5501af55
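
A hedged sketch of the fake address dependency technique on AArch64: XOR the
loaded read barrier state with itself to produce zero in a way the CPU cannot
prove ahead of time, then fold it into the pointer, so the reference load is
ordered after the state load without a full barrier. ART's actual asm differs;
this only illustrates the idea:

    #include <cstdint>

    inline const void* AddFakeDependency(const void* ptr, uint32_t rb_state) {
    #if defined(__aarch64__)
      uintptr_t zero;
      // zero == rb_state ^ rb_state == 0, but computed as a data dependency.
      asm("eor %x0, %x1, %x1"
          : "=r"(zero)
          : "r"(static_cast<uintptr_t>(rb_state)));
      return reinterpret_cast<const void*>(
          reinterpret_cast<uintptr_t>(ptr) + zero);
    #else
      (void)rb_state;
      return ptr;  // other architectures need their own sequence
    #endif
    }
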
1cf194f055b7152fde817787fcdadeea1fb1067c 02-Nov-2016 Mathieu Chartier <mathieuc@google.com> Check for forwarding address in READ_BARRIER_MARK_REG

When the object is in the from-space, the mark bit is not set.
In this case, we can also check the lock word for being a forwarding
address. The forwarding address case happens around 25% of the time.
This CL adds the case for forwarding address lock words to
READ_BARRIER_MARK_REG.

Reduces total read barriers reaching runtime on ritzperf:
Slow paths: 20758783 -> 15457783

Deleted the mark bit check in MarkFromReadBarrier since most of the
callers check the bit now.

Perf:
ReadBarrier::Mark: 2.59% -> 2.12%
art_quick_read_barrier_mark_reg01: 0.79% -> 0.78%
art_quick_read_barrier_mark_reg00: 0.54% -> 0.50%
art_quick_read_barrier_mark_reg02: 0.31% -> 0.25%

Only X86_64 for now; other architectures will follow.

Bug: 30162165

Test: test-art-host

Change-Id: Ie7289d684d0e37a887943d77710092e380457860
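
A hedged C++ rendering of the assembly fast path described above; bit
positions follow the lock word layout of this era but are illustrative, and
MarkSlowPath stands in for the runtime entrypoint:

    #include <cstdint>

    struct Object;
    Object* MarkSlowPath(Object* ref) { return ref; }  // stub entrypoint

    Object* ReadBarrierMarkSketch(Object* ref, uint32_t lock_word) {
      constexpr uint32_t kMarkBit         = 1u << 29;  // "already marked"
      constexpr uint32_t kStateMask       = 3u << 30;  // top two bits: state
      constexpr uint32_t kStateForwarding = 3u << 30;  // forwarding address
      constexpr int kForwardingAddressShift = 3;       // 8-byte alignment
      if (lock_word & kMarkBit) {
        return ref;  // common case: object already marked, nothing to do
      }
      if ((lock_word & kStateMask) == kStateForwarding) {
        // From-space object already copied: the rest of the lock word holds
        // the to-space (forwarding) address.
        return reinterpret_cast<Object*>(
            static_cast<uintptr_t>(lock_word & ~kStateMask)
                << kForwardingAddressShift);
      }
      return MarkSlowPath(ref);  // the ~25% forwarding cases no longer land here
    }
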
36a270ae4f288e49493432b7128f899ad579849e 29-Jul-2016 Mathieu Chartier <mathieuc@google.com> Change one read barrier bit to mark bit

Optimization to help slow path performance. When the GC marks an
object through the read barrier slow path, it sets the mark bit in
the lock word of that reference. This bit is checked from the
assembly entrypoint, and the common case is that it is set. If the
bit is set, the read barrier knows the object is already marked and
there is no work to do.

To prevent dirty pages in zygote and image, the bit is set by the
image writer and zygote space creation.

EAAC score (lower is better):
N9: 777 -> 700 (average of 31 runs)
N6P (clocked at 960 MHz): 1737.48 -> 1442.31 (average of 25 runs)

Bug: 30162165
Bug: 12687968

Test: N9, N6P booting, test-art-host, test-art-target all with CC

Change-Id: Iae0cacfae221e33151d3c0ab65338d1c822ab63d
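
A minimal sketch of the dirty-page trick mentioned above, with an assumed bit
position: the image writer and zygote space creation store lock words with
the mark bit already set, so the first GC cycle never writes to these pages.

    #include <cstdint>

    constexpr uint32_t kMarkBit = 1u << 29;  // illustrative position

    inline uint32_t WithMarkBitSet(uint32_t lock_word) {
      return lock_word | kMarkBit;  // object starts out "already marked"
    }
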
3887c468d731420e929e6ad3acf190d5431e94fc 12-Aug-2015 Roland Levillain <rpl@google.com> Remove unnecessary `explicit` qualifiers on constructors.

Change-Id: Id12e392ad50f66a6e2251a68662b7959315dc567
14d90579f013b374638b599361970557ed4b3f09 16-Jul-2015 Roland Levillain <rpl@google.com> Use (D)CHECK_ALIGNED more.

Change-Id: I9d740f6a88d01e028d4ddc3e4e62b0a73ea050af
3f64f25151780fdea3511be62b4fe50775f86541 13-Jun-2015 Hiroshi Yamauchi <yamauchi@google.com> Print more diagnosis info on to-space invariant violation.

Pass the method/field (in GcRootSource) to the read barrier to print
more info when a to-space invariant violation is detected on a
method/field GC root access.

Refactor ConcurrentCopying::AssertToSpaceInvariant().

Bug: 12687968
Bug: 21564728

Change-Id: I3a5fde1f41969349b0fee6cd9217b948d5241a7c
41b175aba41c9365a1c53b8a1afbd17129c87c14 19-May-2015 Vladimir Marko <vmarko@google.com> ART: Clean up arm64 kNumberOfXRegisters usage.

Avoid undefined behavior for arm64 stemming from 1u << 32 in
loops with upper bound kNumberOfXRegisters.

Create iterators for enumerating bits in an integer either
from high to low or from low to high and use them for
<arch>Context::FillCalleeSaves() on all architectures.

Refactor runtime/utils.{h,cc} by moving all bit-fiddling
functions to runtime/base/bit_utils.{h,cc} (together with
the new bit iterators) and all time-related functions to
runtime/base/time_utils.{h,cc}. Improve test coverage and
fix some corner cases for the bit-fiddling functions.

Bug: 13925192

(cherry picked from commit 80afd02024d20e60b197d3adfbb43cc303cf29e0)

Change-Id: I905257a21de90b5860ebe1e39563758f721eab82
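
The UB fix lends itself to a short sketch: 1u << 32 is undefined for 32-bit
types, so instead of shifting a mask once per register, enumerate the set
bits directly. A hedged approximation (ART's actual iterators differ;
__builtin_clz assumes GCC/Clang):

    #include <cstdint>

    template <typename Visitor>
    void ForEachBitHighToLow(uint32_t bits, Visitor&& visit) {
      while (bits != 0u) {
        int bit = 31 - __builtin_clz(bits);  // defined since bits != 0
        visit(bit);
        bits &= ~(1u << bit);  // bit < 32, so the shift is well defined
      }
    }
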
80afd02024d20e60b197d3adfbb43cc303cf29e0 19-May-2015 Vladimir Marko <vmarko@google.com> ART: Clean up arm64 kNumberOfXRegisters usage.

Avoid undefined behavior for arm64 stemming from 1u << 32 in
loops with upper bound kNumberOfXRegisters.

Create iterators for enumerating bits in an integer either
from high to low or from low to high and use them for
<arch>Context::FillCalleeSaves() on all architectures.

Refactor runtime/utils.{h,cc} by moving all bit-fiddling
functions to runtime/base/bit_utils.{h,cc} (together with
the new bit iterators) and all time-related functions to
runtime/base/time_utils.{h,cc}. Improve test coverage and
fix some corner cases for the bit-fiddling functions.

Bug: 13925192
Change-Id: I704884dab15b41ecf7a1c47d397ab1c3fc7ee0f7
60f63f53c01cb38ca18a815603282e802a6cf918 24-Apr-2015 Hiroshi Yamauchi <yamauchi@google.com> Use the lock word bits for Baker-style read barrier.

This enables the standard object header to be used with the
Baker-style read barrier.

Bug: 19355854
Bug: 12687968

Change-Id: Ie552b6e1dfe30e96cb1d0895bd0dff25f9d7d015
e15ea086439b41a805d164d2beb07b4ba96aaa97 10-Feb-2015 Hiroshi Yamauchi <yamauchi@google.com> Reserve bits in the lock word for read barriers.

This prepares for the CC collector to use the standard object header
model by storing the read barrier state in the lock word.

Bug: 19355854
Bug: 12687968
Change-Id: Ia7585662dd2cebf0479a3e74f734afe5059fb70f
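
A hedged sketch of the reserved fields (sizes and positions are from the era
of this change and are illustrative; see the layout comments in lock_word.h
itself):

    #include <cstdint>

    constexpr uint32_t kStateSize = 2;             // top bits: lock word state
    constexpr uint32_t kReadBarrierStateSize = 2;  // reserved for RB state
    constexpr uint32_t kStateShift = 32 - kStateSize;                         // 30
    constexpr uint32_t kReadBarrierStateShift =
        kStateShift - kReadBarrierStateSize;                                  // 28
    constexpr uint32_t kReadBarrierStateMask =
        ((1u << kReadBarrierStateSize) - 1u) << kReadBarrierStateShift;

    inline uint32_t ReadBarrierState(uint32_t lock_word) {
      return (lock_word & kReadBarrierStateMask) >> kReadBarrierStateShift;
    }
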
6a3c1fcb4ba42ad4d5d142c17a3712a6ddd3866f 31-Oct-2014 Ian Rogers <irogers@google.com> Remove -Wno-unused-parameter and -Wno-sign-promo from base cflags.

Fix associated errors about unused parameters and implicit sign conversions.
For sign conversions this was largely in the area of enums, so add ostream
operators for the affected enums and fix tools/generate-operator-out.py.
Tidy arena allocation code and arena allocated data types, rather than fixing
new and delete operators.
Remove dead code.

Change-Id: I5b433e722d2f75baacfacae4d32aef4a828bfe1b
d8481ccad5b33aa9783cd8f7c614ee083a4f1ccc 28-Jul-2014 nikolay serdjuk <nikolay.y.serdjuk@intel.com> ART: A couple of checks were missed in class LockWord

Change-Id: I1fc2d77f78f49741c1316ccc76b02357158dfdbe
8d82de5d7b04b8f43e7d2bb7ee8a66b0c7e71e1b 28-Jul-2014 Dmitry Petrochenko <dmitry.petrochenko@intel.com> ART: Fix lock max count definition

The lock max count should use 14 bits, since the 2 highest bits are reserved for the lock state.

Change-Id: I9d562f7bca9c0853231800a706a8523204e8aa9d
Signed-off-by: Dmitry Petrochenko <dmitry.petrochenko@intel.com>
Signed-off-by: Serguei Katkov <serguei.i.katkov@intel.com>
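
A sketch of the corrected arithmetic (names illustrative): with 2 state bits
reserved at the top of the 32-bit lock word and a 16-bit owner thread id, the
thin lock recursion count gets 14 bits, so the maximum count is (1 << 14) - 1.

    #include <cstdint>

    constexpr uint32_t kStateSize = 2;
    constexpr uint32_t kThinLockOwnerSize = 16;
    constexpr uint32_t kThinLockCountSize =
        32 - kThinLockOwnerSize - kStateSize;                            // 14
    constexpr uint32_t kThinLockMaxCount = (1u << kThinLockCountSize) - 1u;  // 16383
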
ef7d42fca18c16fbaf103822ad16f23246e2905d 06-Jan-2014 Ian Rogers <irogers@google.com> Object model changes to support 64bit.

Modify mirror objects so that references between them use an ObjectReference
value type rather than an Object*, so that functionality to compress larger
references can be captured in the ObjectReference implementation.
ObjectReferences are 32bit and all other aspects of object layout remain as
they are currently.

Expand fields in objects holding pointers so they can hold 64bit pointers. It's
expected that the size of these will come down by improving where we hold
compiler meta-data.
Stub out x86_64 architecture specific runtime implementation.
Modify OutputStream so that reads and writes are of unsigned quantities.
Make the use of portable or quick code more explicit.
Templatize AtomicInteger to support more than just int32_t as a type.
Add missing annotalysis information on the mutator lock and fix related
issues.
Refactor and share the array copy implementations used by System and by
other parts of the runtime.
Fix numerous 64bit build issues.

Change-Id: I1a5694c251a42c9eff71084dfdd4b51fff716822
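
A hedged sketch of the idea behind ObjectReference: references between mirror
objects stay 32 bits even on 64-bit targets, leaving room to add compression
later. Names are illustrative, not ART's actual implementation:

    #include <cstdint>

    template <typename T>
    class ObjectReference {
     public:
      T* AsPtr() const {
        return reinterpret_cast<T*>(static_cast<uintptr_t>(ref_));
      }
      void Assign(T* ptr) {
        ref_ = static_cast<uint32_t>(reinterpret_cast<uintptr_t>(ptr));
      }
     private:
      uint32_t ref_;  // 32-bit slot; a compressing scheme could shift/offset here
    };
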
590fee9e8972f872301c2d16a575d579ee564bee 13-Sep-2013 Mathieu Chartier <mathieuc@google.com> Compacting collector.

The compacting collector is currently similar to semispace. It works by
copying objects back and forth between two bump pointer spaces. There
are types of objects which are "non-movable" due to current runtime
limitations. These are Classes, Methods, and Fields.

Bump pointer spaces are a new type of contiguous alloc space with no lock
in the allocation code path: allocation uses atomic operations to increase
an index. Traversing the objects in the bump pointer space relies on
Object::SizeOf matching the allocated size exactly.

Runtime changes:
JNI::GetArrayElements returns copies of objects if you attempt to get the
backing data of a movable array.
direct backing storage for any types of arrays, but temporarily disable
the GC until the critical region is completed.

Added a new runtime call called VisitObjects; it is used in place of
the old pattern of flushing the allocation stack and walking the
bitmaps.

Changed image writer to be compaction safe and use object monitor word
for forwarding addresses.

Added a bunch of SIRTs to ClassLinker, MethodLinker, etc.

TODO: Enable switching allocators, compacting on background, etc..

Bug: 8981901

Change-Id: I3c886fd322a6eef2b99388d19a765042ec26ab99
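
A hedged sketch of the lock-free allocation path described above; real bump
pointer spaces also handle alignment, thread-local buffers and failure
back-out, and the names here are illustrative:

    #include <atomic>
    #include <cstddef>
    #include <cstdint>

    class BumpPointerSpace {
     public:
      BumpPointerSpace(uint8_t* begin, uint8_t* end) : pos_(begin), end_(end) {}

      void* Alloc(size_t num_bytes) {
        // No lock: a single atomic increment claims the next chunk.
        uint8_t* old = pos_.fetch_add(static_cast<ptrdiff_t>(num_bytes),
                                      std::memory_order_relaxed);
        if (old + num_bytes > end_) {
          return nullptr;  // space exhausted (a real impl would back out)
        }
        return old;
      }

     private:
      std::atomic<uint8_t*> pos_;  // bump index, advanced atomically
      uint8_t* const end_;
    };
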
4e6a31eb97f22f4480827474b30b9e64f396eace 31-Oct-2013 Mathieu Chartier <mathieuc@google.com> Lazily compute object identity hash codes.

Before, we computed identity hash codes whenever we inflated a monitor.
This caused issues since it meant that we would have all of these
hash codes in the image, causing locks to excessively inflate during
application run time.

This change makes it so that we lazily compute hash codes. When a
thin lock gets inflated, we assign it a hash code of 0, which
signifies that no hash code has been computed yet. When we request
the identity hash code of an object with an inflated monitor, it
gets computed if it is still 0.

Change-Id: Iae6acd1960515a36e74644e5b1323ff336731806
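
A hedged sketch of the lazy scheme: 0 in the monitor's hash slot means "not
yet computed", and the hash is generated on first request. Names and the toy
generator are illustrative:

    #include <cstdint>

    struct Monitor {
      uint32_t hash_code_ = 0;  // 0 == no identity hash assigned yet
    };

    uint32_t GenerateIdentityHashCode() {
      static uint32_t seed = 1u;
      seed = seed * 1103515245u + 12345u;  // toy LCG for illustration
      return seed | 1u;                    // never returns 0
    }

    uint32_t IdentityHashCode(Monitor* monitor) {
      if (monitor->hash_code_ == 0u) {
        monitor->hash_code_ = GenerateIdentityHashCode();  // computed lazily
      }
      return monitor->hash_code_;
    }
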
ad2541a59c00c2c69e8973088891a2b5257c9780 25-Oct-2013 Mathieu Chartier <mathieuc@google.com> Fix object identity hash.

The object identity hash is now stored in the monitor word after
being computed. Hashes are computed by a pseudo-random number
generator.

When we write the image, we eagerly compute object hashes to
prevent pages from getting dirtied.

Bug: 8981901

Change-Id: Ic8edacbacb0afc7055fd740a52444929f88ed564
d9c4fc94fa618617f94e1de9af5f034549100753 02-Oct-2013 Ian Rogers <irogers@google.com> Inflate contended lock word by suspending owner.

Bug 6961405.
Don't inflate monitors for Notify and NotifyAll.
Tidy lock word, handle recursive lock case alongside unlocked case and move
assembly out of line (except for ARM quick). Also handle null in out-of-line
assembly as the test is quick and the enter/exit code is already a safepoint.
To gain ownership of a monitor on behalf of another thread, monitor contenders
must not hold the monitor_lock_, so they wait on a condition variable.
Reduce size of per mutex contention log.
Be consistent in calling thin lock thread ids just thread ids.
Fix potential thread death races caused by the use of FindThreadByThreadId;
make it an invariant that returned threads are either self or suspended.

Code size reduction on ARM boot.oat 0.2%.
Old nexus 7 speedup 0.25%, new nexus 7 speedup 1.4%, nexus 10 speedup 2.24%,
nexus 4 speedup 2.09% on DeltaBlue.

Change-Id: Id52558b914f160d9c8578fdd7fc8199a9598576a
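
An illustrative C++ shape of the fast path this change tidied, with the
unlocked and recursive thin lock cases handled together before falling back
out of line. Real code uses a CAS on the lock word and hand-written assembly;
bit positions and names are assumptions from the layout of this era:

    #include <cstdint>

    bool SlowPathLock() { return true; }  // stub for the out-of-line path

    bool ThinLockFastPath(uint32_t* lock_word, uint16_t self_id) {
      constexpr uint32_t kStateMask = 3u << 30;   // top two bits: state
      constexpr uint32_t kOwnerMask = 0xFFFFu;    // low 16 bits: owner id
      constexpr uint32_t kCountOne  = 1u << 16;   // count lives above owner
      constexpr uint32_t kMaxCount  = (1u << 14) - 1u;
      uint32_t lw = *lock_word;
      if (lw == 0u) {                             // unlocked: claim the lock
        *lock_word = self_id;                     // (real code uses a CAS)
        return true;
      }
      if ((lw & kStateMask) == 0u && (lw & kOwnerMask) == self_id) {
        uint32_t count = (lw >> 16) & kMaxCount;
        if (count < kMaxCount) {                  // recursive re-lock
          *lock_word = lw + kCountOne;
          return true;
        }
      }
      return SlowPathLock();                      // contended or overflow
    }
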