b089eccf503646e6ed2d5bb20d973d9131166655 |
|
01-Jun-2016 |
Vladimir Marko <vmarko@google.com> |
Delay dex-to-dex compilation until Optimizing is done. This fixes a race between inlining in the Optimizing backend and dex-to-dex quickening where Optimizing can read the non-quickened opcode and then the quickened field index or vtable index and look up the wrong field or method. Even if such tearing of the dex instruction does not happen, the possible reordering of dex-to-dex and Optimizing compilation makes the final oat file non-deterministic. Also, remove VerificationResults::RemoveVerifiedMethod(): we have only the Optimizing backend now, so it was dead code and would have interfered with this change. Bug: 29043547 Bug: 29089975 (cherry picked from commit 492a7fa6df3b197a24099a50f5abf624164f3842) Change-Id: I1337b772dc69318393845a790e5f6d38aa3de60f
|
1bbdfd73a98b149c31f8a80888c7ee9ab2587630 |
|
24-Feb-2016 |
David Srbecky <dsrbecky@google.com> |
Verify encoded stack maps in debug builds. Read all stack map data back after we write it and DCHECK the content. Change-Id: Ia679594ac9e5805f6d4c56686030af153b45ea8b
|
f695a009725c8c840d916d01c14998f5c5f816d2 |
|
07-Aug-2015 |
Andreas Gampe <agampe@google.com> |
ART: Change UnresolvedMergedType internal representation Squashed cherry-picks: * 067f1ed7816cf4eb5d6258ca31b387ddb2073ab7 * 750f7c2827318f6d07620f2ef0321218ea4d8670 * 2f90b3415aadc2587d26c767c6bfb235797119a8 * 2ea7b70b2347969f3735bd0ec1b462bd6d2ff1bd Bug: 22881413
|
067f1ed7816cf4eb5d6258ca31b387ddb2073ab7 |
|
07-Aug-2015 |
Andreas Gampe <agampe@google.com> |
ART: Remove TODO in BitVector Refactor the BitVector constructor: split it up to remove the possibility to provide contradicting parameters, and add a custom copying constructor. Change-Id: Ie943f279baa007db578aea0f2f33fa93311612ee
|
f10a25f961eb8029c01c84fe8eabd405055cca37 |
|
02-Jun-2015 |
David Brazdil <dbrazdil@google.com> |
ART: Fast copy stack mask StackMap::SetStackMask will currently copy a BitVector into a MemoryRegion bit by bit. This patch adds a new function for copying the data with memcpy. This is a resubmission of CL I28d45a590b35a4a854cca2f57db864cf8a081487 but with a fix for a broken test which it revealed. Change-Id: Ib65aa614d3ab7b5c99c6719fdc8e436466a4213d
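The optimization can be sketched as below: the bit-by-bit loop versus a whole-byte memcpy, which is valid when the source words and the destination bytes share the same packed little-endian bit layout. The function names are illustrative, not ART's actual StackMap/MemoryRegion API, and the fast path assumes a little-endian machine.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Slow path: test and set one bit at a time.
void CopyBitsSlow(const uint32_t* src, uint8_t* dst, size_t num_bits) {
  for (size_t i = 0; i < num_bits; ++i) {
    bool bit = (src[i / 32] >> (i % 32)) & 1u;
    if (bit) {
      dst[i / 8] |= static_cast<uint8_t>(1u << (i % 8));
    } else {
      dst[i / 8] &= static_cast<uint8_t>(~(1u << (i % 8)));
    }
  }
}

// Fast path: on a little-endian machine the word storage is already a
// packed bit array, so whole bytes can be copied directly with memcpy.
void CopyBitsFast(const uint32_t* src, uint8_t* dst, size_t num_bits) {
  size_t num_bytes = (num_bits + 7) / 8;
  std::memcpy(dst, src, num_bytes);
}
```

Both paths produce identical destination bytes, which is exactly the invariant the broken test mentioned above would exercise.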
|
d84b4384bc14a6bc256ad85955eca0582e6b2364 |
|
02-Jun-2015 |
David Brazdil <dbrazdil@google.com> |
Revert "ART: Fast copy stack mask" DCHECK failure, need to investigate This reverts commit 6b10c9b2c0e62193ab9df4d63aedea1d0798e742. Change-Id: Ie1d1cc6fb71367bc5ac5d6a260af8de316a758dd
|
6b10c9b2c0e62193ab9df4d63aedea1d0798e742 |
|
29-May-2015 |
David Brazdil <dbrazdil@google.com> |
ART: Fast copy stack mask StackMap::SetStackMask will currently copy a BitVector into a MemoryRegion bit by bit. This patch adds a new function for copying the data with memcpy. Change-Id: I28d45a590b35a4a854cca2f57db864cf8a081487
|
41b175aba41c9365a1c53b8a1afbd17129c87c14 |
|
19-May-2015 |
Vladimir Marko <vmarko@google.com> |
ART: Clean up arm64 kNumberOfXRegisters usage. Avoid undefined behavior for arm64 stemming from 1u << 32 in loops with upper bound kNumberOfXRegisters. Create iterators for enumerating bits in an integer either from high to low or from low to high and use them for <arch>Context::FillCalleeSaves() on all architectures. Refactor runtime/utils.{h,cc} by moving all bit-fiddling functions to runtime/base/bit_utils.{h,cc} (together with the new bit iterators) and all time-related functions to runtime/base/time_utils.{h,cc}. Improve test coverage and fix some corner cases for the bit-fiddling functions. Bug: 13925192 (cherry picked from commit 80afd02024d20e60b197d3adfbb43cc303cf29e0) Change-Id: I905257a21de90b5860ebe1e39563758f721eab82
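The bit iterators this commit introduces can be sketched as follows. This is an illustrative version, not ART's actual bit_utils.h API: it enumerates the set bits of a word in either direction while keeping every shift count strictly below the word width, avoiding the `1u << 32` undefined behavior the commit fixes.

```cpp
#include <cstdint>
#include <vector>

// Enumerate set-bit indices from low to high. The shift implied by
// __builtin_ctz and the clear step never reaches the word width.
std::vector<int> BitsLowToHigh(uint32_t value) {
  std::vector<int> result;
  while (value != 0u) {
    result.push_back(__builtin_ctz(value));  // Index of the lowest set bit.
    value &= value - 1u;                     // Clear the lowest set bit.
  }
  return result;
}

// Enumerate set-bit indices from high to low.
std::vector<int> BitsHighToLow(uint32_t value) {
  std::vector<int> result;
  while (value != 0u) {
    int bit = 31 - __builtin_clz(value);  // Index of the highest set bit.
    result.push_back(bit);
    value ^= (1u << bit);  // bit is at most 31, so the shift is defined.
  }
  return result;
}
```

A naive loop over all 32 positions with `1u << i` is correct only as long as `i` never reaches 32; deriving the index from the value itself sidesteps that edge case entirely.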
|
80afd02024d20e60b197d3adfbb43cc303cf29e0 |
|
19-May-2015 |
Vladimir Marko <vmarko@google.com> |
ART: Clean up arm64 kNumberOfXRegisters usage. Avoid undefined behavior for arm64 stemming from 1u << 32 in loops with upper bound kNumberOfXRegisters. Create iterators for enumerating bits in an integer either from high to low or from low to high and use them for <arch>Context::FillCalleeSaves() on all architectures. Refactor runtime/utils.{h,cc} by moving all bit-fiddling functions to runtime/base/bit_utils.{h,cc} (together with the new bit iterators) and all time-related functions to runtime/base/time_utils.{h,cc}. Improve test coverage and fix some corner cases for the bit-fiddling functions. Bug: 13925192 Change-Id: I704884dab15b41ecf7a1c47d397ab1c3fc7ee0f7
|
8db2a6deb82d9c14d62e7ea201bc27b3040f1b62 |
|
12-May-2015 |
Vladimir Marko <vmarko@google.com> |
Quick: Fix DCE to mark wide register overlaps correctly. Previously we missed some cases of overlap with registers coming from previous blocks. Bug: 20640451 (cherry picked from commit 83d46ef1eaa8fdecadfdb9564d80e50b42646c37) Change-Id: I1be879edfbc900b70cee411d9e31e5a4b524530a
|
83d46ef1eaa8fdecadfdb9564d80e50b42646c37 |
|
12-May-2015 |
Vladimir Marko <vmarko@google.com> |
Quick: Fix DCE to mark wide register overlaps correctly. Previously we missed some cases of overlap with registers coming from previous blocks. Bug: 20640451 Change-Id: I4b32a7aaea2dea1b0b9560ae3459a4d903683f20
|
7d275379bf490a87805852129e3fe2e8afe961e7 |
|
21-Apr-2015 |
David Brazdil <dbrazdil@google.com> |
ART: Update loop info of all nested loops when inlining When inlining into a nested loop, the inliner would only add the new blocks into the innermost loop info object. This patch fixes that and modifies SsaChecker to verify the property. Change-Id: I21d343a6f7d972f5b7420701f816c65ab3f20566
|
040719630f33019693b5c4d9b573311b2f935c39 |
|
02-Jan-2015 |
Vladimir Marko <vmarko@google.com> |
Fix BitVector::IndexIterator::operator*() to return uint32_t. Change-Id: I3cfc028b1c3744ec85ea00eadcbccfdde6fd51d3
|
e77493c7217efdd1a0ecef521a6845a13da0305b |
|
21-Aug-2014 |
Ian Rogers <irogers@google.com> |
Make common BitVector operations inline-able. Change-Id: Ie25de4fae56c6712539f04172c42e3eff57df7ca
|
ffddfdf6fec0b9d98a692e27242eecb15af5ead2 |
|
03-Jun-2014 |
Tim Murray <timmurray@google.com> |
DO NOT MERGE Merge ART from AOSP to lmp-preview-dev. Change-Id: I0f578733a4b8756fd780d4a052ad69b746f687a9
|
014d77a2107fec8ba978a7428fd4d04e0bf8e168 |
|
02-Jun-2014 |
Jean Christophe Beyler <jean.christophe.beyler@intel.com> |
ART: BitVector not calculating number_of_bits correctly The number_of_bits_ field had an unclear intent; instead, use storage_size_ * kWordBits where relevant. Change-Id: I8c13be0d6643de37813fb154296d451f22c298c8 Signed-off-by: Jean Christophe Beyler <jean.christophe.beyler@intel.com>
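The fix amounts to deriving the bit capacity from the storage size rather than tracking it in a separate field that can go stale. A minimal sketch (names mirror the commit message, the helper itself is hypothetical):

```cpp
#include <cstdint>

// 32 bits per storage word; written via sizeof so it tracks the
// storage type automatically.
constexpr uint32_t kWordBits = sizeof(uint32_t) * 8u;

// Derived capacity: no separate number_of_bits_ field to keep in sync.
constexpr uint32_t NumberOfBits(uint32_t storage_size) {
  return storage_size * kWordBits;
}
```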
|
920be0b27c72ceb3d40b5f2775cd1950f7c65b5f |
|
23-May-2014 |
Vladimir Marko <vmarko@google.com> |
Fix style issue. Change-Id: I2044e01c68265c33e7fa6057efa7b6c7ac41ada4
|
520f37bb5c34c5d86ad0091cb84a84c163a2fa9c |
|
23-May-2014 |
Jean Christophe Beyler <jean.christophe.beyler@intel.com> |
ART: Added printing of indices back to BitVector Dumper - Added an API to get the set indices instead of the 001...0 format Change-Id: I75841e41ca9b7ef77a0717715669dbe12506d6a1 Signed-Off-By: Jean Christophe Beyler <jean.christophe.beyler@intel.com>
|
a5b8fde2d2bc3167078694fad417fddfe442a6fd |
|
23-May-2014 |
Vladimir Marko <vmarko@google.com> |
Rewrite BitVector index iterator. The BitVector::Iterator was not iterating over the bits but rather over indexes of the set bits. Therefore, we rename it to IndexIterator and provide a BitVector::Indexes() to get a container-style interface with begin() and end() for range based for loops. Also, simplify InsertPhiNodes where the tmp_blocks isn't needed since the phi_nodes and input_blocks cannot lose any blocks in subsequent iterations, so we can do the Union() directly in those bit vectors and we need to repeat the loop only if we have new input_blocks, rather than on phi_nodes change. And move the temporary bit vectors to scoped arena. Change-Id: I6cb87a2f60724eeef67c6aaa34b36ed5acde6d43
|
622d9c31febd950255b36a48b47e1f630197c5fe |
|
12-May-2014 |
Nicolas Geoffray <ngeoffray@google.com> |
Add loop recognition and CFG simplifications in new compiler. We do three simplifications:
- Split critical edges, for code generation from SSA (new).
- Ensure one back edge per loop, to simplify loop recognition (new).
- Ensure only one pre-header for a loop, to simplify SSA creation (existing).
Change-Id: I9bfccd4b236a00486a261078627b091c8a68be33
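The first of these simplifications, critical-edge splitting, can be sketched as below. This is an illustrative adjacency-list CFG, not ART's HGraph API: an edge A->B is critical when A has several successors and B has several predecessors, and splitting inserts a new empty block on the edge so code can later be placed there.

```cpp
#include <vector>

// Minimal CFG node: successor and predecessor block ids.
struct Block {
  std::vector<int> succ;
  std::vector<int> pred;
};

// An edge is critical iff neither endpoint "owns" it exclusively.
bool IsCriticalEdge(const std::vector<Block>& cfg, int from, int to) {
  return cfg[from].succ.size() > 1u && cfg[to].pred.size() > 1u;
}

// Insert a fresh block on the edge from->to and return its id.
int SplitEdge(std::vector<Block>* cfg, int from, int to) {
  int mid = static_cast<int>(cfg->size());
  cfg->push_back(Block{{to}, {from}});
  for (int& s : (*cfg)[from].succ) {
    if (s == to) s = mid;  // Redirect the successor edge through mid.
  }
  for (int& p : (*cfg)[to].pred) {
    if (p == from) p = mid;  // Redirect the predecessor edge through mid.
  }
  return mid;
}
```

After the split, both halves of the edge have an endpoint with a single successor or predecessor, so neither is critical anymore.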
|
804d09372cc3d80d537da1489da4a45e0e19aa5d |
|
02-May-2014 |
Nicolas Geoffray <ngeoffray@google.com> |
Build live-in, live-out and kill sets for each block. This information will be used when computing live ranges of instructions. Change-Id: I345ee833c1ccb4a8e725c7976453f6d58d350d74
|
5afa08f95d43dd24fb4b3d7a08aa1ec23386ad54 |
|
16-Apr-2014 |
Jean Christophe Beyler <jean.christophe.beyler@intel.com> |
ART: Bitvector extensions for dumping and size handling
- Added dumping functions and the ensure-size and clear functions.
- Fixed a bug in Union where, if a bit is not set in the source, a buffer overflow can occur later down the line.
Change-Id: Iff40529f3a8970a1ce2dd5c591f659f71924dea3
Signed-off-by: Jean Christophe Beyler <jean.christophe.beyler@intel.com>
Signed-off-by: Razvan A Lupusoru <razvan.a.lupusoru@intel.com>
Signed-off-by: Yixin Shou <yixin.shou@intel.com>
Signed-off-by: Chao-ying Fu <chao-ying.fu@intel.com>
Signed-off-by: Udayan Banerji <udayan.banerji@intel.com>
|
d3c5bebcb52a67cb06e7ab303eaf45f230c08b60 |
|
11-Apr-2014 |
Vladimir Marko <vmarko@google.com> |
Avoid allocating OatFile::OatClass on the heap. Avoid allocating a BitVector for OatFile::OatClass::bitmap_ with kOatClassSomeCompiled methods. That makes the OatClass copy-constructible as it doesn't own any memory. We use that in OatFile::OatDexFile::GetOatClass() to return the result by value thus avoiding one or two heap allocations per call. Change-Id: Ic7098109028a5b49e39ef626f877de86e732ed18
|
ad0d30a2a2141aa0e9da9e97993ce20e4d8e056e |
|
16-Jan-2014 |
Jean Christophe Beyler <jean.christophe.beyler@intel.com> |
Update to the BitVector implementation.
IsBitSet:
- If the index requested is above the size, return false.
ClearBit:
- If the index requested is above the size, ignore; clearing above the size is fine since the bit has not been set yet.
Added SameBitsSet:
- Checks the bits set, disregarding size and expandability.
Intersect and Union:
- Removed the requirement of same size; handle the case where the sizes are not the same.
Added Subtract between BitVectors.
SetInitialBits:
- Now requests expansion if above the bits available.
- Clears upper bits.
Added GetHighestBitSet.
Copy:
- Used to assume the destination was well allocated and copied only what fit there, without checking the source's size; now actually allocates the destination to make sure it holds enough space.
- Set parameter to const.
General:
- Moved sizeof(uint32_t) to sizeof(*storage_) for future maintenance.
Change-Id: Iebb214632482c46807deca957f5b6dc892a61a84
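The tolerant bounds semantics described above can be sketched as follows. This is an illustrative class, not ART's actual BitVector: reads beyond the allocated storage report "not set", clears beyond it are no-ops (an unallocated bit can never have been set), and sets expand the storage on demand.

```cpp
#include <cstdint>
#include <vector>

class BitVector {
 public:
  explicit BitVector(uint32_t storage_size) : storage_(storage_size, 0u) {}

  void SetBit(uint32_t idx) {
    if (idx / 32u >= storage_.size()) {
      storage_.resize(idx / 32u + 1u, 0u);  // Expand on demand.
    }
    storage_[idx / 32u] |= 1u << (idx % 32u);
  }

  bool IsBitSet(uint32_t idx) const {
    // Above the current size: the bit was never set, so answer false.
    if (idx / 32u >= storage_.size()) return false;
    return (storage_[idx / 32u] >> (idx % 32u)) & 1u;
  }

  void ClearBit(uint32_t idx) {
    // Above the current size: nothing to clear, so ignore the request.
    if (idx / 32u >= storage_.size()) return;
    storage_[idx / 32u] &= ~(1u << (idx % 32u));
  }

 private:
  std::vector<uint32_t> storage_;  // Packed words of 32 bits each.
};
```

The benefit is that callers never need to pre-check sizes before querying or clearing, which is what removes the same-size requirement from operations like Intersect and Union.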
|
ba150c37d582eeeb8c11ba5245edc281cf31793c |
|
28-Aug-2013 |
Brian Carlstrom <bdc@google.com> |
Omit OatMethodOffsets for classes without compiled code Change-Id: If0d290f4aebc778ff12d8fed017c270ad2ac3220
|