d021e166babaaf131e59caf5ad84772b73acb4c5 |
|
22-Jul-2015 |
Vladimir Marko <vmarko@google.com> |
ART: Fix Quick/Optimizing suspend check assumption mismatch. Quick's SuspendCheckElimination (SCE) expects that every method contains a suspend check and it eliminates suspend checks in loops containing an invoke. Optimizing eliminates the suspend check from leaf methods, so the combination of a Quick-compiled loop calling an Optimizing-compiled leaf method can lead to missing suspend checks and, potentially, ANRs. Enable Quick's kLeafOptimization flag to remove suspend checks from leaf methods and disable Quick's SCE. This aligns the suspend check placement for the two backends and avoids the broken combination. Currently, all methods containing a try-catch are compiled with Quick, so it's relatively easy to create a regression test. However, this test will not be valid when Optimizing starts supporting try-catch. Bug: 22657404 (cherry picked from commit d29e8487ff1774b6eb5f0e18d854415c1ee8f6b0) Change-Id: I733c38bf68bfc2c618f2f2e6b59f8b0e015d7be1
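A minimal Java sketch of the broken combination (class and method names are illustrative, not from the ART sources): a Quick-compiled loop whose suspend check SCE removed because the loop contains an invoke, calling a leaf method from which Optimizing had removed the suspend check, leaving no suspend point on the hot path.

```java
// Illustrative only: shows the shape of code affected by the mismatch,
// not ART internals. If Quick drops the loop's suspend check because the
// loop invokes leaf(), and Optimizing compiles leaf() without a suspend
// check (it is a leaf method), the loop has no suspend point at all.
public class LeafLoop {
    // Leaf method: no calls, no loops. Optimizing omitted its suspend check.
    static int leaf(int x) {
        return x + 1;
    }

    static int hotLoop(int n) {
        int acc = 0;
        for (int i = 0; i < n; i++) {
            acc = leaf(acc);  // the invoke that made Quick's SCE drop the check
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(hotLoop(1000));  // prints 1000
    }
}
```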
|
3e91a44bc9063f7f69b5415e3cf162991f73283f |
|
18-Jun-2015 |
Jeff Hao <jeffhao@google.com> |
Fix case where block has no predecessor for StringChange. Removes part that checks for throwing half of instruction. It's no longer necessary. Also adds regression test. Bug: 21902684 Change-Id: Ic600165e6b3719de3d83a73b8a1fa64473668fc8
|
9f7687cb5c1390ec4bcc2f8fa10dbee33aff3d6a |
|
19-Jun-2015 |
Vladimir Marko <vmarko@google.com> |
Quick: Fix optimizations for empty if blocks. If a block ending with if-eqz or if-nez has the same "taken" and "fallthrough", we cannot assume that the value has been checked against zero in one of the successors. This affects the null check elimination pass as well as GVN. Refactor all those checks to a single function in BasicBlock and check that the "taken" and "fallthrough" are different when needed. Bug: 21614284 (cherry picked from commit f11c420c448baffac6a70ac0884d481ab347e257) Change-Id: I062e0042de3470ce8680b586487b9c7acbd206bc
|
21cb657159b3e93cc888685ade83f8fc519290be |
|
08-Jun-2015 |
Vladimir Marko <vmarko@google.com> |
Quick: Fix LoopRepeatingTopologicalSortIterator. Always push the loop head on the loop head stack. This fixes a bug where we failed to return to an unnatural loop head to recalculate its GVN data. Bug: 17410955 (cherry picked from commit 67c8c942e9dfcabd548351db75e6d3b8b5165afa) Change-Id: I44b9a17cbcd7307d1cc70ac564b99e29803723c5
|
c8d000a12d853a72999c96e3b73587bad2be6954 |
|
04-Jun-2015 |
Vladimir Marko <vmarko@google.com> |
Revert "Quick: Fix "select" pattern to update data used for GC maps. DO NOT MERGE" This reverts commit fad2cbf97c71b9742ccd88cc1a5ba13fa918e677. Change-Id: I175dd9e49014b71a300d987678032bd624a99cf1
|
fad2cbf97c71b9742ccd88cc1a5ba13fa918e677 |
|
25-Mar-2015 |
Vladimir Marko <vmarko@google.com> |
Quick: Fix "select" pattern to update data used for GC maps. DO NOT MERGE Follow-up to https://android-review.googlesource.com/143222 (cherry picked from commit 6e07183e822a32856da9eb60006989496e06a9cc) Change-Id: I916743c845d9568063cd6a4b2ef71e9cbc43dee8
|
3d21bdf8894e780d349c481e5c9e29fe1556051c |
|
22-Apr-2015 |
Mathieu Chartier <mathieuc@google.com> |
Move mirror::ArtMethod to native Optimizing + quick tests are passing, devices boot. TODO: Test and fix bugs in mips64. Saves 16 bytes for most ArtMethods, a 7.5MB reduction in system PSS. Some of the savings are from removal of virtual methods and direct methods object arrays. Bug: 19264997 (cherry picked from commit e401d146407d61eeb99f8d6176b2ac13c4df1e33) Change-Id: I622469a0cfa0e7082a2119f3d6a9491eb61e3f3d Fix some ArtMethod related bugs Added root visiting for runtime methods, not currently required since the GcRoots in these methods are null. Added missing GetInterfaceMethodIfProxy in GetMethodLine, fixes --trace run-tests 005, 044. Fixed optimizing compiler bug where we used a normal stack location instead of a double on ARM64; this fixes the debuggable tests. TODO: Fix JDWP tests. Bug: 19264997 Change-Id: I7c55f69c61d1b45351fd0dc7185ffe5efad82bd3 ART: Fix casts for 64-bit pointers on 32-bit compiler. Bug: 19264997 Change-Id: Ief45cdd4bae5a43fc8bfdfa7cf744e2c57529457 Fix JDWP tests after ArtMethod change Fixes Throwable::GetStackDepth for exception event detection after internal stack trace representation change. Adds missing ArtMethod::GetInterfaceMethodIfProxy call in case of proxy method. Bug: 19264997 Change-Id: I363e293796848c3ec491c963813f62d868da44d2 Fix accidental IMT and root marking regression Was always using the conflict trampoline. Also included fix for regression in GC time caused by extra roots. Most of the regression was IMT. Fixed bug in DumpGcPerformanceInfo where we would get SIGABRT due to detached thread. EvaluateAndApplyChanges: From ~2500 -> ~1980 GC time: 8.2s -> 7.2s due to 1s less of MarkConcurrentRoots Bug: 19264997 Change-Id: I4333e80a8268c2ed1284f87f25b9f113d4f2c7e0 Fix bogus image test assert Previously we were comparing the size of the non moving space to the size of the image file. Now we properly compare the size of the image space against the size of the image file.
Bug: 19264997 Change-Id: I7359f1f73ae3df60c5147245935a24431c04808a [MIPS64] Fix art_quick_invoke_stub argument offsets. ArtMethod reference's size got bigger, so we need to move other args and leave enough space for ArtMethod* and 'this' pointer. This fixes mips64 boot. Bug: 19264997 Change-Id: I47198d5f39a4caab30b3b77479d5eedaad5006ab
|
41b175aba41c9365a1c53b8a1afbd17129c87c14 |
|
19-May-2015 |
Vladimir Marko <vmarko@google.com> |
ART: Clean up arm64 kNumberOfXRegisters usage. Avoid undefined behavior for arm64 stemming from 1u << 32 in loops with upper bound kNumberOfXRegisters. Create iterators for enumerating bits in an integer either from high to low or from low to high and use them for <arch>Context::FillCalleeSaves() on all architectures. Refactor runtime/utils.{h,cc} by moving all bit-fiddling functions to runtime/base/bit_utils.{h,cc} (together with the new bit iterators) and all time-related functions to runtime/base/time_utils.{h,cc}. Improve test coverage and fix some corner cases for the bit-fiddling functions. Bug: 13925192 (cherry picked from commit 80afd02024d20e60b197d3adfbb43cc303cf29e0) Change-Id: I905257a21de90b5860ebe1e39563758f721eab82
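The hazard and its fix can be sketched: a loop testing `mask & (1u << i)` for `i` up to kNumberOfXRegisters can shift by 32, which is undefined behavior in C++. Iterating over the set bits themselves never shifts by the register count. Below is a minimal Java sketch of a high-to-low bit iterator; the names are illustrative, not the actual helpers in runtime/base/bit_utils.h.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of enumerating set bits from high to low, the idea
// behind the bit iterators this change introduces. Names are made up.
public class BitIter {
    // Returns the positions of the set bits in 'mask', highest first.
    static List<Integer> highToLow(int mask) {
        List<Integer> bits = new ArrayList<>();
        while (mask != 0) {
            // Highest set bit; the shift amount is always a valid bit index,
            // so there is no shift-by-width undefined behavior to avoid.
            int bit = 31 - Integer.numberOfLeadingZeros(mask);
            bits.add(bit);
            mask &= ~(1 << bit);
        }
        return bits;
    }

    public static void main(String[] args) {
        System.out.println(highToLow(0b10110));  // prints [4, 2, 1]
    }
}
```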
|
be8f57d3a5ff121f1056791ed0910435c834b211 |
|
25-Apr-2015 |
Vladimir Marko <vmarko@google.com> |
Quick: Avoid unnecessary GVN work in release builds. In GVN's post-processing phase, compare LVNs only in debug builds as they should be equal anyway. Remove the Gate() from GVN cleanup pass and remove the DCHECK() from MIRGraph::GlobalValueNumberingCleanup() to make it a no-op if the GVN didn't run. Bug: 16398693 (cherry picked from commit f725550c8df90f8ec07395d9be5177a4be591c12) Change-Id: I518fba4a06c8d6d5ab16a6c122dc680b6d44814b
|
848f70a3d73833fc1bf3032a9ff6812e429661d9 |
|
15-Jan-2014 |
Jeff Hao <jeffhao@google.com> |
Replace String CharArray with internal uint16_t array. Summary of high level changes: - Adds compiler inliner support to identify string init methods - Adds compiler support (quick & optimizing) with new invoke code path that calls method off the thread pointer - Adds thread entrypoints for all string init methods - Adds map to verifier to log when receiver of string init has been copied to other registers. used by compiler and interpreter Change-Id: I797b992a8feb566f9ad73060011ab6f51eb7ce01
|
f725550c8df90f8ec07395d9be5177a4be591c12 |
|
25-Apr-2015 |
Vladimir Marko <vmarko@google.com> |
Quick: Avoid unnecessary GVN work in release builds. In GVN's post-processing phase, compare LVNs only in debug builds as they should be equal anyway. Remove the Gate() from GVN cleanup pass and remove the DCHECK() from MIRGraph::GlobalValueNumberingCleanup() to make it a no-op if the GVN didn't run. Bug: 16398693 Change-Id: Ia4f1e7e3ecf12d0305966c86e0e7dbae61dab0b7
|
24d65cce84165c2c3b0e02e09cdb92479ee4e479 |
|
25-Apr-2015 |
Andreas Gampe <agampe@google.com> |
ART: Fix missing dependency between GVN and other passes The GVN may be turned off completely, or skip running when the method is too complex. Turn off DCE in that case. The dependent cleanup pass is not an optimization pass, so can't be turned off that way. Check whether the GVN skipped in the gate function. A possible follow-up is proper dependencies between passes. Change-Id: I5b7951ecd6c74ebbfa5b23726a3d2f3ea1a23a47
|
ad67727a492df635aa54dbe58d6c0de54431f600 |
|
20-Apr-2015 |
Vladimir Marko <vmarko@google.com> |
Quick: Fix and enable DCE and improve GVN/DCE cleanup. When eliminating a move by renaming its source register, check that it doesn't conflict with vreg usage by insns between the defining insn and the move. Improve the GVN/DCE cleanup so that it can handle cases where GVN or DCE is individually disabled in the pass driver but not in the disable_opt flags. Bug: 19419671 Change-Id: I49bb67b81509f51fbaf90c6016c509962be43736
|
c91df2d6339dd4adf2da582372451df19ce2ff44 |
|
23-Apr-2015 |
Vladimir Marko <vmarko@google.com> |
Revert "Revert "Quick: Rewrite type inference pass."" Fix the type of the ArtMethod* SSA register. Bug: 19419671 This reverts commit 1b717f63847de8762e7f7bdd6708fdfae9d24a67. Change-Id: Ie4da3c03a0e0334a39a24718f6dc31f9255cfb53
|
1b717f63847de8762e7f7bdd6708fdfae9d24a67 |
|
23-Apr-2015 |
Andreas Gampe <agampe@google.com> |
Revert "Quick: Rewrite type inference pass." Breaks arm64, as the method register is not correctly flagged as a reference and is thus treated as 32-bit. Bug: 19419671 This reverts commit e490b01c12d33f3bd5c247b55b47e507cc9c8fab.
|
2cebb24bfc3247d3e9be138a3350106737455918 |
|
22-Apr-2015 |
Mathieu Chartier <mathieuc@google.com> |
Replace NULL with nullptr Also fixed some lines that were too long, and a few other minor details. Change-Id: I6efba5fb6e03eb5d0a300fddb2a75bf8e2f175cb
|
e490b01c12d33f3bd5c247b55b47e507cc9c8fab |
|
24-Feb-2015 |
Vladimir Marko <vmarko@google.com> |
Quick: Rewrite type inference pass. Use method signatures, field types and types embedded in dex insns for type inference. Perform the type inference in two phases, first a simple pass that records all types implied by individual insns, and then an iterative pass to propagate those types further via phi, move, if-cc and aget/aput insns. Bug: 19419671 Change-Id: Id38579d48a44fc5eadd13780afb6d370093056f9
|
87b7c52ac660119b8dea46967974b76c86d0750b |
|
08-Apr-2015 |
Vladimir Marko <vmarko@google.com> |
Quick: Clean up temp use counting. For the boot image on arm64 and x86-64 we're using true PC-relative addressing, so pc_rel_temp_ is nullptr and CanUsePcRelDexCacheArrayLoad() returns true, but we're not actually using the ArtMethod* so fix the AnalyzeMIR() to take it into account. Also don't count intrinsic invokes towards ArtMethod* uses. To avoid repeated method inliner inquiries about whether a method is intrinsic or special (requiring lock acquisition), cache that information in MirMethodLoweringInfo. As part of that cleanup, take quickened invokes into account for suspend check elimination. Change-Id: I5b4ec124221c0db1314c8e72675976c110ebe7ca
|
cc23481b66fd1f2b459d82da4852073e32f033aa |
|
07-Apr-2015 |
Vladimir Marko <vmarko@google.com> |
Promote pointer to dex cache arrays on arm. Do the use-count analysis on temps (ArtMethod* and the new PC-relative temp) in Mir2Lir, rather than MIRGraph. MIRGraph isn't really supposed to know how the ArtMethod* is used by the backend. Change-Id: Iaf56a46ae203eca86281b02b54f39a80fe5cc2dd
|
6e07183e822a32856da9eb60006989496e06a9cc |
|
25-Mar-2015 |
Vladimir Marko <vmarko@google.com> |
Quick: Fix "select" pattern to update data used for GC maps. Follow-up to https://android-review.googlesource.com/143222 Change-Id: I1c12af9a19f76e64fd209f6cc2eaec5587b3083b
|
22fe45de11ed7afdf21400d2de3abd23f3a62800 |
|
18-Mar-2015 |
Vladimir Marko <vmarko@google.com> |
Quick: Eliminate check-cast guaranteed by instance-of. Eliminate check-cast if the result of an instance-of with the very same type on the same value is used to branch to the check-cast's block or a dominator of it. Note that there already exists a verifier-based elimination of check-cast but it excludes check-cast on interfaces. This new optimization works for interface types and, since it's GVN-based, it can better recognize when the same reference is used for instance-of and check-cast. Change-Id: Ib315199805099d1cb0534bb4a90dc51baa409685
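The recognized pattern, in Java source form (a minimal sketch; the interface and names are illustrative): the check-cast is dominated by a successful instance-of on the very same value and type, so it can never throw, and this works even when the type is an interface.

```java
// Illustrative only. The cast in areaOf() is guarded by an instanceof on
// the same value and the same (interface) type, so the generated
// check-cast is provably redundant and can be eliminated.
public class CastElim {
    interface Shape {
        double area();
    }

    static double areaOf(Object o) {
        if (o instanceof Shape) {   // instance-of guards the taken branch...
            Shape s = (Shape) o;    // ...so this check-cast can never throw
            return s.area();
        }
        return 0.0;
    }

    public static void main(String[] args) {
        System.out.println(areaOf((Shape) () -> 3.0));  // prints 3.0
        System.out.println(areaOf("not a shape"));      // prints 0.0
    }
}
```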
|
e5f13e57ff8fa36342beb33830b3ec5942a61cca |
|
24-Feb-2015 |
Mathieu Chartier <mathieuc@google.com> |
Revert "Revert "Add JIT"" Added missing EntryPointToCodePointer. This reverts commit a5ca888d715cd0c6c421313211caa1928be3e399. Change-Id: Ia74df0ef3a7babbdcb0466fd24da28e304e3f5af
|
a5ca888d715cd0c6c421313211caa1928be3e399 |
|
24-Feb-2015 |
Nicolas Geoffray <ngeoffray@google.com> |
Revert "Add JIT" Sorry, run-test crashes on target: 10-05 12:15:51.633 I/DEBUG (27995): Abort message: 'art/runtime/mirror/art_method.cc:349] Check failed: PcIsWithinQuickCode(reinterpret_cast<uintptr_t>(code), pc) java.lang.Throwable java.lang.Throwable.fillInStackTrace() pc=71e3366b code=0x71e3362d size=ad000000' 10-05 12:15:51.633 I/DEBUG (27995): r0 00000000 r1 0000542b r2 00000006 r3 00000000 10-05 12:15:51.633 I/DEBUG (27995): r4 00000006 r5 b6f9addc r6 00000002 r7 0000010c 10-05 12:15:51.633 I/DEBUG (27995): r8 b63fe1e8 r9 be8e1418 sl b6427400 fp b63fcce0 10-05 12:15:51.633 I/DEBUG (27995): ip 0000542b sp be8e1358 lr b6e9a27b pc b6e9c280 cpsr 40070010 10-05 12:15:51.633 I/DEBUG (27995): Bug: 17950037 This reverts commit 2535abe7d1fcdd0e6aca782b1f1932a703ed50a4. Change-Id: I6f88849bc6f2befed0c0aaa0b7b2a08c967a83c3
|
2535abe7d1fcdd0e6aca782b1f1932a703ed50a4 |
|
17-Feb-2015 |
Mathieu Chartier <mathieuc@google.com> |
Add JIT Currently disabled by default unless -Xjit is passed in. The proposed JIT is a method JIT which works by utilizing interpreter instrumentation to request compilation of hot methods asynchronously at runtime. JIT options: -Xjit / -Xnojit -Xjitcodecachesize:N -Xjitthreshold:integervalue The JIT has a shared copy of a compiler driver which is accessed by worker threads to compile individual methods. Added JIT code cache and data cache, currently sized at 2 MB capacity by default. Most apps will only fill a small fraction of this cache, however. Added support to the compiler for compiling interpreter-quickened byte codes. Added test target ART_TEST_JIT=TRUE and --jit for run-test. TODO: Clean up code cache. Delete compiled methods after they are added to code cache. Add more optimizations related to runtime checks e.g. direct pointers for invokes. Add method recompilation. Move instrumentation to DexFile to improve performance and reduce memory usage. Bug: 17950037 Change-Id: Ifa5b2684a2d5059ec5a5210733900aafa3c51bca
|
b666f4805c8ae707ea6fd7f6c7f375e0b000dba8 |
|
18-Feb-2015 |
Mathieu Chartier <mathieuc@google.com> |
Move arenas into runtime Moved arena pool into the runtime. Motivation: Allow GC to use arena allocators, recycle arena pool for linear alloc. Bug: 19264997 Change-Id: I8ddbb6d55ee923a980b28fb656c758c5d7697c2f
|
7a01dc2107d4255b445c32867d15d45fcebb3acd |
|
02-Jan-2015 |
Vladimir Marko <vmarko@google.com> |
Dead code elimination based on GVN results. Change-Id: I5b77411a8f088f0b561da14b123cf6b0501c9db5
|
e4fcc5ba2284c201c022b52d27f7a1201d696324 |
|
13-Feb-2015 |
Vladimir Marko <vmarko@google.com> |
Clean up Scoped-/ArenaAlocator array allocations. Change-Id: Id718f8a4450adf1608306286fa4e6b9194022532
|
ea392161b862f2cb93dffa019a32ec7dff47f21f |
|
29-Jan-2015 |
Serguei Katkov <serguei.i.katkov@intel.com> |
AdvanceMIR does not know how to pass through empty blocks The AdvanceMIR utility function can now traverse empty blocks to find the next bytecode. Change-Id: I037710b567275799f940b5b9766bcafec570b70e Signed-off-by: Serguei Katkov <serguei.i.katkov@intel.com>
|
0b9203e7996ee1856f620f95d95d8a273c43a3df |
|
23-Jan-2015 |
Andreas Gampe <agampe@google.com> |
ART: Some Quick cleanup Make several fields const in CompilationUnit. May benefit some Mir2Lir code that repeats tests, and in general immutability is good. Remove compiler_internals.h and refactor some other headers to reduce overly broad imports (and thus forced recompiles on changes). Change-Id: I898405907c68923581373b5981d8a85d2e5d185a
|
066f9e4b87d875f02a10fd43d8a251b3c17a64a3 |
|
16-Jan-2015 |
Vladimir Marko <vmarko@google.com> |
Quick: Clean up optimization pass order. Move the TypeInference pass to post-opt passes and make it a PassMEMirSsaRep as we need to rerun the pass if the SSA representation has changed. (Though we currently don't have any pass that would require it.) The results of MethodUseCount and ConstantPropagation passes are used only in the BBOptimization and codegen and stay valid across BBOptimization and SuspendCheckElimination, so move them out of post-opt passes to just before the BBOpt (and reverse the dependency between ConstantPropagation and init reg locations passes). Change-Id: If02c087107cef48d5f9f7c18b0a0ace370fe2647
|
341e425599ba32f0776bcd294882e1e30cafa10f |
|
19-Dec-2014 |
Vladimir Marko <vmarko@google.com> |
Clean up dead loops before suspend check elimination. Get rid of BasicBlock::KillUnreachable() and just Kill() unreachable blocks from the DFS order calculation. Bug: 18718277 Change-Id: Icaf7b9c2320530e950f87e1e2e2bd1fa5f53cb98
|
315cc20c2146b777c278d43987b2eeb61035d6a6 |
|
18-Dec-2014 |
Vladimir Marko <vmarko@google.com> |
Clean up MIRGraph::CanThrow(). Merge with the code from CombineBlocks(). Change-Id: I73c71286acba1b6042f85f0bd19c525450ce9c05
|
ffda4993af78484feb7a4ce5497c1796463c0ba1 |
|
18-Dec-2014 |
Vladimir Marko <vmarko@google.com> |
Clean up post-opt passes, perform only those we need. Change-Id: If802074d780d91151d236ef52236b6f33ca47258
|
956af0f0cb05422e38c1d22cbef309d16b8a1a12 |
|
11-Dec-2014 |
Elliott Hughes <enh@google.com> |
Remove portable. Change-Id: I3bf3250fa866fd2265f1b115d52fa5dedc48a7fc
|
a262f7707330dccfb50af6345813083182b61043 |
|
25-Nov-2014 |
Ningsheng Jian <ningsheng.jian@arm.com> |
ARM: Combine multiply accumulate operations. Try to combine integer multiply and add(sub) into a MAC operation. For AArch64, also try to combine long type multiply and add(sub). Change-Id: Ic85812e941eb5a66abc355cab81a4dd16de1b66e
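A minimal sketch of the source pattern being targeted (method names are illustrative): an integer multiply feeding an add or subtract, which the ARM backend can then emit as a single multiply-accumulate instruction (MLA/MLS; for long values on AArch64, MADD/MSUB).

```java
// Illustrative only: the Java-level shapes that the backend can fuse
// into a single multiply-accumulate instruction on ARM/AArch64.
public class Mac {
    // a + b * c: candidate for an integer MLA (multiply-add).
    static int mac(int a, int b, int c) {
        return a + b * c;
    }

    // a - b * c on long: candidate for MSUB on AArch64 per this change.
    static long mls(long a, long b, long c) {
        return a - b * c;
    }

    public static void main(String[] args) {
        System.out.println(mac(1, 2, 3));     // prints 7
        System.out.println(mls(10L, 2L, 3L)); // prints 4
    }
}
```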
|
8b858e16563ebf8e522df026a6ab409f1bd9b3de |
|
27-Nov-2014 |
Vladimir Marko <vmarko@google.com> |
Quick: Redefine the notion of back-edges. Redefine a back-edge to really mean an edge to a loop head instead of comparing instruction offsets. Generate suspend checks also on fall-through to a loop head; insert an extra GOTO for these edges. Add suspend checks to fused cmp instructions. Rewrite suspend check elimination to track whether there is an invoke on each path from the loop head to a given back edge, instead of using domination info to look for a basic block with invoke that must be on each path. Ignore invokes to intrinsics and move the optimization to its own pass. The new loops in 109-suspend-check should prevent intrinsics and fused cmp-related regressions. Bug: 18522004 Change-Id: I96ac818f76ccf9419a6e70e9ec00555f9d487a9e
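A minimal sketch of the new notion (illustrative Java): in a simple counted loop, the back-edge is the edge from the loop body back to the loop head, independent of bytecode offsets, and the suspend check is generated on that edge.

```java
// Illustrative only: a plain counted loop. Under the redefinition, the
// edge from the end of the body back to the 'while' condition (the loop
// head) is the back-edge, and the suspend check is placed on it; a
// fall-through to the loop head gets an extra GOTO carrying the check.
public class SuspendLoop {
    static int sum(int n) {
        int s = 0;
        int i = 0;
        while (i < n) {  // loop head; the edge from the body back here is the back-edge
            s += i;
            i++;
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sum(5));  // prints 10
    }
}
```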
|
b218c858606641050d13f35a2365168b89b44841 |
|
08-Dec-2014 |
Zheng Xu <zheng.xu@arm.com> |
ART: Clear use count for unused VRs. The use count of temp VR should be cleared when we replace "CMP_XXX vA, vB, vC" and "IF_XXX vA" with "kMirOpFusedCmpXXX vB, vC". Otherwise, the backend may allocate a physical register for the unused vA. Change-Id: I43ad37d0e7161ec3de154de8888caa94603f7715
|
9af6929d12d843ef1891fc0733746f7fa7ecedd4 |
|
05-Dec-2014 |
Vladimir Marko <vmarko@google.com> |
Quick: Fix code layout pass; don't terminate too early. Change-Id: I0c417fdc2ee8213672a7568fe228e5e2f1c1ab61
|
eb1404a0b59c4a981af61852c94129507efc331a |
|
04-Sep-2014 |
Jean Christophe Beyler <jean.christophe.beyler@intel.com> |
ART: Fix variable formatting and CodeLayout's multiple visits The CodeLayout pass visits the same BasicBlock multiple times. This patch fixes that issue to reduce its overhead. The patch also fixes cUnit to c_unit in the bb_optimization files. Signed-off-by: Jean Christophe Beyler <jean.christophe.beyler@intel.com> Bug: 18507840 (cherry picked from commit 75bcc3780bc40dd7c265e150aff5b891135ff6e3) Change-Id: I4675ba0b4886c35f0093ac54e171dd87548f60c8
|
7ab2fce83cd72c0963128b098a78606e77ea15d5 |
|
28-Nov-2014 |
Vladimir Marko <vmarko@google.com> |
Refactor handling of conditional branches with known result. Detect IF_cc and IF_ccZ instructions with known results in the basic block optimization phase (instead for the codegen phase) and replace them with GOTO/NOP. Kill blocks that are unreachable as a result. Change-Id: I169c2fa6f1e8af685f4f3a7fe622f5da862ce329
|
26e7d454b9924f3673b075b05e4c604ad658a062 |
|
24-Nov-2014 |
Vladimir Marko <vmarko@google.com> |
Eliminate suspend checks on back-edges to return insn. This optimization seems to have been broken for a long time. Change-Id: I62ec85c71bb5253917ad9465a952911e917f6b52
|
c7a77bfecb77c7a4538775411049e76eb853641c |
|
30-Oct-2014 |
Razvan A Lupusoru <razvan.a.lupusoru@intel.com> |
ART: Fix NullCheckElimination, BBCombine, and SplitBlock NullCheckElimination had one issue and one assumption that could be broken: -It ignored that compiler temps may hold references. -It assumed there are no phi nodes, even though the algorithm can run after phi nodes are inserted. BBCombine also had an issue in that it did not properly maintain the instruction links. The logic has been updated to use utility methods. SplitBlock had an issue with being called after extended instructions are inserted. Namely, in the case in question, it was called after SpecialMethodInliner was through and, although it was doing the semantically correct thing, it was hitting a DCHECK due to the kMirOpNullCheck. Change-Id: Id5863ddb0762064e74bf1d9173b8db5cb47cf3b9 Signed-off-by: Razvan A Lupusoru <razvan.a.lupusoru@intel.com>
|
af6925b7fe5dc5a3c8d52ee3370e86e75400f873 |
|
31-Oct-2014 |
Vladimir Marko <vmarko@google.com> |
Rewrite GVN's field id and field type handling. Create a helper unit for dex insn classification and cache dex field type (as encoded in the insn) in the MirFieldInfo. Use this for cleanup and a few additional DCHECKs. Change the GVN's field id to match the field lowering info index (MIR::meta::{i,s}field_lowering_info), except where multiple indexes refer to the same field and we use the lowest of the applicable indexes. Use the MirMethodInfo from MIRGraph to retrieve field type for GVN using this index. This slightly reduces GVN compilation time and prepares for further compilation time improvements. Change-Id: I1b1247cdb8e8b6897254e2180f3230f10159bed5
|
f585e549682a98eec12f92033e9634dc162b7df8 |
|
21-Nov-2014 |
Vladimir Marko <vmarko@google.com> |
Clean up MIRGraph pass temporaries. Create a union of pass-specific structs with temporaries instead of shared temporaries with common names. Change-Id: Id80d3b12c48139af1580b0839c21e07e7afd0ed5
|
08794a9481dec1da98f1a0c668c6dab3907b342a |
|
06-Nov-2014 |
Serguei Katkov <serguei.i.katkov@intel.com> |
Fix CombineBlocks optimization (df_attributes & DF_DA) means Get not Put. The patch fixes the condition to eliminate catch block for get/put operations. Change-Id: I48036f3614de5116e27c0d6e9a7a342432c9a828 Signed-off-by: Serguei Katkov <serguei.i.katkov@intel.com>
|
785d2f2116bb57418d81bb55b55a087afee11053 |
|
04-Nov-2014 |
Andreas Gampe <agampe@google.com> |
ART: Replace COMPILE_ASSERT with static_assert (compiler) Replace all occurrences of COMPILE_ASSERT in the compiler tree. Change-Id: Icc40a38c8bdeaaf7305ab3352a838a2cd7e7d840
|
6a3c1fcb4ba42ad4d5d142c17a3712a6ddd3866f |
|
31-Oct-2014 |
Ian Rogers <irogers@google.com> |
Remove -Wno-unused-parameter and -Wno-sign-promo from base cflags. Fix associated errors about unused parameters and implicit sign conversions. For sign conversion this was largely in the area of enums, so add ostream operators for the affected enums and fix tools/generate-operator-out.py. Tidy arena allocation code and arena allocated data types, rather than fixing new and delete operators. Remove dead code. Change-Id: I5b433e722d2f75baacfacae4d32aef4a828bfe1b
|
66c6d7bdfdd535e6ecf4461bba3804f1a7794fcd |
|
16-Oct-2014 |
Vladimir Marko <vmarko@google.com> |
Rewrite class initialization check elimination. Split the notion of type being in dex cache away from the class being initialized. Include static invokes in the class initialization elimination pass. Change-Id: Ie3760d8fd55b987f9507f32ef51456a57d79e3fb
|
415ac88a6471792a28cf2b457fe4ba9dc099396e |
|
30-Sep-2014 |
Vladimir Marko <vmarko@google.com> |
Quick: In GVN, apply modifications early if outside loop. To improve GVN performance, apply modifications to blocks outside loops during the initial convergence phase. During the post processing phase, apply modifications only to the blocks belonging to loops. Also clean up the check whether to run the LVN and add the capability to limit the maximum number of nested loops we allow the GVN to process. Change-Id: Ie7f1254f91a442397c06a325d5d314d8f58e5012
|
312eb25273dc0e2f8880d80f00c5b0998febaf7b |
|
07-Oct-2014 |
Vladimir Marko <vmarko@google.com> |
Quick: Improve the BBCombine pass. Eliminate exception edges for insns that cannot throw even when inside a try-block. Run the BBCombine pass before the SSA transformation to reduce the compilation time. Bug: 16398693 Change-Id: I8e91df593e316c994679b9d482b0ae20700b9499
|
7baa6f8783b12bb4b159ed4648145be5912215f2 |
|
09-Oct-2014 |
Vladimir Marko <vmarko@google.com> |
Rewrite null check elimination to work on dalvik regs. And move the null check and class init check elimination before the SSA transformation. The new pass ordering is in anticipation of subsequent changes. (An improved class init check elimination can benefit special method inlining. An improved block combination pass before SSA transformation can improve compilation time.) Also add tests for the NCE. Change-Id: Ie4fb1880e06334a703295aef454b437d58a3e878
|
423b137214debfa066522763a8e78511d300c8c9 |
|
15-Oct-2014 |
Yevgeny Rouban <yevgeny.y.rouban@intel.com> |
ART: NullCheckElimination should converge with MIR_IGNORE_NULL_CHECK If the MIRGraph::EliminateNullChecksAndInferTypes() function managed to prove that some regs are non-null, it sets the flag MIR_IGNORE_NULL_CHECK and resets this flag for all the other regs. If some previous optimization has already set MIR_IGNORE_NULL_CHECK, it can be reset by EliminateNullChecksAndInferTypes. This way NullCheckElimination discards some optimization efforts. Optimization passes should not reset MIR_IGNORE_NULL_CHECK unless they are 100% sure the NullCheck is needed. This patch makes the NCE_TypeInference pass merge its own calculated MIR_IGNORE_NULL_CHECK with the one that came from previous optimizations. Technically, NCE_TypeInference calculates the flag in a temporary MIR_MARK bit while preserving MIR_IGNORE_NULL_CHECK. Then, at the end of the NCE pass, MIR_MARK is or-ed into MIR_IGNORE_NULL_CHECK. Change-Id: Ib26997c70ecf2c158f61496dee9b1fe45c812096 Signed-off-by: Yevgeny Rouban <yevgeny.y.rouban@intel.com>
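The merge can be sketched with illustrative flag constants (the values here are made up; ART's real MIR flag bits differ): the pass computes its result in a temporary MIR_MARK bit, then ORs it into MIR_IGNORE_NULL_CHECK so results from earlier passes are preserved instead of reset.

```java
// Illustrative sketch of the flag-merge described in the commit message.
// Flag values are invented for the example; only the or-merge idea is real.
public class NceFlags {
    static final int MIR_IGNORE_NULL_CHECK = 1 << 0;  // final flag read by codegen
    static final int MIR_MARK = 1 << 15;              // pass-local scratch bit

    // End-of-pass merge: OR the pass's MIR_MARK result into
    // MIR_IGNORE_NULL_CHECK (never clearing an earlier pass's result),
    // then drop the temporary bit.
    static int merge(int flags) {
        if ((flags & MIR_MARK) != 0) {
            flags |= MIR_IGNORE_NULL_CHECK;
        }
        return flags & ~MIR_MARK;
    }

    public static void main(String[] args) {
        // Pass proved the check redundant: flag gets set.
        System.out.println(merge(MIR_MARK) == MIR_IGNORE_NULL_CHECK);            // true
        // Earlier pass already set it; this pass did not: flag is preserved.
        System.out.println(merge(MIR_IGNORE_NULL_CHECK) == MIR_IGNORE_NULL_CHECK); // true
    }
}
```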
|
7cd01f5d496c384874ea8c21eafb2b6479833e6a |
|
13-Oct-2014 |
Vladimir Marko <vmarko@google.com> |
Add regression test for null check elimination. Prompted by https://android-review.googlesource.com/110090 Bug: 17969907 Change-Id: I938c27cda0681b9431d69baf4eafa7ca2f9b5c9c
|
cb46ee13a5683c2973244da964887a448e61b6ec |
|
13-Oct-2014 |
Vladimir Marko <vmarko@google.com> |
Revert "ART: fix NullCheckElimination to preserve MIR_IGNORE_NULL_CHECK" This reverts commit 504b7882fbb841787e350f2da54b1fa9171ce82a. Change-Id: I41c7a03c49f7904370a64c6ececc89146ff735c8
|
5229cf17e3240d55f043c0a9308e22d967f897dc |
|
09-Oct-2014 |
Vladimir Marko <vmarko@google.com> |
Quick: Reduce memory usage and improve compile time. Move the def-block-matrix from Arena to ScopedArena. Remove BasicBlockDataFlow::ending_check_v and use a temporary bit matrix instead. Remove unused BasicBlockDataFlow::phi_v. Avoid some BitVector::Copy() at the end of null and clinit check elimination passes when the contents of the source BitVector is no longer needed. Change-Id: I8111b2f8a51e63075aa124b528d61b79b6933274
|
67c72b8882f539afd1c8643396fce417cadb85d5 |
|
09-Oct-2014 |
Vladimir Marko <vmarko@google.com> |
Quick: Separate null check elimination and type inference. Change-Id: I4566ae9354c91ca935481cb4f5b729bba05c1592
|
504b7882fbb841787e350f2da54b1fa9171ce82a |
|
01-Oct-2014 |
Yevgeny Rouban <yevgeny.y.rouban@intel.com> |
ART: fix NullCheckElimination to preserve MIR_IGNORE_NULL_CHECK If the MIRGraph::EliminateNullChecksAndInferTypes() function managed to prove that some regs are non-null, it sets the flag MIR_IGNORE_NULL_CHECK and resets this flag for all the other regs. If some previous optimization has already set MIR_IGNORE_NULL_CHECK, it can be reset by EliminateNullChecksAndInferTypes. This way NullCheckElimination discards some optimization efforts. Optimization passes should not reset MIR_IGNORE_NULL_CHECK unless they are 100% sure the NullCheck is needed. This patch makes the NCE_TypeInference pass conservative in resetting MIR_IGNORE_NULL_CHECK. Change-Id: I4ea74020968b5c5bd8e3af48211ffd4c6afd7f80 Signed-off-by: Yevgeny Rouban <yevgeny.y.rouban@intel.com>
|
8ac41af13c8e48ede6a7c8a3bf2fb1a414326038 |
|
02-Oct-2014 |
Chao-ying Fu <chao-ying.fu@intel.com> |
ART: Fix SelectKind to work with nullptr This patch fixes SelectKind to return kSelectNone when MIR is nullptr to avoid segmentation fault. Change-Id: I174ff5c153e03c1a1e2ef8bc68f7fb50e8a9bf3f Signed-off-by: Chao-ying Fu <chao-ying.fu@intel.com>
|
750359753444498d509a756fa9a042e9f3c432df |
|
12-Sep-2014 |
Razvan A Lupusoru <razvan.a.lupusoru@intel.com> |
ART: Deprecate CompilationUnit's code_item The code_item field is tracked in both the CompilationUnit and the MIRGraph. However, the existence of this field in CompilationUnit promotes bad practice because it creates the assumption that only a single code_item can be part of a method. This patch deprecates the field and updates MIRGraph methods to make it easy to get the same information as before. Part of this is the update to the GetNumDalvikInsn interface, which now counts all code_items in the MIRGraph. Some dead code was also removed because it was not friendly to these updates. Change-Id: Ie979be73cc56350321506cfea58f06d688a7fe99 Signed-off-by: Razvan A Lupusoru <razvan.a.lupusoru@intel.com>
|
e39c54ea575ec710d5e84277fcdcc049f8acb3c9 |
|
22-Sep-2014 |
Vladimir Marko <vmarko@google.com> |
Deprecate GrowableArray, use ArenaVector instead. Purge GrowableArray from Quick and Portable. Remove GrowableArray<T>::Iterator. Change-Id: I92157d3a6ea5975f295662809585b2dc15caa1c6
|
c80605d6f13b0f1e5ac5446c755e6d210f06b19a |
|
11-Sep-2014 |
Razvan A Lupusoru <razvan.a.lupusoru@intel.com> |
ART: Consider clinit elimination for inlining Currently the inliner rejects inlining a method if class initialization is needed. However, if it has already been proven that initialization was done, then inlining can safely proceed. Change-Id: Iaf1638fcfffff1bcf66010dc39090c77e009a1bb Signed-off-by: Razvan A Lupusoru <razvan.a.lupusoru@intel.com>
|
75bcc3780bc40dd7c265e150aff5b891135ff6e3 |
|
04-Sep-2014 |
Jean Christophe Beyler <jean.christophe.beyler@intel.com> |
ART: Fix variable formatting and CodeLayout's multiple visits The CodeLayout pass visits the same BasicBlock multiple times. This patch fixes that issue to reduce its overhead. The patch also fixes cUnit to c_unit in the bb_optimization files. Change-Id: I76afa73dc79b9ee9993223c066a974ca81adf203 Signed-off-by: Jean Christophe Beyler <jean.christophe.beyler@intel.com>
|
8d0d03e24325463f0060abfd05dba5598044e9b1 |
|
07-Jun-2014 |
Razvan A Lupusoru <razvan.a.lupusoru@intel.com> |
ART: Change temporaries to positive names Changes compiler temporaries to have positive names. The numbering now puts them above the code VRs (locals + ins, in that order). The patch also introduces APIs to query the number of temporaries, locals and ins. The compiler temp infrastructure suffered from several issues which are also addressed by this patch: -There is no longer a queue of compiler temps. This would be polluted with Method* when post opts were called multiple times. -Sanity checks have been added to allow requesting of temps from BE and to prevent temps after frame is committed. -None of the structures holding temps can overflow because they are allocated to allow holding maximum temps. Thus temps can be requested by BE with no problem. -Since the queue of compiler temps is no longer maintained, it is no longer possible to refer to a temp that has invalid ssa (because it was requested before ssa was run). -The BE can now request temps after all ME allocations and it is guaranteed to actually receive them. -ME temps are now treated like normal VRs in all cases with no special handling. Only the BE temps are handled specially because there are no references to them from MIRs. -Deprecated and removed several fields in CompilationUnit that saved register information and updated callsites to call the new interface from MIRGraph. Change-Id: Ia8b1fec9384a1a83017800a59e5b0498dfb2698c Signed-off-by: Razvan A Lupusoru <razvan.a.lupusoru@intel.com> Signed-off-by: Udayan Banerji <udayan.banerji@intel.com>
|
fb0ea2df9a52e5db18e1aa85da282938bbd92f2e |
|
29-Jul-2014 |
Jean Christophe Beyler <jean.christophe.beyler@intel.com> |
ART: Extending FlagsOf Modified FlagsOf to handle extended flags. Change-Id: I9e47e0c42816136b2b53512c914200dd9dd11376 Signed-off-by: Jean Christophe Beyler <jean.christophe.beyler@intel.com>
|
53c913bb71b218714823c8c87a1f92830c336f61 |
|
13-Aug-2014 |
Andreas Gampe <agampe@google.com> |
ART: Clean up compiler Clean up the compiler: fewer extern functions, disentangle compilers, hide some compiler specifics, lower global includes. Change-Id: Ibaf88d02505d86994d7845cf0075be5041cc8438
|
d04d309276a6d35b34ff9805de3754299bbde4a9 |
|
04-Aug-2014 |
Razvan A Lupusoru <razvan.a.lupusoru@intel.com> |
ART: Support MIRGraph constant interface -Adds a helper to be able to ask for a wide constant. -Allows MIRGraph to provide interface to set constants. Change-Id: Id282ee1604a0bd0bce6f495176d6bca35dcd5a00 Signed-off-by: Razvan A Lupusoru <razvan.a.lupusoru@intel.com>
|
e77493c7217efdd1a0ecef521a6845a13da0305b |
|
21-Aug-2014 |
Ian Rogers <irogers@google.com> |
Make common BitVector operations inline-able. Change-Id: Ie25de4fae56c6712539f04172c42e3eff57df7ca
|
f1770fdeb0911d11489b7d495ce31420ac0cdc61 |
|
12-Aug-2014 |
Junmo Park <junmoz.park@samsung.com> |
Fix missing operation in CombineBlocks The explicit throw and conditional branch flags also need to be moved when combining blocks. There are no Quick compiler bugs before this CL, so a regression test is not necessary. Change-Id: I5f7c0f261fff5f7a46b32763096ab4fe85b2c0c0 Signed-off-by: Junmo Park <junmoz.park@samsung.com>
|
07206af370746e6d7cf528e655b4854e7a865cfa |
|
29-Jul-2014 |
Vladimir Marko <vmarko@google.com> |
Reduce time and memory usage of GVN. Filter out dead sregs in GVN. Reclaim memory after each LVN in the GVN modification phase. Bug: 16398693 (cherry picked from commit b19955d3c8fbd9588f7e17299e559d02938154b6) Change-Id: I33c7912258a768b4c99d787056979fbc3b023b3b
|
b19955d3c8fbd9588f7e17299e559d02938154b6 |
|
29-Jul-2014 |
Vladimir Marko <vmarko@google.com> |
Reduce time and memory usage of GVN. Filter out dead sregs in GVN. Reclaim memory after each LVN in the GVN modification phase. Bug: 16398693 Change-Id: I8c88c3009663754e1b66c0ef3f62c3b93276e385
|
293caab66e9b1e4129843f6bdeb31353bb77ccef |
|
11-Jul-2014 |
Vladimir Marko <vmarko@google.com> |
Fix null pointer check elimination for catch entries. Remove the special treatment of catch blocks for null pointer check elimination and class initialization check elimination. In both cases this can help optimizing previously missed cases. In the null check case, this avoids incorrect optimization as exposed by the new test. Bug: 16230771 (cherry picked from 0a810d2eab27cd097ebd09a44f0ce83aa608285b) Change-Id: I0764f47fa0aacfa89904a82e9528177b3ad67e31
|
1fd4821f6b3ac57a44c2ce91025686da4641d197 |
|
10-Jul-2014 |
Vladimir Marko <vmarko@google.com> |
Rewrite topological sort order and improve GVN. Rewrite the topological sort order to include a full loop before the blocks that go after the loop. Add a new iterator class LoopRepeatingTopologicalSortIterator that differs from the RepeatingTopologicalSortIterator by repeating only loops and repeating them early. It returns to the loop head if the head needs recalculation when we reach the end of the loop. In GVN, use the new loop-repeating topological sort iterator and for a loop head merge only the preceding blocks' LVNs if we're not currently recalculating this loop. Also fix LocalValueNumbering::InPlaceIntersectMaps() which was keeping only the last element of the intersection, avoid some unnecessary processing during LVN merge and add some missing braces to MIRGraph::InferTypeAndSize(). Bug: 16398693 (cherry picked from 55fff044d3a4f7196098e25bab1dad106d9b54a2) Change-Id: Id7bcd99c8abed1b7500b9ef723313d4c5fc6f1e8
|
55fff044d3a4f7196098e25bab1dad106d9b54a2 |
|
10-Jul-2014 |
Vladimir Marko <vmarko@google.com> |
Rewrite topological sort order and improve GVN. Rewrite the topological sort order to include a full loop before the blocks that go after the loop. Add a new iterator class LoopRepeatingTopologicalSortIterator that differs from the RepeatingTopologicalSortIterator by repeating only loops and repeating them early. It returns to the loop head if the head needs recalculation when we reach the end of the loop. In GVN, use the new loop-repeating topological sort iterator and for a loop head merge only the preceding blocks' LVNs if we're not currently recalculating this loop. Also fix LocalValueNumbering::InPlaceIntersectMaps() which was keeping only the last element of the intersection, avoid some unnecessary processing during LVN merge and add some missing braces to MIRGraph::InferTypeAndSize(). Bug: 16398693 Change-Id: I4e10d4acb626a5b8a28ec0de106a7b37f9cbca32
|
0a810d2eab27cd097ebd09a44f0ce83aa608285b |
|
11-Jul-2014 |
Vladimir Marko <vmarko@google.com> |
Fix null pointer check elimination for catch entries. Remove the special treatment of catch blocks for null pointer check elimination and class initialization check elimination. In both cases this can help optimizing previously missed cases. In the null check case, this avoids incorrect optimization as exposed by the new test. Bug: 16230771 Change-Id: I834b7a1835d9ca8572f4f8d8516d93913c701ad1
|
597da1f76e542b9699f8e5f8cacfea84f8854429 |
|
15-Jul-2014 |
Serguei Katkov <serguei.i.katkov@intel.com> |
SetConstantWide should mark both SSA regs as constant Without this change, users have no way to get the high part of a wide register using the utility functions. Change-Id: I2f56229ffb98276768a77c1e4a2913f288999328 Signed-off-by: Serguei Katkov <serguei.i.katkov@intel.com>
|
2ab40eb3c23559205ac7b9b039bd749458e8a761 |
|
02-Jun-2014 |
Jean Christophe Beyler <jean.christophe.beyler@intel.com> |
ART: Add Invokes to DecodedInstruction Add a method Invokes to test for the kInvoke flag. Also moved IsPseudoMirOp to DecodedInstruction to use it from the various query methods. Change-Id: I59a2056b7b802b8393fa2b0d977304d252b38c89 Signed-off-by: Jean Christophe Beyler <jean.christophe.beyler@intel.com>
|
cb804742f8786826286046e9c4489ef9d7ceb7ec |
|
10-Jul-2014 |
Razvan A Lupusoru <razvan.a.lupusoru@intel.com> |
ART: Rename CallInlining to SpecialMethodInliner The CallInlining pass is used to inline just a set of pre-categorized methods. This set of methods includes empty, instance getters, instance setters, argument return, and constant return. Since it inlines only "special methods", it makes sense to name it to reflect that. Change-Id: Iea2c1820080b0c212c99e977f6b5d34ee0774868 Signed-off-by: Razvan A Lupusoru <razvan.a.lupusoru@intel.com>
|
95a059793c4c194f026afc74c713cc295d75d91a |
|
30-May-2014 |
Vladimir Marko <vmarko@google.com> |
Global Value Numbering. Implement the Global Value Numbering for optimization purposes. Use it for the null check and range check elimination as the LVN used to do. The order of evaluation of basic blocks needs improving as we currently fail to recognize some obviously identical values in methods with more than one loop. (There are three disabled tests that check this. This is just a missed optimization, not a correctness issue.) Change-Id: I0d0ce16b2495b5a3b17ad1b2b32931cd69f5a25a
|
f418f3227e0001c8d75257ceff0c248cc406d81a |
|
09-Jul-2014 |
Vladimir Marko <vmarko@google.com> |
Handle potential <clinit>() correctly in LVN. Bug: 16177324 Change-Id: I727ab6ce9aa9a608fe570cf391a6b732a12a8655
|
c26efa86c02c214665d30760df9d6f370d1c9ef1 |
|
01-Jun-2014 |
Jean Christophe Beyler <jean.christophe.beyler@intel.com> |
ART: Update the DecodedInstruction for the Fused extended bytecodes. The BasicBlockOpt function creates extended MIRs but does not update the associated DecodedInstruction. This makes it impossible to re-run the SSA algorithm. Change-Id: I4a6393281e0ee52094f72f3de0001858c3d41ba3 Signed-off-by: Jean Christophe Beyler
|
ffddfdf6fec0b9d98a692e27242eecb15af5ead2 |
|
03-Jun-2014 |
Tim Murray <timmurray@google.com> |
DO NOT MERGE Merge ART from AOSP to lmp-preview-dev. Change-Id: I0f578733a4b8756fd780d4a052ad69b746f687a9
|
35ba7f3a78d38885ec54e61ed060d2771eeceea7 |
|
31-May-2014 |
buzbee <buzbee@google.com> |
Quick compiler: fix array overrun. MIRGraph::InlineCalls() was using the MIR opcode to recover Dalvik instruction flags - something that is only valid for Dalvik opcodes and not the set of extended MIR opcodes. This is probably the 3rd or 4th time we've had a bug using the MIR opcode in situations that are only valid for the Dalvik opcode subset. I took the opportunity to scan the code for other cases of this (didn't find any), and did some cleanup while I was in the neighborhood. We should probably rework the DalvikOpcode/MirOpcode model whenever we get around to removing DalvikInstruction from MIR. Internal bug b/15352667: out-of-bound access in mir_optimization.cc Change-Id: I75f06780468880892151e3cdd313e14bfbbaa489
|
05e27ff942b42e123ea9519d13d31070ab96f0ac |
|
28-May-2014 |
Serban Constantinescu <serban.constantinescu@arm.com> |
AArch64: Enable extended MIR This patch enables all the extended MIR opcodes for ARM64. Please note that currently the compiler will never generate these opcodes since the BB optimisations are not enabled. Change-Id: Ia712b071f62301db868297d37567795128b5bf2e Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>
|
2ac01fc279e8397beacf90302b0f215040eb78fa |
|
22-May-2014 |
Vladimir Marko <vmarko@google.com> |
Improve tracking of memory locations in LVN. Rewrite the tracking of values stored in memory to allow recognizing the same value after storing it in memory and loading it back to vreg. Drop reliance on value name ordering for memory versioning in preparation for GVN. Also fix a few minor issues in LVN. Change-Id: Ifabe2d47d669d9ec43942cea6fd157e41af77ec8
|
54d36b6bf280c22ae69280feffa9d65d6417b153 |
|
23-May-2014 |
Chao-ying Fu <chao-ying.fu@intel.com> |
Create two CompilerTemp for a wide compiler temp We create a new CompilerTemp for the high part of a wide compiler temp to fix counting compiler temps. Otherwise, assertion failures may happen inside GetNumUsedCompilerTemps(), if there are any wide compiler temps. Previously, we never asked for a wide compiler temp, so we did not hit the issue. Change-Id: I9e79ad15e4192665b9d8a9dae5a5453496e48a79 Signed-off-by: Chao-ying Fu <chao-ying.fu@intel.com>
|
700a402244a1a423da4f3ba8032459f4b65fa18f |
|
20-May-2014 |
Ian Rogers <irogers@google.com> |
Now we have a proper C++ library, use std::unique_ptr. Also remove the Android.libcxx.mk and other bits of stlport compatibility mechanics. Change-Id: Icdf7188ba3c79cdf5617672c1cfd0a68ae596a61
|
69f08baaa4b70ce32a258f3da43cf12f2a034696 |
|
11-Apr-2014 |
Vladimir Marko <vmarko@google.com> |
Clean up ScopedArenaAllocatorAdapter. Make the adapter equality-comparable, define aliases for containers using the adapter and use those aliases. Fix DebugStackIndirectTopRefImpl assignment. Change-Id: I689aa8a93d169f63a659dec5040567d7b1343277
|
b5c9b4008760c9042061490f22aaff990ed04c9a |
|
30-Apr-2014 |
Jean Christophe Beyler <jean.christophe.beyler@intel.com> |
ART: BitVector and Optimization changes - The BitVector function SameBitsSet was implemented a bit upside down; this patch fixes it. - Two optimizations are also fixed: - The null check pass now uses SameBitsSet instead of equality, due to a subsequent change that will make it no longer always the case that the compared bit vectors are the same size. - The fused optimization assumed a predecessor would have an instruction. Change-Id: I9ef1c793964b18dc0f47baf9d1f361448bb053a3 Signed-off-by: Jean Christophe Beyler <jean.christophe.beyler@intel.com> Signed-off-by: Razvan A Lupusoru <razvan.a.lupusoru@intel.com> Signed-off-by: Yixin Shou <yixin.shou@intel.com> Signed-off-by: Chao-ying Fu <chao-ying.fu@intel.com> Signed-off-by: Udayan Banerji <udayan.banerji@intel.com>
|
0add77a86599260aba3ea4b56e9db3da6bb881a8 |
|
06-May-2014 |
Mark Mendell <mark.p.mendell@intel.com> |
ART: Ensure use counts updated when adding SSA reg Ensure that matching data structures are updated when adding SSA registers late in the compile. Change-Id: I8e664dddf52c1a9095ba5b7a8df84e5a733bbc43 Signed-off-by: Mark Mendell <mark.p.mendell@intel.com>
|
091cc408e9dc87e60fb64c61e186bea568fc3d3a |
|
31-Mar-2014 |
buzbee <buzbee@google.com> |
Quick compiler: allocate doubles as doubles Significant refactoring of register handling to unify usage across all targets & 32/64 backends. Reworked RegStorage encoding to allow expanded use of x86 xmm registers; removed vector registers as a separate register type. Reworked RegisterInfo to describe aliased physical registers. Eliminated quite a bit of target-specific code and generalized common code. Use of RegStorage instead of int for registers now propagated down to the NewLIRx() level. In future CLs, the NewLIRx() routines will be replaced with versions that are explicit about what kind of operand they expect (RegStorage, displacement, etc.). The goal is to eventually use RegStorage all the way to the assembly phase. TBD: MIPS needs verification. TBD: Re-enable liveness tracking. Change-Id: I388c006d5fa9b3ea72db4e37a19ce257f2a15964
|
29a2648821ea4d0b5d3aecb9f835822fdfe6faa1 |
|
03-May-2014 |
Ian Rogers <irogers@google.com> |
Move DecodedInstruction into MIR. Change-Id: I188dc7fef4f4033361c78daf2015b869242191c6
|
cc794c3dc5b45601da23fb0d7bc16f9b4ef04065 |
|
02-May-2014 |
Jean Christophe Beyler <jean.christophe.beyler@intel.com> |
ART: Move oat_data_flow_attributes_ to private and add an API oat_data_flow_attributes_ had no checking mechanism to ensure bounds correctness. This fix adds one and also offers two functions to retrieve the attributes: one using the MIR and one using the DecodedInstruction. Change-Id: Ib4f1f749efb923a803d364a4eea83a174527a644 Signed-Off-By: Jean Christophe Beyler <jean.christophe.beyler@intel.com>
|
9d894662426e413454935e483d56a8cc33924174 |
|
21-Apr-2014 |
Alexei Zavjalov <alexei.zavjalov@intel.com> |
Skip BBs without SSA representation in the Constant Propagation phase In some cases the constant propagation optimization may receive a MIR graph where some of the BBs have no predecessors and are not transformed to SSA form. If such a BB has operations on constants, this may lead to a segfault. This patch adds a condition so that only BBs with an SSA representation are processed. Change-Id: I816d46b2492c5bd4748f983c3725b4798f9ebd68 Signed-off-by: Alexei Zavjalov <alexei.zavjalov@intel.com>
|
6a58cb16d803c9a7b3a75ccac8be19dd9d4e520d |
|
02-Apr-2014 |
Dmitry Petrochenko <dmitry.petrochenko@intel.com> |
art: Handle x86_64 architecture equal to x86 This patch forces FE/ME to treat x86_64 exactly as x86. The x86_64 logic will be revised later when the assembler is ready. Change-Id: I4a92477a6eeaa9a11fd710d35c602d8d6f88cbb6 Signed-off-by: Dmitry Petrochenko <dmitry.petrochenko@intel.com>
|
9820b7c1dc70e75ad405b9e6e63578fa9fe94e94 |
|
02-Jan-2014 |
Vladimir Marko <vmarko@google.com> |
Early inlining of simple methods. Inlining "special" methods: empty methods, methods returning constants or their arguments, simple getters and setters. Bug: 8164439 Change-Id: I8c7fa9c14351fbb2470000b378a22974daaef236
|
bfea9c29e809e04bde4a46591fea64c5a7b922fb |
|
17-Jan-2014 |
Vladimir Marko <vmarko@google.com> |
Class initialization check elimination. Also, move null check elimination temporaries to the ScopedArenaAllocator and reuse the same variables in the class initialization check elimination. Change-Id: Ic746f95427065506fa6016d4931e4cb8b34937af
|
b34f69ab43aaf7a6e6045c95f398baf566ef5023 |
|
07-Mar-2014 |
Nicolas Geoffray <ngeoffray@google.com> |
Add command line support for enabling the optimizing compiler. Also run tests with the optimizing compiler enabled when the file art/USE_OPTIMIZING_COMPILER is present. Change-Id: Ibc33eed62a43547bc3b9fe786d014c0d81b5add8
|
83cc7ae96d4176533dd0391a1591d321b0a87f4f |
|
12-Feb-2014 |
Vladimir Marko <vmarko@google.com> |
Create a scoped arena allocator and use that for LVN. This saves more than 0.5s of boot.oat compilation time on Nexus 5. TODO: Move other stuff to the scoped allocator. This CL alone increases the peak memory allocation. By reusing the memory for other parts of the compilation we should reduce this overhead. Change-Id: Ifbc00aab4f3afd0000da818dfe68b96713824a08
|
a1a7074eb8256d101f7b5d256cda26d7de6ce6ce |
|
03-Mar-2014 |
Vladimir Marko <vmarko@google.com> |
Rewrite kMirOpSelect for all IF_ccZ opcodes. Also improve special cases for ARM and add tests. Change-Id: I06f575b9c7b547dbc431dbfadf2b927151fe16b9
|
00e1ec6581b5b7b46ca4c314c2854e9caa647dd2 |
|
28-Feb-2014 |
Bill Buzbee <buzbee@android.com> |
Revert "Revert "Rework Quick compiler's register handling"" This reverts commit 86ec520fc8b696ed6f164d7b756009ecd6e4aace. Ready. Fixed the original typo, plus some mechanical changes for rebasing. Still needs additional testing, but the problem with the original CL appears to have been a typo in the definition of the x86 double return template RegLocation. Change-Id: I828c721f91d9b2546ef008c6ea81f40756305891
|
86ec520fc8b696ed6f164d7b756009ecd6e4aace |
|
26-Feb-2014 |
Bill Buzbee <buzbee@android.com> |
Revert "Rework Quick compiler's register handling" This reverts commit 2c1ed456dcdb027d097825dd98dbe48c71599b6c. Change-Id: If88d69ba88e0af0b407ff2240566d7e4545d8a99
|
2c1ed456dcdb027d097825dd98dbe48c71599b6c |
|
20-Feb-2014 |
buzbee <buzbee@google.com> |
Rework Quick compiler's register handling For historical reasons, the Quick backend found it convenient to consider all 64-bit Dalvik values held in registers to be contained in a pair of 32-bit registers. Though this worked well for ARM (with double-precision registers also treated as a pair of 32-bit single-precision registers) it doesn't play well with other targets. And, it is somewhat problematic for 64-bit architectures. This is the first of several CLs that will rework the way the Quick backend deals with physical registers. The goal is to eliminate the "64-bit value backed with 32-bit register pair" requirement from the target-independent portions of the backend and support 64-bit registers throughout. The key RegLocation struct, which describes the location of Dalvik virtual register & register pairs, previously contained fields for high and low physical registers. The low_reg and high_reg fields are being replaced with a new type: RegStorage. There will be a single instance of RegStorage for each RegLocation. Note that RegStorage does not increase the space used. It is 16 bits wide, the same as the sum of the 8-bit low_reg and high_reg fields. At a target-independent level, it will describe whether the physical register storage associated with the Dalvik value is a single 32 bit, single 64 bit, pair of 32 bit or vector. The actual register number encoding is left to the target-dependent code layer. Because physical register handling is pervasive throughout the backend, this restructuring necessarily involves large CLs with lots of changes. I'm going to roll these out in stages, and attempt to segregate the CLs with largely mechanical changes from those which restructure or rework the logic. This CL is of the mechanical change variety - it replaces low_reg and high_reg from RegLocation and introduces RegStorage. It also includes a lot of new code (such as many calls to GetReg()) that should go away in upcoming CLs.
The tentative plan for the subsequent CLs is: o Rework standard register utilities such as AllocReg() and FreeReg() to use RegStorage instead of ints. o Rework the target-independent GenXXX, OpXXX, LoadValue, StoreValue, etc. routines to take RegStorage rather than int register encodings. o Take advantage of the vector representation and eliminate the current vector field in RegLocation. o Replace the "wide" variants of codegen utilities that take low_reg/high_reg pairs with versions that use RegStorage. o Add 64-bit register target independent codegen utilities where possible, and where not virtualize with 32-bit general register and 64-bit general register variants in the target dependent layer. o Expand/rework the LIR def/use flags to allow for more registers (currently, we lose out on 16 MIPS floating point regs as well as ARM's D16..D31 for lack of space in the masks). o [Possibly] move the float/non-float determination of a register from the target-dependent encoding to RegStorage. In other words, replace IsFpReg(register_encoding_bits). At the end of the day, all code in the target independent layer should be using RegStorage, as should much of the target dependent layer. Ideally, we won't be using the physical register number encoding extracted from RegStorage (i.e. GetReg()) until the NewLIRx() layer. Change-Id: Idc5c741478f720bdd1d7123b94e4288be5ce52cb
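For readers unfamiliar with the idea, here is a minimal sketch of how a single 16-bit RegStorage value can replace the old 8-bit low_reg/high_reg pair while also describing the storage shape. The field layout below is invented for illustration; ART's actual RegStorage encoding differs:

```cpp
#include <cstdint>

// Hypothetical 16-bit storage descriptor: a 2-bit shape field plus a
// 14-bit target-defined register number. The point is that one value
// replaces the old low_reg + high_reg pair in the same 16 bits.
class RegStorage {
 public:
  enum Shape : uint16_t { k32Bit = 0, k64Bit = 1, kPair = 2, kVector = 3 };

  RegStorage(Shape shape, uint16_t reg_num)
      : bits_(static_cast<uint16_t>((shape << 14) | (reg_num & 0x3FFF))) {}

  Shape GetShape() const { return static_cast<Shape>(bits_ >> 14); }
  uint16_t GetRegNum() const { return bits_ & 0x3FFF; }
  bool Is64Bit() const { return GetShape() == k64Bit || GetShape() == kPair; }

 private:
  uint16_t bits_;  // Same 16 bits as the old 8-bit low_reg + high_reg fields.
};
```

The register-number interpretation stays target-dependent, as the commit message describes; only the shape is meaningful at the target-independent level.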
|
a36aeb30e806e0c899592bbaeee2b94bac4ab585 |
|
25-Feb-2014 |
Nicolas Geoffray <ngeoffray@google.com> |
Fix typo from previous commit that led to a performance regression. Change-Id: I851654ae28522dd0358137578593d030e7eb9256
|
f5df8974173124faddb8e2b6a331959afdb94fdf |
|
14-Feb-2014 |
Nicolas Geoffray <ngeoffray@google.com> |
Rewrite the compiler interface for CompilerDriver. Change-Id: I15fa9afe7ffb7283ebda8d788a1e02793e3f75a6
|
da7a69b3fa7bb22d087567364b7eb5a75824efd8 |
|
09-Jan-2014 |
Razvan A Lupusoru <razvan.a.lupusoru@intel.com> |
Enable compiler temporaries Compiler temporaries are a facility for having virtual register sized space for dealing with intermediate values during MIR transformations. They receive explicit space in managed frames so they can have a home location in case they need to be spilled. The facility also supports "special" temporaries which have specific semantic purpose and their location in frame must be tracked. The compiler temporaries are treated in the same way as virtual registers so that the MIR level transformations do not need to have special logic. However, generated code needs to know stack layout so that it can distinguish between home locations. MIRGraph has received an interface for dealing with compiler temporaries. This interface allows allocation of wide and non-wide virtual register temporaries. The information about how temporaries are kept on stack has been moved to stack.h. This was necessary because stack layout is dependent on where the temporaries are placed. Change-Id: Iba5cf095b32feb00d3f648db112a00209c8e5f55 Signed-off-by: Razvan A Lupusoru <razvan.a.lupusoru@intel.com>
|
e27b3bf2c1044bfbfbe874affd3758a73009c6c6 |
|
23-Jan-2014 |
Razvan A Lupusoru <razvan.a.lupusoru@intel.com> |
Support GenSelect for x86 kMirOpSelect is an extended MIR that has been generated in order to remove trivial diamond shapes where the conditional is an if-eqz or if-nez and on each of the paths there is a move or const bytecode with same destination register. This patch enables x86 to generate code for this extended MIR. A) Handling the constant specialization of kMirOpSelect: 1) When the true case is zero and result_reg is not same as src_reg: xor result_reg, result_reg cmp $0, src_reg mov t1, $false_case cmovnz result_reg, t1 2) When the false case is zero and result_reg is not same as src_reg: xor result_reg, result_reg cmp $0, src_reg mov t1, $true_case cmovz result_reg, t1 3) All other cases (we do compare first to set eflags): cmp $0, src_reg mov result_reg, $true_case mov t1, $false_case cmovnz result_reg, t1 B) Handling the move specialization of kMirOpSelect: 1) When true case is already in place: cmp $0, src_reg cmovnz result_reg, false_reg 2) When false case is already in place: cmp $0, src_reg cmovz result_reg, true_reg 3) When neither cases are in place: cmp $0, src_reg mov result_reg, true_reg cmovnz result_reg, false_reg Change-Id: Ic7c50823208fe82019916476a0a77c6a271679fe Signed-off-by: Razvan A Lupusoru <razvan.a.lupusoru@intel.com>
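The cmov sequences above all implement an ordinary branchless select. As a sketch, case A.3 of the constant specialization corresponds to the following C++ (function name and types are illustrative, not ART code):

```cpp
#include <cstdint>

// Case A.3 of the constant specialization, written out:
//   cmp $0, src_reg
//   mov result_reg, $true_case
//   mov t1, $false_case
//   cmovnz result_reg, t1
// For an if-eqz select this computes:
//   result = (src == 0) ? true_case : false_case
int32_t SelectEqz(int32_t src, int32_t true_case, int32_t false_case) {
  int32_t result = true_case;   // mov result_reg, $true_case
  int32_t t1 = false_case;      // mov t1, $false_case
  if (src != 0) result = t1;    // cmovnz result_reg, t1
  return result;
}
```

The benefit of the cmov form is that the diamond collapses into straight-line code with no branch for the CPU to predict.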
|
a894607bca7eb623bc957363e4b36f44cfeea1b6 |
|
22-Jan-2014 |
Vladimir Marko <vmarko@google.com> |
Move fused cmp branch ccode to MIR::meta. This a small refactoring towards removing the large DecodedInstruction from the MIR class. Change-Id: I10f9ed5eaac42511d864c71d20a8ff6360292cec
|
4e97c539408f47145526f0062c1c06df99146a73 |
|
07-Jan-2014 |
Jean Christophe Beyler <jean.christophe.beyler@intel.com> |
Added pass framework The patch adds a Middle-End pass system and normalizes the current passes into the pass framework. Passes have: - Start, work, and end functions. - A gate to determine whether to apply the pass. - The ability to provide a CFG dump folder. mir_dataflow.cc, mir_graph.cc, mir_optimization.cc, ssa_transformation.cc: - Changed due to moving code into bb_optimizations.cc. - Moved certain functions from private to public because they are needed by the passes. pass.cc, pass.h: - Pass base class pass_driver.cc, pass_driver.h: - The pass driver implementation. frontend.cc: - Replace the function calls to the passes with the pass driver. Change-Id: I88cd82efbf6499df9e6c7f135d7e294dd724a079 Signed-off-by: Jean Christophe Beyler <jean.christophe.beyler@intel.com>
|
cdcfdfcb704416882beec98f5a790a65c9b798ae |
|
11-Dec-2013 |
buzbee <buzbee@google.com> |
Art: fix basic block optimization pass A bracket mismatch in Change 70885 inadvertently prevented the basic block optimization pass from running in most cases. Fixed here. Change-Id: I33f2687904cc05c90f74fb3bdc8f312d009cc0ac
|
1da1e2fceb0030b4b76b43510b1710a9613e0c2e |
|
15-Nov-2013 |
buzbee <buzbee@google.com> |
More compile-time tuning Another round of compile-time tuning, this time yielding in the vicinity of 3% total reduction in compile time (which means about double that for the Quick Compile portion). Primary improvements are skipping the basic block combine optimization pass when using Quick (because we already have big blocks), combining the null check elimination and type inference passes, and limiting expensive local value number analysis to only those blocks which might benefit from it. Following this CL, the actual compile phase consumes roughly 60% of the total dex2oat time on the host, and 55% on the target (Note, I'm subtracting out the Deduping time here, which the timing logger normally counts against the compiler). A sample breakdown of the compilation time follows (this was taken on PlusOne.apk w/ a Nexus 4): 39.00% -> MIR2LIR: 1374.90 (Note: includes local optimization & scheduling) 10.25% -> MIROpt:SSATransform: 361.31 8.45% -> BuildMIRGraph: 297.80 7.55% -> Assemble: 266.16 6.87% -> MIROpt:NCE_TypeInference: 242.22 5.56% -> Dedupe: 196.15 3.45% -> MIROpt:BBOpt: 121.53 3.20% -> RegisterAllocation: 112.69 3.00% -> PcMappingTable: 105.65 2.90% -> GcMap: 102.22 2.68% -> Launchpads: 94.50 1.16% -> MIROpt:InitRegLoc: 40.94 1.16% -> Cleanup: 40.93 1.10% -> MIROpt:CodeLayout: 38.80 0.97% -> MIROpt:ConstantProp: 34.35 0.96% -> MIROpt:UseCount: 33.75 0.86% -> MIROpt:CheckFilters: 30.28 0.44% -> SpecialMIR2LIR: 15.53 0.44% -> MIROpt:BBCombine: 15.41 (cherry pick of 9e8e234af4430abe8d144414e272cd72d215b5f3) Change-Id: I86c665fa7e88b75eb75629a99fd292ff8c449969
|
0b1191cfece83f6f8d4101575a06555a2d13387a |
|
28-Oct-2013 |
Bill Buzbee <buzbee@google.com> |
Revert "Revert "Null check elimination improvement"" This reverts commit 31aa97cfec5ee76b2f2496464e1b6f9e11d21a29, and thereby brings back change 380165, which was reverted because it was buggy. Three problems with the original CL: 1. The author ran the pre-submit tests, but used -j24 and failed to search the output for fail messages. 2. The new null check analysis pass uses an iterative approach to identify whether a null check is needed. It is possible that the null-check-required state may oscillate, and a logic error caused it to stick in the "no check needed" state. 3. Our old nemesis Dalvik untyped constants, in which 0 values can be used both as object references and non-object references. This CL conservatively treats all CONST definitions as potential object definitions for the purposes of null check elimination. Change-Id: I3c1744e44318276e42989502a314585e56ac57a0
|
31aa97cfec5ee76b2f2496464e1b6f9e11d21a29 |
|
26-Oct-2013 |
Ian Rogers <irogers@google.com> |
Revert "Null check elimination improvement" This reverts commit 4db179d1821a9e78819d5adc8057a72f49e2aed8. Change-Id: I059c15c85860c6c9f235b5dabaaef2edebaf1de2
|
4db179d1821a9e78819d5adc8057a72f49e2aed8 |
|
23-Oct-2013 |
buzbee <buzbee@google.com> |
Null check elimination improvement See b/10862777 Improves the null check elimination pass by tracking visibility of object definitions, rather than successful uses of object dereferences. For boot class path, increases static null check elimination success rate from 98.4% to 98.6%. Reduces size of boot.oat by ~300K bytes. Fixes loop nesting depth computation, which is used by register promotion, and tweaked the heuristics. Fixes a bug in verbose listing output in which a basic block id is directly dereferenced, rather than first being converted to a pointer. Change-Id: Id01c20b533cdb12ea8fc4be576438407d0a34cec
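The analysis behind this pass is a classic forward dataflow problem: a value is known non-null at a block's entry only if it is known non-null on exit from every predecessor, and the sets are iterated to a fixed point. A toy sketch of that idea with invented structures (not ART's MIRGraph API):

```cpp
#include <bitset>
#include <cstddef>
#include <vector>

// Each block's exit set records which of N tracked object values are known
// non-null. Entry state = intersection over predecessors; iterate until
// nothing changes.
constexpr size_t kNumValues = 8;
using NonNullSet = std::bitset<kNumValues>;

struct Block {
  std::vector<int> preds;  // Indices of predecessor blocks.
  NonNullSet gen;          // Values this block proves non-null (defs/checks).
};

std::vector<NonNullSet> AnalyzeNonNull(const std::vector<Block>& blocks) {
  std::vector<NonNullSet> out(blocks.size());
  for (auto& s : out) s.set();  // Optimistic start: assume all non-null.
  bool changed = true;
  while (changed) {             // Iterate to a fixed point.
    changed = false;
    for (size_t i = 0; i < blocks.size(); ++i) {
      NonNullSet in;
      if (blocks[i].preds.empty()) {
        in.reset();             // Entry block: nothing known yet.
      } else {
        in.set();
        for (int p : blocks[i].preds) in &= out[p];  // Meet = intersection.
      }
      NonNullSet new_out = in | blocks[i].gen;
      if (new_out != out[i]) { out[i] = new_out; changed = true; }
    }
  }
  return out;
}
```

Tracking "visibility of object definitions" (as this CL does) rather than successful dereferences just changes what goes into each block's gen set; the fixed-point machinery is the same.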
|
0d82948094d9a198e01aa95f64012bdedd5b6fc9 |
|
12-Oct-2013 |
buzbee <buzbee@google.com> |
64-bit prep Preparation for 64-bit roll. o Eliminated storing pointers in 32-bit int slots in LIR. o General size reductions of common structures to reduce impact of doubled pointer sizes: - BasicBlock struct was 72 bytes, now is 48. - MIR struct was 72 bytes, now is 64. - RegLocation was 12 bytes, now is 8. o Generally replaced uses of BasicBlock* pointers with 16-bit Ids. o Replaced several doubly-linked lists with singly-linked to save one stored pointer per node. o We had quite a few uses of uintptr_t's that were a holdover from the JIT (which used pointers to mapped dex & actual code cache addresses rather than trace-relative offsets). Replaced those with uint32_t's. o Clean up handling of embedded data for switch tables and array data. o Miscellaneous cleanup. I anticipate one or two additional CLs to reduce the size of MIR and LIR structs. Change-Id: I58e426d3f8e5efe64c1146b2823453da99451230
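The savings from the pointer-to-id change can be sketched with hypothetical structures: a 16-bit id costs 2 bytes per stored reference instead of 8 on a 64-bit build, at the price of resolving ids back to objects through the owning graph:

```cpp
#include <cstdint>
#include <vector>

// Blocks refer to each other by 16-bit id; the graph owns the storage and
// resolves ids to objects. On a 64-bit build this shrinks each stored
// reference from 8 bytes (BasicBlock*) to 2 bytes.
using BasicBlockId = uint16_t;
constexpr BasicBlockId kNullBlock = 0xFFFF;

struct BasicBlock {
  BasicBlockId id = kNullBlock;
  BasicBlockId taken = kNullBlock;         // Successor ids, not pointers.
  BasicBlockId fall_through = kNullBlock;
};

class MiniGraph {
 public:
  BasicBlockId NewBlock() {
    BasicBlock bb;
    bb.id = static_cast<BasicBlockId>(blocks_.size());
    blocks_.push_back(bb);
    return bb.id;
  }
  // Id-to-pointer lookup; the only place pointers appear.
  BasicBlock* GetBlock(BasicBlockId id) {
    return id == kNullBlock ? nullptr : &blocks_[id];
  }

 private:
  std::vector<BasicBlock> blocks_;
};
```

Ids also serialize and compare trivially, which is why the CL could replace doubly-linked pointer lists with id-based singly-linked ones.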
|
409fe94ad529d9334587be80b9f6a3d166805508 |
|
11-Oct-2013 |
buzbee <buzbee@google.com> |
Quick assembler fix This CL re-instates the select pattern optimization disabled by CL 374310, and fixes the underlying problem: improper handling of the kPseudoBarrier LIR opcode. The bug was introduced in the recent assembler restructuring. In short, LIR pseudo opcodes (which have values < 0) should always have size 0 - and thus cause no bits to be emitted during assembly. In this case, bad logic caused us to set the size of a kPseudoBarrier opcode via lookup through the EncodingMap. Because all pseudo ops are < 0, this meant we did an array underflow load, picking up whatever garbage was located before the EncodingMap. This explains why this error showed up recently - we'd previously just gotten a lucky layout. This CL corrects the faulty logic, and adds DCHECKs to uses of the EncodingMap to ensure that we don't try to access it with a pseudo op. Additionally, the existing is_pseudo_op() macro is replaced with IsPseudoLirOp(), named similarly to the existing IsPseudoMirOp(). Change-Id: I46761a0275a923d85b545664cadf052e1ab120dc
|
41cdd43bd6968a06b1344efdd57ccf302f997a0e |
|
11-Oct-2013 |
Ian Rogers <irogers@google.com> |
Disable select instruction generation on ARM. Change-Id: I114547d44605b06b2fed396b2fbad03935f66ebc
|
56c717860df2d71d66fb77aa77f29dd346e559d3 |
|
06-Sep-2013 |
buzbee <buzbee@google.com> |
Compile-time tuning Specialized the dataflow iterators and did a few other minor tweaks. Showing ~5% compile-time improvement in a single-threaded environment; less in multi-threaded (presumably because we're blocked by something else). Change-Id: I2e2ed58d881414b9fc97e04cd0623e188259afd2
|
f6c4b3ba3825de1dbb3e747a68b809c6cc8eb4db |
|
25-Aug-2013 |
Mathieu Chartier <mathieuc@google.com> |
New arena memory allocator. Before we were creating arenas for each method. The issue with doing this is that we needed to memset each memory allocation. This can be improved if you start out with arenas that contain all zeroed memory and recycle them for each method. When you give memory back to the arena pool you do a single memset to zero out all of the memory that you used. Always inlined the fast path of the allocation code. Removed the "zero" parameter since the new arena allocator always returns zeroed memory. Host dex2oat time on target oat apks (2 samples each). Before: real 1m11.958s user 4m34.020s sys 1m28.570s After: real 1m9.690s user 4m17.670s sys 1m23.960s Target device dex2oat samples (Mako, Thinkfree.apk): Without new arena allocator: 0m26.47s real 0m54.60s user 0m25.85s system 0m25.91s real 0m54.39s user 0m26.69s system 0m26.61s real 0m53.77s user 0m27.35s system 0m26.33s real 0m54.90s user 0m25.30s system 0m26.34s real 0m53.94s user 0m27.23s system With new arena allocator: 0m25.02s real 0m54.46s user 0m19.94s system 0m25.17s real 0m55.06s user 0m20.72s system 0m24.85s real 0m55.14s user 0m19.30s system 0m24.59s real 0m54.02s user 0m20.07s system 0m25.06s real 0m55.00s user 0m20.42s system Correctness of Thinkfree.apk.oat verified by diffing both of the oat files. Change-Id: I5ff7b85ffe86c57d3434294ca7a621a695bf57a9
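A minimal sketch of the allocation scheme described above (hand-rolled and illustrative only; ART's real ArenaAllocator/ArenaPool are more elaborate): memory starts out zeroed, allocation is a pointer bump with no per-allocation memset, and resetting does a single memset over just the bytes that were used:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// An arena whose memory is zeroed up front. Alloc() is a bump-pointer with
// no per-allocation memset; Reset() re-zeroes only the used prefix so the
// arena can be recycled for the next method.
class Arena {
 public:
  explicit Arena(size_t size) : storage_(size, 0), pos_(0) {}

  void* Alloc(size_t bytes) {
    bytes = (bytes + 7) & ~static_cast<size_t>(7);  // 8-byte alignment.
    if (pos_ + bytes > storage_.size()) return nullptr;
    void* p = &storage_[pos_];  // Already zero: no memset on this path.
    pos_ += bytes;
    return p;
  }

  void Reset() {
    std::memset(storage_.data(), 0, pos_);  // One memset over used bytes.
    pos_ = 0;
  }

  size_t BytesUsed() const { return pos_; }

 private:
  std::vector<uint8_t> storage_;
  size_t pos_;
};
```

This is why the CL could drop the "zero" allocation parameter: the invariant "unused arena memory is always zero" makes every allocation implicitly zeroed.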
|
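The arena-allocator commit above describes handing out pre-zeroed memory and doing a single memset when an arena is recycled, instead of zeroing every allocation. A minimal illustrative sketch of that idea (hypothetical `Arena` class, not ART's actual `ArenaAllocator`):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Sketch only: an arena that starts fully zero-initialized and, on Reset(),
// memsets just the bytes actually handed out, so a recycled arena can keep
// returning zeroed memory without per-allocation zeroing.
class Arena {
 public:
  explicit Arena(size_t capacity) : storage_(capacity, 0), used_(0) {}

  // Fast-path bump allocation; the returned memory is already zeroed.
  void* Alloc(size_t bytes) {
    if (used_ + bytes > storage_.size()) return nullptr;
    void* p = storage_.data() + used_;
    used_ += bytes;
    return p;
  }

  // One memset over the used region when the arena goes back to the pool.
  void Reset() {
    std::memset(storage_.data(), 0, used_);
    used_ = 0;
  }

  size_t used() const { return used_; }

 private:
  std::vector<unsigned char> storage_;
  size_t used_;
};
```

The "zero" parameter mentioned in the commit becomes unnecessary under this scheme because every allocation path returns zeroed memory by construction.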
9329e6d1a8ff8d3775c4a9db9a7bb97694bc267d |
|
19-Aug-2013 |
buzbee <buzbee@google.com> |
More suspend check repair. The previous fix to the suspend check optimization mechanism left a bug in the handling of constant-folded branches. Change-Id: Ib71f1cb9f17203bee26746006e568d448666962d
|
cbcfaf3a410e35730c4daeaff6c791665764925a |
|
19-Aug-2013 |
buzbee <buzbee@google.com> |
Fix suspend check optimization Art's Quick compiler currently uses a conservative mechanism to ensure that a safe point will be reached within a "small" amount of time. Explicit suspend checks are placed prior to backwards branches and on returns. There are a lot of ways to optimize, which we'll get to in the future, but for now the only optimization is to detect a backwards branch that targets a return block. That's a common pattern in dex, and simple to detect. In those cases, we can suppress the suspend check on the backwards branch knowing that the return will do it. However, the notion of what is a backwards branch got a bit muddied with some mir optimizations that transform the graph by changing the sense of branches. What started off as a taken backwards branch may turn into a fallthrough backwards branch. This CL avoids the confusion by marking branches backwards based on their original dex targets rather than using the post-transform test of backwardness. Change-Id: I9b30be168c801af51bae7f66ecd442edcb115a18
|
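The suspend-check commit above hinges on two decisions: classify a branch as backwards using its original dex target offset (not the post-optimization taken/fallthrough sense), and elide the check when a backwards branch targets a return block, since the return carries its own suspend check. A hedged sketch of that predicate (the `Block` struct and function name are hypothetical, not ART's MIR structures):

```cpp
#include <cassert>

// Sketch only: minimal stand-in for a basic block in the method's CFG.
struct Block {
  int start_offset;  // original dex offset of the block's first instruction
  bool is_return;    // block ends the method with a return (which suspends)
};

// A branch needs an explicit suspend check only if it is backwards by the
// *original* dex offsets AND its target is not a return block.
bool NeedsSuspendCheck(int branch_offset, const Block& target) {
  bool backwards = target.start_offset <= branch_offset;
  if (!backwards) return false;  // forward branches never need one
  return !target.is_return;      // the return's check covers this edge
}
```

Using the original dex offsets keeps the classification stable even after MIR optimizations flip a branch's taken/fallthrough sense.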
7934ac288acfb2552bb0b06ec1f61e5820d924a4 |
|
26-Jul-2013 |
Brian Carlstrom <bdc@google.com> |
Fix cpplint whitespace/comments issues Change-Id: Iae286862c85fb8fd8901eae1204cd6d271d69496
|
02c8cc6d1312a2b55533f02f6369dc7c94672f90 |
|
19-Jul-2013 |
Brian Carlstrom <bdc@google.com> |
Fixing cpplint whitespace/blank_line, whitespace/end_of_line, whitespace/labels, whitespace/semicolon issues Change-Id: Ide4f8ea608338b3fed528de7582cfeb2011997b6
|
6f485c62b9cfce3ab71020c646ab9f48d9d29d6d |
|
19-Jul-2013 |
Brian Carlstrom <bdc@google.com> |
Fix cpplint whitespace/indent issues Change-Id: I7c1647f0c39e1e065ca5820f9b79998691ba40b1
|
df62950e7a32031b82360c407d46a37b94188fbb |
|
18-Jul-2013 |
Brian Carlstrom <bdc@google.com> |
Fix cpplint whitespace/parens issues Change-Id: Ifc678d59a8bed24ffddde5a0e543620b17b0aba9
|
b1eba213afaf7fa6445de863ddc9680ab99762ea |
|
18-Jul-2013 |
Brian Carlstrom <bdc@google.com> |
Fix cpplint whitespace/comma issues Change-Id: I456fc8d80371d6dfc07e6d109b7f478c25602b65
|
2ce745c06271d5223d57dbf08117b20d5b60694a |
|
18-Jul-2013 |
Brian Carlstrom <bdc@google.com> |
Fix cpplint whitespace/braces issues Change-Id: Ide80939faf8e8690d8842dde8133902ac725ed1a
|
7940e44f4517de5e2634a7e07d58d0fb26160513 |
|
12-Jul-2013 |
Brian Carlstrom <bdc@google.com> |
Create separate Android.mk for main build targets The runtime, compiler, dex2oat, and oatdump now are in separate trees to prevent dependency creep. They can now be individually built without rebuilding the rest of the art projects. dalvikvm and jdwpspy were already this way. Builds in the art directory should behave as before, building everything including tests. Change-Id: Ic6b1151e5ed0f823c3dd301afd2b13eb2d8feb81
|