2cebb24bfc3247d3e9be138a3350106737455918 |
|
22-Apr-2015 |
Mathieu Chartier <mathieuc@google.com> |
Replace NULL with nullptr Also fixed some lines that were too long, and a few other minor details. Change-Id: I6efba5fb6e03eb5d0a300fddb2a75bf8e2f175cb
|
65b798ea10dd716c1bb3dda029f9bf255435af72 |
|
06-Apr-2015 |
Andreas Gampe <agampe@google.com> |
ART: Enable more Clang warnings Change-Id: Ie6aba02f4223b1de02530e1515c63505f37e184c
|
0b9203e7996ee1856f620f95d95d8a273c43a3df |
|
23-Jan-2015 |
Andreas Gampe <agampe@google.com> |
ART: Some Quick cleanup Make several fields const in CompilationUnit. May benefit some Mir2Lir code that repeats tests, and in general immutability is good. Remove compiler_internals.h and refactor some other headers to reduce overly broad imports (and thus forced recompiles on changes). Change-Id: I898405907c68923581373b5981d8a85d2e5d185a
|
582f541b35f40b75f2629a41259d2162608647d5 |
|
21-Jan-2015 |
Andreas Gampe <agampe@google.com> |
ART: Fix arm64 backend Fix a register size problem after previous commit f681570077563bb529a30f9e7c572b837cecfb83. Change-Id: If04e647324bcd6fe279c25e70214a9f7c5b816ec
|
f681570077563bb529a30f9e7c572b837cecfb83 |
|
20-Jan-2015 |
Andreas Gampe <agampe@google.com> |
ART: Make some helpers non-virtual in Mir2Lir These don't need to be virtual. Change-Id: Idca3c0a4e8b5e045d354974bd993492d6c0e70ba
|
b28c1c06236751aa5c9e64dcb68b3c940341e496 |
|
08-Nov-2014 |
Ian Rogers <irogers@google.com> |
Tidy RegStorage for X86. Don't use global variables initialized in constructors to hold onto constant values; instead, use the TargetReg32 helper. Improve this helper with the use of lookup tables. Elsewhere prefer to use constexpr values as they will have less runtime cost. Add an ostream operator to RegStorage for CHECK_EQ and use. Change-Id: Ib8d092d46c10dac5909ecdff3cc1e18b7e9b1633
|
6a3c1fcb4ba42ad4d5d142c17a3712a6ddd3866f |
|
31-Oct-2014 |
Ian Rogers <irogers@google.com> |
Remove -Wno-unused-parameter and -Wno-sign-promo from base cflags. Fix associated errors about unused parameters and implicit sign conversions. For sign conversion this was largely in the area of enums, so add ostream operators for the affected enums and fix tools/generate-operator-out.py. Tidy arena allocation code and arena allocated data types, rather than fixing new and delete operators. Remove dead code. Change-Id: I5b433e722d2f75baacfacae4d32aef4a828bfe1b
|
2c4257be8191c5eefde744e8965fcefc80a0a97d |
|
24-Oct-2014 |
Ian Rogers <irogers@google.com> |
Tidy logging code not using UNIMPLEMENTED. Change-Id: I7a79c1671a6ff8b2040887133b3e0925ef9a3cfe
|
fc787ecd91127b2c8458afd94e5148e2ae51a1f5 |
|
10-Oct-2014 |
Ian Rogers <irogers@google.com> |
Enable -Wimplicit-fallthrough. Falling through switch cases on a clang build must now annotate the fallthrough with the FALLTHROUGH_INTENDED macro. Bug: 17731372 Change-Id: I836451cd5f96b01d1ababdbf9eef677fe8fa8324
|
4163c53ce38a0f1f88bf3e8d26de9914da38498b |
|
15-Jul-2014 |
Matteo Franchin <matteo.franchin@arm.com> |
AArch64: address some outstanding TODOs. Fix comments in arm64_lir.h. Rename Arm* to A64* and replace FWIDE, FUNWIDE, ... with WIDE, UNWIDE, ... Change-Id: I4900902e28463ea5e00e34ea40ddfc15704c0bfa
|
b504d2f89fdd5c01816bcbad752797cb78de0e99 |
|
27-Sep-2014 |
buzbee <buzbee@google.com> |
Quick compiler: aarch64 codegen & long_min literal Int64 overflow during instruction selection caused incorrect code patterns to be emitted in some cases of long operations with an immediate value of 0x8000000000000000. The code in question was attempting to determine if the immediate operand would fit in aarch64 immediate instruction variants. Internal b/17630605 Change-Id: I8177021b73e51302bc1032387d83b1dd567ed6db
|
c763e350da562b0c6bebf10599588d4901140e45 |
|
04-Jul-2014 |
Matteo Franchin <matteo.franchin@arm.com> |
AArch64: Implement InexpensiveConstant methods. Implement IsInexpensiveConstant and friends for A64. Also extending the methods to take the opcode with respect to which the constant is inexpensive. Additionally, logical operations (i.e. and, or, xor) can now handle the immediates 0 and ~0 (which are not logical immediates). Change-Id: I46ce1287703765c5ab54983d13c1b3a1f5838622
|
2eba1fa7e9e5f91e18ae3778d529520bd2c78d55 |
|
31-Jul-2014 |
Serban Constantinescu <serban.constantinescu@arm.com> |
AArch64: Add inlining support for ceil(), floor(), rint(), round() This patch adds inlining support for the following Math, StrictMath methods in the ARM64 backend: * double ceil(double) * double floor(double) * double rint(double) * long round(double) * int round(float) Also some cleanup. Change-Id: I9f5a2f4065b1313649f4b0c4380b8176703c3fe1 Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>
|
cedee4744c2d4f3611a7bb9fe98ef1bf4c37d915 |
|
01-Jul-2014 |
Zheng Xu <zheng.xu@arm.com> |
AArch64: Remove unnecessary work around for sp. Use RegRegRegExtend to encode instruction for "sub/add sp, sp, Xm". Change-Id: I13d3d2d386a7bd827e1396f291a7dcb9bffd5a29
|
63999683329612292d534e6be09dbde9480f1250 |
|
15-Jul-2014 |
Serban Constantinescu <serban.constantinescu@arm.com> |
Revert "Revert "Enable Load Store Elimination for ARM and ARM64"" This patch refactors the implementation of the LoadStoreElimination optimisation pass. Please note that this pass was disabled and not functional for any of the backends. The current implementation tracks aliases and handles DalvikRegs as well as Heap memory regions. It has been tested and it is known to optimise out the following: * Load - Load * Store - Load * Store - Store * Load Literals Change-Id: I3aadb12a787164146a95bc314e85fa73ad91e12b
|
c32447bcc8c36ee8ff265ed678c7df86936a9ebe |
|
27-Jul-2014 |
Bill Buzbee <buzbee@android.com> |
Revert "Enable Load Store Elimination for ARM and ARM64" On extended testing, I'm seeing a CHECK failure at utility_arm.cc:1201. This reverts commit fcc36ba2a2b8fd10e6eebd21ecb6329606443ded. Change-Id: Icae3d49cd7c8fcab09f2f989cbcb1d7e5c6d137a
|
fcc36ba2a2b8fd10e6eebd21ecb6329606443ded |
|
15-Jul-2014 |
Serban Constantinescu <serban.constantinescu@arm.com> |
Enable Load Store Elimination for ARM and ARM64 This patch refactors the implementation of the LoadStoreElimination optimisation pass. Please note that this pass was disabled and not functional for any of the backends. The current implementation tracks aliases and handles DalvikRegs as well as Heap memory regions. It has been tested and it is known to optimise out the following: * Load - Load * Store - Load * Store - Store * Load Literals Change-Id: Iefae9b696f87f833ef35c451ed4d49c5a1b6fde0
|
984305917bf57b3f8d92965e4715a0370cc5bcfb |
|
28-Jul-2014 |
Andreas Gampe <agampe@google.com> |
ART: Rework quick entrypoint code in Mir2Lir, cleanup To reduce the complexity of calling trampolines in generic code, introduce an enumeration for entrypoints. Introduce a header that lists the entrypoint enum and exposes a templatized method that translates an enum value to the corresponding thread offset value. Call helpers are rewritten to have an enum parameter instead of the thread offset. Also rewrite LoadHelper and GenConversionCall this way. It is now LoadHelper's duty to select the right thread offset size. Introduce InvokeTrampoline virtual method to Mir2Lir. This allows us to further simplify the call helpers, as well as make OpThreadMem specific to X86 only (removed from Mir2Lir). Make GenInlinedCharAt virtual, move a copy to X86 backend, and simplify both copies. Remove LoadBaseIndexedDisp and OpRegMem from Mir2Lir, as they are now specific to X86 only. Remove StoreBaseIndexedDisp from Mir2Lir, as it was only ever used in the X86 backend. Remove OpTlsCmp from Mir2Lir, as it was only ever used in the X86 backend. Remove OpLea from Mir2Lir, as it was only ever defined in the X86 backend. Remove GenImmedCheck from Mir2Lir as it was neither used nor implemented. Change-Id: If0a6182288c5d57653e3979bf547840a4c47626e
|
48f5c47907654350ce30a8dfdda0e977f5d3d39f |
|
27-Jun-2014 |
Hans Boehm <hboehm@google.com> |
Replace memory barriers to better reflect Java needs. Replaces barriers that enforce ordering of one access type (e.g. Load) with respect to another (e.g. Store) with more general ones that better reflect both Java requirements and actual hardware barrier/fence instructions. The old code was inconsistent and unclear about which barriers implied which others. Sometimes multiple barriers were generated and then eliminated; sometimes it was assumed that certain barriers implied others. The new barriers closely parallel those in C++11, though, for now, we use something closer to the old naming. Bug: 14685856 Change-Id: Ie1c80afe3470057fc6f2b693a9831dfe83add831
|
a3fe7422d7ce8bfb01f95decef45f91a44d39264 |
|
09-Jul-2014 |
Zheng Xu <zheng.xu@arm.com> |
AArch64: Fix and enable reverseBytes intrinsic. There is no revsh on arm64, use rev16 and sxth instead. Change-Id: I5f9879352f0ad76b386c82cbf476894af888a64c
|
63fe93d9f9d2956b1ee2b98cdd6ddd2153f5f9cf |
|
30-Jun-2014 |
Serban Constantinescu <serban.constantinescu@arm.com> |
AArch64: Enable Inlining. This patch fixes the remaining issues with inlining for ARM64. Change-Id: I2d85b7c4f3fb2b667bf6029fbc271ab954378889 Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com> Signed-off-by: Matteo Franchin <matteo.franchin@arm.com>
|
b5860fb459f1ed71f39d8a87b45bee6727d79fe8 |
|
22-Jun-2014 |
buzbee <buzbee@google.com> |
Register promotion support for 64-bit targets Not sufficiently tested for 64-bit targets, but should be fairly close. A significant amount of refactoring could still be done, (in later CLs). With this change we are not making any changes to the vmap scheme. As a result, it is a requirement that if a vreg is promoted to both a 32-bit view and the low half of a 64-bit view it must share the same physical register. We may change this restriction later on to allow for more flexibility for 32-bit Arm. For example, if v4, v5, v4/v5 and v5/v6 are all hot enough to promote, we'd end up with something like: v4 (as an int) -> r10 v4/v5 (as a long) -> r10 v5 (as an int) -> r11 v5/v6 (as a long) -> r11 Fix a couple of ARM64 bugs on the way... Change-Id: I6a152b9c164d9f1a053622266e165428045362f3
|
255e014542b2180620230e4d9d6000ae06846bbd |
|
04-Jul-2014 |
Matteo Franchin <matteo.franchin@arm.com> |
Aarch64: fix references handling in Load*Indexed. Fix the way we handle references in Load/StoreBaseIndexed and friends. We assume references are 64-bit RegStorage entities, with the difference that they are loaded as 32-bit values. Change-Id: I7fe987ef9e97e9a5042b85378b33d1e85710d8b5
|
4b537a851b686402513a7c4a4e60f5457bb8d7c1 |
|
01-Jul-2014 |
Andreas Gampe <agampe@google.com> |
ART: Quick compiler: More size checks, add TargetReg variants Add variants for TargetReg for requesting specific register usage, e.g., wide and ref. More register size checks. With code adapted from https://android-review.googlesource.com/#/c/98605/. Change-Id: I852d3be509d4dcd242c7283da702a2a76357278d
|
baa7c88a34fdfd230a2a383c2e388945f4d907b6 |
|
30-Jun-2014 |
Zheng Xu <zheng.xu@arm.com> |
AArch64: Rename A64_/A32_ register prefix to x/w. A64/A32 look like architecture name, but they are all for arm64. Use lower-case to name the registers defined in "ARM ARM" which can also be directly used in assembly file. Use upper-case to name the registers which are other aliases. Change-Id: I0ac38ed75f977fdc362288b01179b84feaee5614
|
903989dae724ec0b186ec98c2afd1e074ed41d4d |
|
01-Jul-2014 |
Vladimir Marko <vmarko@google.com> |
AArch64: Fix OpRegRegImm64 add/sub for large negative imm. Bug: 15837964 Change-Id: I401edf687352fae3dca03c0a807dac5750e454f6
|
de68676b24f61a55adc0b22fe828f036a5925c41 |
|
24-Jun-2014 |
Andreas Gampe <agampe@google.com> |
Revert "ART: Split out more cases of Load/StoreRef, volatile as parameter" This reverts commit 2689fbad6b5ec1ae8f8c8791a80c6fd3cf24144d. Breaks the build. Change-Id: I9faad4e9a83b32f5f38b2ef95d6f9a33345efa33
|
3c12c512faf6837844d5465b23b9410889e5eb11 |
|
24-Jun-2014 |
Andreas Gampe <agampe@google.com> |
Revert "Revert "ART: Split out more cases of Load/StoreRef, volatile as parameter"" This reverts commit de68676b24f61a55adc0b22fe828f036a5925c41. Fixes an API comment, and differentiates between inserting and appending. Change-Id: I0e9a21bb1d25766e3cbd802d8b48633ae251a6bf
|
2689fbad6b5ec1ae8f8c8791a80c6fd3cf24144d |
|
23-Jun-2014 |
Andreas Gampe <agampe@google.com> |
ART: Split out more cases of Load/StoreRef, volatile as parameter Splits out more cases of ref registers being loaded or stored. For code clarity, adds volatile as a flag parameter instead of a separate method. On ARM64, continue cleanup. Add flags to print/fatal on size mismatches. Change-Id: I30ed88433a6b4ff5399aefffe44c14a5e6f4ca4e
|
c61b3c984c509d5f7c8eb71b853c81a34b5c28ef |
|
18-Jun-2014 |
Matteo Franchin <matteo.franchin@arm.com> |
AArch64: implement easy division and remainder. This implements easy division and remainder for integers only (32-bit). The optimisation applies to div/rem by powers of 2 and to div by small literals (between 3-15). Change-Id: I71be7c4de5d2e2e738b88984f13efb08f4388a19
|
f987927a68d0489970c1eca6a32fd02ca9913357 |
|
19-Jun-2014 |
Andreas Gampe <agampe@google.com> |
ART: Reserve 8B for float literals on ARM64 This is a kludge to make sure literal pools will be 8B aligned. A better approach would be post-processing - that way we could pack the floats. Change-Id: Ia9c6f53e0f6e37b4be79e1efe7f14feb49f2052b
|
9f975bfe091e9592a1b6b5b46d224ec04b1183b6 |
|
19-Jun-2014 |
Andreas Gampe <agampe@google.com> |
ART: Change rrr add and sub for ARM64 OpRegRegImm will fall back to loading a constant into a register and then doing the operation with three registers. That is, for example, the case when we allocate large stack frames. However, the currently chosen operations are add/sub shifted, which does *not* allow specifying SP (x31 will be interpreted as xzr). Switch to add/sub extended. There won't be a practical difference, as we do not call with anything other than 0 shift. Change-Id: I2b78df9f044d2963e3e890777c855b339952f9f4
|
47b31aa855379471c06735b738396fa76e7c1988 |
|
19-Jun-2014 |
Andreas Gampe <agampe@google.com> |
ART: Start implementation of OpRegRegRegExtend for ARM64 We need a sign-extending add for packed-switch and sparse-switch, as the 32b values are signed offsets. This starts an implementation that is sufficient for the use cases. Change-Id: Ib5bae24b902077346a97d5e9e061533f9cdfcdb0
|
33ae5583bdd69847a7316ab38a8fa8ccd63093ef |
|
12-Jun-2014 |
buzbee <buzbee@google.com> |
Arm64 hard-float Basic enabling of hard-float for Arm64. In future CLs we'll consolidate the various targets - there is a lot of overlap. Compilation remains turned off in this CL, but I expect to enable a subset shortly. With compilation fully enabled (including the EXPERIMENTAL opcodes with the exception of REM and THROW), we get the following run-test results: 003-omnibus-opcodes failures: Classes.checkCast Classes.arrayInstance UnresTest2 Haven't gone deep, but these appear to be related to throw/catch and/or stacktrace. For REM, the generated code looks reasonable to me - my guess is that we've got something wrong on the transition to the runtime. Haven't looked deeper yet, though. The bulk of the other failures also appear to be related to transitioning to the runtime system, or handling try/catch. run-test status: Status with optimizations disabled, REM_FLOAT/DOUBLE and THROW disabled: succeeded tests: 94 failed tests: 22 failed: 003-omnibus-opcodes failed: 004-annotations failed: 009-instanceof2 failed: 024-illegal-access failed: 025-access-controller failed: 031-class-attributes failed: 044-proxy failed: 045-reflect-array failed: 046-reflect failed: 058-enum-order failed: 062-character-encodings failed: 063-process-manager failed: 064-field-access failed: 068-classloader failed: 071-dexfile failed: 083-compiler-regressions failed: 084-class-init failed: 086-null-super failed: 087-gc-after-link failed: 100-reflect2 failed: 107-int-math2 failed: 201-built-in-exception-detail-messages Change-Id: Ib66209285cad8998d77a14781de300af02a96b15
|
c41e6dc89ec6593e9af9af524f2ec7be6e2d24a4 |
|
13-Jun-2014 |
Matteo Franchin <matteo.franchin@arm.com> |
AArch64: improve 64-bit immediates loads. Improve the quick backend to load immediates by choosing the best of the following strategies: - use wzr, xzr to load 0 (via mov) or -1 (via mvn), - use logical immediates (orr), - use one movz/movn optionally followed by one or more movk, - use the literal pool. Change-Id: I8e46e6d9eaf46b717761dd9d60e63ee3f2a5422b
|
e2eb29e98be3ba72cce7da40847ab3d605b9455d |
|
12-Jun-2014 |
Zheng Xu <zheng.xu@arm.com> |
AArch64: Enable MOVE_*, some CONST_*, CMP_*. With the fixes of GenArithImmOpLong, GenShiftOpLong, OpRegImm, OpRegRegImm, OpRegRegImm64, EncodeLogicalImmediate and fmov. Change-Id: I8cae4f921d5150a6b8e4803ca4dee553928d1a58
|
f8ec48e8eff0050de1451fc8e9c3a71c26d5ce7e |
|
06-Jun-2014 |
Stuart Monteith <stuart.monteith@arm.com> |
ART: arm64 explicit stack overflow checks Implement only the explicit checks for the quick backend for arm64. Implicit checks require fault handlers, which are currently unimplemented. CMN + CMP have extended versions implemented for comparisons against the stack pointer. More extended opcode implementations will need to follow. Change-Id: I8db297aec73df818b20fe410297800c886701c76
|
169489b4f4be8c5dd880ba6f152948324d22ff79 |
|
11-Jun-2014 |
Serban Constantinescu <serban.constantinescu@arm.com> |
AArch64: Add support for inlined methods This patch adds support for Arm64 inlined methods. Change-Id: Ic6aeed6d2d32f65cd1e63cf482f83cdcf958798a
|
8dea81ca9c0201ceaa88086b927a5838a06a3e69 |
|
06-Jun-2014 |
Vladimir Marko <vmarko@google.com> |
Rewrite use/def masks to support 128 bits. Reduce LIR memory usage by holding masks by pointers in the LIR rather than directly and using pre-defined const masks for the common cases, allocating very few on the arena. Change-Id: I0f6d27ef6867acd157184c8c74f9612cebfe6c16
|
fd2e291297463a3d5bdb18adc2a1eacbe2759152 |
|
06-Jun-2014 |
Matteo Franchin <matteo.franchin@arm.com> |
AArch64: fix MarkGCCard, enabling more MIR opcodes. Fixing register usage in MarkGCCard. Also enabling more MIR opcodes in the compiler filter. Change-Id: I877250f8deaefc69115e861344ca47cc5ccea8ff
|
2d41a655f4f0e4b2178bbd7e93901a5ed6eae4a6 |
|
09-Jun-2014 |
Zheng Xu <zheng.xu@arm.com> |
AArch64: Fix kOpLsl, rem-float/double. Change-Id: I6f7293493c0f94f96882d2e559e3eef659a23aec
|
ffddfdf6fec0b9d98a692e27242eecb15af5ead2 |
|
03-Jun-2014 |
Tim Murray <timmurray@google.com> |
DO NOT MERGE Merge ART from AOSP to lmp-preview-dev. Change-Id: I0f578733a4b8756fd780d4a052ad69b746f687a9
|
0955f7e470fb733aef07096536e9fba7c99250aa |
|
23-May-2014 |
Matteo Franchin <matteo.franchin@arm.com> |
AArch64: fixing some assertions. Fixing some assertions while attempting to get libartd.so to work. Fixing also the shift logic in LoadBaseIndexed() and StoreBaseIndexed(). This commit only fixes a part of the assertion issues. Change-Id: I473194d4260dd59a8ee6d73114429728c977ee0e
|
ed65c5e982705defdb597d94d1aa3f2997239c9b |
|
22-May-2014 |
Serban Constantinescu <serban.constantinescu@arm.com> |
AArch64: Enable LONG_* and INT_* opcodes. This patch fixes some of the issues with LONG and INT opcodes. The patch has been tested and passes all the dalvik tests except for 018 and 107. Change-Id: Idd1923ed935ee8236ab0c7e5fa969eaefeea8708 Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>
|
bc6d197cdb02eeac0c98ec4ed37f530b003a4e7a |
|
13-May-2014 |
Matteo Franchin <matteo.franchin@arm.com> |
AArch64: fixes in A64 code generation. - Disabled special method compilation, as it requires hard-float ABI, - Disabled suspend checks, as runtime is not yet ready (e.g. trampolines are not setting the suspend register, etc), - Changing definition of zero register (the zero register has now 0x3f as its register number), - Fixing some issues with handling of cmp instructions in the assembler: we now use the shift-register rather than the extended-register variant of cmp and cmn, - Partially fixing register setup (register sN is now mapped to dN), - Fixing and completing implementation of register spills/unspills, - Fixing LoadBaseDispBody() and StoreBaseDispBody(). Change-Id: Ia49ba48b6ca0f782380066345b7a198cb6c1dc1d
|
2f244e9faccfcca68af3c5484c397a01a1c3a342 |
|
08-May-2014 |
Andreas Gampe <agampe@google.com> |
ART: Add more ThreadOffset in Mir2Lir and backends This duplicates all methods with ThreadOffset parameters, so that both ThreadOffset<4> and ThreadOffset<8> can be handled. Dynamic checks against the compilation unit's instruction set determine which pointer size to use and therefore which methods to call. Methods with unsupported pointer sizes should fatally fail, as this indicates an issue during method selection. Change-Id: Ifdb445b3732d3dc5e6a220db57374a55e91e1bf6
|
674744e635ddbdfb311fbd25b5a27356560d30c3 |
|
24-Apr-2014 |
Vladimir Marko <vmarko@google.com> |
Use atomic load/store for volatile IGET/IPUT/SGET/SPUT. Bug: 14112919 Change-Id: I79316f438dd3adea9b2653ffc968af83671ad282
|
e45fb9e7976c8462b94a58ad60b006b0eacec49f |
|
06-May-2014 |
Matteo Franchin <matteo.franchin@arm.com> |
AArch64: Change arm64 backend to produce A64 code. The arm backend clone is changed to produce A64 code. At the moment this backend can only compile simple methods (both leaf and non-leaf). Most of the work on the assembler (assembler_arm64.cc) has been done. Some work on the LIR generation layer (functions such as OpRegRegImm & friends) is still necessary. The register allocator still needs to be adapted to the A64 instruction set (it is mostly unchanged from the arm backend). Offsets for helpers in gen_invoke.cc still need to be changed to work on 64-bit. Change-Id: I388f99eeb832857981c7d9d5cb5b71af64a4b921
|
3bf7c60a86d49bf8c05c5d2ac5ca8e9f80bd9824 |
|
07-May-2014 |
Vladimir Marko <vmarko@google.com> |
Cleanup ARM load/store wide and remove unused param s_reg. Use a single LDRD/VLDR instruction for wide load/store on ARM, adjust the base pointer if needed. Remove unused parameter s_reg from LoadBaseDisp(), LoadBaseIndexedDisp() and StoreBaseIndexedDisp() on all architectures. Change-Id: I25a9a42d523a68addbc11abe44ddc55a4401df98
|
455759b5702b9435b91d1b4dada22c4cce7cae3c |
|
06-May-2014 |
Vladimir Marko <vmarko@google.com> |
Remove LoadBaseDispWide and StoreBaseDispWide. Just pass k64 or kDouble to non-wide versions. Change-Id: I000619c3b78d3a71db42edc747c8a0ba1ee229be
|
43ec8737d8356dbff0a90bee521fb0e73438da47 |
|
31-Mar-2014 |
Matteo Franchin <matteo.franchin@arm.com> |
AArch64: Added arm64 quick backend as an arm clone. Created a new directory arm64 under compiler/dex/quick which contains a copy of the 32-bit arm backend. In following CLs, this code will be replaced/modified to support Aarch64. Change-Id: I06c468db8d588e339eecf4d7d85276d5e334a17a
|