a0b23bbed9b2199e85da401e3b2d0ddef74cd9fb
07-Jul-2015
Andreas Gampe <agampe@google.com>

ART: Release inputs in Long.reverse intrinsic in x86

In the worst case we are using two temps each for input and output. Then we do not have a temp left over for the swap operations. The input is dead, however, after the first swap. So try to release it (a no-op if it isn't actually a temp).

Bug: 22324327
(cherry picked from commit 575422fa5be7389bdaff5e2d25dd87b1d2d4de85)
Change-Id: I1fc50159afdad14160e34abeaf4670958171d6b2

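As context for this fix, the Java-level semantics the intrinsic implements can be sketched in plain Java. On 32-bit x86 the 64-bit value lives in a register pair, so the compiler reverses each 32-bit half and then swaps the halves, which is why swap operations (and the freed temp) are needed. The class name is illustrative:

```java
public class LongReverseSketch {
    // Reference decomposition: reverse each 32-bit half, then swap the halves.
    static long reverseViaHalves(long x) {
        long lo = Integer.reverse((int) x) & 0xFFFFFFFFL;          // reversed low half
        long hi = Integer.reverse((int) (x >>> 32)) & 0xFFFFFFFFL; // reversed high half
        return (lo << 32) | hi; // swap: the reversed low half becomes the high half
    }

    public static void main(String[] args) {
        long x = 0x123456789ABCDEF0L;
        System.out.println(Long.reverse(x) == reverseViaHalves(x)); // true
        System.out.println(Long.reverse(1L) == Long.MIN_VALUE);     // true
    }
}
```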
ce7d005c1ba0716423d44861d2d0f58f142ff06a
08-May-2015
Andreas Gampe <agampe@google.com>

ART: arm indexOf intrinsics for the optimizing compiler

Add intrinsics implementations for indexOf in the optimizing compiler. These are mostly ported from Quick.

Bug: 20889065
(cherry picked from commit ba6fdbcb764d5a8972f5ff2d7147e4d78226b347)
Change-Id: I18ee849d41187a381f99529669e6f97040aaacf6

21030dd59b1e350f6f43de39e3c4ce0886ff539c
07-May-2015
Andreas Gampe <agampe@google.com>

ART: x86 indexOf intrinsics for the optimizing compiler

Add intrinsics implementations for indexOf in the optimizing compiler. These are mostly ported from Quick. Add instruction support to assemblers where necessary.

Change-Id: Ife90ed0245532a5c436a26fe84715dc357f353c8

678e6959d5af8e7b07bf51f1648516c146bdf8d2
08-May-2015
Andreas Gampe <agampe@google.com>

ART: Refactor 082-inline-execute

Refactor the indexOf intrinsics tests so that the optimizing compiler would actually compile them.

Bug: 20889065
Change-Id: I69bfda7fa3eb4ce42c593203731e3ddd61f7e1ed

2bcf9bf784a0021630d8fe63d7230d46d6891780
29-Jan-2015
Andreas Gampe <agampe@google.com>

ART: Arm intrinsics for Optimizing compiler

Add arm32 intrinsics to the optimizing compiler.

Change-Id: If4aeedbf560862074d8ee08ca4484b666d6b9bf0

878d58cbaf6b17a9e3dcab790754527f3ebc69e5
16-Jan-2015
Andreas Gampe <agampe@google.com>

ART: Arm64 optimizing compiler intrinsics

Implement most intrinsics for the optimizing compiler for Arm64.

Change-Id: Idb459be09f0524cb9aeab7a5c7fccb1c6b65a707

24c846a0df02d4cc2ef8a9c476305dca96be40db
26-Jan-2015
Vladimir Marko <vmarko@google.com>

Quick: Fix range check for intrinsic String.charAt() on x86.

Bug: 19125146
(cherry picked from commit 00ca84730a21578dcc6b47bd8e08b78ab9b2dded)
Change-Id: I67184371597fdcc9d9186172c1cff4efd3ca3093

00ca84730a21578dcc6b47bd8e08b78ab9b2dded
26-Jan-2015
Vladimir Marko <vmarko@google.com>

Quick: Fix range check for intrinsic String.charAt() on x86.

Bug: 19125146
Change-Id: I274190a7a60cd2e29a854738ed1ec99a9e611969

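The library behavior this range-check fix must preserve: String.charAt throws StringIndexOutOfBoundsException for any index below zero or at or past length(), even when the call is intrinsified. A minimal sketch (class name is illustrative):

```java
public class CharAtRangeCheck {
    public static void main(String[] args) {
        String s = "abc";
        System.out.println(s.charAt(2)); // 'c' -- last valid index
        try {
            s.charAt(3); // one past the end: must throw
            System.out.println("no exception (range check broken)");
        } catch (StringIndexOutOfBoundsException e) {
            System.out.println("range check works");
        }
    }
}
```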
ff87d7bdc2c06bece8ea783dd4979360f1d51103
20-Jan-2015
Chao-ying Fu <chao-ying.fu@intel.com>

ART: Fix GenInlined functions

This patch fixes the following functions to generate no code when their results are unused:
- Mir2Lir::GenInlinedReverseBytes
- Mir2Lir::GenInlinedAbsInt
- Mir2Lir::GenInlinedAbsLong
- Mir2Lir::GenInlinedFloatCvt
- Mir2Lir::GenInlinedDoubleCvt
- X86Mir2Lir::GenInlinedSqrt
- X86Mir2Lir::GenInlinedMinMaxFP
- X86Mir2Lir::GenInlinedMinMax
- X86Mir2Lir::GenInlinedPeek
- X86Mir2Lir::GenInlinedReverseBits

New calls without assignments are added to 082-inline-execute.

Change-Id: I7076e9ddbea43545315f2aeb677c63a8a6e95224
Signed-off-by: Chao-ying Fu <chao-ying.fu@intel.com>

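The "calls without assignments" added to the test are of this kind: intrinsic candidates whose results are deliberately discarded, which the fixed GenInlined* functions should lower to no code at all. A sketch of such test calls (the specific methods chosen here are illustrative):

```java
public class UnusedResultCalls {
    public static void main(String[] args) {
        // Results deliberately unused: the compiler may emit no code for these,
        // but it must not miscompile or crash on them.
        Integer.reverseBytes(0x12345678);
        Math.abs(-42);
        Math.min(1.0, 2.0);
        // The same call with a used result must still compute correctly.
        System.out.println(Integer.reverseBytes(0x12345678) == 0x78563412); // true
    }
}
```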
71fb52fee246b7d511f520febbd73dc7a9bbca79
30-Dec-2014
Andreas Gampe <agampe@google.com>

ART: Optimizing compiler intrinsics

Add intrinsics infrastructure to the optimizing compiler. Add almost all intrinsics supported by Quick to the x86-64 backend. Further intrinsics require more assembler support.

Change-Id: I48de9b44c82886bb298d16e74e12a9506b8e8807

2eba1fa7e9e5f91e18ae3778d529520bd2c78d55
31-Jul-2014
Serban Constantinescu <serban.constantinescu@arm.com>

AArch64: Add inlining support for ceil(), floor(), rint(), round()

This patch adds inlining support for the following Math, StrictMath methods in the ARM64 backend:
* double ceil(double)
* double floor(double)
* double rint(double)
* long round(double)
* int round(float)

Also some cleanup.

Change-Id: I9f5a2f4065b1313649f4b0c4380b8176703c3fe1
Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>

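The rounding semantics the inlined versions must match are subtle: rint() rounds half to even, while round() rounds half up. A plain-Java sketch of the distinctions (class name is illustrative):

```java
public class RoundingSemantics {
    public static void main(String[] args) {
        System.out.println(Math.ceil(1.2));    // 2.0
        System.out.println(Math.floor(-1.2));  // -2.0
        System.out.println(Math.rint(2.5));    // 2.0 -- ties round to even
        System.out.println(Math.round(2.5));   // 3   -- ties round up (long overload)
        System.out.println(Math.round(2.5f));  // 3   -- int overload for float
    }
}
```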
1222c96fafe98061cfc57d3bd115f46edb64e624
15-Jul-2014
Alexei Zavjalov <alexei.zavjalov@intel.com>

ART: inline Math.Max/Min (float and double)

This implements the inlined version of the Math.Max/Min intrinsics.

Change-Id: I2db8fa7603db3cdf01016ec26811a96f91b1e6ed
Signed-off-by: Alexei Zavjalov <alexei.zavjalov@intel.com>
Signed-off-by: Shou, Yixin <yixin.shou@intel.com>

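Floating-point max/min is harder to inline than the integer variants because of NaN and signed-zero handling, which any inlined version must preserve. A sketch of the edge cases (class name is illustrative):

```java
public class MinMaxFpEdgeCases {
    public static void main(String[] args) {
        System.out.println(Math.max(1.5f, 2.5f)); // 2.5 -- the ordinary case
        // NaN propagates: if either operand is NaN, the result is NaN.
        System.out.println(Double.isNaN(Math.max(1.0, Double.NaN))); // true
        // Signed zeros are ordered: min treats -0.0 as smaller than 0.0.
        System.out.println(Double.compare(Math.min(-0.0, 0.0), -0.0) == 0); // true
    }
}
```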
f37a88b8e6db6c587fa449a12e40cb46be1689fc
10-Jul-2014
Zuo Wang <zuo.wang@intel.com>

ART: Compacting ROS/DlMalloc spaces with semispace copy GC

Current semispace copy GC is mainly associated with bump pointer spaces. Though it squeezes fragmentation most aggressively, an extra copy is required to re-establish the data in the ROS/DlMalloc space to allow CMS GCs to happen afterwards. As semispace copy GC is still stop-the-world, this not only introduces unnecessary overheads but also longer response time. Response time indicates the time duration between the start of the transition request and the start of the transition animation, which may impact the user experience.

Using semispace copy GC to compact the data in a ROS space to another ROS (or DlMalloc space to another DlMalloc) space solves this problem. Although it squeezes less fragmentation, CMS GCs can run immediately after the compaction. We apply this algorithm in two cases:
1) Right before throwing an OOM if -XX:EnableHSpaceCompactForOOM is passed in as true.
2) When the app is switched to background if the -XX:BackgroundGC option has value HSpaceCompact.

For case 1), OOMs are significantly delayed in the harmony GC stress test, with compaction ratio up to 0.87. For case 2), compaction ratio around 0.5 is observed in both the built-in SMS app and the browser. Similar results have been obtained on other apps as well.

Change-Id: Iad9eabc6d046659fda3535ae20f21bc31f89ded3
Signed-off-by: Wang, Zuo <zuo.wang@intel.com>
Signed-off-by: Chang, Yang <yang.chang@intel.com>
Signed-off-by: Lei Li <lei.l.li@intel.com>
Signed-off-by: Lin Zang <lin.zang@intel.com>

a3fe7422d7ce8bfb01f95decef45f91a44d39264
09-Jul-2014
Zheng Xu <zheng.xu@arm.com>

AArch64: Fix and enable reverseBytes intrinsic.

There is no revsh on arm64; use rev16 and sxth instead.

Change-Id: I5f9879352f0ad76b386c82cbf476894af888a64c

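The rev16 + sxth sequence is needed because of sign extension: reversing the bytes of a short can produce a negative value, and Short.reverseBytes must return it sign-extended. A plain-Java sketch of the case that matters (class name is illustrative):

```java
public class ReverseBytesSignExtension {
    public static void main(String[] args) {
        // Reversing the bytes of 0x00FF gives 0xFF00, which as a short is
        // negative -- so a plain 16-bit byte swap must be followed by a
        // sign extension (hence rev16 + sxth when there is no single revsh).
        short r = Short.reverseBytes((short) 0x00FF);
        System.out.println(r);                   // -256
        System.out.println(r == (short) 0xFF00); // true
    }
}
```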
0cbfd44bd3dce9bc796e851237c5646336eee4d1
09-Jul-2014
Andreas Gampe <agampe@google.com>

ART: Add simple tests for inlining of CAS

Add simple test cases for the inlining of CAS in the quick compiler to run-test 082. The tests are not multi-threaded and will just establish that the baseline behavior is correct. For extensive evaluation consider tests available in libcore.

Change-Id: I9f463599e48ab7abc725769dda84758c9c6a76c2

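The single-threaded baseline behavior such tests establish can be sketched with AtomicInteger, the usual Java-level entry point to CAS (the exact test shape in run-test 082 may differ; this is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasBaseline {
    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(0);
        System.out.println(v.compareAndSet(0, 5)); // true: expected 0, swapped to 5
        System.out.println(v.compareAndSet(0, 7)); // false: value is now 5, no swap
        System.out.println(v.get());               // 5
    }
}
```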
eb24baec056dbe5871f1bc64b793eb2e69907866
08-Jul-2014
Alexei Zavjalov <alexei.zavjalov@intel.com>

x86_64: enable Peek and Poke intrinsics

This implements intrinsics for:
Memory.peekByte/Short/Int/Long()
Memory.pokeByte/Short/Int/Long()

Change-Id: I6da6250f262dfd7aded35c2e3ade2d0916bd73cb
Signed-off-by: Alexei Zavjalov <alexei.zavjalov@intel.com>

7a94961d0917495644193b281b04a570a783bb07
08-Jul-2014
Andreas Gampe <agampe@google.com>

ART: Do not emit load when inlining unused Thread.currentThread()

When the result is not used, do not emit the load. This avoids uninitialized registers leading to size-check errors.

Change-Id: I212392ffea7243720f120b2f12679df286106a02

23abec955e2e733999a1e2c30e4e384e46e5dde4
02-Jul-2014
Serban Constantinescu <serban.constantinescu@arm.com>

AArch64: Add a few more inline functions

This patch adds inlining support for the following functions:
* Math.max/min(long, long)
* Math.max/min(float, float)
* Math.max/min(double, double)
* Integer.reverse(int)
* Long.reverse(long)

Change-Id: Ia2b1619fd052358b3a0d23e5fcbfdb823d2029b9
Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>

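A few reference values for the newly inlined methods, handy as sanity checks when verifying an inlined implementation (class name is illustrative):

```java
public class InlineCandidateChecks {
    public static void main(String[] args) {
        System.out.println(Math.max(3L, 7L));                        // 7
        System.out.println(Math.min(-3L, 7L));                       // -3
        // Bit reversal: bit 0 maps to the top bit, and the op is an involution.
        System.out.println(Integer.reverse(1) == Integer.MIN_VALUE); // true
        System.out.println(Integer.reverse(Integer.MIN_VALUE));      // 1
    }
}
```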
a1758d83e298c9ee31848bcae07c2a35f6efd618
16-Apr-2014
Alexei Zavjalov <alexei.zavjalov@intel.com>

String.indexOf handles a negative start index incorrectly

The standard implementation of String.indexOf converts a negative start index to 0, so searching starts from the beginning of the string. But the current implementation may start searching from an incorrect memory offset, which can lead to a SIGSEGV or an incorrect result. This patch adds handling for the cases where fromIndex is negative.

Change-Id: I3ac86290712789559eaf5e46bef0006872395bfa
Signed-off-by: Alexei Zavjalov <alexei.zavjalov@intel.com>

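The required semantics are simple to state in plain Java: a negative fromIndex behaves exactly like 0, so the search covers the whole string (class name is illustrative):

```java
public class IndexOfNegativeStart {
    public static void main(String[] args) {
        String s = "hello";
        // Negative fromIndex is clamped to 0: same results as searching from 0.
        System.out.println(s.indexOf('l', -5)); // 2
        System.out.println(s.indexOf('h', -1)); // 0
        System.out.println(s.indexOf('z', -1)); // -1 -- not found, no crash
    }
}
```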
34fa0d935bed7a0e17bc6df4bd079e3428a179e7
12-Mar-2014
Yevgeny Rouban <yevgeny.y.rouban@intel.com>

ART's intrinsic for String.indexOf uses the incorrect register

ART's intrinsic for String.indexOf on the x86 platform uses the incorrect register to compare the start index with the string length. This patch fixes it.

Change-Id: I22986b4d4b23f62b4bb97baab9fe43152d12145e
Signed-off-by: Vladimir Ivanov <vladimir.a.ivanov@intel.com>
Signed-off-by: Yevgeny Rouban <yevgeny.y.rouban@intel.com>

8249b425ba81d804c222c746e31bfcac9516e759
29-Oct-2013
Sebastien Hertz <shertz@google.com>

Avoid verifier crash for quickened invoke on null.

When verifying an invoke-virtual-quick on a "null" instance, we can't infer the class of the method being invoked. This CL handles this case and avoids a crash due to a failed check in RegType::GetClass. Also revert changes made to test 082-inline-execute since it succeeds with this CL now.

Bug: 11427954
Change-Id: I4b2c1deaa43b144684539acea471543716f36fb3

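The pattern the verifier must accept is ordinary Java: a virtual call on a receiver that is known to be null passes verification but throws NullPointerException at runtime. A minimal sketch of such a call site (class name is illustrative):

```java
public class NullReceiverCall {
    public static void main(String[] args) {
        String s = null;
        try {
            // Verifies fine (the static type is String), but the receiver is
            // null, so the call throws at runtime instead of dispatching.
            s.charAt(0);
            System.out.println("unreachable");
        } catch (NullPointerException e) {
            System.out.println("NPE as expected");
        }
    }
}
```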
ad3d996316dd90b84b4b29ccdfc4aeeb1ec890ee
29-Oct-2013
Sebastien Hertz <shertz@google.com>

Fix test failure in SMART mode.

Update test 082-inline-execute to avoid a verifier issue with quickening. An invoke-virtual is quickened into an invoke-virtual-quick. However, it is made on a "null" instance. During verification, we can't infer the method being invoked from the vtable index since we have no access to the class. This leads to a failed check and an abort.

Bug: 11427954
Change-Id: I740d67c72abb617db67a2c35d9be8346358f78ce

bf1442d5445405ddc4f67cdac2b4ebe2d37888e0
05-Mar-2013
Sebastien Hertz <shertz@google.com>

Update intrinsic inlining test.

Adds invokes of StrictMath operations to reflect compiler inlining support.

Change-Id: Ibb2205a41c1e79ddbeacc2e716a9d05b723eb532

28c384bc3bf7244f25cfe320c55db5d3d9171832
16-Jun-2012
Elliott Hughes <enh@google.com>

Test all cases of all intrinsics.

Bug: 6617283
Change-Id: I463ef1e2c09ad41af2e45f17f2f23e8d59f560e0

5d1ac920fdaef5d4ec8f66bb734488cd9660b024
30-Sep-2011
jeffhao <jeffhao@google.com>

Adding old unit tests to test suite.

These tests are copied straight over. They'll still run, but they're using the old system.

Change-Id: If494519e52ddf858a9febfc55bdae830468cb3c8