174b2e27ebf933b80f4e8b64b4b024ab4306aaac |
|
12-Oct-2017 |
Vladimir Marko <vmarko@google.com> |
Use ScopedArenaAllocator for code generation.

Reuse the memory previously allocated on the ArenaStack by optimization
passes. This CL handles only the architecture-independent codegen and
slow paths; architecture-dependent codegen allocations shall be moved to
the ScopedArenaAllocator in a follow-up.

Memory needed to compile the two most expensive methods for the
aosp_angler-userdebug boot image:
  BatteryStats.dumpCheckinLocked(): 19.6MiB -> 18.5MiB (-1189KiB)
  BatteryStats.dumpLocked():        39.3MiB -> 37.0MiB (-2379KiB)

Also move the definitions of functions that use bit_vector-inl.h from
bit_vector.h to bit_vector-inl.h.

Test: m test-art-host-gtest
Test: testrunner.py --host --optimizing
Bug: 64312607
Change-Id: I84688c3a5a95bf90f56bd3a150bc31fedc95f29c
|
121f1482c27e240ce09b2373d186831e4dff9689 |
|
12-May-2017 |
Andreas Gampe <agampe@google.com> |
ART: Add arena tracking mode

Add an arena tracking mode to get better insight into allocation
behavior. In this mode, the default size of arenas is very small, so
that use of one arena for multiple allocation sites is limited. At the
same time, do not release arenas back into their pool; free them
instead.

"Tracking" in this context refers to tools that analyze calls to
allocation routines, e.g., massif and heaptrack. The goal of this CL is
to enable more precise tracking with such tools. The smaller minimal
arena sizes and deallocation instead of reuse lead to actual malloc
calls instead of bump-pointer behavior, exposing ArenaAllocator-based
allocation to such tools.

To limit the build-time impact of switching tracking on and off, add an
-inl file for the arena allocator that defines the controlling flag and
the default arena size.

Bug: 34053922
Test: m test-art-host
Change-Id: I09bb5e743d7dc47e499a402d6fcac637c16a26ad
|
4fb3a42ae82859603209f23f2025551398eec6ba |
|
15-Feb-2016 |
Vladimir Marko <vmarko@google.com> |
ART: Fix ArenaStack::AllocWithMemoryTool().

MEMORY_TOOL_MAKE_NOACCESS() takes the size of the address range as its
second argument, not the end of the range.

Bug: 27156726
Change-Id: I05c8224a1d3c619919b203f407fb770c7c49cc9f
|
75001934af9fa3f2538f564bb4073d711809f1ff |
|
10-Nov-2015 |
Vladimir Marko <vmarko@google.com> |
ART: Fix arena allocation for valgrind.

Move the zero-initialization check after marking the newly allocated
chunk as defined, and check only the allocated space without the red
zone. Also mark unallocated space as inaccessible instead of just
undefined.

Change-Id: I74fc65f5b53acb74cec4e5a0146f41dacf4a1470
|
2a408a3bef330551818f9cec9a7c5aa7a3f1129e |
|
18-Sep-2015 |
Vladimir Marko <vmarko@google.com> |
ART: Mark deallocated arena memory as inaccessible.

Mark arena and scoped arena memory freed by allocator adapters as
inaccessible. This can help catch accesses to the old storage of a
container, for example the old data of an ArenaVector<> that's been
resized. Together with debug-mode enforcement of destruction of all
scoped arena containers, this provides strong verification of their
memory usage.

However, this does not apply to the normal (non-scoped) arena memory
held by arena containers, as they are typically not destroyed if they
are themselves located in the arena. ArenaBitVector memory, whether in a
normal or scoped arena, isn't marked either.

Change-Id: I4d2a80fedf7ceb7d4ce24ee8e7bcd53513171388
|
1e13374baf7dfaf442ffbf9809c37c131d681eaf |
|
20-May-2015 |
Evgenii Stepanov <eugenis@google.com> |
Generalize Valgrind annotations in ART to support ASan.

Also add redzones around non-fixed mem_map(s). Also extend the
-Wframe-larger-than limit to enable the arm64 ASan build.

Change-Id: Ie572481a25fead59fc8978d2c317a33ac418516c
|
b666f4805c8ae707ea6fd7f6c7f375e0b000dba8 |
|
18-Feb-2015 |
Mathieu Chartier <mathieuc@google.com> |
Move arenas into runtime

Moved the arena pool into the runtime. Motivation: allow the GC to use
arena allocators, and recycle the arena pool for linear alloc.

Bug: 19264997
Change-Id: I8ddbb6d55ee923a980b28fb656c758c5d7697c2f
|