09ed09866da6d8c7448ef297c148bfa577a247c2 |
|
12-Feb-2016 |
David Srbecky <dsrbecky@google.com> |
Pack stack map entries at the bit level to save space. Use only the minimum number of bits required to store stack map data. For example, if native_pc needs 5 bits and dex_pc needs 3 bits, they will share the first byte of the stack map entry. The header is changed to store bit offsets of the fields rather than byte sizes. Offsets also make it easier to access later fields without calculating the sum of all previous sizes. All header fields are byte-sized or ULEB128-encoded instead of using the previous fixed-size encoding, which shrinks the header by about half. It saves 3.6 MB from non-debuggable boot.oat (AOSP). It saves 3.1 MB from debuggable boot.oat (AOSP). It saves 2.8 MB (of 99.4 MB) from /system/framework/arm/ (GOOG). It saves 1.0 MB (of 27.8 MB) from /system/framework/oat/arm/ (GOOG). Field loads from stack maps seem to get around 10% faster (based on the time it takes to load all stack map entries from boot.oat). Bug: 27640410 Change-Id: I8bf0996b4eb24300c1b0dfc6e9d99fe85d04a1b7 |
|
d89f605b1de929ae158b3844e44a5d57f7aad72d |
|
12-Mar-2016 |
David Srbecky <dsrbecky@google.com> |
Ignore empty maps when emitting DWARF variable locations. This is a rewrite of https://android-review.googlesource.com/#/c/202115 The aim in both cases is to avoid gaps in the generated locations, which helps to keep the size of the generated DWARF down. However, the previous CL was a bit too eager in extending variable scopes and reporting locations: we might have reported a variable as in scope when, in fact, it was not. This CL implements a simpler solution by filtering out stack maps without dex register maps at the first opportunity. This should ensure that locations at breakpoints are completely accurate, as originally intended. Change-Id: I98378716c0ef5ef46b12181502904621eb6ecf2f |
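The filtering step can be sketched as follows. This is a simplified illustration, not ART's actual code: the struct and function names are hypothetical, and the real stack map entries carry much more data.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical reduced stack map entry.
struct StackMapEntry {
  uint32_t native_pc;
  bool has_dex_register_map;
};

// Drop stack maps that carry no dex register map before any variable
// locations are emitted, so a location is never reported for a PC
// where the register state is actually unknown.
void FilterEmptyMaps(std::vector<StackMapEntry>& entries) {
  entries.erase(
      std::remove_if(entries.begin(), entries.end(),
                     [](const StackMapEntry& e) {
                       return !e.has_dex_register_map;
                     }),
      entries.end());
}
```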
|
197160d47f34238cb5e7444fa4c2de300db8e2c6 |
|
07-Mar-2016 |
David Srbecky <dsrbecky@google.com> |
Refactor MethodDebugInfo (the input of the DWARF writer). Do not pass the CompiledMethod pointer through, since it is only available during AOT compilation but not during JIT compilation or at runtime. Creating a mock CompiledMethod just to pass data is proving increasingly tricky, so copy the fields that we need into MethodDebugInfo instead. Change-Id: I820297b41e769fcac488c0ff2d2ea0492bb13ed8 |
|
2ed15b61105b0f8ce811c32725bb9a1b6142c3a7 |
|
04-Mar-2016 |
David Srbecky <dsrbecky@google.com> |
Cache DexRegisterMaps when writing native debug info. The function might become more expensive in the future, so make sure it gets called only the minimum number of times. Change-Id: I1d09ecf1db7b54d28aaa11a152226d469f514fe7 |
|
7dc11782ff0a5dffcd8108f256f8975f0b3e8076 |
|
25-Feb-2016 |
David Srbecky <dsrbecky@google.com> |
Implement the << operator for DexRegisterLocation::Kind. This makes it printable, which DCHECK_EQ and similar macros require. Change-Id: I6b5b237be89325850ae6860d011fd6741189ab01 |
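The general shape of such an operator is shown below. This is a reduced sketch: the enumerators here are hypothetical stand-ins, and the real DexRegisterLocation::Kind in ART has more members.

```cpp
#include <cassert>
#include <ostream>
#include <sstream>

// Hypothetical reduced version of DexRegisterLocation::Kind.
enum class Kind { kNone, kInStack, kInRegister, kConstant };

// Stream insertion operator so the enum can be printed, e.g. by the
// failure message of DCHECK_EQ-style macros.
std::ostream& operator<<(std::ostream& os, Kind kind) {
  switch (kind) {
    case Kind::kNone:       return os << "none";
    case Kind::kInStack:    return os << "in stack";
    case Kind::kInRegister: return os << "in register";
    case Kind::kConstant:   return os << "constant";
  }
  return os << "unknown";
}
```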
|
b396c735e0de984ab0dcbeed06765c75a75e8352 |
|
10-Feb-2016 |
David Srbecky <dsrbecky@google.com> |
Add simple deduplication for .debug_ranges. Variables with the same scope can usually share .debug_ranges entries. Change-Id: I855f456782afdcc8ac5f622365d62ba8950a5c95 |
|
bfd26cdd6f8273618a3a3137ada579b03f96ae82 |
|
10-Feb-2016 |
David Srbecky <dsrbecky@google.com> |
Fill some gaps in .debug_loc. Use a best-effort guess when the location is unknown. This is only relevant for PCs in the middle of a statement, where the debugger should not usually stop in the first place. The main motivation is to reduce the size of .debug_loc, since the best-effort guesses allow us to merge consecutive entries. Change-Id: I94bfd01363404e72a2c953309e59020b1a6a4764 |
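The merging opportunity can be shown with a toy pass over location entries. This is an illustrative sketch, not ART's code: the struct is hypothetical and a string stands in for the DWARF location expression bytes. Once a gap is filled with the same guessed location as its neighbor, adjacent entries become mergeable.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical simplified .debug_loc entry.
struct LocEntry {
  uint64_t low;      // start PC (inclusive)
  uint64_t high;     // end PC (exclusive)
  std::string expr;  // stand-in for the location expression bytes
};

// Merge adjacent entries whose PC ranges touch and whose location
// expressions are identical, shrinking the entry list.
std::vector<LocEntry> MergeAdjacent(const std::vector<LocEntry>& in) {
  std::vector<LocEntry> out;
  for (const LocEntry& e : in) {
    if (!out.empty() && out.back().high == e.low && out.back().expr == e.expr) {
      out.back().high = e.high;  // extend the previous entry instead
    } else {
      out.push_back(e);
    }
  }
  return out;
}
```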
|
c5bfa97c47d656b76f297af8abcd5f7502987399 |
|
05-Feb-2016 |
David Srbecky <dsrbecky@google.com> |
Split elf_writer_debug.cc into several files. Refactoring only. The file has grown significantly over time, and it is time to split it so it can be better managed. Change-Id: Idce0231718add722292f4701df353d5baf31de5f |
|