88c57e189ecf7d5ed64d1073a37839b250874aa5 |
|
11-Oct-2010 |
Carl Shapiro <cshapiro@google.com> |
Use the break position of the current mspace for sizing the zygote heap. Previously, the mspace footprint used the "overhead" of a heap, which underestimates the size of the zygote heap by 16 bytes: the size of the descriptor deposited at the start of an mspace containing the control information about that mspace. If a heap is a multiple of a page, or within 15 bytes of one, the size of the new heap would be underestimated. Bad things happened when this underestimate was used to create an application heap. The starting address of the application heap was based on a correctly computed value instead of the underestimate. This caused the application heap to be one page too large and to end one page beyond where it should. This additional page happened to overlap the first page of one of the heap bitmaps. Furthermore, the mspace routine would proceed to access-protect that page, thinking it was unused free space. During the next GC, reads of the first page of the bitmap would generate a SIGSEGV. By using the break position, correctly rounded, for all sizing computations, this problem no longer exists. Change-Id: Icb3c82731e589747e8e4cf16d0797052e64b3ad5
|
718509c35413c01a866c848d019d8ca28b425bf6 |
|
29-Sep-2010 |
Carl Shapiro <cshapiro@google.com> |
After trimming, set the footprint to the number of pages in use. A trim can decrease an mspace's footprint, but it will not decrease its max footprint. We need to decrease the max footprint to make any pages recovered by a trim available to external allocations. By setting the ideal footprint after a trim, we lift any soft limit in effect and make the mspace footprint and max footprint equal. Change-Id: Ia6eb99634ce1d732b417a90291b110d1fc46c2e3
|
e8edf08f04ecbe37f3e18a650a7c9002ceee4275 |
|
28-Sep-2010 |
Carl Shapiro <cshapiro@google.com> |
Perform only one garbage collection before attempting a trim. Change-Id: Id7ea77fd8e6055a76a5f52bb96dd0544f88ce06b
|
812c1bed55e0ed9b092d320cb29d8adc17e5a10b |
|
27-Sep-2010 |
Carl Shapiro <cshapiro@google.com> |
Provide the required out parameter to the trim routine. Resolves http://b/issue?id=3040192. Change-Id: I886a2dc99956b06e953f03ac390865b118b634a3
|
1c9d0ab244da441d95a1f2abbb104a0b1015d5d5 |
|
25-Sep-2010 |
Carl Shapiro <cshapiro@google.com> |
Wait for the GC to quiesce before attempting foreground GCs. Previously, dvmTrackExternalAllocation waited for the GC to complete before retrying its allocation. However, there is no guarantee that the GC will not be active at the time we are woken. Furthermore, the code did not revalidate that the external allocation is still possible, an assumption made by all externalAlloc calls. With this change, the code loops until the GC is no longer active, validates that the allocation is still possible, and then proceeds with its routine for allocating additional storage. In addition, we try a few heroic measures to establish the externalAllocPossible invariant rather than immediately failing the call. Change-Id: I2e3b8a6c9fab617990edc085f52d0df35ad6d0f6
|
2c81bdc3bb892d7d60855e14f61854f20a9f6cb8 |
|
08-Sep-2010 |
Carl Shapiro <cshapiro@google.com> |
Cherry pick new concurrent gc trigger change from dalvik-dev. git cherry-pick d370c7d8c5bd4f49274b5d306751c43c7bb44a0b --no-commit git cherry-pick 562cafca106d36ae910fafa87f3d5f245fe818ae --no-commit git cherry-pick ab46f94967a76a1c141c1e719d5f2cffe2780a8c --no-commit Change-Id: Iba35cd3afee5d575b8121f7ab3ef5b45b37f5278
|
b8c347d05fc8137d793df4dfdb98c429151d54a4 |
|
19-Aug-2010 |
Carl Shapiro <cshapiro@google.com> |
Remove an assertion that cannot be guaranteed. The intention of this assert was to check that the address range spanned by the mark bitmap was a proper superset of the live bitmap. Because allocations can occur before these bitmaps are prepared, the live bitmap may legitimately expand beyond the range of the mark bitmap. As such, this check is not reliable. Change-Id: I2f23e9e7f3716a61ecf155ba81fd8baa5a82100d
|
0d615c3ce5bf97ae65b9347ee77968f38620d5e8 |
|
18-Aug-2010 |
Andy McFadden <fadden@android.com> |
Always support debugging and profiling. This eliminates the use of the WITH_DEBUGGER and WITH_PROFILER conditional compilation flags. We've never shipped a device without these features, and it's unlikely we ever will. They're not worth the code clutter they cause. As usual, since I can't test the x86-atom code I left that alone and added an item to the TODO list. Bug 2923442. Change-Id: I335ebd5193bc86f7641513b1b41c0378839be1fe
|
e6a1b4dfc33732368bc6045501acd5e6e95f32a4 |
|
18-Aug-2010 |
Carl Shapiro <cshapiro@google.com> |
Fix a critical space leak introduced by concurrent sweeping. When computing the bitmaps for each heap, the live bitmap was assumed to have greater extent than the mark bitmap. With the concurrent sweep the mark and live bitmaps are swapped before the sweep bitmaps are computed. As such, the live bitmap extent is always less than or equal to the mark bitmap. A benchmark which loops creating objects just to drop them on the floor will exclude most objects in the heap as candidates for sweeping and will exhaust the heap. The change fixes the extent computation and reintroduces an assert to check that the bitmap we assume to be the largest is the largest. Change-Id: I78694d2a0550de70c85e2087d482050a147a207a
|
8881a8098e259a1faf392d20c1fefc1ee4a63b20 |
|
11-Aug-2010 |
Carl Shapiro <cshapiro@google.com> |
Sweep concurrently. After marking, exchange the mark and live bitmaps and resume all threads. The sweep proceeds concurrently viewing the new live bitmap as the old mark bitmap thereby permitting allocations performed while sweeping to update the live bitmap. Change-Id: I9c307190a14ce417413175db016be41c38aeeaf3
|
4dc622c5b4ce48b5f2f0a0f92f316dd557fc950f |
|
31-Jul-2010 |
Carl Shapiro <cshapiro@google.com> |
Remove the dvmHeapSizeChanged no-op routine. Change-Id: I6deb4ea858610edee6e7aa44d49d91cae5a25404
|
81010a40820b7c74a09d11d612d12a19e0c0488d |
|
19-Jul-2010 |
Barry Hayes <bhayes@google.com> |
Break apart the swapping of the mark and live bitmaps and zeroing of the mark bitmap. This paves the way for concurrent sweep. Change-Id: I93a95188fecfd69d1d1933391a332537649206fa
|
fe6087708cdeafecf223dd1d2d5e6606db70a4e1 |
|
29-Jul-2010 |
Carl Shapiro <cshapiro@google.com> |
Fix a crash during VM shutdown. The code to shut down the GC daemon thread was not checking whether the GC daemon thread had been initialized. This caused pthread_join to crash waiting for an uninitialized thread object. Change-Id: Iac338a054775aa024d74fbb4a5de35e12d95b862
|
8a0b52386b1b34c132b11eba8e534ca26665077e |
|
21-Jul-2010 |
Barry Hayes <bhayes@google.com> |
Fix up a failing assert. YET AGAIN bitten by max being inclusive. Only failed occasionally, and failures quickly restart and roll off the log. I wasn't watching closely enough when I tested, I expect. Change-Id: I2c8e9c0b49ffd9b575e8fa4c882bdae9c64e8304
|
73f3e6f5ef9eda738324bcd5634df172d9c6e977 |
|
15-Jul-2010 |
Barry Hayes <bhayes@google.com> |
When aliasing a bitmap, use smallest available limit. dvmHeapSourceGetObjectBitmaps was using each Heap's allocation limit to set the max for the bitmap. This causes the HeapBitmap iterators to do unnecessary work, over parts of the HeapBitmap which could be known to be all zeros. Making the max for a HeapBitmap also take into account the liveBits's max will inform the HeapBitmap iterators that these regions are all zeros. This will only change the calculation for the active allocation Heap; but when there are two Heaps, the Zygote Heap is expected to be pretty well packed, with not a lot of extra words of zeroes in the HeapBitmap. Also fixes a defect in aliasBitmap's calculation of bitsLen. See http://b/issue?id=2857152 Change-Id: Iacb6bc400318702d760a774c6ca5eab67b8bdfd3
|
4ab3f7540e6d4f344f244e4bae2e08895d1e6e92 |
|
20-Jul-2010 |
Carl Shapiro <cshapiro@google.com> |
Revert "When aliasing a bitmap, use smallest available limit." This did not correctly compute the intended limit. This reverts commit 740cd296284e682d9d92bd638d266efccf817bbe.
|
d7de450bfc47208409da7614e8f904a4a7cb20dc |
|
15-Jul-2010 |
Carl Shapiro <cshapiro@google.com> |
Fix a crash during VM shutdown. The routine to shut down the threads in a heap source was not checking whether a heap source had been created. This caused a crash when the VM requested a shutdown before it had fully initialized. Change-Id: Iea0c7e9ed86ede881986a6576d9f973b2ec8c36d
|
b874ab98306a109c4988bb1cde687a24f4f8201f |
|
14-Jul-2010 |
Barry Hayes <bhayes@google.com> |
Make dvmCardTableStartup more independent of HeapSource startup. Also added a call to dvmCardTableShutdown. Also removed the dependency on GcHeap from CardTable.h. Change-Id: Icf0293572371cc8a30b55672816fdd75a151e82c
|
f16e9ed8b59e623c46177db4e13129095d4210fd |
|
14-Jul-2010 |
Barry Hayes <bhayes@google.com> |
Revert "Fix the bitsLen calculation in aliasBitmap." This reverts commit 87a48d25f4451824261153bfdde9dc4c4e51cbb2.
|
87a48d25f4451824261153bfdde9dc4c4e51cbb2 |
|
14-Jul-2010 |
Barry Hayes <bhayes@google.com> |
Fix the bitsLen calculation in aliasBitmap. Since the max is inclusive, not exclusive, this was making the bits appear too short. HeapBitmap's walk routines were using this short value for the finger in the final FLUSH_POINTERBUF call. That caused scanObject to believe that it could mark objects at the top of the heap, rather than having to push them on the mark stack. But, alas, as we have noted, it is the final FLUSH_POINTERBUF. Change-Id: Ie562defc3c2e5304cf4c96026447359b3f4fcb59
|
9da2fc026533b476fd2db167c83abad62dd9e01d |
|
13-Jul-2010 |
Carl Shapiro <cshapiro@google.com> |
Remove a stray log statement. Change-Id: I4c4c6111cc67480377374ee3c10b729cc93622f9
|
740cd296284e682d9d92bd638d266efccf817bbe |
|
10-Jul-2010 |
Carl Shapiro <cshapiro@google.com> |
When aliasing a bitmap, use smallest available limit. The original implementation of the bitmap aliasing routine chose the conservative max value of the heap limit. This is perfect for the zygote but is oversized for application heaps that are not anywhere near full. Now the code consults the live bitmap and will use its max value if it is smaller than the heap limit. Change-Id: I7cf223efdeaed318922a8a5f9e147092e539da6c
|
6e5cf6021b2f3e00e18ab402f23ab93b27c6061b |
|
22-Jun-2010 |
Barry Hayes <bhayes@google.com> |
Quicker partial collection by using card marking. Add calls to the card marking from the write barrier routines, so that a write to an Object marks the appropriate card. Add code in the GC to use and rebuild the cards at a partial GC, clearing cards in the Zygote heap which do not in fact contain references to the application heap. Change-Id: Ie6f29fd096e029f48085715b282b6db8a7122555
|
ec47e2e081dcd43dca10d5e2c6856f73e94b0460 |
|
02-Jul-2010 |
Carl Shapiro <cshapiro@google.com> |
Allow allocation during a concurrent GC. Previously, any thread performing a GC held the heap lock for the entire GC. If the GC performed was a concurrent GC, mutator threads that allocate during the GC would be blocked until the GC completed. With this change, if the GC performed is a concurrent GC, the heap lock is released while the roots are being traced. If a mutator thread allocates an object from available storage, the allocation proceeds. If a mutator thread attempts to allocate an object larger than available storage, the thread will block until the GC completes. Change-Id: I91a04179c6f583f878b685405a6fdd16b9995017
|
9e59477ceac80df2e00ee4a387df31c7c5e8d05f |
|
29-Jun-2010 |
Carl Shapiro <cshapiro@google.com> |
Fix a deadlock when shutting down the GC thread. Change-Id: Ief3ec542db45c911ece7b9d02359a9326c5e14c8
|
ec805eaed940e40212e85b58b163c7649feaca56 |
|
29-Jun-2010 |
Carl Shapiro <cshapiro@google.com> |
Add a mode for concurrently marking and sweeping. When enabled, the mutator threads will be resumed while tracing proceeds from the roots. After the trace completes the mutator threads are suspended, the roots are remarked, and marked objects are scanned for updates and re-marked. There are two limitations to this implementation. For the sake of expediency, mutators are not permitted to allocate during the concurrent marking phase. This will be addressed in a subsequent change. As there is no write barrier, all objects, rather than just those objects assigned to during the concurrent phase, are scanned for updates. This will be addressed after a write barrier is implemented. Change-Id: I82dba23b58a1cf985589ed21ec0cffb5ebf48aae
|
864e8d0e8828cd7e3560ac3e77b5716f7a1d8ca0 |
|
26-Jun-2010 |
Carl Shapiro <cshapiro@google.com> |
Remove an obsolete TODO. Change-Id: Ie1b79eec88bf410548d42faca4f511387859e02d
|
364f9d924cbd9d392744a66f80cc084c3d80caf0 |
|
12-Jun-2010 |
Barry Hayes <bhayes@google.com> |
Put wrappers on all stores of Object pointers into heap Objects. Also: changed ++ loops to [i] loops where I'm touching the code. Added some asserts. Added dvmHeapSourceContainsAddress. Added dvmIsValidObjectAddress. Change-Id: I6586688246064aecabb1e22e1dca276fecee7795
|
c5f53e2c1107e8a62638038bbff163731908da34 |
|
12-Jun-2010 |
Elliott Hughes <enh@google.com> |
Consistently use strerror(3) rather than reporting raw errno values. We were doing the decoding almost everywhere, leaving just this handful of places to fix. Change-Id: I76100c213ded50e544eb6f740fea0b1633068b92
|
08651de086dc33b55485c0b1ae3b08960641c0b9 |
|
09-Jun-2010 |
Carl Shapiro <cshapiro@google.com> |
Rename the heap virtual memory allocation to "dalvik-heap". Change-Id: I05b9af92c4026be9445aff87520b2529b4757fe7
|
de75089fb7216d19e9c22cce4dc62a49513477d3 |
|
09-Jun-2010 |
Carl Shapiro <cshapiro@google.com> |
Remove trailing whitespace. Change-Id: I95534bb2b88eaf48f2329282041118cd034c812b
|
fbdcfb9ea9e2a78f295834424c3f24986ea45dac |
|
29-May-2010 |
Brian Carlstrom <bdc@google.com> |
Merge remote branch 'goog/dalvik-dev' into dalvik-dev-to-master Change-Id: I0c0edb3ebf0d5e040d6bbbf60269fab0deb70ef9
|
e3c01dac83e6eea7f82fe81ed89cfbdd9791dbc9 |
|
21-May-2010 |
Carl Shapiro <cshapiro@google.com> |
Remove unused labels, variables, and functions. Enable warnings. Change-Id: Icbe24eaf1ad499f28b68b6a5f05368271a0a7e86
|
e168ebd5a7cfc57936c16ff7d7f7063e967bdb9d |
|
07-May-2010 |
Barry Hayes <bhayes@google.com> |
Remove the HeapBitmap List routines. Change-Id: Ic518798ba8574534746ada9e8757948ae2e1bab9
|
425848f6b64603f79c336c08b6e1cbca1a9f6048 |
|
04-May-2010 |
Barry Hayes <bhayes@google.com> |
The "partial GC" code should not copy immune bits when doing a full GC. Change-Id: I09f032e9a1dda585bd2475fc6d6f93f3ef1fc036
|
d77f7fdc429c3aa5c8ff429278d0178392c760b5 |
|
06-Apr-2010 |
Carl Shapiro <cshapiro@google.com> |
Rename the objBits to the more descriptive liveBits. Change-Id: Ic1a2915f943cb9541fcc52c6b6a6d9d3b7354b5f
|
a0f1d13ae7319f7e8821b022c793a9aaa762c102 |
|
04-Apr-2010 |
Carl Shapiro <cshapiro@google.com> |
Fix two issues with the partial gc code. * Eliminate a fence error in bitmap aliasing. The heap structure uses asymmetric bounds but the bitmap structure uses symmetric bounds. Bias accordingly. * Explicitly convert the immune limit to a uintptr_t to muffle a compiler warning. Change-Id: I05e74ab57035ee06eb6d88e773b1d680531a7e2f
|
d25566d9278e6424e521f4b7148ac31480e60c5c |
|
12-Mar-2010 |
Carl Shapiro <cshapiro@google.com> |
Add the ability to treat the zygote heap as a root and collect just the forked application heap. Change-Id: I8807897ae426f8274018d950fec44a2182a90525
|
962adba4e5db286a36bc8024f5c023bcf6f29312 |
|
17-Mar-2010 |
Barry Hayes <bhayes@google.com> |
Added flags to the vm, -Xgc:[no]preverify and -Xgc:[no]postverify, to run verify routines over the heap pre- and post-gc. Changed the Verify.h interface: it now publishes an entry point for verifying a HeapBitmap, rather than the HeapBitmap callback. Added dvmHeapSuspendAndVerify to Heap.h for verification outside of the GC. Added callbacks before and after GC, under the locks, under flag control. Processing of properties to produce flags is in a different project, frameworks/base. Change-Id: I3f3896583fe9e7239bbe2f374d7ed4c5dd5d3e82
|
f373efd3321307f54c102e02b3ee7eb922c4765c |
|
19-Feb-2010 |
Carl Shapiro <cshapiro@google.com> |
Allocate an object and mark bitmap which span the entire virtual memory reservation of the heap. Eliminates indirection during the marking phase.
|
8d72480298d3bab7ca6087482ca62af66974a2a1 |
|
15-Feb-2010 |
Carl Shapiro <cshapiro@google.com> |
Densely allocate mspaces. New heaps are started on the page following the break of the previously allocated heap. In addition, we shrink the reported size of the previous object bitmap so it covers only the committed range of the heap. We now separately track the size of the bitmap virtual memory reservation so we may unmap it all at shutdown time.
|
98389d0893ad3d3e06cfb38296b01de39e52db31 |
|
15-Feb-2010 |
Carl Shapiro <cshapiro@google.com> |
Eliminate unused variable warnings in the alloc code. In the places where unused attributes have been added, arguably, the return code should be passed up to the caller or, if the return code indicates an error, we should fast-fail.
|
a199eb70871a8c142a723d76b1b08939286a3199 |
|
10-Feb-2010 |
Carl Shapiro <cshapiro@google.com> |
Allocate a contiguous region of virtual memory to be subdivided among the various heaps managed by the garbage collector. Because we cannot tell how far the break has been advanced by morecore, we over allocate virtual memory and grain each heap on a multiple of the maximum heap size. If we could reckon the position of the break, we could allocate just as many pages as required. This requires exporting more state from mspace.c, a refinement I will reserve for a future change list.
|
c8e06c8dba30f01d3cf6506c553041c340b312af |
|
05-Feb-2010 |
Carl Shapiro <cshapiro@google.com> |
Eliminate the post-zygote heap and reuse the zygote allocation heap for application allocations. Previously, applications were given their own heap separate from the zygote. However, the zygote never allocates more than 10s of objects, most of which quickly become garbage. After an application fork, these objects are reclaimed, dirtying the pages they and their malloc structures reside on. This is a further win for the GC as it results in one fewer mspace to be considered for range checks and bitmap traversals.
|
ddd000b91fc22fb1e85b5687545e231231351fd1 |
|
02-Feb-2010 |
Carl Shapiro <cshapiro@google.com> |
Fix a long-standing bug within dvmHeapSourceGetObjectBitmaps. All callers of this function assign the return value to an unsigned value even though this function returns -1 in the error case. This causes the error checks to succeed in cases where they should otherwise fail. Rather than return -1 on error, I have elected to return 0 instead, which just happens to be compatible with all current uses.
|
1b9b4e4b89e1c682b6684ae5e2a637e4497a67e9 |
|
04-Jan-2010 |
Barry Hayes <bhayes@google.com> |
Percolate the reason for a GC up far enough to print out in logging messages.
|
06f254ec0102b43fa4faed2483befd945bc12996 |
|
17-Dec-2009 |
Barry Hayes <bhayes@google.com> |
Clean up some misunderstanding about what mspaces are: They are already pointers.
|
dced79474902ffa57fbd48121eb794aad7d24ddc |
|
17-Nov-2009 |
Andy McFadden <fadden@android.com> |
Reduce logging. This cuts out some unnecessarily verbose dalvikvm chatter, notably: Trying to load lib /system/lib/librs_jni.so 0x0 Added shared lib /system/lib/librs_jni.so 0x0 These messages can be useful for people trying to get their apps to work with the NDK, so I'm only suppressing them when the path starts with "/system". The result is that you can boot the system and run all standard apps without seeing them, but we'll still see app-private libs being loaded. Also LOGI->LOGV for "Debugger thread not active, ignoring DDM send", which seemed to be firing on startup for APp NaMe events. Ditto for "Splitting out new zygote heap", which only happens once, but doesn't strike me as a particularly useful thing to log.
|
96516932f1557d8f48a8b2dbbb885af01a11ef6e |
|
29-Oct-2009 |
Andy McFadden <fadden@android.com> |
Change the way breakpoints work. This replaces the breakpoint mechanism with a more efficient approach. We now insert breakpoint instructions into the bytecode stream instead of maintaining a table. This requires mapping DEX files as private instead of shared, which allows copy-on-write to work. mprotect() is used to guard the pages against inadvertent writes. Unused opcode EC is now OP_BREAKPOINT. It's not recognized by dexdump or any interpreter except portdbg, but it can be encountered by the bytecode verifier (the debugger can request breakpoints in unverified code). Breakpoint changes are blocked while the verifier runs to avoid races. This eliminates method->debugBreakpointCount, which is no longer needed. (Also, it clashed with LinearAlloc's read-only mode.) The deferred verification error mechanism was using a code-copying approach to modify the bytecode stream. That has been changed to use the same copy-on-write modification mechanism. Also, normalized all PAGE_SIZE/PAGESIZE references to a single SYSTEM_PAGE_SIZE define. Simple Fibonacci computation test times (opal-eng):
  JIT, no debugger: 10.6ms
  Fast interp, no debugger: 36ms
  Portable interp, no debugger: 43.8ms
  ORIG debug interp, no breakpoints set: 458ms
  ORIG debug interp, breakpoint set nearby: 697ms
  NEW debug interp, no breakpoints set: 341ms
  NEW debug interp, breakpoints set nearby: 341ms
Where "nearby" means there's a breakpoint in the method doing the computation that isn't actually hit -- the VM had an optimization where it flagged methods with breakpoints and skipped some of the processing when possible. The bottom line is that code should run noticeably faster while a debugger is attached.
|
72e93344b4d1ffc71e9c832ec23de0657e5b04a5 |
|
13-Nov-2009 |
Jean-Baptiste Queru <jbq@google.com> |
eclair snapshot
|
dde8ab037540aaec554a471d67613b959cc0e9f4 |
|
20-May-2009 |
Barry Hayes <bhayes@google.com> |
Change strategy for freeing objects in the sweep. dlfree() in dlmalloc.c is fairly expensive. It checks the previous and next block to see if the current block can be merged into one of the free blocks, and if it does merge, that involves manipulating the internal free lists. The sweep phase of the GC is already visiting free objects in address order, and has a list of objects to be freed. The new strategy is: loop over the list of objects to be freed, merging when possible, and only calling free() when either the list has run out or a non-adjacent free object is found.
|
f6c387128427e121477c1b32ad35cdcaa5101ba3 |
|
04-Mar-2009 |
The Android Open Source Project <initial-contribution@android.com> |
auto import from //depot/cupcake/@135843
|
f72d5de56a522ac3be03873bdde26f23a5eeeb3c |
|
04-Mar-2009 |
The Android Open Source Project <initial-contribution@android.com> |
auto import from //depot/cupcake/@135843
|
5d709784bbf5001012d7f25172927d46f6c1abe1 |
|
11-Feb-2009 |
The Android Open Source Project <initial-contribution@android.com> |
auto import from //branches/cupcake/...@130745
|
2ad60cfc28e14ee8f0bb038720836a4696c478ad |
|
21-Oct-2008 |
The Android Open Source Project <initial-contribution@android.com> |
Initial Contribution
|