5c77af98b16a0f5b15bc807f2b323a91fe2a048b |
|
15-Nov-2016 |
Jason Evans <jasone@canonware.com> |
Add extent serial numbers. Add extent serial numbers and use them where appropriate as a sort key that is higher priority than address, so that the allocation policy prefers older extents. This resolves #147.
/external/jemalloc/include/jemalloc/internal/extent.h
|
19ff2cefba48d1ddab8fb52e3d78f309ca2553cf |
|
22-Apr-2016 |
Jason Evans <jasone@canonware.com> |
Implement the arena.<i>.reset mallctl. This makes it possible to discard all of an arena's allocations in a single operation. This resolves #146.
/external/jemalloc/include/jemalloc/internal/extent.h
|
6bdeddb6976a9e372caafa6c5b270007b07c41ae |
|
11-Aug-2015 |
Jason Evans <jasone@canonware.com> |
Fix build failure. This regression was introduced by de249c8679a188065949f2560b1f0015ea6534b4 (Arena chunk decommit cleanups and fixes.). This resolves #254.
/external/jemalloc/include/jemalloc/internal/extent.h
|
de249c8679a188065949f2560b1f0015ea6534b4 |
|
10-Aug-2015 |
Jason Evans <jasone@canonware.com> |
Arena chunk decommit cleanups and fixes. Decommit arena chunk header during chunk deallocation if the rest of the chunk is decommitted.
/external/jemalloc/include/jemalloc/internal/extent.h
|
8fadb1a8c2d0219aded566bc5fac7d29cff9bb67 |
|
04-Aug-2015 |
Jason Evans <jasone@canonware.com> |
Implement chunk hook support for page run commit/decommit. Cascade from decommit to purge when purging unused dirty pages, so that it is possible to decommit cleaned memory rather than just purging. For non-Windows debug builds, decommit runs rather than purging them, since this causes access of deallocated runs to segfault. This resolves #251.
/external/jemalloc/include/jemalloc/internal/extent.h
|
b49a334a645b854dbb1649f15c38d646fee66738 |
|
28-Jul-2015 |
Jason Evans <je@fb.com> |
Generalize chunk management hooks. Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks allow control over chunk allocation/deallocation, decommit/commit, purging, and splitting/merging, such that the application can rely on jemalloc's internal chunk caching and retaining functionality, yet implement a variety of chunk management mechanisms and policies.

Merge the chunks_[sz]ad_{mmap,dss} red-black trees into chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries to honor the dss precedence setting; prior to this change the precedence setting was also consulted when recycling chunks.

Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead deallocate them in arena_unstash_purged(), so that the dirty memory linkage remains valid until after the last time it is used.

This resolves #176 and #201.
/external/jemalloc/include/jemalloc/internal/extent.h
|
f5c8f37259d7697c3f850ac1e5ef63b724cf7689 |
|
11-Mar-2015 |
Jason Evans <je@fb.com> |
Normalize rdelm/rd structure field naming.
/external/jemalloc/include/jemalloc/internal/extent.h
|
38e42d311c1844a66e8ced84551621de41e42b85 |
|
11-Mar-2015 |
Jason Evans <je@fb.com> |
Refactor dirty run linkage to reduce sizeof(extent_node_t).
/external/jemalloc/include/jemalloc/internal/extent.h
|
738e089a2e707dbfc70286f7deeebc68e03d2347 |
|
18-Feb-2015 |
Jason Evans <jasone@canonware.com> |
Rename "dirty chunks" to "cached chunks". Rename "dirty chunks" to "cached chunks", in order to avoid overloading the term "dirty". Fix the regression caused by 339c2b23b2d61993ac768afcc72af135662c6771 (Fix chunk_unmap() to propagate dirty state.), and actually address what that change attempted, which is to only purge chunks once, and propagate whether zeroed pages resulted into chunk_record().
/external/jemalloc/include/jemalloc/internal/extent.h
|
47701b22ee7c0df5e99efa0fcdcf98b9ff805b59 |
|
18-Feb-2015 |
Jason Evans <jasone@canonware.com> |
arena_chunk_dirty_node_init() --> extent_node_dirty_linkage_init()
/external/jemalloc/include/jemalloc/internal/extent.h
|
a4e1888d1a12d864f42350f2859e33eb3a0033f2 |
|
18-Feb-2015 |
Jason Evans <jasone@canonware.com> |
Simplify extent_node_t and add extent_node_init().
/external/jemalloc/include/jemalloc/internal/extent.h
|
ee41ad409a43d12900a5a3108f6c14f84e4eb0eb |
|
16-Feb-2015 |
Jason Evans <jasone@canonware.com> |
Integrate whole chunks into unused dirty page purging machinery. Extend per arena unused dirty page purging to manage unused dirty chunks in addition to unused dirty runs. Rather than immediately unmapping deallocated chunks (or purging them in the --disable-munmap case), store them in a separate set of trees, chunks_[sz]ad_dirty. Preferentially allocate dirty chunks. When excessive unused dirty pages accumulate, purge runs and chunks in integrated LRU order (and unmap chunks in the --enable-munmap case). Refactor extent_node_t to provide accessor functions.
/external/jemalloc/include/jemalloc/internal/extent.h
|
2195ba4e1f8f262b7e6586106d90f4dc0aea7630 |
|
16-Feb-2015 |
Jason Evans <jasone@canonware.com> |
Normalize *_link and link_* fields to all be *_link.
/external/jemalloc/include/jemalloc/internal/extent.h
|
cbf3a6d70371d2390b8b0e76814e04cc6088002c |
|
11-Feb-2015 |
Jason Evans <jasone@canonware.com> |
Move centralized chunk management into arenas. Migrate all centralized data structures related to huge allocations and recyclable chunks into arena_t, so that each arena can manage huge allocations and recyclable virtual memory completely independently of other arenas.

Add chunk node caching to arenas, in order to avoid contention on the base allocator.

Use chunks_rtree to look up huge allocations rather than a red-black tree. Maintain a per arena unsorted list of huge allocations (which will be needed to enumerate huge allocations during arena reset).

Remove the --enable-ivsalloc option, make ivsalloc() always available, and use it for size queries if --enable-debug is enabled. The only practical implications to this removal are that 1) ivsalloc() is now always available during live debugging (and the underlying radix tree is available during core-based debugging), and 2) size query validation can no longer be enabled independent of --enable-debug.

Remove the stats.chunks.{current,total,high} mallctls, and replace their underlying statistics with simpler atomically updated counters used exclusively for gdump triggering. These statistics are no longer very useful because each arena manages chunks independently, and per arena statistics provide similar information.

Simplify chunk synchronization code, now that base chunk allocation cannot cause recursive lock acquisition.
/external/jemalloc/include/jemalloc/internal/extent.h
|
918a1a5b3f09cb456c25be9a2555a8fea6a9bb94 |
|
31-Jan-2015 |
Jason Evans <jasone@canonware.com> |
Reduce extent_node_t size to fit in one cache line.
/external/jemalloc/include/jemalloc/internal/extent.h
|
e12eaf93dca308a426c182956197b0eeb5f2cff3 |
|
08-Dec-2014 |
Jason Evans <je@fb.com> |
Style and spelling fixes.
/external/jemalloc/include/jemalloc/internal/extent.h
|
602c8e0971160e4b85b08b16cf8a2375aa24bc04 |
|
19-Aug-2014 |
Jason Evans <jasone@canonware.com> |
Implement per thread heap profiling. Rename data structures (prof_thr_cnt_t-->prof_tctx_t, prof_ctx_t-->prof_gctx_t), and convert to storing a prof_tctx_t for sampled objects. Convert PROF_ALLOC_PREP() to prof_alloc_prep(), since precise backtrace depth within jemalloc functions is no longer an issue (pprof prunes irrelevant frames). Implement mallctls:
- prof.reset implements full sample data reset, and optional change of sample interval.
- prof.lg_sample reads the current sample interval (opt.lg_prof_sample was the permanent source of truth prior to prof.reset).
- thread.prof.name provides naming capability for threads within heap profile dumps.
- thread.prof.active makes it possible to activate/deactivate heap profiling for individual threads.
Modify the heap dump files to contain per thread heap profile data. This change is incompatible with the existing pprof, which will require enhancements to read and process the enriched data.
/external/jemalloc/include/jemalloc/internal/extent.h
|
fb7fe50a88ca9bde74e9a401ae17ad3b15bbae28 |
|
06-May-2014 |
aravind <aravind@fb.com> |
Add support for user-specified chunk allocators/deallocators. Add new mallctl endpoints "arena.<i>.chunk.alloc" and "arena.<i>.chunk.dealloc" to allow userspace to configure jemalloc's chunk allocator and deallocator on a per-arena basis.
/external/jemalloc/include/jemalloc/internal/extent.h
|
7de92767c20cb72c94609b9c78985526fb84a679 |
|
09-Oct-2012 |
Jason Evans <je@fb.com> |
Fix mlockall()/madvise() interaction. mlockall(2) can cause purging via madvise(2) to fail. Fix purging code to check whether madvise() succeeded, and base zeroed page metadata on the result. Reported by Olivier Lecomte.
/external/jemalloc/include/jemalloc/internal/extent.h
|
7372b15a31c63ac5cb9ed8aeabc2a0a3c005e8bf |
|
11-Feb-2012 |
Jason Evans <je@fb.com> |
Reduce cpp conditional logic complexity. Convert configuration-related cpp conditional logic to use static constant variables, e.g.:

  #ifdef JEMALLOC_DEBUG
    [...]
  #endif

becomes:

  if (config_debug) {
    [...]
  }

The advantage is clearer, more concise code. The main disadvantage is that data structures no longer have conditionally defined fields, so they pay the cost of all fields regardless of whether they are used. In practice, this is only a minor concern; config_stats will go away in an upcoming change, and config_prof is the only other major feature that depends on more than a few special-purpose fields.
/external/jemalloc/include/jemalloc/internal/extent.h
|
7427525c28d58c423a68930160e3b0fe577fe953 |
|
01-Apr-2011 |
Jason Evans <jasone@canonware.com> |
Move repo contents in jemalloc/ to top level.
/external/jemalloc/include/jemalloc/internal/extent.h
|