History log of /external/jemalloc/src/chunk_dss.c
Revision Date Author Comments
1ae9287a1aec534fa0a805a717f1c4e058ae8433 31-Mar-2016 Jason Evans <je@fb.com> Fix potential chunk leaks.

Move chunk_dalloc_arena()'s implementation into chunk_dalloc_wrapper(),
so that if the dalloc hook fails, proper decommit/purge/retain cascading
occurs. This fixes three potential chunk leaks on OOM paths, one during
dss-based chunk allocation, one during chunk header commit (currently
relevant only on Windows), and one during rtree write (e.g. if rtree
node allocation fails).

Merge chunk_purge_arena() into chunk_purge_default() (refactor, no
change to functionality).

Bug: 28590121

(cherry picked from commit 8d8960f635c63b918ac54e0d1005854ed7a2692b)

Change-Id: I70758757b3342e0623918bb56c602873a367a192
/external/jemalloc/src/chunk_dss.c
78ae1ac486ffd7953536786c9a5f9dc2bda78858 08-Sep-2015 Dmitry-Me <wipedout@yandex.ru> Reduce variable scope.

This resolves #274.
/external/jemalloc/src/chunk_dss.c
03bf5b67be92db3a49f81816dccb5c18c0f2a0c0 12-Aug-2015 Jason Evans <jasone@canonware.com> Try to decommit new chunks.

Always leave decommit disabled on non-Windows systems.
/external/jemalloc/src/chunk_dss.c
8fadb1a8c2d0219aded566bc5fac7d29cff9bb67 04-Aug-2015 Jason Evans <jasone@canonware.com> Implement chunk hook support for page run commit/decommit.

Cascade from decommit to purge when purging unused dirty pages, so that
it is possible to decommit cleaned memory rather than just purging. For
non-Windows debug builds, decommit runs rather than purging them, so
that accesses of deallocated runs segfault.

This resolves #251.
/external/jemalloc/src/chunk_dss.c
b49a334a645b854dbb1649f15c38d646fee66738 28-Jul-2015 Jason Evans <je@fb.com> Generalize chunk management hooks.

Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on
the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks
allow control over chunk allocation/deallocation, decommit/commit,
purging, and splitting/merging, such that the application can rely on
jemalloc's internal chunk caching and retaining functionality, yet
implement a variety of chunk management mechanisms and policies.

Merge the chunks_[sz]ad_{mmap,dss} red-black trees into
chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries
to honor the dss precedence setting; prior to this change the precedence
setting was also consulted when recycling chunks.

Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead
deallocate them in arena_unstash_purged(), so that the dirty memory
linkage remains valid until after the last time it is used.

This resolves #176 and #201.
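
A minimal sketch of installing custom hooks through this mallctl,
assuming the jemalloc 4.x chunk_hooks_t layout; the logging wrapper and
its name are hypothetical, and all other hooks are left delegating to
the defaults:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <jemalloc/jemalloc.h>

static chunk_hooks_t orig_hooks;

/* Hypothetical wrapper: log each chunk allocation, then delegate to
   the previously installed hook. */
static void *
log_chunk_alloc(void *new_addr, size_t size, size_t alignment, bool *zero,
    bool *commit, unsigned arena_ind)
{
    void *ret = orig_hooks.alloc(new_addr, size, alignment, zero,
        commit, arena_ind);
    fprintf(stderr, "chunk alloc: %p (%zu bytes)\n", ret, size);
    return ret;
}

int
main(void)
{
    chunk_hooks_t hooks;
    size_t len = sizeof(orig_hooks);

    /* Read the current hooks for arena 0, wrap the alloc hook, and
       write the modified table back. */
    if (mallctl("arena.0.chunk_hooks", &orig_hooks, &len, NULL, 0))
        return 1;
    hooks = orig_hooks;
    hooks.alloc = log_chunk_alloc;
    if (mallctl("arena.0.chunk_hooks", NULL, NULL, &hooks, sizeof(hooks)))
        return 1;

    free(malloc(8 * 1024 * 1024)); /* large enough to need a new chunk */
    return 0;
}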
/external/jemalloc/src/chunk_dss.c
738e089a2e707dbfc70286f7deeebc68e03d2347 18-Feb-2015 Jason Evans <jasone@canonware.com> Rename "dirty chunks" to "cached chunks".

Rename "dirty chunks" to "cached chunks", in order to avoid overloading
the term "dirty".

Fix the regression caused by 339c2b23b2d61993ac768afcc72af135662c6771
(Fix chunk_unmap() to propagate dirty state.), and actually address what
that change attempted: purge chunks only once, and propagate to
chunk_record() whether the purge yielded zeroed pages.
/external/jemalloc/src/chunk_dss.c
ee41ad409a43d12900a5a3108f6c14f84e4eb0eb 16-Feb-2015 Jason Evans <jasone@canonware.com> Integrate whole chunks into unused dirty page purging machinery.

Extend per arena unused dirty page purging to manage unused dirty chunks
in addition to unused dirty runs. Rather than immediately unmapping
deallocated chunks (or purging them in the --disable-munmap case), store
them in a separate set of trees, chunks_[sz]ad_dirty. Preferentially
allocate dirty chunks. When excessive unused dirty pages accumulate,
purge runs and chunks in integrated LRU order (and unmap chunks in the
--enable-munmap case).

Refactor extent_node_t to provide accessor functions.
/external/jemalloc/src/chunk_dss.c
cbf3a6d70371d2390b8b0e76814e04cc6088002c 11-Feb-2015 Jason Evans <jasone@canonware.com> Move centralized chunk management into arenas.

Migrate all centralized data structures related to huge allocations and
recyclable chunks into arena_t, so that each arena can manage huge
allocations and recyclable virtual memory completely independently of
other arenas.

Add chunk node caching to arenas, in order to avoid contention on the
base allocator.

Use chunks_rtree to look up huge allocations rather than a red-black
tree. Maintain a per arena unsorted list of huge allocations (which
will be needed to enumerate huge allocations during arena reset).

Remove the --enable-ivsalloc option, make ivsalloc() always available,
and use it for size queries if --enable-debug is enabled. The only
practical implications of this removal are that 1) ivsalloc() is now
always available during live debugging (and the underlying radix tree is
available during core-based debugging), and 2) size query validation can
no longer be enabled independent of --enable-debug.

Remove the stats.chunks.{current,total,high} mallctls, and replace their
underlying statistics with simpler atomically updated counters used
exclusively for gdump triggering. These statistics are no longer very
useful because each arena manages chunks independently, and per arena
statistics provide similar information.

Simplify chunk synchronization code, now that base chunk allocation
cannot cause recursive lock acquisition.
/external/jemalloc/src/chunk_dss.c
879e76a9e57e725e927e77900940967d301a4958 03-Nov-2014 Daniel Micay <danielmicay@gmail.com> teach the dss chunk allocator to handle new_addr

This provides in-place expansion of huge allocations when the end of the
allocation is at the end of the sbrk heap. There's already the ability
to extend in-place via recycled chunks, but this handles the initial
growth of the heap via repeated vector / string reallocations.

A possible future extension could allow realloc to go from the following:

| huge allocation | recycled chunks |
                                    ^ dss_end

To a larger allocation built from recycled *and* new chunks:

| huge allocation                   |
                                    ^ dss_end

Doing that would involve teaching the chunk recycling code to request
new chunks to satisfy the request. The chunk_dss code wouldn't require
any further changes.

#include <stdlib.h>

int main(void) {
    size_t chunk = 4 * 1024 * 1024;
    void *ptr = NULL;
    for (size_t size = chunk; size < chunk * 128; size *= 2) {
        ptr = realloc(ptr, size);
        if (!ptr) return 1;
    }
    return 0;
}

Before:

dss:secondary: 0.083s
dss:primary: 0.083s

After:

dss:secondary: 0.083s
dss:primary: 0.003s

The dss heap grows in the upwards direction, so the oldest chunks are at
the low addresses and they are used first. Linux prefers to grow the
mmap heap downwards, so the trick will not work in the *current* mmap
chunk allocator as a huge allocation will only be at the top of the heap
in a contrived case.
/external/jemalloc/src/chunk_dss.c
551ebc43647521bdd0bc78558b106762b3388928 03-Oct-2014 Jason Evans <jasone@canonware.com> Convert to uniform style: cond == false --> !cond
/external/jemalloc/src/chunk_dss.c
bd87b01999416ec7418ff8bdb504d9b6c009ff68 16-Apr-2014 Jason Evans <je@fb.com> Optimize Valgrind integration.

Forcefully disable tcache if running inside Valgrind, and remove
Valgrind calls in tcache-specific code.

Restructure Valgrind-related code to move most Valgrind calls out of the
fast path functions.

Take advantage of static knowledge to elide some branches in
JEMALLOC_VALGRIND_REALLOC().
/external/jemalloc/src/chunk_dss.c
4d434adb146375ad17f0d5e994ed5728d2942e3f 15-Apr-2014 Jason Evans <je@fb.com> Make dss non-optional, and fix an "arena.<i>.dss" mallctl bug.

Make dss non-optional on all platforms which support sbrk(2).

Fix the "arena.<i>.dss" mallctl to return an error if "primary" or
"secondary" precedence is specified, but sbrk(2) is not supported.
/external/jemalloc/src/chunk_dss.c
66688535969c6dcb234448e590f27df38b4eebdf 04-Dec-2013 Jason Evans <jasone@canonware.com> Avoid deprecated sbrk(2) on OS X.

Avoid referencing sbrk(2) on OS X, because it is deprecated as of OS X
10.9 (Mavericks), and the compiler warns against using it.
/external/jemalloc/src/chunk_dss.c
06912756cccd0064a9c5c59992dbac1cec68ba3f 01-Feb-2013 Jason Evans <je@fb.com> Fix Valgrind integration.

Fix Valgrind integration to annotate all internally allocated memory in
a way that keeps Valgrind happy about internal data structure access.
/external/jemalloc/src/chunk_dss.c
38067483c542adfe092644d1ecc103c6bc74add0 22-Jan-2013 Jason Evans <jasone@canonware.com> Tighten valgrind integration.

Tighten valgrind integration such that immediately after memory is
validated or zeroed, valgrind is told to forget the memory's 'defined'
state. The only place newly allocated memory should be left marked as
'defined' is in the public functions (e.g. calloc() and realloc()).
/external/jemalloc/src/chunk_dss.c
609ae595f0358157b19311b0f9f9591db7cee705 11-Oct-2012 Jason Evans <je@fb.com> Add arena-specific and selective dss allocation.

Add the "arenas.extend" mallctl, so that it is possible to create new
arenas that are outside the set that jemalloc automatically multiplexes
threads onto.

Add the ALLOCM_ARENA() flag for {,r,d}allocm(), so that it is possible
to explicitly allocate from a particular arena.

Add the "opt.dss" mallctl, which controls the default precedence of dss
allocation relative to mmap allocation.

Add the "arena.<i>.dss" mallctl, which makes it possible to set the
default dss precedence on a per arena or global basis.

Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".

Add the "stats.arenas.<i>.dss" mallctl.
/external/jemalloc/src/chunk_dss.c
7ad54c1c30e0805e0758690115875f982de46cf2 22-Apr-2012 Jason Evans <jasone@canonware.com> Fix chunk allocation/deallocation bugs.

Fix chunk_alloc_dss() to zero memory when requested.

Fix chunk_dealloc() to avoid chunk_dealloc_mmap() for dss-allocated
memory.

Fix huge_palloc() to always junk fill when requested.

Improve chunk_recycle() to report that memory is zeroed as a side effect
of pages_purge().
/external/jemalloc/src/chunk_dss.c
8f0e0eb1c01d5d934586ea62e519ca8b8637aebc 21-Apr-2012 Jason Evans <jasone@canonware.com> Fix a memory corruption bug in chunk_alloc_dss().

Fix a memory corruption bug in chunk_alloc_dss() that was due to
claiming newly allocated memory is zeroed.

Reverse order of preference between mmap() and sbrk() to prefer mmap().

Clean up management of 'zero' parameter in chunk_alloc*().
/external/jemalloc/src/chunk_dss.c
7ca0fdfb85b2a9fc7a112e158892c098e004385b 13-Apr-2012 Jason Evans <je@fb.com> Disable munmap() if it causes VM map holes.

Add a configure test to determine whether common mmap()/munmap()
patterns cause VM map holes, and only use munmap() to discard unused
chunks if the problem does not exist.

Unify the chunk caching for mmap and dss.

Fix options processing to limit lg_chunk to be large enough that
redzones will always fit.
/external/jemalloc/src/chunk_dss.c
83c324acd8bd5f32e0ce9b4d3df2f1a0ae46f487 12-Apr-2012 Mike Hommey <mh@glandium.org> Use a stub replacement and disable dss when sbrk is not supported
/external/jemalloc/src/chunk_dss.c
5ff709c264e52651de25b788692c62ff1f6f389c 12-Apr-2012 Jason Evans <jasone@canonware.com> Normalize aligned allocation algorithms.

Normalize arena_palloc(), chunk_alloc_mmap_slow(), and
chunk_recycle_dss() to use the same algorithm for trimming
over-allocation.

Add the ALIGNMENT_ADDR2BASE(), ALIGNMENT_ADDR2OFFSET(), and
ALIGNMENT_CEILING() macros, and use them where appropriate.

Remove the run_size_p parameter from sa2u().

Fix a potential deadlock in chunk_recycle_dss() that was introduced by
eae269036c9f702d9fa9be497a1a2aa1be13a29e (Add alignment support to
chunk_alloc()).
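
For reference, a sketch of what these macros compute, paraphrased for
power-of-two alignment rather than quoted verbatim from the source:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Round an address down to the nearest multiple of alignment. */
#define ALIGNMENT_ADDR2BASE(a, alignment) \
    ((void *)((uintptr_t)(a) & ~((uintptr_t)(alignment) - 1)))

/* Distance from an address back to its alignment base. */
#define ALIGNMENT_ADDR2OFFSET(a, alignment) \
    ((size_t)((uintptr_t)(a) & ((uintptr_t)(alignment) - 1)))

/* Round a size up to the nearest multiple of alignment. */
#define ALIGNMENT_CEILING(s, alignment) \
    (((s) + ((alignment) - 1)) & ~((size_t)(alignment) - 1))

int
main(void)
{
    assert(ALIGNMENT_CEILING((size_t)5, (size_t)8) == 8);
    assert(ALIGNMENT_ADDR2OFFSET((void *)0x1003, 0x1000) == 0x3);
    assert(ALIGNMENT_ADDR2BASE((void *)0x1003, 0x1000) == (void *)0x1000);
    return 0;
}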
/external/jemalloc/src/chunk_dss.c
a1ee7838e14b321a97bfacb1f1cf5004198f2203 11-Apr-2012 Jason Evans <je@fb.com> Rename labels.

Rename labels from FOO to label_foo in order to avoid system macro
definitions, in particular OUT and ERROR on mingw.

Reported by Mike Hommey.
/external/jemalloc/src/chunk_dss.c
eae269036c9f702d9fa9be497a1a2aa1be13a29e 10-Apr-2012 Mike Hommey <mh@glandium.org> Add alignment support to chunk_alloc().
/external/jemalloc/src/chunk_dss.c
4e2e3dd9cf19ed5991938a708a8b50611aa5bbf8 14-Mar-2012 Jason Evans <je@fb.com> Fix fork-related bugs.

Acquire/release arena bin locks as part of the prefork/postfork. This
bug made deadlock in the child between fork and exec a possibility.

Split jemalloc_postfork() into jemalloc_postfork_{parent,child}() so
that the child can reinitialize mutexes rather than unlocking them. In
practice, this bug tended not to cause problems.
/external/jemalloc/src/chunk_dss.c
7372b15a31c63ac5cb9ed8aeabc2a0a3c005e8bf 11-Feb-2012 Jason Evans <je@fb.com> Reduce cpp conditional logic complexity.

Convert configuration-related cpp conditional logic to use static
constant variables, e.g.:

  #ifdef JEMALLOC_DEBUG
    [...]
  #endif

becomes:

  if (config_debug) {
    [...]
  }

The advantage is clearer, more concise code. The main disadvantage is
that data structures no longer have conditionally defined fields, so
they pay the cost of all fields regardless of whether they are used. In
practice, this is only a minor concern; config_stats will go away in an
upcoming change, and config_prof is the only other major feature that
depends on more than a few special-purpose fields.
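
A sketch of the pattern, assuming the flag is defined roughly as below
(the exact definition in the tree may differ):

#include <stdbool.h>

/* The flag is a compile-time constant, so the compiler eliminates the
   dead branch while still type-checking the disabled code. */
static const bool config_debug =
#ifdef JEMALLOC_DEBUG
    true
#else
    false
#endif
    ;

void
example(void)
{
    if (config_debug) {
        /* debug-only logic, compiled out in release builds */
    }
}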
/external/jemalloc/src/chunk_dss.c
7427525c28d58c423a68930160e3b0fe577fe953 01-Apr-2011 Jason Evans <jasone@canonware.com> Move repo contents in jemalloc/ to top level.
/external/jemalloc/src/chunk_dss.c