History log of /external/jemalloc/src/arena.c
Revision Date Author Comments
e98a620c59ac20b13e2de796164cc67f050ed2bf 17-Nov-2016 Jason Evans <jasone@canonware.com> Mark partially purged arena chunks as non-hugepage.

Add the pages_[no]huge() functions, which toggle huge page state via
madvise(..., MADV_[NO]HUGEPAGE) calls.

The first time a page run is purged from within an arena chunk, call
pages_nohuge() to tell the kernel to make no further attempts to back
the chunk with huge pages. Upon arena chunk deletion, restore the
associated virtual memory to its original state via pages_huge().

This resolves #243.
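
For illustration, a minimal sketch of this madvise()-based toggle on Linux (hypothetical helper names, not jemalloc's actual pages_[no]huge() implementations):

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/mman.h>

    /* Ask the kernel to stop backing [addr, addr+size) with transparent huge
     * pages; returns true on success. */
    static bool
    pages_nohuge_sketch(void *addr, size_t size) {
        return (madvise(addr, size, MADV_NOHUGEPAGE) == 0);
    }

    /* Restore the default huge page eligibility for the same range. */
    static bool
    pages_huge_sketch(void *addr, size_t size) {
        return (madvise(addr, size, MADV_HUGEPAGE) == 0);
    }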
/external/jemalloc/src/arena.c
2379479225e5be5f93626e13a37577c76a670fb3 15-Nov-2016 Jason Evans <jasone@canonware.com> Consistently use size_t rather than uint64_t for extent serial numbers.
/external/jemalloc/src/arena.c
5c77af98b16a0f5b15bc807f2b323a91fe2a048b 15-Nov-2016 Jason Evans <jasone@canonware.com> Add extent serial numbers.

Add extent serial numbers and use them where appropriate as a sort key
that is higher priority than address, so that the allocation policy
prefers older extents.

This resolves #147.
/external/jemalloc/src/arena.c
5d6cb6eb66b05261cccd2b416f50ad98d1735229 07-Nov-2016 Jason Evans <jasone@canonware.com> Refactor prng to not use 64-bit atomics on 32-bit platforms.

This resolves #495.
/external/jemalloc/src/arena.c
a4e83e859353ea19dc8377088eae31520d291550 07-Nov-2016 Jason Evans <jasone@canonware.com> Fix run leak.

Fix arena_run_first_best_fit() to search all potentially non-empty
runs_avail heaps, rather than ignoring the heap that contains runs
larger than large_maxclass, but less than chunksize.

This fixes a regression caused by
f193fd80cf1f99bce2bc9f5f4a8b149219965da2 (Refactor runs_avail.).

This resolves #493.
/external/jemalloc/src/arena.c
28b7e42e44a1a77218a941d9dfe5bb643d884219 04-Nov-2016 Jason Evans <jasone@canonware.com> Fix arena data structure size calculation.

Fix paren placement so that QUANTUM_CEILING() applies to the correct
portion of the expression that computes how much memory to base_alloc().
In practice this bug had no impact. This was caused by
5d8db15db91c85d47b343cfc07fc6ea736f0de48 (Simplify run quantization.),
which in turn fixed an over-allocation regression caused by
3c4d92e82a31f652a7c77ca937a02d0185085b06 (Add per size class huge
allocation statistics.).
/external/jemalloc/src/arena.c
32896a902bb962a06261d81c9be22e16210692db 04-Nov-2016 Jason Evans <jasone@canonware.com> Fix large allocation to search optimal size class heap.

Fix arena_run_alloc_large_helper() to not convert size to usize when
searching for the first best fit via arena_run_first_best_fit(). This
allows the search to consider the optimal quantized size class, so that
e.g. allocating and deallocating 40 KiB in a tight loop can reuse the
same memory.
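
For illustration, a loop of the shape described above (plain malloc()/free() for simplicity; the fix itself concerns the arena-internal run search):

    #include <stdlib.h>

    int main(void) {
        /* With the fix, each iteration can reuse the run freed by the
         * previous iteration instead of growing the arena. */
        for (int i = 0; i < 1000000; i++) {
            void *p = malloc(40 * 1024);
            if (p == NULL) return 1;
            free(p);
        }
        return 0;
    }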

This regression was nominally caused by
5707d6f952c71baa2f19102479859012982ac821 (Quantize szad trees by size
class.), but it did not commonly cause problems until
8a03cf039cd06f9fa6972711195055d865673966 (Implement cache index
randomization for large allocations.). These regressions were first
released in 4.0.0.

This resolves #487.
/external/jemalloc/src/arena.c
e9012630acf897ce7016e427354bb46fbe893fe1 04-Nov-2016 Jason Evans <jasone@canonware.com> Fix chunk_alloc_cache() to support decommitted allocation.

Fix chunk_alloc_cache() to support decommitted allocation, and use this
ability in arena_chunk_alloc_internal() and arena_stash_dirty(), so that
chunks don't get permanently stuck in a hybrid state.

This resolves #487.
/external/jemalloc/src/arena.c
e2bcf037d445a84a71c7997670819ebd0a893b4a 13-Oct-2016 Jason Evans <jasone@canonware.com> Make dss operations lockless.

Rather than protecting dss operations with a mutex, use atomic
operations. This has negligible impact on synchronization overhead
during typical dss allocation, but is a substantial improvement for
chunk_in_dss() and the newly added chunk_dss_mergeable(), which can be
called multiple times during chunk deallocations.

This change also has the advantage of avoiding tsd in deallocation paths
associated with purging, which resolves potential deadlocks during
thread exit due to attempted tsd resurrection.

This resolves #425.
/external/jemalloc/src/arena.c
d419bb09ef6700dde95c74e1f1752f81e5d15d92 12-Oct-2016 Jason Evans <jasone@canonware.com> Fix and simplify decay-based purging.

Simplify decay-based purging attempts to only be triggered when the
epoch is advanced, rather than every time purgeable memory increases.
In a correctly functioning system (not previously the case; see below),
this only causes a behavior difference if during subsequent purge
attempts the least recently used (LRU) purgeable memory extent is
initially too large to be purged, but that memory is reused between
attempts and one or more of the next LRU purgeable memory extents are
small enough to be purged. In practice this is an arbitrary behavior
change that is within the set of acceptable behaviors.

As for the purging fix, assure that arena->decay.ndirty is recorded
*after* the epoch advance and associated purging occurs. Prior to this
fix, it was possible for purging during epoch advance to cause a
substantially underrepresentative (arena->ndirty - arena->decay.ndirty),
i.e. the number of dirty pages attributed to the current epoch was too
low, and a series of unintended purges could result. This fix is also
relevant in the context of the simplification described above, but the
bug's impact would be limited to over-purging at epoch advances.
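
A minimal sketch of the ordering this fix establishes, using simplified stand-in types (not jemalloc's actual arena code):

    #include <stddef.h>

    typedef struct { size_t ndirty; } decay_sketch_t;
    typedef struct { size_t ndirty; decay_sketch_t decay; } arena_sketch_t;

    /* Stand-in for the decay-driven purge performed at an epoch advance. */
    static void purge_per_decay_curve(arena_sketch_t *arena) { (void)arena; }

    /* On a tick that crosses an epoch boundary: purge first, then snapshot
     * ndirty, so pages purged during the advance are not attributed to the
     * new epoch. */
    static void
    decay_epoch_advance_sketch(arena_sketch_t *arena) {
        purge_per_decay_curve(arena);
        arena->decay.ndirty = arena->ndirty; /* record *after* purging */
    }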
/external/jemalloc/src/arena.c
45a5bf677299eb152c3c47836bd5d946234ce40e 11-Oct-2016 Jason Evans <jasone@canonware.com> Do not advance decay epoch when time goes backwards.

Instead, move the epoch backward in time. Additionally, add
nstime_monotonic() and use it in debug builds to assert that time only
goes backward if nstime_update() is using a non-monotonic time source.
/external/jemalloc/src/arena.c
94e7ffa9794792d2ec70269a0ab9c282a32aa2ec 11-Oct-2016 Jason Evans <jasone@canonware.com> Refactor arena->decay_* into arena->decay.* (arena_decay_t).
/external/jemalloc/src/arena.c
5d8db15db91c85d47b343cfc07fc6ea736f0de48 08-Apr-2016 Jason Evans <jasone@canonware.com> Simplify run quantization.
/external/jemalloc/src/arena.c
f193fd80cf1f99bce2bc9f5f4a8b149219965da2 08-Apr-2016 Jason Evans <jasone@canonware.com> Refactor runs_avail.

Use pszind_t size classes rather than szind_t size classes, and always
reserve space for NPSIZES elements. This removes unused heaps that are
not multiples of the page size, and adds (currently) unused heaps for
all huge size classes, with the immediate benefit that the size of
arena_t allocations is constant (no longer dependent on chunk size).
/external/jemalloc/src/arena.c
1abb49f09d98e265ad92a831a056ccdfb4cf6041 18-Apr-2016 Jason Evans <jasone@canonware.com> Implement pz2ind(), pind2sz(), and psz2u().

These compute size classes and indices similarly to size2index(),
index2size() and s2u(), respectively, but using the subset of size
classes that are multiples of the page size. Note that pszind_t and
szind_t are not interchangeable.
/external/jemalloc/src/arena.c
05a9e4ac651eb0c728e83fd883425c4894a2ae2b 07-Jun-2016 Jason Evans <jasone@canonware.com> Fix potential VM map fragmentation regression.

Revert 245ae6036c09cc11a72fab4335495d95cddd5beb (Support --with-lg-page
values larger than actual page size.), because it could cause VM map
fragmentation if the kernel grows mmap()ed memory downward.

This resolves #391.
/external/jemalloc/src/arena.c
c1e00ef2a6442d1d047950247c757821560db329 11-May-2016 Jason Evans <jasone@canonware.com> Resolve bootstrapping issues when embedded in FreeBSD libc.

b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple online
locking validator.) caused a broad propagation of tsd throughout the
internal API, but tsd_fetch() was designed to fail prior to tsd
bootstrapping. Fix this by splitting tsd_t into non-nullable tsd_t and
nullable tsdn_t, and modifying all internal APIs that do not critically
rely on tsd to take nullable pointers. Furthermore, add the
tsd_booted_get() function so that tsdn_fetch() can probe whether tsd
bootstrapping is complete and return NULL if not. All dangerous
conversions of nullable pointers are tsdn_tsd() calls that assert-fail
on invalid conversion.
/external/jemalloc/src/arena.c
3ef51d7f733ac6432e80fa902a779ab5b98d74f6 06-May-2016 Jason Evans <jasone@canonware.com> Optimize the fast paths of calloc() and [m,d,sd]allocx().

This is a broader application of optimizations to malloc() and free() in
f4a0f32d340985de477bbe329ecdaecd69ed1055 (Fast-path improvement:
reduce # of branches and unnecessary operations.).

This resolves #321.
/external/jemalloc/src/arena.c
04c3c0f9a0c910589a75604d8d0405407f1f035d 04-May-2016 Jason Evans <jasone@canonware.com> Add the stats.retained and stats.arenas.<i>.retained statistics.

This resolves #367.
/external/jemalloc/src/arena.c
90827a3f3ef2099dcd480d542aacc9f44a0787e8 04-May-2016 Jason Evans <jasone@canonware.com> Fix huge_palloc() regression.

Split arena_choose() into arena_[i]choose() and use arena_ichoose() for
arena lookup during internal allocation. This fixes huge_palloc() so
that it always succeeds during extent node allocation.

This regression was introduced by
66cd953514a18477eb49732e40d5c2ab5f1b12c5 (Do not allocate metadata via
non-auto arenas, nor tcaches.).
/external/jemalloc/src/arena.c
174c0c3a9c63b3a0bfa32381148b537e9b9af96d 26-Apr-2016 Jason Evans <jasone@canonware.com> Fix fork()-related lock rank ordering reversals.
/external/jemalloc/src/arena.c
7e6749595a570ed6686603a1bcfdf8cf49147f19 25-Apr-2016 Jason Evans <je@fb.com> Fix arena reset effects on large/huge stats.

Reset large curruns to 0 during arena reset.

Do not increase huge ndalloc stats during arena reset.
/external/jemalloc/src/arena.c
19ff2cefba48d1ddab8fb52e3d78f309ca2553cf 22-Apr-2016 Jason Evans <jasone@canonware.com> Implement the arena.<i>.reset mallctl.

This makes it possible to discard all of an arena's allocations in a
single operation.

This resolves #146.
/external/jemalloc/src/arena.c
66cd953514a18477eb49732e40d5c2ab5f1b12c5 22-Apr-2016 Jason Evans <jasone@canonware.com> Do not allocate metadata via non-auto arenas, nor tcaches.

This assures that all internally allocated metadata come from the
first opt_narenas arenas, i.e. the automatically multiplexed arenas.
/external/jemalloc/src/arena.c
c9a4bf91702b351e73e2cd7cf9125afd076d59fe 22-Apr-2016 Jason Evans <jasone@canonware.com> Reduce a variable scope.
/external/jemalloc/src/arena.c
ab0cfe01fa354597d28303952d3b0f87d932f6d6 19-Apr-2016 Jason Evans <jasone@canonware.com> Update private_symbols.txt.

Change test-related mangling to simplify symbol filtering.

The following commands can be used to detect missing/obsolete symbol
mangling, with the caveat that the full set of symbols is based on the
union of symbols generated by all configurations, some of which are
platform-specific:

./autogen.sh --enable-debug --enable-prof --enable-lazy-lock
make all tests
nm -a lib/libjemalloc.a src/*.jet.o \
|grep " [TDBCR] " \
|awk '{print $3}' \
|sed -e 's/^\(je_\|jet_\(n_\)\?\)\([a-zA-Z0-9_]*\)/\3/g' \
|LC_COLLATE=C sort -u \
|grep -v \
-e '^\(malloc\|calloc\|posix_memalign\|aligned_alloc\|realloc\|free\)$' \
-e '^\(m\|r\|x\|s\|d\|sd\|n\)allocx$' \
-e '^mallctl\(\|nametomib\|bymib\)$' \
-e '^malloc_\(stats_print\|usable_size\|message\)$' \
-e '^\(memalign\|valloc\)$' \
-e '^__\(malloc\|memalign\|realloc\|free\)_hook$' \
-e '^pthread_create$' \
> /tmp/private_symbols.txt
/external/jemalloc/src/arena.c
b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 14-Apr-2016 Jason Evans <jasone@canonware.com> Add witness, a simple online locking validator.

This resolves #358.
/external/jemalloc/src/arena.c
00432331b83526e3bb82f7c2aba493bf254cb9c0 12-Apr-2016 rustyx <me@rustyx.org> Fix 64-to-32 conversion warnings in 32-bit mode
/external/jemalloc/src/arena.c
245ae6036c09cc11a72fab4335495d95cddd5beb 06-Apr-2016 Jason Evans <jasone@canonware.com> Support --with-lg-page values larger than actual page size.

During over-allocation in preparation for creating aligned mappings,
allocate one more page than necessary if PAGE is the actual page size,
so that trimming still succeeds even if the system returns a mapping
that has less than PAGE alignment. This allows compiling with e.g. 64
KiB "pages" on systems that actually use 4 KiB pages.

Note that for e.g. --with-lg-page=21, it is also necessary to increase
the chunk size (e.g. --with-malloc-conf=lg_chunk:22) so that there are
at least two "pages" per chunk. In practice this isn't a particularly
compelling configuration because so much (unusable) virtual memory is
dedicated to chunk headers.
/external/jemalloc/src/arena.c
c6a2c39404df9a3fb27735b93cf4cb3a76a2d4a7 27-Mar-2016 Jason Evans <jasone@canonware.com> Refactor/fix ph.

Refactor ph to support configurable comparison functions. Use a cpp
macro code generation form equivalent to the rb macros so that pairing
heaps can be used for both run heaps and chunk heaps.

Remove per node parent pointers, and instead use leftmost siblings' prev
pointers to track parents.

Fix multi-pass sibling merging to iterate over intermediate results
using a FIFO, rather than a LIFO. Use this fixed sibling merging
implementation for both merge phases of the auxiliary twopass algorithm
(first merging the aux list, then replacing the root with its merged
children). This fixes both degenerate merge behavior and the potential
for deep recursion.
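
A compact sketch of the fixed merging shape on a generic pairing heap (simple int keys, not jemalloc's ph macros): siblings are merged pairwise left to right, intermediate results are kept in FIFO order, and the queue is then folded into a single root, avoiding both degenerate merging and deep recursion:

    #include <stddef.h>

    typedef struct node_s {
        int key;
        struct node_s *child;   /* leftmost child */
        struct node_s *sibling; /* next sibling to the right */
    } node_t;

    /* Link two heaps: the larger root becomes the leftmost child of the
     * smaller root. */
    node_t *
    ph_merge(node_t *a, node_t *b) {
        if (a == NULL) return b;
        if (b == NULL) return a;
        if (b->key < a->key) { node_t *t = a; a = b; b = t; }
        b->sibling = a->child;
        a->child = b;
        return a;
    }

    /* Merge a sibling list: pass one pairs siblings in encounter order and
     * appends each result to a FIFO; pass two folds the FIFO into one heap. */
    node_t *
    ph_merge_siblings(node_t *head) {
        node_t *fifo = NULL, *tail = NULL;
        while (head != NULL) {
            node_t *a = head, *b = a->sibling;
            head = (b != NULL) ? b->sibling : NULL;
            a->sibling = NULL;
            if (b != NULL) b->sibling = NULL;
            node_t *m = ph_merge(a, b);
            if (tail == NULL) fifo = m; else tail->sibling = m;
            tail = m;
        }
        node_t *root = NULL;
        while (fifo != NULL) {
            node_t *next = fifo->sibling;
            fifo->sibling = NULL;
            root = ph_merge(root, fifo);
            fifo = next;
        }
        return root;
    }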

This regression was introduced by
6bafa6678fc36483e638f1c3a0a9bf79fb89bfc9 (Pairing heap).

This resolves #371.
/external/jemalloc/src/arena.c
a82070ef5fc3aa81fda43086cdcc22bfa826b894 28-Mar-2016 Chris Peterson <cpeterson@mozilla.com> Add JEMALLOC_ALLOC_JUNK and JEMALLOC_FREE_JUNK macros

Replace hardcoded 0xa5 and 0x5a junk values with JEMALLOC_ALLOC_JUNK and
JEMALLOC_FREE_JUNK macros, respectively.
/external/jemalloc/src/arena.c
f86bc081d6190be14c64aeaae9d02863b440bfb3 31-Mar-2016 Jason Evans <je@fb.com> Update a comment.
/external/jemalloc/src/arena.c
ce7c0f999bf7634078ec759f3d13290dbb34170c 31-Mar-2016 Jason Evans <je@fb.com> Fix potential chunk leaks.

Move chunk_dalloc_arena()'s implementation into chunk_dalloc_wrapper(),
so that if the dalloc hook fails, proper decommit/purge/retain cascading
occurs. This fixes three potential chunk leaks on OOM paths, one during
dss-based chunk allocation, one during chunk header commit (currently
relevant only on Windows), and one during rtree write (e.g. if rtree
node allocation fails).

Merge chunk_purge_arena() into chunk_purge_default() (refactor, no
change to functionality).
/external/jemalloc/src/arena.c
61a6dfcd5fd89d21f04c99fabaf7269d05f61adf 24-Mar-2016 Jason Evans <jasone@canonware.com> Constify various internal arena APIs.
/external/jemalloc/src/arena.c
613cdc80f6b61f698b3b0c3f2d22442044473f9b 08-Mar-2016 Jason Evans <je@fb.com> Convert arena_bin_t's runs from a tree to a heap.
/external/jemalloc/src/arena.c
4a0dbb5ac844830ebd7f89af20203a574ce1b3da 29-Feb-2016 Dave Watson <davejwatson@fb.com> Use pairing heap for arena->runs_avail

Use pairing heap instead of red black tree in arena runs_avail. The
extra links are unioned with the bitmap_t, so this change doesn't use
any extra memory.

Canaries show this change to be a 1% cpu win, and 2% latency win. In
particular, large free()s, and small bin frees are now O(1) (barring
coalescing).

I also tested changing bin->runs to be a pairing heap, but saw a much
smaller win, and it would mean increasing the size of arena_run_s by two
pointers, so I left that as an rb-tree for now.
/external/jemalloc/src/arena.c
022f6891faf1fffa435f2bc613c25e8482a32702 03-Mar-2016 Jason Evans <jasone@canonware.com> Avoid a potential innocuous compiler warning.

Add a cast to avoid comparing a ssize_t value to a uint64_t value that
is always larger than a 32-bit ssize_t. This silences an innocuous
compiler warning from e.g. gcc 4.2.1 about the comparison always having
the same result.
/external/jemalloc/src/arena.c
33184bf69813087bf1885b0993685f9d03320c69 29-Feb-2016 Dmitri Smirnov <dmitrism@microsoft.com> Fix stack corruption and uninitialized var warning

Stack corruption happens in 64-bit (x64) builds.

This resolves #347.
/external/jemalloc/src/arena.c
3c07f803aa282598451eb0664cc94717b769a5e6 28-Feb-2016 Jason Evans <jasone@canonware.com> Fix stats.arenas.<i>.[...] for --disable-stats case.

Add missing stats.arenas.<i>.{dss,lg_dirty_mult,decay_time}
initialization.

Fix stats.arenas.<i>.{pactive,pdirty} to read under the protection of
the arena mutex.
/external/jemalloc/src/arena.c
40ee9aa9577ea5eb6616c10b9e6b0fa7e6796821 27-Feb-2016 Jason Evans <jasone@canonware.com> Fix stats.cactive accounting regression.

Fix stats.cactive accounting to always increase/decrease by multiples of
the chunk size, even for huge size classes that are not multiples of the
chunk size, e.g. {2.5, 3, 3.5, 5, 7} MiB with 2 MiB chunk size. This
regression was introduced by 155bfa7da18cab0d21d87aa2dce4554166836f5d
(Normalize size classes.) and first released in 4.0.0.

This resolves #336.
/external/jemalloc/src/arena.c
3763d3b5f92d855596e111a339c1fa9583c4602a 27-Feb-2016 Jason Evans <jasone@canonware.com> Refactor arena_cactive_update() into arena_cactive_{add,sub}().

This removes an implicit conversion from size_t to ssize_t. For cactive
decreases, the size_t value was intentionally underflowed to generate
"negative" values (actually positive values above the positive range of
ssize_t), and the conversion to ssize_t was undefined according to C
language semantics.

This regression was perpetuated by
1522937e9cbcfa24c881dc439cc454f9a34a7e88 (Fix the cactive statistic.)
and first released in 4.0.0, which in retrospect only fixed one of two
problems introduced by aa5113b1fdafd1129c22512837c6c3d66c295fc8
(Refactor overly large/complex functions) and first released in 3.5.0.
/external/jemalloc/src/arena.c
42ce80e15a5aa2ab6f2ec7e5f7c18164803f3076 26-Feb-2016 Jason Evans <jasone@canonware.com> Silence miscellaneous 64-to-32-bit data loss warnings.

This resolves #341.
/external/jemalloc/src/arena.c
8282a2ad979a9e72ffb645321c8a0b58a09eb9d8 26-Feb-2016 Jason Evans <jasone@canonware.com> Remove a superfluous comment.
/external/jemalloc/src/arena.c
0c516a00c4cb28cff55ce0995f756b5aae074c9e 26-Feb-2016 Jason Evans <je@fb.com> Make *allocx() size class overflow behavior defined.

Limit supported size and alignment to HUGE_MAXCLASS, which in turn is
now limited to be less than PTRDIFF_MAX.

This resolves #278 and #295.
/external/jemalloc/src/arena.c
767d85061a6fb88ec977bbcd9b429a43aff391e6 25-Feb-2016 Jason Evans <je@fb.com> Refactor arenas array (fixes deadlock).

Refactor the arenas array, which contains pointers to all extant arenas,
such that it starts out as a sparse array of maximum size, and use
double-checked atomics-based reads as the basis for fast and simple
arena_get(). Additionally, reduce arenas_lock's role such that it only
protects against arena initialization races. These changes remove the
possibility for arena lookups to trigger locking, which resolves at
least one known (fork-related) deadlock.

This resolves #315.
/external/jemalloc/src/arena.c
38127291670af8d12a21eb78ba49201f3a5af7d1 25-Feb-2016 Dave Watson <davejwatson@fb.com> Fix arena_size computation.

Fix arena_size arena_new() computation to incorporate
runs_avail_nclasses elements for runs_avail, rather than
(runs_avail_nclasses - 1) elements. Since offsetof(arena_t, runs_avail)
is used rather than sizeof(arena_t) for the first term of the
computation, all of the runs_avail elements must be added into the
second term.

This bug was introduced (by Jason Evans) while merging pull request #330
as 3417a304ccde61ac1f68b436ec22c03f1d6824ec (Separate arena_avail
trees).
/external/jemalloc/src/arena.c
cd86c1481ad7356a7bbcd14549e938769f474fd6 24-Feb-2016 Dave Watson <davejwatson@fb.com> Fix arena_run_first_best_fit

The merge of 3417a304ccde61ac1f68b436ec22c03f1d6824ec introduced a small
bug: first_best_fit doesn't scan through all the classes, since ind is
offset from runs_avail_nclasses by run_avail_bias.
/external/jemalloc/src/arena.c
9e1810ca9dc4a5f5f0841b9a6c1abb4337753552 24-Feb-2016 Jason Evans <je@fb.com> Silence miscellaneous 64-to-32-bit data loss warnings.
/external/jemalloc/src/arena.c
9f4ee6034c3ac6a8c8b5f9a0d76822fb2fd90c41 24-Feb-2016 Jason Evans <je@fb.com> Refactor jemalloc_ffs*() into ffs_*().

Use appropriate versions to resolve 64-to-32-bit data loss warnings.
/external/jemalloc/src/arena.c
ae45142adc12d39793c45ecac4dafad5674a4591 24-Feb-2016 Jason Evans <je@fb.com> Collapse arena_avail_tree_* into arena_run_tree_*.

These tree types converged to become identical, yet they still had
independently generated red-black tree implementations.
/external/jemalloc/src/arena.c
3417a304ccde61ac1f68b436ec22c03f1d6824ec 23-Feb-2016 Dave Watson <davejwatson@fb.com> Separate arena_avail trees

Separate run trees by index, replacing the previous quantize logic.
Quantization by index is now performed only on insertion / removal from
the tree, and not on node comparison, saving some cpu. This also means
we don't have to dereference the miscelm* pointers, saving half of the
memory loads from miscelms/mapbits that have fallen out of cache. A
linear scan of the indices appears to be fast enough.

The only cost of this is an extra tree array in each arena.
/external/jemalloc/src/arena.c
0da8ce1e96bedff697f7133c8cfb328390b6d11d 23-Feb-2016 Jason Evans <je@fb.com> Use table lookup for run_quantize_{floor,ceil}().

Reduce run quantization overhead by generating lookup tables during
bootstrapping, and using the tables for all subsequent run quantization.
/external/jemalloc/src/arena.c
08551eee586eefa8c98f33b97679f259af50afab 23-Feb-2016 Jason Evans <je@fb.com> Fix run_quantize_ceil().

In practice this bug had limited impact (and then only by increasing
chunk fragmentation) because run_quantize_ceil() returned correct
results except for inputs that could only arise from aligned allocation
requests that required more than page alignment.

This bug existed in the original run quantization implementation, which
was introduced by 8a03cf039cd06f9fa6972711195055d865673966 (Implement
cache index randomization for large allocations.).
/external/jemalloc/src/arena.c
a9a46847925e38373e6a5da250c0cecb11a8277b 22-Feb-2016 Jason Evans <je@fb.com> Test run quantization.

Also rename run_quantize_*() to improve clarity. These tests
demonstrate that run_quantize_ceil() is flawed.
/external/jemalloc/src/arena.c
9bad07903962962de9f656d281b9b1e7e9501c87 21-Feb-2016 Jason Evans <jasone@canonware.com> Refactor time_* into nstime_*.

Use a single uint64_t in nstime_t to store nanoseconds rather than using
struct timespec. This reduces fragility around conversions between long
and uint64_t, especially missing casts that only cause problems on
32-bit platforms.
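
A minimal sketch of the representation change (field and function names here are illustrative, not the full nstime API):

    #include <stdint.h>

    #define NS_PER_SEC ((uint64_t)1000000000)

    /* All arithmetic stays in one unsigned 64-bit value, so there are no
     * long <-> uint64_t conversions to get wrong on 32-bit platforms. */
    typedef struct {
        uint64_t ns;
    } nstime_sketch_t;

    static void
    nstime_sketch_init(nstime_sketch_t *time, uint64_t sec, uint64_t nsec) {
        time->ns = sec * NS_PER_SEC + nsec;
    }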
/external/jemalloc/src/arena.c
243f7a0508bb014c2a7bf592c466a923911db234 20-Feb-2016 Jason Evans <jasone@canonware.com> Implement decay-based unused dirty page purging.

This is an alternative to the existing ratio-based unused dirty page
purging, and is intended to eventually become the sole purging
mechanism.

Add mallctls:
- opt.purge
- opt.decay_time
- arena.<i>.decay
- arena.<i>.decay_time
- arenas.decay_time
- stats.arenas.<i>.decay_time

This resolves #325.
/external/jemalloc/src/arena.c
1a4ad3c0fab470c9a720a40c4433532d98bd9adc 20-Feb-2016 Jason Evans <jasone@canonware.com> Refactor out arena_compute_npurge().

Refactor out arena_compute_npurge() by integrating its logic into
arena_stash_dirty() as an incremental computation.
/external/jemalloc/src/arena.c
4985dc681e2e44f9d43c902647371790acac3ad4 20-Feb-2016 Jason Evans <jasone@canonware.com> Refactor arena_ralloc_no_move().

Refactor early return logic in arena_ralloc_no_move() to return early on
failure rather than on success.
/external/jemalloc/src/arena.c
578cd165812a11cd7250bfe5051cddc30ffec6e5 20-Feb-2016 Jason Evans <jasone@canonware.com> Refactor arena_malloc_hard() out of arena_malloc().
/external/jemalloc/src/arena.c
34676d33690f6cc6885ff769e537ca940aacf886 10-Feb-2016 Jason Evans <je@fb.com> Refactor prng* from cpp macros into inline functions.

Remove 32-bit variant, convert prng64() to prng_lg_range(), and add
prng_range().
/external/jemalloc/src/arena.c
f4a0f32d340985de477bbe329ecdaecd69ed1055 27-Oct-2015 Qi Wang <interwq@gwu.edu> Fast-path improvement: reduce # of branches and unnecessary operations.

- Combine multiple runtime branches into a single malloc_slow check.
- Avoid calling arena_choose / size2index / index2size on fast path.
- A few micro optimizations.
/external/jemalloc/src/arena.c
13b401553172942c3cc1d89c70fd965be71c1540 18-Sep-2015 Joshua Kahn <jkahn@barracuda.com> Allow const keys for lookup

Signed-off-by: Steve Dougherty <sdougherty@barracuda.com>

This resolves #281.
/external/jemalloc/src/arena.c
f97298bfc1c6edbb4fd00820e9e028e8d213af73 03-Sep-2015 Mike Hommey <mh@glandium.org> Remove arena_run_dalloc_decommit().

This resolves #284.
/external/jemalloc/src/arena.c
a784e411f21f4dc827c8c411b7afa7df949c2233 25-Sep-2015 Jason Evans <jasone@canonware.com> Fix a xallocx(..., MALLOCX_ZERO) bug.

Fix xallocx(..., MALLOCX_ZERO) to zero the last full trailing page of
large allocations that have been randomly assigned an offset of 0 when
--enable-cache-oblivious configure option is enabled. This addresses a
special case missed in d260f442ce693de4351229027b37b3293fcbfd7d (Fix
xallocx(..., MALLOCX_ZERO) bugs.).
/external/jemalloc/src/arena.c
d260f442ce693de4351229027b37b3293fcbfd7d 25-Sep-2015 Jason Evans <jasone@canonware.com> Fix xallocx(..., MALLOCX_ZERO) bugs.

Zero all trailing bytes of large allocations when
--enable-cache-oblivious configure option is enabled. This regression
was introduced by 8a03cf039cd06f9fa6972711195055d865673966 (Implement
cache index randomization for large allocations.).

Zero trailing bytes of huge allocations when resizing from/to a size
class that is not a multiple of the chunk size.
/external/jemalloc/src/arena.c
e56b24e3a2db1edde23ede2477a94962ed006ae2 20-Sep-2015 Jason Evans <jasone@canonware.com> Make arena_dalloc_large_locked_impl() static.
/external/jemalloc/src/arena.c
9a505b768cd50bffbfaa3a993df9117e7454134e 15-Sep-2015 Jason Evans <jasone@canonware.com> Centralize xallocx() size[+extra] overflow checks.
/external/jemalloc/src/arena.c
676df88e48ae5ab77b05d78cb511cfa2e57d277f 12-Sep-2015 Jason Evans <jasone@canonware.com> Rename arena_maxclass to large_maxclass.

arena_maxclass is no longer an appropriate name, because arenas also
manage huge allocations.
/external/jemalloc/src/arena.c
560a4e1e01d3733c2f107cdb3cc3580f3ed84442 12-Sep-2015 Jason Evans <jasone@canonware.com> Fix xallocx() bugs.

Fix xallocx() bugs related to the 'extra' parameter when specified as
non-zero.
/external/jemalloc/src/arena.c
a306a60651db0bd835d4009271e0be236b450fb3 04-Sep-2015 Dmitry-Me <wipedout@yandex.ru> Reduce variables scope
/external/jemalloc/src/arena.c
d01fd19755bc0c2f5be3143349016dd0d7de7b36 20-Aug-2015 Jason Evans <jasone@canonware.com> Rename index_t to szind_t to avoid an existing type on Solaris.

This resolves #256.
/external/jemalloc/src/arena.c
5ef33a9f2b9f4fb56553529f7b31f4f5f57ce014 19-Aug-2015 Jason Evans <jasone@canonware.com> Don't bitshift by negative amounts.

Don't bitshift by negative amounts when encoding/decoding run sizes in
chunk header maps. This affected systems with page sizes greater than 8
KiB.

Reported by Ingvar Hagelund <ingvar@redpill-linpro.com>.
/external/jemalloc/src/arena.c
1f27abc1b1f3583d9c6f999374613dc5319aeb12 11-Aug-2015 Jason Evans <jasone@canonware.com> Refactor arena_mapbits_{small,large}_set() to not preserve unzeroed.

Fix arena_run_split_large_helper() to treat newly committed memory as
zeroed.
/external/jemalloc/src/arena.c
45186f0c074a5fba345d04ac1df1b77b60bb3eb6 11-Aug-2015 Jason Evans <jasone@canonware.com> Refactor arena_mapbits unzeroed flag management.

Only set the unzeroed flag when initializing the entire mapbits entry,
rather than mutating just the unzeroed bit. This simplifies the
possible mapbits state transitions.
/external/jemalloc/src/arena.c
de249c8679a188065949f2560b1f0015ea6534b4 10-Aug-2015 Jason Evans <jasone@canonware.com> Arena chunk decommit cleanups and fixes.

Decommit arena chunk header during chunk deallocation if the rest of the
chunk is decommitted.
/external/jemalloc/src/arena.c
8fadb1a8c2d0219aded566bc5fac7d29cff9bb67 04-Aug-2015 Jason Evans <jasone@canonware.com> Implement chunk hook support for page run commit/decommit.

Cascade from decommit to purge when purging unused dirty pages, so that
it is possible to decommit cleaned memory rather than just purging. For
non-Windows debug builds, decommit runs rather than purging them, since
this causes access of deallocated runs to segfault.

This resolves #251.
/external/jemalloc/src/arena.c
5716d97f7575708453ca477651eff6f1ac653dd1 07-Aug-2015 Jason Evans <jasone@canonware.com> Fix an in-place growing large reallocation regression.

Fix arena_ralloc_large_grow() to properly account for large_pad, so that
in-place large reallocation succeeds when possible, rather than always
failing. This regression was introduced by
8a03cf039cd06f9fa6972711195055d865673966 (Implement cache index
randomization for large allocations.)
/external/jemalloc/src/arena.c
b49a334a645b854dbb1649f15c38d646fee66738 28-Jul-2015 Jason Evans <je@fb.com> Generalize chunk management hooks.

Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on
the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks
allow control over chunk allocation/deallocation, decommit/commit,
purging, and splitting/merging, such that the application can rely on
jemalloc's internal chunk caching and retaining functionality, yet
implement a variety of chunk management mechanisms and policies.

Merge the chunks_[sz]ad_{mmap,dss} red-black trees into
chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries
to honor the dss precedence setting; prior to this change the precedence
setting was also consulted when recycling chunks.

Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead
deallocate them in arena_unstash_purged(), so that the dirty memory
linkage remains valid until after the last time it is used.

This resolves #176 and #201.
/external/jemalloc/src/arena.c
50883deb6eb532e5a16529a1ca009fb2ad4a0dc3 24-Jul-2015 Jason Evans <jasone@canonware.com> Change arena_palloc_large() parameter from size to usize.

This change merely documents that arena_palloc_large() always receives
usize as its argument.
/external/jemalloc/src/arena.c
5fae7dc1b316d0e93aa20cc3aaf050f509aec705 23-Jul-2015 Jason Evans <jasone@canonware.com> Fix MinGW-related portability issues.

Create and use FMT* macros that are equivalent to the PRI* macros that
inttypes.h defines. This allows uniform use of the Unix-specific format
specifiers, e.g. "%zu", as well as avoiding Windows-specific definitions
of e.g. PRIu64.

Add ffs()/ffsl() support for compiling with gcc.

Extract compatibility definitions of ENOENT, EINVAL, EAGAIN, EPERM,
ENOMEM, and ENORANGE into include/msvc_compat/windows_extra.h and
use the file for tests as well as for core jemalloc code.
/external/jemalloc/src/arena.c
aa2826621e1793db9faea31e803690ccbe36f14c 16-Jul-2015 Jason Evans <jasone@canonware.com> Revert to first-best-fit run/chunk allocation.

This effectively reverts 97c04a93838c4001688fe31bf018972b4696efe2 (Use
first-fit rather than first-best-fit run/chunk allocation.). In some
pathological cases, first-fit search dominates allocation time, and it
also tends not to converge as readily on a steady state of memory
layout, since precise allocation order has a bigger effect than for
first-best-fit.
/external/jemalloc/src/arena.c
0313607e663294cd335da2545f10e949ee546fbc 07-Jul-2015 Jason Evans <jasone@canonware.com> Fix MinGW build warnings.

Conditionally define ENOENT, EINVAL, etc. (was unconditional).

Add/use PRIzu, PRIzd, and PRIzx for use in malloc_printf() calls. gcc issued
(harmless) warnings since e.g. "%zu" should be "%Iu" on Windows, and the
alternative to this workaround would have been to disable the function
attributes which cause gcc to look for type mismatches in formatted printing
function calls.
/external/jemalloc/src/arena.c
bce61d61bbe1b1b4ea15d1cbd3e24252b7e79c47 07-Jul-2015 Jason Evans <jasone@canonware.com> Move a variable declaration closer to its use.
/external/jemalloc/src/arena.c
0a9f9a4d511e0c3343ff26e04d9592fefd96c2bc 23-Jun-2015 Jason Evans <jasone@canonware.com> Convert arena_maybe_purge() recursion to iteration.

This resolves #235.
/external/jemalloc/src/arena.c
5154175cf1e6e7b1a2ed0295c232e60384944b3f 20-May-2015 Jason Evans <jasone@canonware.com> Fix performance regression in arena_palloc().

Pass large allocation requests to arena_malloc() when possible. This
regression was introduced by 155bfa7da18cab0d21d87aa2dce4554166836f5d
(Normalize size classes.).
/external/jemalloc/src/arena.c
8a03cf039cd06f9fa6972711195055d865673966 04-May-2015 Jason Evans <jasone@canonware.com> Implement cache index randomization for large allocations.

Extract szad size quantization into {extent,run}_quantize(), and
quantize szad run sizes to the union of valid small region run sizes and
large run sizes.

Refactor iteration in arena_run_first_fit() to use
run_quantize{,_first,_next}(), and add support for padded large runs.

For large allocations that have no specified alignment constraints,
compute a pseudo-random offset from the beginning of the first backing
page that is a multiple of the cache line size. Under typical
configurations with 4-KiB pages and 64-byte cache lines this results in
a uniform distribution among 64 page boundary offsets.
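
A rough sketch of the offset computation under the configuration mentioned above (4 KiB pages, 64-byte cache lines); the PRNG here is a stand-in, not jemalloc's prng API:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define PAGE      4096
    #define CACHELINE 64

    /* Stand-in PRNG: any generator uniform over [0, n) works for the sketch. */
    static uint64_t prng_range_sketch(uint64_t n) { return (uint64_t)rand() % n; }

    /* Pick one of PAGE/CACHELINE (= 64) cache-line-aligned offsets within the
     * first backing page, so large allocations start at varied cache indices. */
    static size_t
    large_random_offset(void) {
        return (size_t)(prng_range_sketch(PAGE / CACHELINE) * CACHELINE);
    }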

Add the --disable-cache-oblivious option, primarily intended for
performance testing.

This resolves #13.
/external/jemalloc/src/arena.c
65db63cf3f0c5dd5126a1b3786756486eaf931ba 26-Mar-2015 Jason Evans <je@fb.com> Fix in-place shrinking huge reallocation purging bugs.

Fix the shrinking case of huge_ralloc_no_move_similar() to purge the
correct number of pages, at the correct offset. This regression was
introduced by 8d6a3e8321a7767cb2ca0930b85d5d488a8cc659 (Implement
dynamic per arena control over dirty page purging.).

Fix huge_ralloc_no_move_shrink() to purge the correct number of pages.
This bug was introduced by 9673983443a0782d975fbcb5d8457cfd411b8b56
(Purge/zero sub-chunk huge allocations as necessary.).
/external/jemalloc/src/arena.c
562d266511053a51406e91c78eba640cb46ad9c8 25-Mar-2015 Jason Evans <jasone@canonware.com> Add the "stats.arenas.<i>.lg_dirty_mult" mallctl.
/external/jemalloc/src/arena.c
bd16ea49c3e36706a52ef9c8f560813c167fa085 24-Mar-2015 Jason Evans <jasone@canonware.com> Fix signed/unsigned comparison in arena_lg_dirty_mult_valid().
/external/jemalloc/src/arena.c
8d6a3e8321a7767cb2ca0930b85d5d488a8cc659 19-Mar-2015 Jason Evans <je@fb.com> Implement dynamic per arena control over dirty page purging.

Add mallctls:
- arenas.lg_dirty_mult is initialized via opt.lg_dirty_mult, and can be
modified to change the initial lg_dirty_mult setting for newly created
arenas.
- arena.<i>.lg_dirty_mult controls an individual arena's dirty page
purging threshold, and synchronously triggers any purging that may be
necessary to maintain the constraint.
- arena.<i>.chunk.purge allows the per arena dirty page purging function
to be replaced.

This resolves #93.
/external/jemalloc/src/arena.c
bc45d41d23bac598dbd38e5aac5a85b43d24bc04 12-Mar-2015 Jason Evans <je@fb.com> Fix a declaration-after-statement regression.
/external/jemalloc/src/arena.c
f5c8f37259d7697c3f850ac1e5ef63b724cf7689 11-Mar-2015 Jason Evans <je@fb.com> Normalize rdelm/rd structure field naming.
/external/jemalloc/src/arena.c
38e42d311c1844a66e8ced84551621de41e42b85 11-Mar-2015 Jason Evans <je@fb.com> Refactor dirty run linkage to reduce sizeof(extent_node_t).
/external/jemalloc/src/arena.c
97c04a93838c4001688fe31bf018972b4696efe2 07-Mar-2015 Jason Evans <jasone@canonware.com> Use first-fit rather than first-best-fit run/chunk allocation.

This tends to more effectively pack active memory toward low addresses.
However, additional tree searches are required in many cases, so whether
this change stands the test of time will depend on real-world
benchmarks.
/external/jemalloc/src/arena.c
5707d6f952c71baa2f19102479859012982ac821 07-Mar-2015 Jason Evans <jasone@canonware.com> Quantize szad trees by size class.

Treat sizes that round down to the same size class as size-equivalent
in trees that are used to search for first best fit, so that there are
only as many "firsts" as there are size classes. This comes closer to
the ideal of first fit.
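
An illustrative sketch of the comparison rule (toy types and a toy quantizer, not jemalloc's szad code): sizes are quantized to a class before comparison, so runs in the same class tie on size and fall back to address order:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { size_t size; uintptr_t addr; } extent_sketch_t;

    /* Toy quantizer standing in for size class rounding: round down to a
     * power of two. The real code rounds down to jemalloc's size classes. */
    static size_t
    size_class_floor_sketch(size_t size) {
        size_t c = 1;
        while ((c << 1) <= size) c <<= 1;
        return c;
    }

    static int
    extent_szad_comp_sketch(const extent_sketch_t *a, const extent_sketch_t *b) {
        size_t qa = size_class_floor_sketch(a->size);
        size_t qb = size_class_floor_sketch(b->size);
        if (qa != qb) return (qa < qb) ? -1 : 1;
        if (a->addr != b->addr) return (a->addr < b->addr) ? -1 : 1;
        return 0;
    }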
/external/jemalloc/src/arena.c
99bd94fb65a0b6423c4efcc3e3e501179b92a4db 19-Feb-2015 Jason Evans <jasone@canonware.com> Fix chunk cache races.

These regressions were introduced by
ee41ad409a43d12900a5a3108f6c14f84e4eb0eb (Integrate whole chunks into
unused dirty page purging machinery.).
/external/jemalloc/src/arena.c
738e089a2e707dbfc70286f7deeebc68e03d2347 18-Feb-2015 Jason Evans <jasone@canonware.com> Rename "dirty chunks" to "cached chunks".

Rename "dirty chunks" to "cached chunks", in order to avoid overloading
the term "dirty".

Fix the regression caused by 339c2b23b2d61993ac768afcc72af135662c6771
(Fix chunk_unmap() to propagate dirty state.), and actually address what
that change attempted, which is to only purge chunks once, and propagate
whether zeroed pages resulted into chunk_record().
/external/jemalloc/src/arena.c
339c2b23b2d61993ac768afcc72af135662c6771 18-Feb-2015 Jason Evans <jasone@canonware.com> Fix chunk_unmap() to propagate dirty state.

Fix chunk_unmap() to propagate whether a chunk is dirty, and modify
dirty chunk purging to record this information so it can be passed to
chunk_unmap(). Since the broken version of chunk_unmap() claimed that
all chunks were clean, this resulted in potential memory corruption for
purging implementations that do not zero (e.g. MADV_FREE).

This regression was introduced by
ee41ad409a43d12900a5a3108f6c14f84e4eb0eb (Integrate whole chunks into
unused dirty page purging machinery.).
/external/jemalloc/src/arena.c
47701b22ee7c0df5e99efa0fcdcf98b9ff805b59 18-Feb-2015 Jason Evans <jasone@canonware.com> arena_chunk_dirty_node_init() --> extent_node_dirty_linkage_init()
/external/jemalloc/src/arena.c
a4e1888d1a12d864f42350f2859e33eb3a0033f2 18-Feb-2015 Jason Evans <jasone@canonware.com> Simplify extent_node_t and add extent_node_init().
/external/jemalloc/src/arena.c
ee41ad409a43d12900a5a3108f6c14f84e4eb0eb 16-Feb-2015 Jason Evans <jasone@canonware.com> Integrate whole chunks into unused dirty page purging machinery.

Extend per arena unused dirty page purging to manage unused dirty chunks
in addition to unused dirty runs. Rather than immediately unmapping
deallocated chunks (or purging them in the --disable-munmap case), store
them in a separate set of trees, chunks_[sz]ad_dirty. Preferentially
allocate dirty chunks. When excessive unused dirty pages accumulate,
purge runs and chunks in integrated LRU order (and unmap chunks in the
--enable-munmap case).

Refactor extent_node_t to provide accessor functions.
/external/jemalloc/src/arena.c
2195ba4e1f8f262b7e6586106d90f4dc0aea7630 16-Feb-2015 Jason Evans <jasone@canonware.com> Normalize *_link and link_* fields to all be *_link.
/external/jemalloc/src/arena.c
88fef7ceda6269598cef0cee8b984c8765673c27 12-Feb-2015 Jason Evans <je@fb.com> Refactor huge_*() calls into arena internals.

Make redirects to the huge_*() API the arena code's responsibility,
since arenas now take responsibility for all allocation sizes.
/external/jemalloc/src/arena.c
cbf3a6d70371d2390b8b0e76814e04cc6088002c 11-Feb-2015 Jason Evans <jasone@canonware.com> Move centralized chunk management into arenas.

Migrate all centralized data structures related to huge allocations and
recyclable chunks into arena_t, so that each arena can manage huge
allocations and recyclable virtual memory completely independently of
other arenas.

Add chunk node caching to arenas, in order to avoid contention on the
base allocator.

Use chunks_rtree to look up huge allocations rather than a red-black
tree. Maintain a per arena unsorted list of huge allocations (which
will be needed to enumerate huge allocations during arena reset).

Remove the --enable-ivsalloc option, make ivsalloc() always available,
and use it for size queries if --enable-debug is enabled. The only
practical implications to this removal are that 1) ivsalloc() is now
always available during live debugging (and the underlying radix tree is
available during core-based debugging), and 2) size query validation can
no longer be enabled independent of --enable-debug.

Remove the stats.chunks.{current,total,high} mallctls, and replace their
underlying statistics with simpler atomically updated counters used
exclusively for gdump triggering. These statistics are no longer very
useful because each arena manages chunks independently, and per arena
statistics provide similar information.

Simplify chunk synchronization code, now that base chunk allocation
cannot cause recursive lock acquisition.
/external/jemalloc/src/arena.c
1cb181ed632e7573fb4eab194e4d216867222d27 30-Jan-2015 Jason Evans <je@fb.com> Implement explicit tcache support.

Add the MALLOCX_TCACHE() and MALLOCX_TCACHE_NONE macros, which can be
used in conjunction with the *allocx() API.

Add the tcache.create, tcache.flush, and tcache.destroy mallctls.

This resolves #145.
/external/jemalloc/src/arena.c
6505733012458d8fcd0ae8e1f1acdc9ffe33ff35 03-Feb-2015 Mike Hommey <mh@glandium.org> Make opt.lg_dirty_mult work as documented

The documentation for opt.lg_dirty_mult says:
    Per-arena minimum ratio (log base 2) of active to dirty
    pages. Some dirty unused pages may be allowed to accumulate,
    within the limit set by the ratio (or one chunk worth of dirty
    pages, whichever is greater) (...)

The restriction in parentheses is currently not enforced. This makes
jemalloc madvise() aggressively, which in turn increases the number of
page faults significantly.

For instance, this resulted in a startup regression of several(!)
hundred(!) milliseconds on Firefox for Android.

This may require further tweaking, but starting with actually doing
what the documentation says is a good start.
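
A hedged sketch of the threshold the quoted documentation describes (illustrative names): purging is triggered only once dirty pages exceed max(active >> lg_dirty_mult, pages per chunk):

    #include <stdbool.h>
    #include <stddef.h>

    /* Illustrative only: lg_dirty_mult = 3 allows roughly 1/8 of active pages
     * (or one chunk's worth, whichever is greater) to remain dirty. */
    static bool
    should_purge_sketch(size_t nactive, size_t ndirty, size_t chunk_npages,
        long lg_dirty_mult) {
        if (lg_dirty_mult < 0)
            return false; /* purging disabled */
        size_t threshold = nactive >> lg_dirty_mult;
        if (threshold < chunk_npages)
            threshold = chunk_npages;
        return (ndirty > threshold);
    }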
/external/jemalloc/src/arena.c
4581b97809e7e545c38b996870a4e7284a620bc5 27-Nov-2014 Jason Evans <je@fb.com> Implement metadata statistics.

There are three categories of metadata:

- Base allocations are used for bootstrap-sensitive internal allocator
data structures.
- Arena chunk headers comprise pages which track the states of the
non-metadata pages.
- Internal allocations differ from application-originated allocations
in that they are for internal use, and that they are omitted from heap
profiles.

The metadata statistics comprise the metadata categories as follows:

- stats.metadata: All metadata -- base + arena chunk headers + internal
allocations.
- stats.arenas.<i>.metadata.mapped: Arena chunk headers.
- stats.arenas.<i>.metadata.allocated: Internal allocations. This is
reported separately from the other metadata statistics because it
overlaps with the allocated and active statistics, whereas the other
metadata statistics do not.

Base allocations are not reported separately, though their magnitude can
be computed by subtracting the arena-specific metadata.

This resolves #163.
/external/jemalloc/src/arena.c
9c6a8d3b0cc14fd26b119ad08f190e537771464f 17-Dec-2014 Guilherme Goncalves <guilherme.p.gonc@gmail.com> Move variable declaration to the top of its block for MSVC compatibility.
/external/jemalloc/src/arena.c
2c5cb613dfbdf58f88152321b63e60c58cd23972 08-Dec-2014 Guilherme Goncalves <guilherme.p.gonc@gmail.com> Introduce two new modes of junk filling: "alloc" and "free".

In addition to true/false, opt.junk can now be either "alloc" or "free",
giving applications the possibility of junking memory only on allocation
or deallocation.

This resolves #172.
/external/jemalloc/src/arena.c
e12eaf93dca308a426c182956197b0eeb5f2cff3 08-Dec-2014 Jason Evans <je@fb.com> Style and spelling fixes.
/external/jemalloc/src/arena.c
d49cb68b9e8b57169240e16686f4f60d6b5a089f 17-Nov-2014 Jason Evans <je@fb.com> Fix more pointer arithmetic undefined behavior.

Reported by Guilherme Gonçalves.

This resolves #166.
/external/jemalloc/src/arena.c
2012d5a5601c787ce464fac0cbd2b16e3754cfa2 17-Nov-2014 Jason Evans <je@fb.com> Fix pointer arithmetic undefined behavior.

Reported by Denis Denisov.
/external/jemalloc/src/arena.c
2b2f6dc1e45808c31fb2f3ae33306d224ec0b2d2 01-Nov-2014 Jason Evans <jasone@canonware.com> Disable arena_dirty_count() validation.
/external/jemalloc/src/arena.c
809b0ac3919da60c20ad59517ef560d0df639f3b 23-Oct-2014 Daniel Micay <danielmicay@gmail.com> mark huge allocations as unlikely

This cleans up the fast path a bit more by moving away more code.
/external/jemalloc/src/arena.c
af1f5927633ee2cb98c095de0fcc67b8aacdc9c0 31-Oct-2014 Jason Evans <jasone@canonware.com> Use JEMALLOC_INLINE_C everywhere it's appropriate.
/external/jemalloc/src/arena.c
a9ea10d27c320926cab2e59c66ebcd25c49df24c 16-Oct-2014 Daniel Micay <danielmicay@gmail.com> use sized deallocation internally for ralloc

The size of the source allocation is known at this point, so reading the
chunk header can be avoided for the small size class fast path. This is
not very useful right now, but it provides a significant performance
boost with an alternate ralloc entry point taking the old size.
/external/jemalloc/src/arena.c
9b41ac909facf4f09bb1b637b78ba647348e572e 15-Oct-2014 Jason Evans <je@fb.com> Fix huge allocation statistics.
/external/jemalloc/src/arena.c
3c4d92e82a31f652a7c77ca937a02d0185085b06 13-Oct-2014 Jason Evans <jasone@canonware.com> Add per size class huge allocation statistics.

Add per size class huge allocation statistics, and normalize various
stats:
- Change the arenas.nlruns type from size_t to unsigned.
- Add the arenas.nhchunks and arenas.hchunks.<i>.size mallctl's.
- Replace the stats.arenas.<i>.bins.<j>.allocated mallctl with
stats.arenas.<i>.bins.<j>.curregs .
- Add the stats.arenas.<i>.hchunks.<j>.nmalloc,
stats.arenas.<i>.hchunks.<j>.ndalloc,
stats.arenas.<i>.hchunks.<j>.nrequests, and
stats.arenas.<i>.hchunks.<j>.curhchunks mallctl's.
/external/jemalloc/src/arena.c
381c23dd9d3bf019cc4c7523a900be1e888802a7 11-Oct-2014 Jason Evans <jasone@canonware.com> Remove arena_dalloc_bin_run() clean page preservation.

Remove code in arena_dalloc_bin_run() that preserved the "clean" state
of trailing clean pages by splitting them into a separate run during
deallocation. This was a useful mechanism for reducing dirty page
churn when bin runs comprised many pages, but bin runs are now quite
small.

Remove the nextind field from arena_run_t now that it is no longer
needed, and change arena_run_t's bin field (arena_bin_t *) to binind
(index_t). These two changes remove 8 bytes of chunk header overhead
per page, which saves 1/512 of all arena chunk memory.
/external/jemalloc/src/arena.c
fc0b3b7383373d66cfed2cd4e2faa272a6868d32 10-Oct-2014 Jason Evans <jasone@canonware.com> Add configure options.

Add:
--with-lg-page
--with-lg-page-sizes
--with-lg-size-class-group
--with-lg-quantum

Get rid of STATIC_PAGE_SHIFT, in favor of directly setting LG_PAGE.

Fix various edge conditions exposed by the configure options.
/external/jemalloc/src/arena.c
8bb3198f72fc7587dc93527f9f19fb5be52fa553 08-Oct-2014 Jason Evans <jasone@canonware.com> Refactor/fix arenas manipulation.

Abstract arenas access to use arena_get() (or a0get() where appropriate)
rather than directly reading e.g. arenas[ind]. Prior to the addition of
the arenas.extend mallctl, the worst possible outcome of directly
accessing arenas was a stale read, but arenas.extend may allocate and
assign a new array to arenas.

Add a tsd-based arenas_cache, which amortizes arenas reads. This
introduces some subtle bootstrapping issues, with tsd_boot() now being
split into tsd_boot[01]() to support tsd wrapper allocation
bootstrapping, as well as an arenas_cache_bypass tsd variable which
dynamically terminates allocation of arenas_cache itself.

Promote a0malloc(), a0calloc(), and a0free() to be generally useful for
internal allocation, and use them in several places (more may be
appropriate).

Abstract arena->nthreads management and fix a missing decrement during
thread destruction (recent tsd refactoring left arenas_cleanup()
unused).

Change arena_choose() to propagate OOM, and handle OOM in all callers.
This is important for providing consistent allocation behavior when the
MALLOCX_ARENA() flag is being used. Prior to this fix, it was possible
for an OOM to result in allocation silently allocating from a different
arena than the one specified.
/external/jemalloc/src/arena.c
155bfa7da18cab0d21d87aa2dce4554166836f5d 06-Oct-2014 Jason Evans <jasone@canonware.com> Normalize size classes.

Normalize size classes to use the same number of size classes per size
doubling (currently hard coded to 4), across the entire range of size
classes. Small size classes already used this spacing, but in order to
support this change, additional small size classes now fill [4 KiB .. 16
KiB). Large size classes range from [16 KiB .. 4 MiB). Huge size
classes now support non-multiples of the chunk size in order to fill (4
MiB .. 16 MiB).
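
A rough sketch of the resulting spacing for sizes of a page or larger (4 classes per doubling; the real implementation also covers the tiny/quantum groups and uses lookup tables):

    #include <stddef.h>

    /* Round sz up to its size class: within (2^k, 2^(k+1)] classes are spaced
     * 2^(k-2) apart, i.e. 4 per doubling (e.g. 5120, 6144, 7168, 8192 between
     * 4 KiB and 8 KiB). */
    static size_t
    size_class_ceil_sketch(size_t sz) {
        size_t lg = 0;
        while (((size_t)1 << (lg + 1)) < sz)
            lg++;                            /* 2^lg < sz <= 2^(lg+1), sz >= 2 */
        size_t delta = (lg >= 2) ? ((size_t)1 << (lg - 2)) : 1;
        return (sz + delta - 1) & ~(delta - 1);
    }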
/external/jemalloc/src/arena.c
a95018ee819abf897562d9d1f3bc31d4dd725a8d 04-Oct-2014 Daniel Micay <danielmicay@gmail.com> Attempt to expand huge allocations in-place.

This adds support for expanding huge allocations in-place by requesting
memory at a specific address from the chunk allocator.

It's currently only implemented for the chunk recycling path, although
in theory it could also be done by optimistically allocating new chunks.
On Linux, it could attempt an in-place mremap. However, that won't work
in practice since the heap is grown downwards and memory is not unmapped
(in a normal build, at least).

Repeated vector reallocation micro-benchmark:

#include <string.h>
#include <stdlib.h>

int main(void) {
    for (size_t i = 0; i < 100; i++) {
        void *ptr = NULL;
        size_t old_size = 0;
        for (size_t size = 4; size < (1 << 30); size *= 2) {
            ptr = realloc(ptr, size);
            if (!ptr) return 1;
            /* Cast avoids non-standard arithmetic on void *. */
            memset((char *)ptr + old_size, 0xff, size - old_size);
            old_size = size;
        }
        free(ptr);
    }
}

The glibc allocator fails to do any in-place reallocations on this
benchmark once it passes the M_MMAP_THRESHOLD (default 128k) but it
elides the cost of copies via mremap, which is currently not something
that jemalloc can use.

With this improvement, jemalloc still fails to do any in-place huge
reallocations for the first outer loop, but then succeeds 100% of the
time for the remaining 99 iterations. The time spent doing allocations
and copies drops down to under 5%, with nearly all of it spent doing
purging + faulting (when huge pages are disabled) and the array memset.

An improved mremap API (MREMAP_RETAIN - #138) would be far more general
but this is a portable optimization and would still be useful on Linux
for xallocx.

Numbers with transparent huge pages enabled:

glibc (copies elided via MREMAP_MAYMOVE): 8.471s

jemalloc: 17.816s
jemalloc + no-op madvise: 13.236s

jemalloc + this commit: 6.787s
jemalloc + this commit + no-op madvise: 6.144s

Numbers with transparent huge pages disabled:

glibc (copies elided via MREMAP_MAYMOVE): 15.403s

jemalloc: 39.456s
jemalloc + no-op madvise: 12.768s

jemalloc + this commit: 15.534s
jemalloc + this commit + no-op madvise: 6.354s

Closes #137
/external/jemalloc/src/arena.c
f11a6776c78a09059f8418b718c996a065b33fca 05-Oct-2014 Jason Evans <jasone@canonware.com> Fix OOM-related regression in arena_tcache_fill_small().

Fix an OOM-related regression in arena_tcache_fill_small() that caused
cache corruption that would almost certainly expose the application to
undefined behavior, usually in the form of an allocation request
returning an already-allocated region, or somewhat less likely, a freed
region that had already been returned to the arena, thus making it
available to the arena for any purpose.

This regression was introduced by
9c43c13a35220c10d97a886616899189daceb359 (Reverse tcache fill order.),
and was present in all releases from 2.2.0 through 3.6.0.

This resolves #98.
/external/jemalloc/src/arena.c
551ebc43647521bdd0bc78558b106762b3388928 03-Oct-2014 Jason Evans <jasone@canonware.com> Convert to uniform style: cond == false --> !cond
/external/jemalloc/src/arena.c
0c5dd03e889d0269170b5db9fa872738d906eb78 29-Sep-2014 Jason Evans <jasone@canonware.com> Move small run metadata into the arena chunk header.

Move small run metadata into the arena chunk header, with multiple
expected benefits:
- Lower run fragmentation due to reduced run sizes; runs are more likely
to completely drain when there are fewer total regions.
- Improved cache behavior. Prior to this change, run headers were
always page-aligned, which put extra pressure on some CPU cache sets.
The degree to which this was a problem was hardware dependent, but it
likely hurt some even for the most advanced modern hardware.
- Buffer overruns/underruns are less likely to corrupt allocator
metadata.
- Size classes between 4 KiB and 16 KiB become reasonable to support
without any special handling, and the runs are small enough that dirty
unused pages aren't a significant concern.
/external/jemalloc/src/arena.c
5460aa6f6676c7f253bfcb75c028dfd38cae8aaf 23-Sep-2014 Jason Evans <jasone@canonware.com> Convert all tsd variables to reside in a single tsd structure.
/external/jemalloc/src/arena.c
9c640bfdd4e2f25180a32ed3704ce8e4c4cc21f1 12-Sep-2014 Jason Evans <jasone@canonware.com> Apply likely()/unlikely() to allocation/deallocation fast paths.
/external/jemalloc/src/arena.c
b718cf77e9917f6ae1995c2e2b219ff4219c9f46 07-Sep-2014 Jason Evans <je@fb.com> Optimize [nmd]alloc() fast paths.

Optimize [nmd]alloc() fast paths such that the (flags == 0) case is
streamlined, flags decoding only happens to the minimum degree
necessary, and no conditionals are repeated.
/external/jemalloc/src/arena.c
ff6a31d3b92b7c63446ce645341d2bbd77b67dc6 29-Aug-2014 Qinfan Wu <wqfish@fb.com> Refactor chunk map.

Break the chunk map into two separate arrays, in order to improve cache
locality. This is related to issue #23.
/external/jemalloc/src/arena.c
070b3c3fbd90296610005c111ec6060e8bb23d31 14-Aug-2014 Jason Evans <jasone@canonware.com> Fix and refactor runs_dirty-based purging.

Fix runs_dirty-based purging to also purge dirty pages in the spare
chunk.

Refactor runs_dirty manipulation into arena_dirty_{insert,remove}(), and
move the arena->ndirty accounting into those functions.

Remove the u.ql_link field from arena_chunk_map_t, and get rid of the
enclosing union for u.rb_link, since only rb_link remains.

Remove the ndirty field from arena_chunk_t.
/external/jemalloc/src/arena.c
e8a2fd83a2ddc082fcd4e49373ea05bd79213c71 22-Jul-2014 Qinfan Wu <wqfish@fb.com> arena->npurgatory is no longer needed since we drop arena's lock
after stashing all the purgeable runs.
/external/jemalloc/src/arena.c
90737fcda150a5da3f4db1c3144ea24eed8de55b 22-Jul-2014 Qinfan Wu <wqfish@fb.com> Remove chunks_dirty tree, nruns_avail and nruns_adjac since we no
longer need to maintain the tree for dirty page purging.
/external/jemalloc/src/arena.c
e970800c780df918b80f8b914eeac475dd5f1ec4 22-Jul-2014 Qinfan Wu <wqfish@fb.com> Purge dirty pages from the beginning of the dirty list.
/external/jemalloc/src/arena.c
a244e5078e8505978b5f63cfe6dcb3c9d63d2cb5 21-Jul-2014 Qinfan Wu <wqfish@fb.com> Add dirty page counting for debug
/external/jemalloc/src/arena.c
04d60a132beed9e8c33f73b94fb9251b919073c8 18-Jul-2014 Qinfan Wu <wqfish@fb.com> Maintain all the dirty runs in a linked list for each arena
/external/jemalloc/src/arena.c
1522937e9cbcfa24c881dc439cc454f9a34a7e88 07-Aug-2014 Jason Evans <je@fb.com> Fix the cactive statistic.

Fix the cactive statistic to decrease (rather than increase) when active
memory decreases. This regression was introduced by
aa5113b1fdafd1129c22512837c6c3d66c295fc8 (Refactor overly large/complex
functions) and first released in 3.5.0.
/external/jemalloc/src/arena.c
ea73eb8f3e029f0a5697e78c6771b49063cf4138 07-Aug-2014 Qinfan Wu <wqfish@fb.com> Reintroduce the comment that was removed in f9ff603.
/external/jemalloc/src/arena.c
55c9aa10386b21af92f323d04bddc15691d48756 07-Aug-2014 Qinfan Wu <wqfish@fb.com> Fix the bug that prevented allocating the free run with the lowest address.
/external/jemalloc/src/arena.c
9c3a10fdf6baa5ddb042b6adbef1ff1b3c613ce3 29-May-2014 Richard Diamond <wichard@vitalitystudios.com> Try to use __builtin_ffsl if ffsl is unavailable.

Some platforms (like those using Newlib) don't have ffs/ffsl. This
commit adds a check to configure.ac for __builtin_ffsl if ffsl isn't
found. __builtin_ffsl performs the same function as ffsl, and has the
added benefit of being available on any platform that uses a
GCC-compatible compiler.

This change does not address the use of ffs in the MALLOCX_ARENA()
macro.
/external/jemalloc/src/arena.c
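A sketch of the kind of fallback this enables; HAVE_FFSL stands in for whatever macro the configure.ac check actually defines, so the names here are assumptions.

/* ffsl(3) returns the 1-based index of the least significant set bit,
 * or 0 if no bits are set; __builtin_ffsl has the same semantics. */
#if defined(HAVE_FFSL)
#  include <strings.h>		/* ffsl() */
#elif defined(__GNUC__)
#  define ffsl(x) __builtin_ffsl(x)
#else
#  error "No ffsl() or __builtin_ffsl() available"
#endif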
d04047cc29bbc9d1f87a9346d1601e3dd87b6ca0 29-May-2014 Jason Evans <je@fb.com> Add size class computation capability.

Add size class computation capability, currently used only as validation
of the size class lookup tables. Generalize the size class spacing used
for bins, for eventual use throughout the full range of allocation
sizes.
/external/jemalloc/src/arena.c
e2deab7a751c8080c2b2cdcfd7b11887332be1bb 16-May-2014 Jason Evans <je@fb.com> Refactor huge allocation to be managed by arenas.

Refactor huge allocation to be managed by arenas (though the global
red-black tree of huge allocations remains for lookup during
deallocation). This is the logical conclusion of recent changes that 1)
made per arena dss precedence apply to huge allocation, and 2) made it
possible to replace the per arena chunk allocation/deallocation
functions.

Remove the top level huge stats, and replace them with per arena huge
stats.

Normalize function names and types to *dalloc* (some were *dealloc*).

Remove the --enable-mremap option. As jemalloc currently operates, this
is a performance regression for some applications, but planned work to
logarithmically space huge size classes should provide similar amortized
performance. The motivation for this change was that mremap-based huge
reallocation forced leaky abstractions that prevented refactoring.
/external/jemalloc/src/arena.c
fb7fe50a88ca9bde74e9a401ae17ad3b15bbae28 06-May-2014 aravind <aravind@fb.com> Add support for user-specified chunk allocators/deallocators.

Add new mallctl endpoints "arena<i>.chunk.alloc" and
"arena<i>.chunk.dealloc" to allow userspace to configure
jemalloc's chunk allocator and deallocator on a per-arena
basis.
/external/jemalloc/src/arena.c
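A hedged sketch of driving these endpoints from application code; the hook signature shown is approximate for jemalloc of this era and should be checked against the installed jemalloc.h (my_chunk_alloc and install_chunk_hook are illustrative names).

#include <stdbool.h>
#include <stddef.h>
#include <jemalloc/jemalloc.h>

/* Assumed hook signature: size, alignment, in/out zero flag, arena index. */
static void *
my_chunk_alloc(size_t size, size_t alignment, bool *zero, unsigned arena_ind)
{
	/* Obtain 'size' bytes aligned to 'alignment', e.g. from a reserved
	 * region; returning NULL indicates the hook could not satisfy the
	 * request. */
	return (NULL);
}

static int
install_chunk_hook(void)
{
	void *(*hook)(size_t, size_t, bool *, unsigned) = my_chunk_alloc;

	/* Targets arena 0; the exact mallctl name should be verified
	 * against the jemalloc version in use. */
	return (mallctl("arena.0.chunk.alloc", NULL, NULL, &hook,
	    sizeof(hook)));
}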
3541a904d6fb949f3f0aea05418ccce7cbd4b705 17-Apr-2014 Jason Evans <je@fb.com> Refactor small_size2bin and small_bin2size.

Refactor small_size2bin and small_bin2size to be inline functions rather
than directly accessed arrays.
/external/jemalloc/src/arena.c
3e3caf03af6ca579e473ace4daf25f63102aca4f 17-Apr-2014 Jason Evans <jasone@canonware.com> Merge pull request #73 from bmaurer/smallmalloc

Smaller malloc hot path
021136ce4db79f50031a1fd5dd751891888fbc7b 16-Apr-2014 Ben Maurer <bmaurer@fb.com> Create a const array with only a small bin-to-size map
/external/jemalloc/src/arena.c
bd87b01999416ec7418ff8bdb504d9b6c009ff68 16-Apr-2014 Jason Evans <je@fb.com> Optimize Valgrind integration.

Forcefully disable tcache if running inside Valgrind, and remove
Valgrind calls in tcache-specific code.

Restructure Valgrind-related code to move most Valgrind calls out of the
fast path functions.

Take advantage of static knowledge to elide some branches in
JEMALLOC_VALGRIND_REALLOC().
/external/jemalloc/src/arena.c
4d434adb146375ad17f0d5e994ed5728d2942e3f 15-Apr-2014 Jason Evans <je@fb.com> Make dss non-optional, and fix an "arena.<i>.dss" mallctl bug.

Make dss non-optional on all platforms which support sbrk(2).

Fix the "arena.<i>.dss" mallctl to return an error if "primary" or
"secondary" precedence is specified, but sbrk(2) is not supported.
/external/jemalloc/src/arena.c
9b0cbf0850b130a9b0a8c58bd10b2926b2083510 11-Apr-2014 Jason Evans <je@fb.com> Remove support for non-prof-promote heap profiling metadata.

Make promotion of sampled small objects to large objects mandatory, so
that profiling metadata can always be stored in the chunk map, rather
than requiring one pointer per small region in each small-region page
run. In practice the non-prof-promote code was only useful when using
jemalloc to track all objects and report them as leaks at program exit.
However, Valgrind is at least as good a tool for this particular use
case.

Furthermore, the non-prof-promote code is getting in the way of
some optimizations that will make heap profiling much cheaper for the
predominant use case (sampling a small representative proportion of all
allocations).
/external/jemalloc/src/arena.c
f9ff60346d7c25ad653ea062e496a5d0864233b2 06-Apr-2014 Ben Maurer <bmaurer@fb.com> refactoring for bits splitting
/external/jemalloc/src/arena.c
20a8c78bfe3310e0f0f72b596d4e10ca7336063b 26-Mar-2014 Chris Pride <cpride@cpride.net> Fix a crashing case where arena_chunk_init_hard returns NULL.

This happens when arena_chunk_init_hard fails to allocate a new chunk,
which arena_chunk_alloc then passes into arena_avail_insert without any
checks. This causes a crash when arena_avail_insert tries to check
chunk->ndirty.

This was introduced by the refactoring of arena_chunk_alloc, which
previously would have returned NULL immediately after calling
chunk_alloc. That return value now comes from arena_chunk_init_hard, so
we need to check it and not continue if it is NULL.
/external/jemalloc/src/arena.c
69e9fbb9c143e0d60670c68e29076a5c5c76ca3c 14-Feb-2014 Erwan Legrand <elegrand@cloudmark.com> Fix typo
/external/jemalloc/src/arena.c
aa5113b1fdafd1129c22512837c6c3d66c295fc8 15-Jan-2014 Jason Evans <je@fb.com> Refactor overly large/complex functions.

Refactor overly large functions by breaking out helper functions.

Refactor overly complex multi-purpose functions into separate more
specific functions.
/external/jemalloc/src/arena.c
b2c31660be917ea6d59cd54e6f650b06b5e812ed 13-Jan-2014 Jason Evans <je@fb.com> Extract profiling code from [re]allocation functions.

Extract profiling code from malloc(), imemalign(), calloc(), realloc(),
mallocx(), rallocx(), and xallocx(). This slightly reduces the amount
of code compiled into the fast paths, but the primary benefit is the
combinatorial complexity reduction.

Simplify iralloc[t]() by creating a separate ixalloc() that handles the
no-move cases.

Further simplify [mrxn]allocx() (and by implication [mrn]allocm()) to
make request size overflows due to size class and/or alignment
constraints trigger undefined behavior (detected by debug-only
assertions).

Report ENOMEM rather than EINVAL if an OOM occurs during heap profiling
backtrace creation in imemalign(). This bug impacted posix_memalign()
and aligned_alloc().
/external/jemalloc/src/arena.c
6b694c4d47278cddfaaedeb7ee49fa5757e35ed5 08-Jan-2014 Jason Evans <je@fb.com> Add junk/zero filling unit tests, and fix discovered bugs.

Fix growing large reallocation to junk fill new space.

Fix huge deallocation to junk fill when munmap is disabled.
/external/jemalloc/src/arena.c
0d6c5d8bd0d866a0ce4ce321259cec65d6459821 18-Dec-2013 Jason Evans <jasone@canonware.com> Add quarantine unit tests.

Verify that freed regions are quarantined, and that redzone corruption
is detected.

Introduce a testing idiom for intercepting/replacing internal functions.
In this case the replaced function is ordinarily a static function, but
the idiom should work similarly for library-private functions.
/external/jemalloc/src/arena.c
6e62984ef6ca4312cf0a2e49ea2cc38feb94175b 16-Dec-2013 Jason Evans <jasone@canonware.com> Don't junk-fill reallocations unless usize changes.

Don't junk fill reallocations for which the request size is less than
the current usable size, but not enough smaller to cause a size class
change. Unlike malloc()/calloc()/realloc(), *allocx() contractually
treats the full usize as the allocation, so a caller can ask for zeroed
memory via mallocx() and a series of rallocx() calls that all specify
MALLOCX_ZERO, and be assured that all newly allocated bytes will be
zeroed and made available to the application without danger of allocator
mutation until the size class decreases enough to cause usize reduction.
/external/jemalloc/src/arena.c
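The contract described above allows a grow-in-place-or-move pattern along these lines (a minimal sketch; grow_zeroed is an illustrative helper, and newsize must be nonzero):

#include <stddef.h>
#include <jemalloc/jemalloc.h>

/* With MALLOCX_ZERO passed on every call, all bytes up to each usize are
 * guaranteed zeroed, so newly exposed space never needs memset(). */
static void *
grow_zeroed(void *p, size_t newsize)
{
	if (p == NULL)
		return (mallocx(newsize, MALLOCX_ZERO));
	return (rallocx(p, newsize, MALLOCX_ZERO));
}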
d82a5e6a34f20698ab9368bb2b4953b81d175552 13-Dec-2013 Jason Evans <jasone@canonware.com> Implement the *allocx() API.

Implement the *allocx() API, which is a successor to the *allocm() API.
The *allocx() functions are slightly simpler to use because they have
fewer parameters, they directly return the results of primary interest,
and mallocx()/rallocx() avoid the strict aliasing pitfall that
allocm()/rallocm() share with posix_memalign(). The following code
violates strict aliasing rules:

foo_t *foo;
allocm((void **)&foo, NULL, 42, 0);

whereas the following is safe:

foo_t *foo;
void *p;
allocm(&p, NULL, 42, 0);
foo = (foo_t *)p;

mallocx() does not have this problem:

foo_t *foo = (foo_t *)mallocx(42, 0);
/external/jemalloc/src/arena.c
c368f8c8a243248feb7771f4d32691e7b2aa6f1a 30-Oct-2013 Jason Evans <je@fb.com> Remove unnecessary zeroing in arena_palloc().
/external/jemalloc/src/arena.c
dda90f59e2b67903668a2799970f64df163e9ccf 20-Oct-2013 Jason Evans <jasone@canonware.com> Fix a Valgrind integration flaw.

Fix a Valgrind integration flaw that caused Valgrind warnings about
reads of uninitialized memory in internal zero-initialized data
structures (relevant to tcache and prof code).
/external/jemalloc/src/arena.c
87a02d2bb18dbcb2955541b849bc95862e864803 20-Oct-2013 Jason Evans <jasone@canonware.com> Fix a Valgrind integration flaw.

Fix a Valgrind integration flaw that caused Valgrind warnings about
reads of uninitialized memory in arena chunk headers.
/external/jemalloc/src/arena.c
88c222c8e91499bf5d3fba53b24222df0cda5771 06-Feb-2013 Jason Evans <je@fb.com> Fix a prof-related locking order bug.

Fix a locking order bug that could cause deadlock during fork if heap
profiling were enabled.
/external/jemalloc/src/arena.c
06912756cccd0064a9c5c59992dbac1cec68ba3f 01-Feb-2013 Jason Evans <je@fb.com> Fix Valgrind integration.

Fix Valgrind integration to annotate all internally allocated memory in
a way that keeps Valgrind happy about internal data structure access.
/external/jemalloc/src/arena.c
38067483c542adfe092644d1ecc103c6bc74add0 22-Jan-2013 Jason Evans <jasone@canonware.com> Tighten valgrind integration.

Tighten valgrind integration such that immediately after memory is
validated or zeroed, valgrind is told to forget the memory's 'defined'
state. The only place newly allocated memory should be left marked as
'defined' is in the public functions (e.g. calloc() and realloc()).
/external/jemalloc/src/arena.c
a3b3386ddde8048b9d6b54c397bb93da5e806cef 13-Nov-2012 Jason Evans <je@fb.com> Avoid arena_prof_accum()-related locking when possible.

Refactor arena_prof_accum() and its callers to avoid arena locking when
prof_interval is 0 (as when profiling is disabled).

Reported by Ben Maurer.
/external/jemalloc/src/arena.c
abf6739317742ca4677bf885178984a8757ee14a 07-Nov-2012 Jason Evans <je@fb.com> Tweak chunk purge order according to fragmentation.

Tweak chunk purge order to purge unfragmented chunks from high to low
memory. This facilitates dirty run reuse.
/external/jemalloc/src/arena.c
e3d13060c8a04f08764b16b003169eb205fa09eb 30-Oct-2012 Jason Evans <je@fb.com> Purge unused dirty pages in a fragmentation-reducing order.

Purge unused dirty pages in an order that first performs clean/dirty run
defragmentation, in order to mitigate available run fragmentation.

Remove the limitation that prevented purging unless at least one chunk
worth of dirty pages had accumulated in an arena. This limitation was
intended to avoid excessive purging for small applications, but the
threshold was arbitrary, and the effect was of questionable utility.

Relax opt_lg_dirty_mult from 5 to 3. This compensates for increased
likelihood of allocating clean runs, given the same ratio of clean:dirty
runs, and reduces the potential for repeated purging in pathological
large malloc/free loops that push the active:dirty page ratio just over
the purge threshold.
/external/jemalloc/src/arena.c
609ae595f0358157b19311b0f9f9591db7cee705 11-Oct-2012 Jason Evans <je@fb.com> Add arena-specific and selective dss allocation.

Add the "arenas.extend" mallctl, so that it is possible to create new
arenas that are outside the set that jemalloc automatically multiplexes
threads onto.

Add the ALLOCM_ARENA() flag for {,r,d}allocm(), so that it is possible
to explicitly allocate from a particular arena.

Add the "opt.dss" mallctl, which controls the default precedence of dss
allocation relative to mmap allocation.

Add the "arena.<i>.dss" mallctl, which makes it possible to set the
default dss precedence on a per arena or global basis.

Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".

Add the "stats.arenas.<i>.dss" mallctl.
/external/jemalloc/src/arena.c
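For illustration, these controls are exercised through the usual mallctl() interface; a minimal sketch (arena index 0 and the "primary" setting are example values only):

#include <stddef.h>
#include <jemalloc/jemalloc.h>

static void
dss_example(void)
{
	const char *dss = "primary";
	const char *cur;
	size_t sz = sizeof(cur);

	/* Prefer dss over mmap for arena 0 ("arena.<i>.dss"). */
	mallctl("arena.0.dss", NULL, NULL, &dss, sizeof(dss));

	/* Read back the default precedence ("opt.dss"). */
	mallctl("opt.dss", &cur, &sz, NULL, 0);
}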
7de92767c20cb72c94609b9c78985526fb84a679 09-Oct-2012 Jason Evans <je@fb.com> Fix mlockall()/madvise() interaction.

mlockall(2) can cause purging via madvise(2) to fail. Fix purging code
to check whether madvise() succeeded, and base zeroed page metadata on
the result.

Reported by Olivier Lecomte.
/external/jemalloc/src/arena.c
f1966e1dc7543543e98386180f2b8530bf9725ab 15-May-2012 Jason Evans <jasone@canonware.com> Update a comment.
/external/jemalloc/src/arena.c
d8ceef6c5558fdab8f9448376ae065a9e5ffcbdd 11-May-2012 Jason Evans <jasone@canonware.com> Fix large calloc() zeroing bugs.

Refactor code such that arena_mapbits_{large,small}_set() always
preserves the unzeroed flag, and manually manipulate the unzeroed flag
in the one case where it actually gets reset (in arena_chunk_purge()).
This fixes unzeroed preservation bugs in arena_run_split() and
arena_ralloc_large_grow(). These bugs caused large calloc() to return
non-zeroed memory under some circumstances.
/external/jemalloc/src/arena.c
30fe12b866edbc2cf9aaef299063b392ea125aac 11-May-2012 Jason Evans <jasone@canonware.com> Add arena chunk map assertions.
/external/jemalloc/src/arena.c
5b0c99649fa71674daadf4dd53b1ab05428483fb 11-May-2012 Jason Evans <jasone@canonware.com> Refactor arena_run_alloc().

Refactor duplicated arena_run_alloc() code into
arena_run_alloc_helper().
/external/jemalloc/src/arena.c
80737c3323dabc45232affcaeb99ac2bad6ea647 03-May-2012 Jason Evans <je@fb.com> Further optimize and harden arena_salloc().

Further optimize arena_salloc() to only look at the binind chunk map
bits in the common case.

Add more sanity checks to arena_salloc() that detect chunk map
inconsistencies for large allocations (whether due to allocator bugs or
application bugs).
/external/jemalloc/src/arena.c
203484e2ea267e068a68fd2922263f0ff1d5ac6f 02-May-2012 Jason Evans <je@fb.com> Optimize malloc() and free() fast paths.

Embed the bin index for small page runs into the chunk page map, in
order to omit [...] in the following dependent load sequence:
ptr-->mapelm-->[run-->bin-->]bin_info

Move various non-critical code out of the inlined function chain into
helper functions (tcache_event_hard(), arena_dalloc_small(), and
locking).
/external/jemalloc/src/arena.c
da99e31105eb709ef4ec8a120b115c32a6b9723a 30-Apr-2012 Mike Hommey <mh@glandium.org> Replace JEMALLOC_ATTR with various different macros when it makes sense

These newly added macros will be used to implement the equivalent under
MSVC. Also, move the definitions to headers, where they make more sense,
and for some, are even more useful there (e.g. malloc).
/external/jemalloc/src/arena.c
8b49971d0ce0819af78aa2a278c26ecb298ee134 24-Apr-2012 Mike Hommey <mh@glandium.org> Avoid variable length arrays and remove declarations within code

MSVC doesn't support C99, and building as C++ to be able to use them is
dangerous, as C++ and C99 are incompatible.

Introduce a VARIABLE_ARRAY macro that either uses VLA when supported,
or alloca() otherwise. Note that using alloca() inside loops doesn't
quite work like VLAs, thus the use of VARIABLE_ARRAY there is discouraged.
It might be worth investigating ways to check whether VARIABLE_ARRAY is
used in such a context at runtime in debug builds and bail out if that
happens.
/external/jemalloc/src/arena.c
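A sketch of the shape such a macro typically takes; JEMALLOC_HAS_VLA is a stand-in for whatever the build actually detects, so the names here are assumptions.

#include <alloca.h>	/* header location is platform-dependent */

#ifdef JEMALLOC_HAS_VLA
#  define VARIABLE_ARRAY(type, name, count)	type name[(count)]
#else
/* alloca() memory persists until the function returns rather than the
 * enclosing block, which is why use inside loops is discouraged above. */
#  define VARIABLE_ARRAY(type, name, count)	\
	type *name = alloca(sizeof(type) * (count))
#endif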
f54166e7ef5313c3b5c773cbb0ca2af95f5a15ae 24-Apr-2012 Jason Evans <je@fb.com> Add missing Valgrind annotations.
/external/jemalloc/src/arena.c
f7088e6c992d079bc3162e0c48ed4dc5def6d263 20-Apr-2012 Jason Evans <je@fb.com> Make arena_salloc() an inline function.
/external/jemalloc/src/arena.c
666c5bf7a8baaa842da69cb402948411432a9d00 18-Apr-2012 Mike Hommey <mh@glandium.org> Add a pages_purge function to wrap madvise(JEMALLOC_MADV_PURGE) calls

This will be used to implement the feature on mingw, which doesn't have
madvise.
/external/jemalloc/src/arena.c
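A minimal sketch of such a wrapper on platforms that do have madvise(2); MADV_DONTNEED stands in for whatever JEMALLOC_MADV_PURGE resolves to, and the return convention here is simplified relative to the real function.

#include <sys/mman.h>
#include <stdbool.h>
#include <stddef.h>

/* Hint that [addr, addr+length) no longer needs to be resident; returns
 * true if the advice was accepted. */
static bool
pages_purge(void *addr, size_t length)
{
	return (madvise(addr, length, MADV_DONTNEED) == 0);
}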
78f7352259768f670f8e1f9b000388dd32b62493 18-Apr-2012 Jason Evans <je@fb.com> Clean up a few config-related conditionals/asserts.

Clean up a few config-related conditionals to avoid unnecessary
dependencies on prof symbols. Use cassert() rather than assert()
everywhere that it's appropriate.
/external/jemalloc/src/arena.c
7ca0fdfb85b2a9fc7a112e158892c098e004385b 13-Apr-2012 Jason Evans <je@fb.com> Disable munmap() if it causes VM map holes.

Add a configure test to determine whether common mmap()/munmap()
patterns cause VM map holes, and only use munmap() to discard unused
chunks if the problem does not exist.

Unify the chunk caching for mmap and dss.

Fix options processing to limit lg_chunk to be large enough that
redzones will always fit.
/external/jemalloc/src/arena.c
5ff709c264e52651de25b788692c62ff1f6f389c 12-Apr-2012 Jason Evans <jasone@canonware.com> Normalize aligned allocation algorithms.

Normalize arena_palloc(), chunk_alloc_mmap_slow(), and
chunk_recycle_dss() to use the same algorithm for trimming
over-allocation.

Add the ALIGNMENT_ADDR2BASE(), ALIGNMENT_ADDR2OFFSET(), and
ALIGNMENT_CEILING() macros, and use them where appropriate.

Remove the run_size_p parameter from sa2u().

Fix a potential deadlock in chunk_recycle_dss() that was introduced by
eae269036c9f702d9fa9be497a1a2aa1be13a29e (Add alignment support to
chunk_alloc()).
/external/jemalloc/src/arena.c
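These helpers follow the standard power-of-two alignment idiom; an approximate sketch (not necessarily the committed definitions), valid only when alignment is a power of two:

#include <stddef.h>
#include <stdint.h>

#define ALIGNMENT_ADDR2BASE(a, alignment)				\
	((void *)((uintptr_t)(a) & ~((uintptr_t)(alignment) - 1)))
#define ALIGNMENT_ADDR2OFFSET(a, alignment)				\
	((size_t)((uintptr_t)(a) & ((uintptr_t)(alignment) - 1)))
#define ALIGNMENT_CEILING(s, alignment)					\
	(((s) + ((alignment) - 1)) & ~((size_t)(alignment) - 1))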
122449b073bcbaa504c4f592ea2d733503c272d2 06-Apr-2012 Jason Evans <je@fb.com> Implement Valgrind support, redzones, and quarantine.

Implement Valgrind support, as well as the redzone and quarantine
features, which help Valgrind detect memory errors. Redzones are only
implemented for small objects because the changes necessary to support
redzones around large and huge objects are complicated by in-place
reallocation, to the point that it isn't clear that the maintenance
burden is worth the incremental improvement to Valgrind support.

Merge arena_salloc() and arena_salloc_demote().

Refactor i[v]salloc() to expose the 'demote' option.
/external/jemalloc/src/arena.c
eae269036c9f702d9fa9be497a1a2aa1be13a29e 10-Apr-2012 Mike Hommey <mh@glandium.org> Add alignment support to chunk_alloc().
/external/jemalloc/src/arena.c
01b3fe55ff3ac8e4aa689f09fcb0729da8037638 03-Apr-2012 Jason Evans <jasone@canonware.com> Add a0malloc(), a0calloc(), and a0free().

Add a0malloc(), a0calloc(), and a0free(), which are used by FreeBSD's
libc to allocate/deallocate TLS in static binaries.
/external/jemalloc/src/arena.c
ae4c7b4b4092906c641d69b4bf9fcb4a7d50790d 02-Apr-2012 Jason Evans <jasone@canonware.com> Clean up *PAGE* macros.

s/PAGE_SHIFT/LG_PAGE/g and s/PAGE_SIZE/PAGE/g.

Remove remnants of the dynamic-page-shift code.

Rename the "arenas.pagesize" mallctl to "arenas.page".

Remove the "arenas.chunksize" mallctl, which is redundant with
"opt.lg_chunk".
/external/jemalloc/src/arena.c
4e2e3dd9cf19ed5991938a708a8b50611aa5bbf8 14-Mar-2012 Jason Evans <je@fb.com> Fix fork-related bugs.

Acquire/release arena bin locks as part of the prefork/postfork. This
bug made deadlock in the child between fork and exec a possibility.

Split jemalloc_postfork() into jemalloc_postfork_{parent,child}() so
that the child can reinitialize mutexes rather than unlocking them. In
practice, this bug tended not to cause problems.
/external/jemalloc/src/arena.c
bdcadf41e961a3c6bbb37d8d24e4b68a27f2b952 29-Feb-2012 Jason Evans <je@fb.com> Remove unused variable in arena_run_split().

Submitted by Mike Hommey.
/external/jemalloc/src/arena.c
b172610317babc7f365584ddd7fdaf4eb8d9d04c 29-Feb-2012 Jason Evans <je@fb.com> Simplify small size class infrastructure.

Program-generate small size class tables for all valid combinations of
LG_TINY_MIN, LG_QUANTUM, and PAGE_SHIFT. Use the appropriate table to generate
all relevant data structures, and remove the distinction between
tiny/quantum/cacheline/subpage bins.

Remove --enable-dynamic-page-shift. This option didn't prove useful in
practice, and it prevented optimizations.

Add Tilera architecture support.
/external/jemalloc/src/arena.c
e7a1058aaa6b2cbdd19da297bf2250f86dcdac89 14-Feb-2012 Jason Evans <je@fb.com> Fix bin->runcur management.

Fix an interaction between arena_dissociate_bin_run() and
arena_bin_lower_run() that made it possible for bin->runcur to point to
a run other than the lowest non-full run. This bug violated jemalloc's
layout policy, but did not affect correctness.
/external/jemalloc/src/arena.c
746868929afae3e346b47d0fa8a78d7fb131d5a4 14-Feb-2012 Jason Evans <je@fb.com> Remove highruns statistics.
/external/jemalloc/src/arena.c
ef8897b4b938111fcc9b54725067f1dbb33a4c20 13-Feb-2012 Jason Evans <je@fb.com> Make 8-byte tiny size class non-optional.

When tiny size class support was first added, it was intended to support
truly tiny size classes (even 2 bytes). However, this wasn't very
useful in practice, so the minimum tiny size class has been limited to
sizeof(void *) for a long time now. This is too small to be standards
compliant, but other commonly used malloc implementations do not even
bother using a 16-byte quantum on systems with vector units (SSE2+,
AltiVEC, etc.). As such, it is safe in practice to support an 8-byte
tiny size class on 64-bit systems that support 16-byte types.
/external/jemalloc/src/arena.c
962463d9b57bcc65de2fa108a691b4183b9b2faf 13-Feb-2012 Jason Evans <je@fb.com> Streamline tcache-related malloc/free fast paths.

tcache_get() is inlined, so do the config_tcache check inside
tcache_get() and simplify its callers.

Make arena_malloc() an inline function, since it is part of the malloc()
fast path.

Remove conditional logic that caused build issues if --disable-tcache was
specified.
/external/jemalloc/src/arena.c
4162627757889ea999264c2ddbc3c354768774e2 13-Feb-2012 Jason Evans <je@fb.com> Remove the swap feature.

Remove the swap feature, which enabled per application swap files. In
practice this feature has not proven itself useful to users.
/external/jemalloc/src/arena.c
fd56043c53f1cd1335ae6d1c0ee86cc0fbb9f12e 13-Feb-2012 Jason Evans <je@fb.com> Remove magic.

Remove structure magic, because 1) it is no longer conditional, and 2)
it stopped being very effective at detecting memory corruption several
years ago.
/external/jemalloc/src/arena.c
7372b15a31c63ac5cb9ed8aeabc2a0a3c005e8bf 11-Feb-2012 Jason Evans <je@fb.com> Reduce cpp conditional logic complexity.

Convert configuration-related cpp conditional logic to use static
constant variables, e.g.:

#ifdef JEMALLOC_DEBUG
[...]
#endif

becomes:

if (config_debug) {
[...]
}

The advantage is clearer, more concise code. The main disadvantage is
that data structures no longer have conditionally defined fields, so
they pay the cost of all fields regardless of whether they are used. In
practice, this is only a minor concern; config_stats will go away in an
upcoming change, and config_prof is the only other major feature that
depends on more than a few special-purpose fields.
/external/jemalloc/src/arena.c
12a488782681cbd740a5f54e0b7e74ea84858e21 11-Nov-2011 Jason Evans <je@fb.com> Fix huge_ralloc to maintain chunk statistics.

Fix huge_ralloc() to properly maintain chunk statistics when using
mremap(2).
/external/jemalloc/src/arena.c
183ba50c1940a95080f6cf890ae4ae40200301e7 12-Aug-2011 Jason Evans <je@fb.com> Fix two prof-related bugs in rallocm().

Properly handle boundary conditions for sampled region promotion in
rallocm(). Prior to this fix, some combinations of 'size' and 'extra'
values could cause erroneous behavior. Additionally, size class
recording for promoted regions was incorrect.
/external/jemalloc/src/arena.c
f9a8edbb50f8cbcaf8ed62b36e8d7191ed223d2a 13-Jun-2011 Jason Evans <je@fb.com> Fix assertions in arena_purge().

Fix assertions in arena_purge() to accurately reflect the constraints in
arena_maybe_purge(). There were two bugs here, one of which merely
weakened the assertion, and the other of which referred to an
uninitialized variable (typo; used npurgatory instead of
arena->npurgatory).
/external/jemalloc/src/arena.c
7427525c28d58c423a68930160e3b0fe577fe953 01-Apr-2011 Jason Evans <jasone@canonware.com> Move repo contents in jemalloc/ to top level.
/external/jemalloc/src/arena.c