History log of /external/jemalloc/src/tcache.c
Revision Date Author Comments
e42940346e47de63bfc47470c86c3c132ec2db8c 02-Mar-2016 Christopher Ferris <cferris@google.com> Merge remote-tracking branch 'aosp/upstream-dev' into merge

Bug: 26807329
(cherry picked from commit fb9c9c8d5230956caa48501dad4fde4b90e00319)

Change-Id: I428ae6395d8c00db6baef5313b3dd47b68444bd9
/external/jemalloc/src/tcache.c
f4a0f32d340985de477bbe329ecdaecd69ed1055 27-Oct-2015 Qi Wang <interwq@gwu.edu> Fast-path improvement: reduce # of branches and unnecessary operations.

- Combine multiple runtime branches into a single malloc_slow check.
- Avoid calling arena_choose / size2index / index2size on fast path.
- A few micro optimizations.
/external/jemalloc/src/tcache.c
676df88e48ae5ab77b05d78cb511cfa2e57d277f 12-Sep-2015 Jason Evans <jasone@canonware.com> Rename arena_maxclass to large_maxclass.

arena_maxclass is no longer an appropriate name, because arenas also
manage huge allocations.
/external/jemalloc/src/tcache.c
d01fd19755bc0c2f5be3143349016dd0d7de7b36 20-Aug-2015 Jason Evans <jasone@canonware.com> Rename index_t to szind_t to avoid an existing type on Solaris.

This resolves #256.
/external/jemalloc/src/tcache.c
836bbe9951a903b2d76af53dfb3ad53ad186f8b9 20-May-2015 Jason Evans <jasone@canonware.com> Impose a minimum tcache count for small size classes.

Now that small allocation runs have fewer regions due to run metadata
residing in chunk headers, an explicit minimum tcache count is needed to
make sure that tcache adequately amortizes synchronization overhead.
/external/jemalloc/src/tcache.c
5aa50a2834fb09c5338f0e7b9db49cc0edd1a38a 20-May-2015 Jason Evans <jasone@canonware.com> Fix nhbins calculation.

This regression was introduced by
155bfa7da18cab0d21d87aa2dce4554166836f5d (Normalize size classes.).
/external/jemalloc/src/tcache.c
ee41ad409a43d12900a5a3108f6c14f84e4eb0eb 16-Feb-2015 Jason Evans <jasone@canonware.com> Integrate whole chunks into unused dirty page purging machinery.

Extend per arena unused dirty page purging to manage unused dirty chunks
in addition to unused dirty runs. Rather than immediately unmapping
deallocated chunks (or purging them in the --disable-munmap case), store
them in a separate set of trees, chunks_[sz]ad_dirty. Preferentially
allocate dirty chunks. When excessive unused dirty pages accumulate,
purge runs and chunks in integrated LRU order (and unmap chunks in the
--enable-munmap case).

Refactor extent_node_t to provide accessor functions.
/external/jemalloc/src/tcache.c
41cfe03f39740fe61cf46d86982f66c24168de32 14-Feb-2015 Jason Evans <je@fb.com> If MALLOCX_ARENA(a) is specified, use it during tcache fill.
/external/jemalloc/src/tcache.c
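
For illustration, a minimal sketch of what this change affects, assuming a jemalloc build of this era: a small mallocx() request carrying MALLOCX_ARENA() is serviced through the thread cache, and the fill backing it now draws from the requested arena rather than the thread's automatically assigned one.

#include <jemalloc/jemalloc.h>

int main(void) {
    /* Create an explicit arena to direct allocations to. */
    unsigned ind;
    size_t sz = sizeof(ind);
    if (mallctl("arenas.extend", &ind, &sz, NULL, 0) != 0)
        return (1);
    /*
     * Small request: serviced via the tcache, whose fill now comes
     * from arena `ind` instead of the thread's default arena.
     */
    void *p = mallocx(64, MALLOCX_ARENA(ind));
    dallocx(p, 0);
    return (0);
}
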
cbf3a6d70371d2390b8b0e76814e04cc6088002c 11-Feb-2015 Jason Evans <jasone@canonware.com> Move centralized chunk management into arenas.

Migrate all centralized data structures related to huge allocations and
recyclable chunks into arena_t, so that each arena can manage huge
allocations and recyclable virtual memory completely independently of
other arenas.

Add chunk node caching to arenas, in order to avoid contention on the
base allocator.

Use chunks_rtree to look up huge allocations rather than a red-black
tree. Maintain a per arena unsorted list of huge allocations (which
will be needed to enumerate huge allocations during arena reset).

Remove the --enable-ivsalloc option, make ivsalloc() always available,
and use it for size queries if --enable-debug is enabled. The only
practical implications to this removal are that 1) ivsalloc() is now
always available during live debugging (and the underlying radix tree is
available during core-based debugging), and 2) size query validation can
no longer be enabled independent of --enable-debug.

Remove the stats.chunks.{current,total,high} mallctls, and replace their
underlying statistics with simpler atomically updated counters used
exclusively for gdump triggering. These statistics are no longer very
useful because each arena manages chunks independently, and per arena
statistics provide similar information.

Simplify chunk synchronization code, now that base chunk allocation
cannot cause recursive lock acquisition.
/external/jemalloc/src/tcache.c
064dbfbaf76617643bbbe66cbcc880e7ee9ec00f 12-Feb-2015 Jason Evans <jasone@canonware.com> Fix a regression in tcache_bin_flush_small().

Fix a serious regression in tcache_bin_flush_small() that was introduced
by 1cb181ed632e7573fb4eab194e4d216867222d27 (Implement explicit tcache
support.).
/external/jemalloc/src/tcache.c
9e561e8d3f3c625b98b57df069eeac0fa2f522fb 10-Feb-2015 Jason Evans <jasone@canonware.com> Test and fix tcache ID recycling.
/external/jemalloc/src/tcache.c
1cb181ed632e7573fb4eab194e4d216867222d27 30-Jan-2015 Jason Evans <je@fb.com> Implement explicit tcache support.

Add the MALLOCX_TCACHE() and MALLOCX_TCACHE_NONE macros, which can be
used in conjunction with the *allocx() API.

Add the tcache.create, tcache.flush, and tcache.destroy mallctls.

This resolves #145.
/external/jemalloc/src/tcache.c
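
A hedged sketch of the API this entry introduces, matching its shape in later jemalloc releases: create a private tcache, route allocations through it with MALLOCX_TCACHE(), then flush and destroy it via the new mallctls.

#include <jemalloc/jemalloc.h>

int main(void) {
    /* "tcache.create" returns the index of a new private tcache. */
    unsigned tc;
    size_t sz = sizeof(tc);
    if (mallctl("tcache.create", &tc, &sz, NULL, 0) != 0)
        return (1);

    /* Route allocation and deallocation through that tcache. */
    void *p = mallocx(128, MALLOCX_TCACHE(tc));
    dallocx(p, MALLOCX_TCACHE(tc));

    /* Release cached memory, then retire the tcache index. */
    mallctl("tcache.flush", NULL, NULL, &tc, sizeof(tc));
    mallctl("tcache.destroy", NULL, NULL, &tc, sizeof(tc));
    return (0);
}
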
4581b97809e7e545c38b996870a4e7284a620bc5 27-Nov-2014 Jason Evans <je@fb.com> Implement metadata statistics.

There are three categories of metadata:

- Base allocations are used for bootstrap-sensitive internal allocator
data structures.
- Arena chunk headers comprise pages which track the states of the
non-metadata pages.
- Internal allocations differ from application-originated allocations
in that they are for internal use, and that they are omitted from heap
profiles.

The metadata statistics comprise the metadata categories as follows:

- stats.metadata: All metadata -- base + arena chunk headers + internal
allocations.
- stats.arenas.<i>.metadata.mapped: Arena chunk headers.
- stats.arenas.<i>.metadata.allocated: Internal allocations. This is
reported separately from the other metadata statistics because it
overlaps with the allocated and active statistics, whereas the other
metadata statistics do not.

Base allocations are not reported separately, though their magnitude can
be computed by subtracting the arena-specific metadata.

This resolves #163.
/external/jemalloc/src/tcache.c
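
A small sketch of reading the new counters, assuming the usual mallctl convention that an "epoch" update refreshes the statistics snapshot before any stats.* query:

#include <stdio.h>
#include <stdint.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    /* Advance the epoch so the statistics snapshot is current. */
    uint64_t epoch = 1;
    size_t esz = sizeof(epoch);
    mallctl("epoch", &epoch, &esz, &epoch, esz);

    /* All metadata: base + arena chunk headers + internal allocations. */
    size_t metadata;
    size_t sz = sizeof(metadata);
    if (mallctl("stats.metadata", &metadata, &sz, NULL, 0) == 0)
        printf("metadata: %zu bytes\n", metadata);
    return (0);
}
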
fc0b3b7383373d66cfed2cd4e2faa272a6868d32 10-Oct-2014 Jason Evans <jasone@canonware.com> Add configure options.

Add:
--with-lg-page
--with-lg-page-sizes
--with-lg-size-class-group
--with-lg-quantum

Get rid of STATIC_PAGE_SHIFT, in favor of directly setting LG_PAGE.

Fix various edge conditions exposed by the configure options.
/external/jemalloc/src/tcache.c
8bb3198f72fc7587dc93527f9f19fb5be52fa553 08-Oct-2014 Jason Evans <jasone@canonware.com> Refactor/fix arenas manipulation.

Abstract arenas access to use arena_get() (or a0get() where appropriate)
rather than directly reading e.g. arenas[ind]. Prior to the addition of
the arenas.extend mallctl, the worst possible outcome of directly
accessing arenas was a stale read, but arenas.extend may allocate and
assign a new array to arenas.

Add a tsd-based arenas_cache, which amortizes arenas reads. This
introduces some subtle bootstrapping issues, with tsd_boot() now being
split into tsd_boot[01]() to support tsd wrapper allocation
bootstrapping, as well as an arenas_cache_bypass tsd variable which
dynamically terminates allocation of arenas_cache itself.

Promote a0malloc(), a0calloc(), and a0free() to be generally useful for
internal allocation, and use them in several places (more may be
appropriate).

Abstract arena->nthreads management and fix a missing decrement during
thread destruction (recent tsd refactoring left arenas_cleanup()
unused).

Change arena_choose() to propagate OOM, and handle OOM in all callers.
This is important for providing consistent allocation behavior when the
MALLOCX_ARENA() flag is being used. Prior to this fix, an OOM could
silently result in allocation from a different arena than the one
specified.
/external/jemalloc/src/tcache.c
155bfa7da18cab0d21d87aa2dce4554166836f5d 06-Oct-2014 Jason Evans <jasone@canonware.com> Normalize size classes.

Normalize size classes to use the same number of size classes per size
doubling (currently hard coded to 4), across the entire range of size
classes. Small size classes already used this spacing, but in order to
support this change, additional small size classes now fill [4 KiB .. 16
KiB). Large size classes range from [16 KiB .. 4 MiB). Huge size
classes now support non-multiples of the chunk size in order to fill (4
MiB .. 16 MiB).
/external/jemalloc/src/tcache.c
029d44cf8b22aa7b749747bfd585887fb59e0030 04-Oct-2014 Jason Evans <jasone@canonware.com> Fix tsd cleanup regressions.

Fix tsd cleanup regressions that were introduced in
5460aa6f6676c7f253bfcb75c028dfd38cae8aaf (Convert all tsd variables to
reside in a single tsd structure.). These regressions were twofold:

1) tsd_tryget() should never (and need never) return NULL. Rename it to
tsd_fetch() and simplify all callers.
2) tsd_*_set() must only be called when tsd is in the nominal state,
because cleanup happens during the nominal-->purgatory transition,
and re-initialization must not happen while in the purgatory state.
Add tsd_nominal() and use it as needed. Note that tsd_*{p,}_get()
can still be used as long as no re-initialization that would require
cleanup occurs. This means that e.g. the thread_allocated counter
can be updated unconditionally.
/external/jemalloc/src/tcache.c
551ebc43647521bdd0bc78558b106762b3388928 03-Oct-2014 Jason Evans <jasone@canonware.com> Convert to uniform style: cond == false --> !cond
/external/jemalloc/src/tcache.c
5460aa6f6676c7f253bfcb75c028dfd38cae8aaf 23-Sep-2014 Jason Evans <jasone@canonware.com> Convert all tsd variables to reside in a single tsd structure.
/external/jemalloc/src/tcache.c
ff6a31d3b92b7c63446ce645341d2bbd77b67dc6 29-Aug-2014 Qinfan Wu <wqfish@fb.com> Refactor chunk map.

Break the chunk map into two separate arrays, in order to improve cache
locality. This is related to issue #23.
/external/jemalloc/src/tcache.c
58799f6d1c1f58053f4aac1b100ce9049c868039 27-Aug-2014 Qinfan Wu <wqfish@fb.com> Remove junk filling in tcache_bin_flush_small().

Junk filling is done in arena_dalloc_bin_locked(), so arena_alloc_junk_small()
is redundant. Also, we should use arena_dalloc_junk_small() instead of
arena_alloc_junk_small().
/external/jemalloc/src/tcache.c
a7619b7fa56f98d1ca99a23b458696dd37c12b77 15-Apr-2014 Ben Maurer <bmaurer@fb.com> outline rare tcache_get codepaths
/external/jemalloc/src/tcache.c
d82a5e6a34f20698ab9368bb2b4953b81d175552 13-Dec-2013 Jason Evans <jasone@canonware.com> Implement the *allocx() API.

Implement the *allocx() API, which is a successor to the *allocm() API.
The *allocx() functions are slightly simpler to use because they have
fewer parameters, they directly return the results of primary interest,
and mallocx()/rallocx() avoid the strict aliasing pitfall that
allocm()/rallocm() share with posix_memalign(). The following code
violates strict aliasing rules:

foo_t *foo;
allocm((void **)&foo, NULL, 42, 0);

whereas the following is safe:

foo_t *foo;
void *p;
allocm(&p, NULL, 42, 0);
foo = (foo_t *)p;

mallocx() does not have this problem:

foo_t *foo = (foo_t *)mallocx(42, 0);
/external/jemalloc/src/tcache.c
30e7cb11186554eb3ee860856eb5b8d541d7740c 22-Oct-2013 Jason Evans <je@fb.com> Fix a data race for large allocation stats counters.

Reported by Pat Lynch.
/external/jemalloc/src/tcache.c
88c222c8e91499bf5d3fba53b24222df0cda5771 06-Feb-2013 Jason Evans <je@fb.com> Fix a prof-related locking order bug.

Fix a locking order bug that could cause deadlock during fork if heap
profiling were enabled.
/external/jemalloc/src/tcache.c
a3b3386ddde8048b9d6b54c397bb93da5e806cef 13-Nov-2012 Jason Evans <je@fb.com> Avoid arena_prof_accum()-related locking when possible.

Refactor arena_prof_accum() and its callers to avoid arena locking when
prof_interval is 0 (as when profiling is disabled).

Reported by Ben Maurer.
/external/jemalloc/src/tcache.c
609ae595f0358157b19311b0f9f9591db7cee705 11-Oct-2012 Jason Evans <je@fb.com> Add arena-specific and selective dss allocation.

Add the "arenas.extend" mallctl, so that it is possible to create new
arenas that are outside the set that jemalloc automatically multiplexes
threads onto.

Add the ALLOCM_ARENA() flag for {,r,d}allocm(), so that it is possible
to explicitly allocate from a particular arena.

Add the "opt.dss" mallctl, which controls the default precedence of dss
allocation relative to mmap allocation.

Add the "arena.<i>.dss" mallctl, which makes it possible to set the
default dss precedence on a per arena or global basis.

Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".

Add the "stats.arenas.<i>.dss" mallctl.
/external/jemalloc/src/tcache.c
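
A sketch tying the new mallctls together, assuming a build of this era with the experimental allocm() API compiled in: extend the arena set, raise dss precedence for the new arena, and allocate from it explicitly.

#include <stdio.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    /* "arenas.extend" creates an arena outside the automatic set. */
    unsigned ind;
    size_t sz = sizeof(ind);
    if (mallctl("arenas.extend", &ind, &sz, NULL, 0) != 0)
        return (1);

    /* Prefer dss (sbrk) over mmap for this arena. */
    const char *dss = "primary";
    char ctl[32];
    snprintf(ctl, sizeof(ctl), "arena.%u.dss", ind);
    mallctl(ctl, NULL, NULL, &dss, sizeof(dss));

    /* Allocate explicitly from the new arena. */
    void *p;
    if (allocm(&p, NULL, 4096, ALLOCM_ARENA(ind)) != ALLOCM_SUCCESS)
        return (1);
    dallocm(p, 0);
    return (0);
}
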
203484e2ea267e068a68fd2922263f0ff1d5ac6f 02-May-2012 Jason Evans <je@fb.com> Optimize malloc() and free() fast paths.

Embed the bin index for small page runs into the chunk page map, in
order to omit [...] in the following dependent load sequence:
ptr-->mapelm-->[run-->bin-->]bin_info

Move various non-critical code out of the inlined function chain into
helper functions (tcache_event_hard(), arena_dalloc_small(), and
locking).
/external/jemalloc/src/tcache.c
f7088e6c992d079bc3162e0c48ed4dc5def6d263 20-Apr-2012 Jason Evans <je@fb.com> Make arena_salloc() an inline function.
/external/jemalloc/src/tcache.c
122449b073bcbaa504c4f592ea2d733503c272d2 06-Apr-2012 Jason Evans <je@fb.com> Implement Valgrind support, redzones, and quarantine.

Implement Valgrind support, as well as the redzone and quarantine
features, which help Valgrind detect memory errors. Redzones are only
implemented for small objects because the changes necessary to support
redzones around large and huge objects are complicated by in-place
reallocation, to the point that it isn't clear that the maintenance
burden is worth the incremental improvement to Valgrind support.

Merge arena_salloc() and arena_salloc_demote().

Refactor i[v]salloc() to expose the 'demote' option.
/external/jemalloc/src/tcache.c
3701367e4ca6b77109e1cce0a5b98a8ac69cf505 06-Apr-2012 Jason Evans <je@fb.com> Always initialize tcache data structures.

Always initialize tcache data structures if the tcache configuration
option is enabled, regardless of opt_tcache. This fixes
"thread.tcache.enabled" mallctl manipulation in the case when opt_tcache
is false.
/external/jemalloc/src/tcache.c
ae4c7b4b4092906c641d69b4bf9fcb4a7d50790d 02-Apr-2012 Jason Evans <jasone@canonware.com> Clean up *PAGE* macros.

s/PAGE_SHIFT/LG_PAGE/g and s/PAGE_SIZE/PAGE/g.

Remove remnants of the dynamic-page-shift code.

Rename the "arenas.pagesize" mallctl to "arenas.page".

Remove the "arenas.chunksize" mallctl, which is redundant with
"opt.lg_chunk".
/external/jemalloc/src/tcache.c
d4be8b7b6ee2e21d079180455d4ccbf45cc1cee7 27-Mar-2012 Jason Evans <jasone@canonware.com> Add the "thread.tcache.enabled" mallctl.
/external/jemalloc/src/tcache.c
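
A minimal sketch of the new control, assuming the standard mallctl read/write convention: the previous value comes back through oldp while newp installs the new setting.

#include <stdbool.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    /* Read the current setting and disable the calling thread's tcache. */
    bool old;
    size_t sz = sizeof(old);
    bool enable = false;
    mallctl("thread.tcache.enabled", &old, &sz, &enable, sizeof(enable));
    return (0);
}
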
cd9a1346e96f71bdecdc654ea50fc62d76371e74 22-Mar-2012 Jason Evans <je@fb.com> Implement tsd.

Implement tsd, which is a TLS/TSD abstraction that uses one or both
internally. Modify bootstrapping such that no tsd's are utilized until
allocation is safe.

Remove malloc_[v]tprintf(), and use malloc_snprintf() instead.

Fix %p argument size handling in malloc_vsnprintf().

Fix a long-standing statistics-related bug in the "thread.arena"
mallctl that could cause crashes due to linked list corruption.
/external/jemalloc/src/tcache.c
e24c7af35d1e9d24d02166ac98cfca7cf807ff13 19-Mar-2012 Jason Evans <je@fb.com> Invert NO_TLS to JEMALLOC_TLS.
/external/jemalloc/src/tcache.c
4507f34628dfae26e6b0a6faa13e5f9a49600616 05-Mar-2012 Jason Evans <je@fb.com> Remove the lg_tcache_gc_sweep option.

Remove the lg_tcache_gc_sweep option, because it is no longer
very useful. Prior to the addition of dynamic adjustment of tcache fill
count, it was possible for fill/flush overhead to be a problem, but this
problem no longer occurs.
/external/jemalloc/src/tcache.c
b172610317babc7f365584ddd7fdaf4eb8d9d04c 29-Feb-2012 Jason Evans <je@fb.com> Simplify small size class infrastructure.

Program-generate small size class tables for all valid combinations of
LG_TINY_MIN, LG_QUANTUM, and PAGE_SHIFT. Use the appropriate table to generate
all relevant data structures, and remove the distinction between
tiny/quantum/cacheline/subpage bins.

Remove --enable-dynamic-page-shift. This option didn't prove useful in
practice, and it prevented optimizations.

Add Tilera architecture support.
/external/jemalloc/src/tcache.c
962463d9b57bcc65de2fa108a691b4183b9b2faf 13-Feb-2012 Jason Evans <je@fb.com> Streamline tcache-related malloc/free fast paths.

tcache_get() is inlined, so do the config_tcache check inside
tcache_get() and simplify its callers.

Make arena_malloc() an inline function, since it is part of the malloc()
fast path.

Remove conditional logic that caused build issues if --disable-tcache
was specified.
/external/jemalloc/src/tcache.c
7372b15a31c63ac5cb9ed8aeabc2a0a3c005e8bf 11-Feb-2012 Jason Evans <je@fb.com> Reduce cpp conditional logic complexity.

Convert configuration-related cpp conditional logic to use static
constant variables, e.g.:

#ifdef JEMALLOC_DEBUG
[...]
#endif

becomes:

if (config_debug) {
[...]
}

The advantage is clearer, more concise code. The main disadvantage is
that data structures no longer have conditionally defined fields, so
they pay the cost of all fields regardless of whether they are used. In
practice, this is only a minor concern; config_stats will go away in an
upcoming change, and config_prof is the only other major feature that
depends on more than a few special-purpose fields.
/external/jemalloc/src/tcache.c
7427525c28d58c423a68930160e3b0fe577fe953 01-Apr-2011 Jason Evans <jasone@canonware.com> Move repo contents in jemalloc/ to top level.
/external/jemalloc/src/tcache.c