8d8f9aeeaa77514d5732db5bd0111232af21fcfd |
|
29-May-2014 |
Jason Evans <je@fb.com> |
Add size class computation capability, currently used only as validation of the size class lookup tables. Generalize the size class spacing used for bins, for eventual use throughout the full range of allocation sizes.
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|
12141150fdbda57651a53ae2fe0edaea4891d814 |
|
16-May-2014 |
Jason Evans <je@fb.com> |
Refactor huge allocation to be managed by arenas (though the global red-black tree of huge allocations remains for lookup during deallocation). This is the logical conclusion of recent changes that 1) made per-arena dss precedence apply to huge allocation, and 2) made it possible to replace the per-arena chunk allocation/deallocation functions.

Remove the top-level huge stats, and replace them with per-arena huge stats.

Normalize function names and types to *dalloc* (some were *dealloc*).

Remove the --enable-mremap option. As jemalloc currently operates, this is a performance regression for some applications, but planned work to logarithmically space huge size classes should provide similar amortized performance. The motivation for this change was that mremap-based huge reallocation forced leaky abstractions that prevented refactoring.
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|
59113bcc94b9fc7549611afb99ca99cad1a7f196 |
|
06-May-2014 |
aravind <aravind@fb.com> |
Add support for user-specified chunk allocators/deallocators. Add new mallctl endpoints "arena.<i>.chunk.alloc" and "arena.<i>.chunk.dealloc" to allow userspace to configure jemalloc's chunk allocator and deallocator on a per-arena basis.
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|
3541a904d6fb949f3f0aea05418ccce7cbd4b705 |
|
17-Apr-2014 |
Jason Evans <je@fb.com> |
Refactor small_size2bin and small_bin2size to be inline functions rather than directly accessed arrays.
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|
3e3caf03af6ca579e473ace4daf25f63102aca4f |
|
17-Apr-2014 |
Jason Evans <jasone@canonware.com> |
Merge pull request #73 from bmaurer/smallmalloc Smaller malloc hot path
|
021136ce4db79f50031a1fd5dd751891888fbc7b |
|
16-Apr-2014 |
Ben Maurer <bmaurer@fb.com> |
Create a const array with only a small bin-to-size map
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|
6c39f9e059d0825f4c29d8cec9f318b798912c3c |
|
15-Apr-2014 |
Ben Maurer <bmaurer@fb.com> |
Refactor profiling to use only a bytes-until-next-sample variable.
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|
a7619b7fa56f98d1ca99a23b458696dd37c12b77 |
|
15-Apr-2014 |
Ben Maurer <bmaurer@fb.com> |
Outline rare tcache_get() code paths.
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|
bd87b01999416ec7418ff8bdb504d9b6c009ff68 |
|
16-Apr-2014 |
Jason Evans <je@fb.com> |
Optimize Valgrind integration. Forcefully disable tcache if running inside Valgrind, and remove Valgrind calls in tcache-specific code. Restructure Valgrind-related code to move most Valgrind calls out of the fast path functions. Take advantage of static knowledge to elide some branches in JEMALLOC_VALGRIND_REALLOC().
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|
ecd3e59ca351d7111ec72a327fe0c009f2aa69a0 |
|
15-Apr-2014 |
Jason Evans <je@fb.com> |
Remove the "opt.valgrind" mallctl because it is unnecessary -- jemalloc automatically detects whether it is running inside Valgrind.
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|
9790b9667fd975b1f9a4f108f9d0a20ab265c6b6 |
|
15-Apr-2014 |
Jason Evans <jasone@canonware.com> |
Remove the *allocm() API, which is superseded by the *allocx() API.
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|
9b0cbf0850b130a9b0a8c58bd10b2926b2083510 |
|
11-Apr-2014 |
Jason Evans <je@fb.com> |
Remove support for non-prof-promote heap profiling metadata. Make promotion of sampled small objects to large objects mandatory, so that profiling metadata can always be stored in the chunk map, rather than requiring one pointer per small region in each small-region page run. In practice the non-prof-promote code was only useful when using jemalloc to track all objects and report them as leaks at program exit. However, Valgrind is at least as good a tool for this particular use case. Furthermore, the non-prof-promote code is getting in the way of some optimizations that will make heap profiling much cheaper for the predominant use case (sampling a small representative proportion of all allocations).
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|
8a26eaca7f4c95771ecbf096caeeba14fbe1122f |
|
31-Mar-2014 |
Jason Evans <jasone@canonware.com> |
Add private namespace mangling for huge_dss_prec_get().
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|
772163b4f3d8e9a12343e9215f6b070068507604 |
|
18-Jan-2014 |
Jason Evans <je@fb.com> |
Add heap profiling tests. Fix a regression in prof_dump_ctx() due to an uninitialized variable. This was caused by revision 4f37ef693e3d5903ce07dc0b61c0da320b35e3d9, so no releases are affected.
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|
b2c31660be917ea6d59cd54e6f650b06b5e812ed |
|
13-Jan-2014 |
Jason Evans <je@fb.com> |
Extract profiling code from [re]allocation functions. Extract profiling code from malloc(), imemalign(), calloc(), realloc(), mallocx(), rallocx(), and xallocx(). This slightly reduces the amount of code compiled into the fast paths, but the primary benefit is the combinatorial complexity reduction.

Simplify iralloc[t]() by creating a separate ixalloc() that handles the no-move cases.

Further simplify [mrxn]allocx() (and by implication [mrn]allocm()) to make request size overflows due to size class and/or alignment constraints trigger undefined behavior (detected by debug-only assertions).

Report ENOMEM rather than EINVAL if an OOM occurs during heap profiling backtrace creation in imemalign(). This bug impacted posix_memalign() and aligned_alloc().
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|
6b694c4d47278cddfaaedeb7ee49fa5757e35ed5 |
|
08-Jan-2014 |
Jason Evans <je@fb.com> |
Add junk/zero filling unit tests, and fix discovered bugs. Fix growing large reallocation to junk fill new space. Fix huge deallocation to junk fill when munmap is disabled.
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|
b980cc774a9ccb208a82f4e9ccdcc695d06a960a |
|
03-Jan-2014 |
Jason Evans <je@fb.com> |
Add rtree unit tests.
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|
0d6c5d8bd0d866a0ce4ce321259cec65d6459821 |
|
18-Dec-2013 |
Jason Evans <jasone@canonware.com> |
Add quarantine unit tests. Verify that freed regions are quarantined, and that redzone corruption is detected.

Introduce a testing idiom for intercepting/replacing internal functions. In this case the replaced function is ordinarily a static function, but the idiom should work similarly for library-private functions.
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|
d82a5e6a34f20698ab9368bb2b4953b81d175552 |
|
13-Dec-2013 |
Jason Evans <jasone@canonware.com> |
Implement the *allocx() API, which is a successor to the *allocm() API. The *allocx() functions are slightly simpler to use because they have fewer parameters, they directly return the results of primary interest, and mallocx()/rallocx() avoid the strict aliasing pitfall that allocm()/rallocm() share with posix_memalign().

The following code violates strict aliasing rules:

    foo_t *foo;
    allocm((void **)&foo, NULL, 42, 0);

whereas the following is safe:

    foo_t *foo;
    void *p;
    allocm(&p, NULL, 42, 0);
    foo = (foo_t *)p;

mallocx() does not have this problem:

    foo_t *foo = (foo_t *)mallocx(42, 0);
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|
86abd0dcd8e478759fe409d338d11558c4cec427 |
|
01-Dec-2013 |
Jason Evans <jasone@canonware.com> |
Refactor to support more varied testing. Refactor the test harness to support three types of tests:

- unit: White box unit tests. These tests have full access to all internal jemalloc library symbols. Though in actuality all symbols are prefixed by jet_, macro-based name mangling abstracts this away from test code.
- integration: Black box integration tests. These tests link with the installable shared jemalloc library, and with the exception of some utility code and configure-generated macro definitions, they have no access to jemalloc internals.
- stress: Black box stress tests. These tests link with the installable shared jemalloc library, as well as with an internal allocator with symbols prefixed by jet_ (same as for unit tests) that can be used to allocate data structures that are internal to the test code.

Move existing tests into test/{unit,integration}/ as appropriate.

Split out internal parts of jemalloc_defs.h.in and put them in jemalloc_internal_defs.h.in. This reduces internals exposure to applications that #include <jemalloc/jemalloc.h>.

Refactor jemalloc.h header generation so that a single header file results, and the prototypes can be used to generate jet_ prototypes for tests. Split jemalloc.h.in into multiple parts (jemalloc_defs.h.in, jemalloc_macros.h.in, jemalloc_protos.h.in, jemalloc_mangle.h.in) and use a shell script to generate a unified jemalloc.h at configure time.

Change the default private namespace prefix from "" to "je_".

Add missing private namespace mangling.

Remove hard-coded private_namespace.h. Instead generate it and private_unnamespace.h from private_symbols.txt. Use similar logic for public symbols, which aids in name mangling for jet_ symbols.

Add test_warn() and test_fail(). Replace existing exit(1) calls with test_fail() calls.
/external/jemalloc/include/jemalloc/internal/private_symbols.txt
|