962a2979e353f876f3725417179f201e671d9dbb |
|
21-Oct-2016 |
Jason Evans <jasone@canonware.com> |
Do not (recursively) allocate within tsd_fetch(). Refactor tsd so that tsdn_fetch() does not trigger allocation, since allocation could cause infinite recursion. This resolves #458.
/external/jemalloc/include/jemalloc/internal/prof.h
|
20cd2de5ef622c3af8b3e4aba897aff7ddd451a7 |
|
02-Jun-2016 |
Jason Evans <jasone@canonware.com> |
Add a missing prof_alloc_rollback() call. In the case where prof_alloc_prep() is called with an over-estimate of allocation size, and sampling doesn't end up being triggered, the tctx must be discarded.
/external/jemalloc/include/jemalloc/internal/prof.h
|
c1e00ef2a6442d1d047950247c757821560db329 |
|
11-May-2016 |
Jason Evans <jasone@canonware.com> |
Resolve bootstrapping issues when embedded in FreeBSD libc. b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple online locking validator.) caused a broad propagation of tsd throughout the internal API, but tsd_fetch() was designed to fail prior to tsd bootstrapping. Fix this by splitting tsd_t into non-nullable tsd_t and nullable tsdn_t, and modifying all internal APIs that do not critically rely on tsd to take nullable pointers. Furthermore, add the tsd_booted_get() function so that tsdn_fetch() can probe whether tsd bootstrapping is complete and return NULL if not. All dangerous conversions of nullable pointers are tsdn_tsd() calls that assert-fail on invalid conversion.
/external/jemalloc/include/jemalloc/internal/prof.h
|
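The tsd_t/tsdn_t split above can be illustrated with a minimal sketch. The types, fields, and bootstrapping flag here are simplified stand-ins, not jemalloc's actual definitions; only the shape (nullable fetch plus an assert-checked conversion) follows the commit message.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct tsd_s { int dummy; } tsd_t;
typedef tsd_t tsdn_t;               /* a tsdn_t pointer may be NULL */

static bool tsd_booted = false;     /* set once TLS bootstrapping finishes */
static tsd_t tsd_instance;

bool tsd_booted_get(void) { return tsd_booted; }
void tsd_boot(void) { tsd_booted = true; }

/* Nullable fetch: returns NULL until bootstrapping completes, so
 * bootstrap-path callers can back off instead of touching unbooted tsd. */
tsdn_t *tsdn_fetch(void) {
    if (!tsd_booted_get())
        return NULL;
    return &tsd_instance;
}

/* The one dangerous conversion: assert-fails on a NULL tsdn rather than
 * silently handing out an invalid pointer. */
tsd_t *tsdn_tsd(tsdn_t *tsdn) {
    assert(tsdn != NULL);
    return tsdn;
}
```

Internal APIs that do not critically rely on tsd take a `tsdn_t *` and tolerate NULL; the few that do rely on it convert via `tsdn_tsd()` and fail loudly.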
174c0c3a9c63b3a0bfa32381148b537e9b9af96d |
|
26-Apr-2016 |
Jason Evans <jasone@canonware.com> |
Fix fork()-related lock rank ordering reversals.
/external/jemalloc/include/jemalloc/internal/prof.h
|
b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 |
|
14-Apr-2016 |
Jason Evans <jasone@canonware.com> |
Add witness, a simple online locking validator. This resolves #358.
/external/jemalloc/include/jemalloc/internal/prof.h
|
f4a0f32d340985de477bbe329ecdaecd69ed1055 |
|
27-Oct-2015 |
Qi Wang <interwq@gwu.edu> |
Fast-path improvement: reduce # of branches and unnecessary operations. - Combine multiple runtime branches into a single malloc_slow check. - Avoid calling arena_choose / size2index / index2size on fast path. - A few micro optimizations.
/external/jemalloc/include/jemalloc/internal/prof.h
|
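The malloc_slow idea above can be sketched as folding several rarely-true feature flags into one precomputed boolean, so the fast path pays a single well-predicted branch. The flag names below are illustrative, not jemalloc's exact option set.

```c
#include <stdbool.h>

bool opt_zero, opt_junk, opt_quarantine, opt_prof_active;
bool malloc_slow;

/* Recomputed whenever an option changes (normally once, at init). */
void malloc_slow_update(void) {
    malloc_slow = opt_zero | opt_junk | opt_quarantine | opt_prof_active;
}

int take_fast_path(void) {
    if (!malloc_slow)
        return 1;   /* common case: no special features enabled */
    return 0;       /* slow path handles zeroing, junk fill, etc. */
}
```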
708ed79834fc3b8e5b14dbb0128a0ebfce63a38f |
|
15-Sep-2015 |
Jason Evans <jasone@canonware.com> |
Resolve an unsupported special case in arena_prof_tctx_set(). Add arena_prof_tctx_reset() and use it instead of arena_prof_tctx_set() when resetting the tctx pointer during reallocation, which happens whenever an originally sampled reallocated object is not sampled during reallocation. This regression was introduced by 594c759f37c301d0245dc2accf4d4aaf9d202819 (Optimize arena_prof_tctx_set().)
/external/jemalloc/include/jemalloc/internal/prof.h
|
ea8d97b8978a0c0423f0ed64332463a25b787c3d |
|
15-Sep-2015 |
Jason Evans <jasone@canonware.com> |
Fix prof_{malloc,free}_sample_object() call order in prof_realloc(). Fix prof_realloc() to call prof_free_sampled_object() after calling prof_malloc_sample_object(). Prior to this fix, if tctx and old_tctx were the same, the tctx could have been prematurely destroyed.
/external/jemalloc/include/jemalloc/internal/prof.h
|
cec0d63d8bc46205d38456024176a0ece590253e |
|
15-Sep-2015 |
Jason Evans <jasone@canonware.com> |
Make one call to prof_active_get_unlocked() per allocation event. Make one call to prof_active_get_unlocked() per allocation event, and use the result throughout the relevant functions that handle an allocation event. Also add a missing check in prof_realloc(). These fixes protect allocation events against concurrent prof_active changes.
/external/jemalloc/include/jemalloc/internal/prof.h
|
a00b10735a80f7070714b278c8acdad4473bea69 |
|
10-Sep-2015 |
Jason Evans <jasone@canonware.com> |
Fix "prof.reset" mallctl-related corruption. Fix heap profiling to distinguish among otherwise identical sample sites with interposed resets (triggered via the "prof.reset" mallctl). This bug could cause data structure corruption that would most likely result in a segfault.
/external/jemalloc/include/jemalloc/internal/prof.h
|
594c759f37c301d0245dc2accf4d4aaf9d202819 |
|
02-Sep-2015 |
Jason Evans <jasone@canonware.com> |
Optimize arena_prof_tctx_set(). Optimize arena_prof_tctx_set() to avoid reading run metadata when deciding whether it's actually necessary to write.
/external/jemalloc/include/jemalloc/internal/prof.h
|
04211e226628c41da4b3804ba411b5dd4b3a02ab |
|
16-Mar-2015 |
Jason Evans <je@fb.com> |
Fix heap profiling regressions. Remove the prof_tctx_state_destroying transitory state and instead add the tctx_uid field, so that the tuple <thr_uid, tctx_uid> uniquely identifies a tctx. This assures that tctx's are well ordered even when more than two with the same thr_uid coexist. A previous attempted fix based on prof_tctx_state_destroying was only sufficient for protecting against two coexisting tctx's, but it also introduced a new dumping race. These regressions were introduced by 602c8e0971160e4b85b08b16cf8a2375aa24bc04 (Implement per thread heap profiling.) and 764b00023f2bc97f240c3a758ed23ce9c0ad8526 (Fix a heap profiling regression.).
/external/jemalloc/include/jemalloc/internal/prof.h
|
764b00023f2bc97f240c3a758ed23ce9c0ad8526 |
|
14-Mar-2015 |
Jason Evans <je@fb.com> |
Fix a heap profiling regression. Add the prof_tctx_state_destroying transitionary state to fix a race between a thread destroying a tctx and another thread creating a new equivalent tctx. This regression was introduced by 602c8e0971160e4b85b08b16cf8a2375aa24bc04 (Implement per thread heap profiling.).
/external/jemalloc/include/jemalloc/internal/prof.h
|
88fef7ceda6269598cef0cee8b984c8765673c27 |
|
12-Feb-2015 |
Jason Evans <je@fb.com> |
Refactor huge_*() calls into arena internals. Make redirects to the huge_*() API the arena code's responsibility, since arenas now take responsibility for all allocation sizes.
/external/jemalloc/include/jemalloc/internal/prof.h
|
5b8ed5b7c91939f64f14fc48be84ed20e3f023f4 |
|
26-Jan-2015 |
Jason Evans <jasone@canonware.com> |
Implement the prof.gdump mallctl. This feature makes it possible to toggle the gdump feature on/off during program execution, whereas the opt.prof_gdump option can only be set during program startup. This resolves #72.
/external/jemalloc/include/jemalloc/internal/prof.h
|
cfc5706f6977a48f3b82d69cd68aa1cf8802fb8d |
|
31-Oct-2014 |
Jason Evans <jasone@canonware.com> |
Miscellaneous cleanups.
/external/jemalloc/include/jemalloc/internal/prof.h
|
809b0ac3919da60c20ad59517ef560d0df639f3b |
|
23-Oct-2014 |
Daniel Micay <danielmicay@gmail.com> |
Mark huge allocations as unlikely. This cleans up the fast path a bit more by moving more code away from it.
/external/jemalloc/include/jemalloc/internal/prof.h
|
44c97b712ef1669a4c75ea97e8d47c0535e9ec71 |
|
12-Oct-2014 |
Jason Evans <jasone@canonware.com> |
Fix a prof_tctx_t/prof_tdata_t cleanup race. Fix a prof_tctx_t/prof_tdata_t cleanup race by storing a copy of thr_uid in prof_tctx_t, so that the associated tdata need not be present during tctx teardown.
/external/jemalloc/include/jemalloc/internal/prof.h
|
34e85b4182d5ae029b558aae3da25fff7c3efe12 |
|
04-Oct-2014 |
Jason Evans <jasone@canonware.com> |
Make prof-related inline functions always-inline.
/external/jemalloc/include/jemalloc/internal/prof.h
|
029d44cf8b22aa7b749747bfd585887fb59e0030 |
|
04-Oct-2014 |
Jason Evans <jasone@canonware.com> |
Fix tsd cleanup regressions. Fix tsd cleanup regressions that were introduced in 5460aa6f6676c7f253bfcb75c028dfd38cae8aaf (Convert all tsd variables to reside in a single tsd structure.). These regressions were twofold: 1) tsd_tryget() should never (and need never) return NULL. Rename it to tsd_fetch() and simplify all callers. 2) tsd_*_set() must only be called when tsd is in the nominal state, because cleanup happens during the nominal-->purgatory transition, and re-initialization must not happen while in the purgatory state. Add tsd_nominal() and use it as needed. Note that tsd_*{p,}_get() can still be used as long as no re-initialization that would require cleanup occurs. This means that e.g. the thread_allocated counter can be updated unconditionally.
/external/jemalloc/include/jemalloc/internal/prof.h
|
fc12c0b8bc1160530d1e3e641b76d2a4f793136f |
|
04-Oct-2014 |
Jason Evans <jasone@canonware.com> |
Implement/test/fix prof-related mallctl's. Implement/test/fix the opt.prof_thread_active_init, prof.thread_active_init, and thread.prof.active mallctl's. Test/fix the thread.prof.name mallctl. Refactor opt_prof_active to be read-only and move mutable state into the prof_active variable. Stop leaning on ctl-related locking for protection.
/external/jemalloc/include/jemalloc/internal/prof.h
|
551ebc43647521bdd0bc78558b106762b3388928 |
|
03-Oct-2014 |
Jason Evans <jasone@canonware.com> |
Convert to uniform style: cond == false --> !cond
/external/jemalloc/include/jemalloc/internal/prof.h
|
20c31deaae38ed9aa4fe169ed65e0c45cd542955 |
|
03-Oct-2014 |
Jason Evans <jasone@canonware.com> |
Test prof.reset mallctl and fix numerous discovered bugs.
/external/jemalloc/include/jemalloc/internal/prof.h
|
6ef80d68f092caf3b3802a73b8d716057b41864c |
|
25-Sep-2014 |
Jason Evans <jasone@canonware.com> |
Fix profile dumping race. Fix a race that caused a non-critical assertion failure. To trigger the race, a thread had to be part way through initializing a new sample, such that it was discoverable by the dumping thread, but not yet linked into its gctx by the time a later dump phase would normally have reset its state to 'nominal'. Additionally, lock access to the state field during modification to transition to the dumping state. It's not apparent that this oversight could have caused an actual problem due to outer locking that protects the dumping machinery, but the added locking pedantically follows the stated locking protocol for the state field.
/external/jemalloc/include/jemalloc/internal/prof.h
|
5460aa6f6676c7f253bfcb75c028dfd38cae8aaf |
|
23-Sep-2014 |
Jason Evans <jasone@canonware.com> |
Convert all tsd variables to reside in a single tsd structure.
/external/jemalloc/include/jemalloc/internal/prof.h
|
9c640bfdd4e2f25180a32ed3704ce8e4c4cc21f1 |
|
12-Sep-2014 |
Jason Evans <jasone@canonware.com> |
Apply likely()/unlikely() to allocation/deallocation fast paths.
/external/jemalloc/include/jemalloc/internal/prof.h
|
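The likely()/unlikely() annotations applied above are conventionally defined in terms of GCC's __builtin_expect; the sketch below shows the common pattern (jemalloc's own macros are spelled similarly). The example function is hypothetical.

```c
#if defined(__GNUC__)
#  define likely(x)   __builtin_expect(!!(x), 1)
#  define unlikely(x) __builtin_expect(!!(x), 0)
#else
#  define likely(x)   (x)
#  define unlikely(x) (x)
#endif

/* Hypothetical fast-path check: the compiler keeps the common case on
 * the straight-line path and moves the error branch out of line. */
int checked_size(unsigned size, unsigned max) {
    if (unlikely(size > max))
        return -1;          /* rare error path */
    return (int)size;       /* common case */
}
```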
6e73dc194ee9682d3eacaf725a989f04629718f7 |
|
10-Sep-2014 |
Jason Evans <je@fb.com> |
Fix a profile sampling race. Fix a profile sampling race that was due to preparing to sample, yet doing nothing to assure that the context remains valid until the stats are updated. These regressions were caused by 602c8e0971160e4b85b08b16cf8a2375aa24bc04 (Implement per thread heap profiling.), which did not make it into any releases prior to these fixes.
/external/jemalloc/include/jemalloc/internal/prof.h
|
6fd53da030b5e9161a49d6010a8b38499ca2a124 |
|
09-Sep-2014 |
Jason Evans <je@fb.com> |
Fix prof_tdata_get()-related regressions. Fix prof_tdata_get() to avoid dereferencing an invalid tdata pointer (when it's PROF_TDATA_STATE_{REINCARNATED,PURGATORY}). Fix prof_tdata_get() callers to check for invalid results besides NULL (PROF_TDATA_STATE_{REINCARNATED,PURGATORY}). These regressions were caused by 602c8e0971160e4b85b08b16cf8a2375aa24bc04 (Implement per thread heap profiling.), which did not make it into any releases prior to these fixes.
/external/jemalloc/include/jemalloc/internal/prof.h
|
602c8e0971160e4b85b08b16cf8a2375aa24bc04 |
|
19-Aug-2014 |
Jason Evans <jasone@canonware.com> |
Implement per thread heap profiling. Rename data structures (prof_thr_cnt_t-->prof_tctx_t, prof_ctx_t-->prof_gctx_t), and convert to storing a prof_tctx_t for sampled objects. Convert PROF_ALLOC_PREP() to prof_alloc_prep(), since precise backtrace depth within jemalloc functions is no longer an issue (pprof prunes irrelevant frames). Implement mallctl's: - prof.reset implements full sample data reset, and optional change of sample interval. - prof.lg_sample reads the current sample interval (opt.lg_prof_sample was the permanent source of truth prior to prof.reset). - thread.prof.name provides naming capability for threads within heap profile dumps. - thread.prof.active makes it possible to activate/deactivate heap profiling for individual threads. Modify the heap dump files to contain per thread heap profile data. This change is incompatible with the existing pprof, which will require enhancements to read and process the enriched data.
/external/jemalloc/include/jemalloc/internal/prof.h
|
3a81cbd2d4f2d8c052f11f4b0b73ee5c84a33d4f |
|
16-Aug-2014 |
Jason Evans <jasone@canonware.com> |
Dump heap profile backtraces in a stable order. Also iterate over per thread stats in a stable order, which prepares the way for stable ordering of per thread heap profile dumps.
/external/jemalloc/include/jemalloc/internal/prof.h
|
ab532e97991d190e9368781cf308c60c2319b933 |
|
16-Aug-2014 |
Jason Evans <jasone@canonware.com> |
Directly embed prof_ctx_t's bt.
/external/jemalloc/include/jemalloc/internal/prof.h
|
b41ccdb125b312d4522da1a80091a0137773c964 |
|
16-Aug-2014 |
Jason Evans <jasone@canonware.com> |
Convert prof_tdata_t's bt2cnt to a comprehensive map. Treat prof_tdata_t's bt2cnt as a comprehensive map of the thread's extant allocation samples (do not limit the total number of entries). This helps prepare the way for per thread heap profiling.
/external/jemalloc/include/jemalloc/internal/prof.h
|
6f001059aa33d77a3cb7799002044faf8dd08fc0 |
|
23-Apr-2014 |
Jason Evans <jasone@canonware.com> |
Simplify backtracing. Simplify backtracing to not ignore any frames, and compensate for this in pprof in order to increase flexibility with respect to function-based refactoring even in the presence of non-deterministic inlining. Modify pprof to blacklist all jemalloc allocation entry points including non-standard ones like mallocx(), and ignore all allocator-internal frames. Prior to this change, pprof excluded the specifically blacklisted functions from backtraces, but it left allocator-internal frames intact.
/external/jemalloc/include/jemalloc/internal/prof.h
|
0b49403958b68294eee0eca8a0b5195e761cf316 |
|
17-Apr-2014 |
Jason Evans <je@fb.com> |
Fix debug-only compilation failures. Fix debug-only compilation failures introduced by changes to prof_sample_accum_update() in: 6c39f9e059d0825f4c29d8cec9f318b798912c3c refactor profiling. only use a bytes till next sample variable.
/external/jemalloc/include/jemalloc/internal/prof.h
|
6c39f9e059d0825f4c29d8cec9f318b798912c3c |
|
15-Apr-2014 |
Ben Maurer <bmaurer@fb.com> |
Refactor profiling: only use a bytes-till-next-sample variable.
/external/jemalloc/include/jemalloc/internal/prof.h
|
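The single bytes-till-next-sample counter that refactor introduces can be sketched as below. Names are illustrative; real jemalloc draws each interval from a geometric distribution, while this sketch uses a fixed interval to stay deterministic.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t bytes_until_sample;   /* per-thread countdown */
} sampler_t;

void sampler_reset(sampler_t *s, uint64_t interval) {
    s->bytes_until_sample = interval;
}

/* Per-allocation check: one subtraction and one branch on the fast path. */
bool sample_check(sampler_t *s, uint64_t usize, uint64_t interval) {
    if (usize < s->bytes_until_sample) {
        s->bytes_until_sample -= usize;
        return false;              /* no sample for this allocation */
    }
    sampler_reset(s, interval);    /* take a sample, restart the countdown */
    return true;
}
```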
9b0cbf0850b130a9b0a8c58bd10b2926b2083510 |
|
11-Apr-2014 |
Jason Evans <je@fb.com> |
Remove support for non-prof-promote heap profiling metadata. Make promotion of sampled small objects to large objects mandatory, so that profiling metadata can always be stored in the chunk map, rather than requiring one pointer per small region in each small-region page run. In practice the non-prof-promote code was only useful when using jemalloc to track all objects and report them as leaks at program exit. However, Valgrind is at least as good a tool for this particular use case. Furthermore, the non-prof-promote code is getting in the way of some optimizations that will make heap profiling much cheaper for the predominant use case (sampling a small representative proportion of all allocations).
/external/jemalloc/include/jemalloc/internal/prof.h
|
5f60afa01eb2cf7d44024d162a1ecc6cceedcca1 |
|
29-Jan-2014 |
Jason Evans <jasone@canonware.com> |
Avoid a compiler warning. Avoid copying "jeprof" to a 1-byte buffer within prof_boot0() when heap profiling is disabled. Although this is dead code under such conditions, the compiler doesn't figure that part out. Reported by Eduardo Silva.
/external/jemalloc/include/jemalloc/internal/prof.h
|
772163b4f3d8e9a12343e9215f6b070068507604 |
|
18-Jan-2014 |
Jason Evans <je@fb.com> |
Add heap profiling tests. Fix a regression in prof_dump_ctx() due to an uninitialized variable. This was caused by revision 4f37ef693e3d5903ce07dc0b61c0da320b35e3d9, so no releases are affected.
/external/jemalloc/include/jemalloc/internal/prof.h
|
eefdd02e70ec1b9cf11920fcff585835dcbd766b |
|
17-Jan-2014 |
Jason Evans <je@fb.com> |
Fix a variable prototype/definition mismatch.
/external/jemalloc/include/jemalloc/internal/prof.h
|
4f37ef693e3d5903ce07dc0b61c0da320b35e3d9 |
|
16-Jan-2014 |
Jason Evans <je@fb.com> |
Refactor prof_dump() to reduce contention. Refactor prof_dump() to use a two pass algorithm, and prof_leave() prior to the second pass. This avoids write(2) system calls while holding critical prof resources. Fix prof_dump() to close the dump file descriptor for all relevant error paths. Minimize the size of prof-related static buffers when prof is disabled. This saves roughly 65 KiB of application memory for non-prof builds. Refactor prof_ctx_init() out of prof_lookup_global().
/external/jemalloc/include/jemalloc/internal/prof.h
|
665769357cd77b74e00a146f196fff19243b33c4 |
|
16-Dec-2013 |
Jason Evans <jasone@canonware.com> |
Optimize arena_prof_ctx_set(). Refactor such that arena_prof_ctx_set() receives usize as an argument, and use it to determine whether to handle ptr as a small region, rather than reading the chunk page map.
/external/jemalloc/include/jemalloc/internal/prof.h
|
b1941c615023cab9baf0a78a28df1e3b4972434f |
|
10-Dec-2013 |
Jason Evans <jasone@canonware.com> |
Add probability distribution utility code. Add probability distribution utility code that enables generation of random deviates drawn from normal, Chi-square, and Gamma distributions. Fix format strings in several of the assert_* macros (remove a %s). Clean up header issues; it's critical that system headers are not included after internal definitions that potentially do things like: #define inline Fix the build system to incorporate header dependencies for the test library C files.
/external/jemalloc/include/jemalloc/internal/prof.h
|
d37d5adee4e4570cfda83e5f1b948a25b9226224 |
|
06-Dec-2013 |
Jason Evans <jasone@canonware.com> |
Disable floating point code/linking when possible. Unless heap profiling is enabled, disable floating point code and don't link with libm. This, in combination with e.g. EXTRA_CFLAGS=-mno-sse on x64 systems, makes it possible to completely disable floating point register use. Some versions of glibc neglect to save/restore caller-saved floating point registers during dynamic lazy symbol loading, and the symbol loading code uses whatever malloc the application happens to have linked/loaded with, the result being potential floating point register corruption.
/external/jemalloc/include/jemalloc/internal/prof.h
|
bbe29d374d0fa5f4684621f16c099294e56c26ef |
|
31-Jan-2013 |
Jason Evans <je@fb.com> |
Fix potential TLS-related memory corruption. Avoid writing to uninitialized TLS as a side effect of deallocation. Initializing TLS during deallocation is unsafe because it is possible that a thread never did any allocation, and that TLS has already been deallocated by the threads library, resulting in write-after-free corruption. These fixes affect prof_tdata and quarantine; all other uses of TLS are already safe, whether intentionally (as for tcache) or unintentionally (as for arenas).
/external/jemalloc/include/jemalloc/internal/prof.h
|
20f1fc95adb35ea63dc61f47f2b0ffbd37d39f32 |
|
09-Oct-2012 |
Jason Evans <je@fb.com> |
Fix fork(2)-related deadlocks. Add a library constructor for jemalloc that initializes the allocator. This fixes a race that could occur if threads were created by the main thread prior to any memory allocation, followed by fork(2), and then memory allocation in the child process. Fix the prefork/postfork functions to acquire/release the ctl, prof, and rtree mutexes. This fixes various fork() child process deadlocks, but one possible deadlock remains (intentionally) unaddressed: prof backtracing can acquire runtime library mutexes, so deadlock is still possible if heap profiling is enabled during fork(). This deadlock is known to be a real issue in at least the case of libgcc-based backtracing. Reported by tfengjun.
/external/jemalloc/include/jemalloc/internal/prof.h
|
3860eac17023933180ef5dfb5bd24077cda57dfe |
|
15-May-2012 |
Jason Evans <je@fb.com> |
Fix heap profiling crash for realloc(p, 0) case. Fix prof_realloc() to not call prof_ctx_set() if a sampled object is being freed via realloc(p, 0).
/external/jemalloc/include/jemalloc/internal/prof.h
|
8b49971d0ce0819af78aa2a278c26ecb298ee134 |
|
24-Apr-2012 |
Mike Hommey <mh@glandium.org> |
Avoid variable length arrays and remove declarations within code. MSVC doesn't support C99, and building as C++ to be able to use them is dangerous, as C++ and C99 are incompatible. Introduce a VARIABLE_ARRAY macro that either uses VLA when supported, or alloca() otherwise. Note that using alloca() inside loops doesn't quite work like VLAs, thus the use of VARIABLE_ARRAY there is discouraged. It might be worth investigating ways to check whether VARIABLE_ARRAY is used in such context at runtime in debug builds and bail out if that happens.
/external/jemalloc/include/jemalloc/internal/prof.h
|
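The VARIABLE_ARRAY idea can be sketched as below: a real C99 VLA where the compiler supports one, an alloca()-style fallback otherwise (e.g. MSVC). This is a simplified reconstruction, not jemalloc's exact macro.

```c
#include <string.h>

#if defined(_MSC_VER)
#  include <malloc.h>
#  define VARIABLE_ARRAY(type, name, count) \
     type *name = _alloca(sizeof(type) * (count))
#else
#  define VARIABLE_ARRAY(type, name, count) type name[(count)]
#endif

/* Note: an alloca()-based array lives until the enclosing function
 * returns, which is why using the macro inside a loop is discouraged --
 * it does not behave like a block-scoped VLA. */
unsigned sum_copy(const unsigned *src, unsigned n) {
    VARIABLE_ARRAY(unsigned, tmp, n);
    memcpy(tmp, src, n * sizeof(unsigned));
    unsigned total = 0;
    for (unsigned i = 0; i < n; i++)
        total += tmp[i];
    return total;
}
```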
f27899402914065a6c1484ea8d81a2c8b70aa659 |
|
29-Apr-2012 |
Jason Evans <je@fb.com> |
Fix more prof_tdata resurrection corner cases.
/external/jemalloc/include/jemalloc/internal/prof.h
|
0050a0f7e6ea5a33c9aed769e2652afe20714194 |
|
29-Apr-2012 |
Jason Evans <je@fb.com> |
Handle prof_tdata resurrection. Handle prof_tdata resurrection during thread shutdown, similarly to how tcache and quarantine handle resurrection.
/external/jemalloc/include/jemalloc/internal/prof.h
|
3fb50b0407ff7dfe14727995706e2b42836f0f7e |
|
25-Apr-2012 |
Jason Evans <je@fb.com> |
Fix a PROF_ALLOC_PREP() error path. Fix a PROF_ALLOC_PREP() error path to initialize the return value to NULL.
/external/jemalloc/include/jemalloc/internal/prof.h
|
52386b2dc689db3bf71307424c4e1a2b7044c363 |
|
23-Apr-2012 |
Jason Evans <je@fb.com> |
Fix heap profiling bugs. Fix a potential deadlock that could occur during interval- and growth-triggered heap profile dumps. Fix an off-by-one heap profile statistics bug that could be observed in interval- and growth-triggered heap profiles. Fix heap profile dump filename sequence numbers (regression during conversion to malloc_snprintf()).
/external/jemalloc/include/jemalloc/internal/prof.h
|
0b25fe79aaf8840a5acda7e3160a053d42349872 |
|
18-Apr-2012 |
Jason Evans <je@fb.com> |
Update prof defaults to match common usage. Change the "opt.lg_prof_sample" default from 0 to 19 (1 B to 512 KiB). Change the "opt.prof_accum" default from true to false. Add the "opt.prof_final" mallctl, so that "opt.prof_prefix" need not be abused to disable final profile dumping.
/external/jemalloc/include/jemalloc/internal/prof.h
|
122449b073bcbaa504c4f592ea2d733503c272d2 |
|
06-Apr-2012 |
Jason Evans <je@fb.com> |
Implement Valgrind support, redzones, and quarantine. Implement Valgrind support, as well as the redzone and quarantine features, which help Valgrind detect memory errors. Redzones are only implemented for small objects because the changes necessary to support redzones around large and huge objects are complicated by in-place reallocation, to the point that it isn't clear that the maintenance burden is worth the incremental improvement to Valgrind support. Merge arena_salloc() and arena_salloc_demote(). Refactor i[v]salloc() to expose the 'demote' option.
/external/jemalloc/include/jemalloc/internal/prof.h
|
6da5418ded9170b087c35960e0010006430117c1 |
|
24-Mar-2012 |
Jason Evans <je@fb.com> |
Remove ephemeral mutexes. Remove ephemeral mutexes from the prof machinery, and remove malloc_mutex_destroy(). This simplifies mutex management on systems that call malloc()/free() inside pthread_mutex_{create,destroy}(). Add atomic_*_u() for operation on unsigned values. Fix prof_printf() to call malloc_vsnprintf() rather than malloc_snprintf().
/external/jemalloc/include/jemalloc/internal/prof.h
|
cd9a1346e96f71bdecdc654ea50fc62d76371e74 |
|
22-Mar-2012 |
Jason Evans <je@fb.com> |
Implement tsd. Implement tsd, which is a TLS/TSD abstraction that uses one or both internally. Modify bootstrapping such that no tsd's are utilized until allocation is safe. Remove malloc_[v]tprintf(), and use malloc_snprintf() instead. Fix %p argument size handling in malloc_vsnprintf(). Fix a long-standing statistics-related bug in the "thread.arena" mallctl that could cause crashes due to linked list corruption.
/external/jemalloc/include/jemalloc/internal/prof.h
|
e24c7af35d1e9d24d02166ac98cfca7cf807ff13 |
|
19-Mar-2012 |
Jason Evans <je@fb.com> |
Invert NO_TLS to JEMALLOC_TLS.
/external/jemalloc/include/jemalloc/internal/prof.h
|
b8c8be7f8abe72f4cb4f315f3078ad864fd6a2d8 |
|
05-Mar-2012 |
Jason Evans <je@fb.com> |
Use UINT64_C() rather than LLU for 64-bit constants.
/external/jemalloc/include/jemalloc/internal/prof.h
|
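The motivation for UINT64_C() over an LLU suffix: the macro (from <stdint.h>) expands to whatever suffix the platform's uint64_t actually requires, whereas a hard-coded LLU assumes unsigned long long is the 64-bit type. The LCG constants below are illustrative, not jemalloc's.

```c
#include <stdint.h>

static const uint64_t lcg_mult = UINT64_C(6364136223846793005);
static const uint64_t lcg_add  = UINT64_C(1442695040888963407);

/* One linear-congruential step; arithmetic wraps mod 2^64. */
uint64_t prng_step(uint64_t state) {
    return state * lcg_mult + lcg_add;
}
```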
84f7cdb0c588322dfd50a26497fc1cb54b792018 |
|
03-Mar-2012 |
Jason Evans <je@fb.com> |
Rename prn to prng. Rename prn to prng so that Windows doesn't choke when trying to create a file named prn.h.
/external/jemalloc/include/jemalloc/internal/prof.h
|
5389146191b279ca3b90028357dd6ad66b283def |
|
14-Feb-2012 |
Jason Evans <je@fb.com> |
Remove the opt.lg_prof_bt_max option. Remove opt.lg_prof_bt_max, and hard code it to 7. The original intention of this option was to enable faster backtracing by limiting backtrace depth. However, this makes graphical pprof output very difficult to interpret. In practice, decreasing sampling frequency is a better mechanism for limiting profiling overhead.
/external/jemalloc/include/jemalloc/internal/prof.h
|
0b526ff94da7e59aa947a4d3529b2376794f8b01 |
|
14-Feb-2012 |
Jason Evans <je@fb.com> |
Remove the opt.lg_prof_tcmax option. Remove the opt.lg_prof_tcmax option and hard-code a cache size of 1024. This setting is something that users just shouldn't have to worry about. If lock contention actually ends up being a problem, the simple solution available to the user is to reduce sampling frequency.
/external/jemalloc/include/jemalloc/internal/prof.h
|
fd56043c53f1cd1335ae6d1c0ee86cc0fbb9f12e |
|
13-Feb-2012 |
Jason Evans <je@fb.com> |
Remove magic. Remove structure magic, because 1) it is no longer conditional, and 2) it stopped being very effective at detecting memory corruption several years ago.
/external/jemalloc/include/jemalloc/internal/prof.h
|
7372b15a31c63ac5cb9ed8aeabc2a0a3c005e8bf |
|
11-Feb-2012 |
Jason Evans <je@fb.com> |
Reduce cpp conditional logic complexity. Convert configuration-related cpp conditional logic to use static constant variables, e.g.: #ifdef JEMALLOC_DEBUG [...] #endif becomes: if (config_debug) { [...] } The advantage is clearer, more concise code. The main disadvantage is that data structures no longer have conditionally defined fields, so they pay the cost of all fields regardless of whether they are used. In practice, this is only a minor concern; config_stats will go away in an upcoming change, and config_prof is the only other major feature that depends on more than a few special-purpose fields.
/external/jemalloc/include/jemalloc/internal/prof.h
|
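The conversion described above turns a cpp conditional into an ordinary branch on a static constant, which the optimizer folds away while the dead branch still gets type-checked. A minimal sketch, with illustrative names:

```c
#include <stdbool.h>

static const bool config_debug =
#ifdef JEMALLOC_DEBUG
    true
#else
    false
#endif
    ;

int debug_checks_run = 0;

void maybe_debug_check(void) {
    if (config_debug) {
        /* Debug-only work: always compiled, eliminated when false. */
        debug_checks_run++;
    }
}
```

The cost, as the commit notes, is that data-structure fields guarded this way exist in all configurations rather than being conditionally defined.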
a507004d294ad0c78b4d01559479620ebb272a49 |
|
12-Aug-2011 |
Jason Evans <je@fb.com> |
Fix off-by-one backtracing issues. Rewrite prof_alloc_prep() as a cpp macro, PROF_ALLOC_PREP(), in order to remove any doubt as to whether an additional stack frame is created. Prior to this change, it was assumed that inlining would reduce the total number of frames in the backtrace, but in practice behavior wasn't completely predictable. Create imemalign() and call it from posix_memalign(), memalign(), and valloc(), so that all entry points require the same number of stack frames to be ignored during backtracing.
/external/jemalloc/include/jemalloc/internal/prof.h
|
7427525c28d58c423a68930160e3b0fe577fe953 |
|
01-Apr-2011 |
Jason Evans <jasone@canonware.com> |
Move repo contents in jemalloc/ to top level.
/external/jemalloc/include/jemalloc/internal/prof.h
|