05a9e4ac651eb0c728e83fd883425c4894a2ae2b |
|
07-Jun-2016 |
Jason Evans <jasone@canonware.com> |
Fix potential VM map fragmentation regression. Revert 245ae6036c09cc11a72fab4335495d95cddd5beb (Support --with-lg-page values larger than actual page size.), because it could cause VM map fragmentation if the kernel grows mmap()ed memory downward. This resolves #391.
/external/jemalloc/src/chunk_mmap.c
|
c2f970c32b527660a33fa513a76d913c812dcf7c |
|
06-May-2016 |
Jason Evans <jasone@canonware.com> |
Modify pages_map() to support mapping uncommitted virtual memory. If the OS overcommits: - Commit all mappings in pages_map() regardless of whether the caller requested committed memory. - Linux-specific: Specify MAP_NORESERVE to avoid unfortunate interactions with heuristic overcommit mode during fork(2). This resolves #193.
/external/jemalloc/src/chunk_mmap.c
|
245ae6036c09cc11a72fab4335495d95cddd5beb |
|
06-Apr-2016 |
Jason Evans <jasone@canonware.com> |
Support --with-lg-page values larger than actual page size. During over-allocation in preparation for creating aligned mappings, allocate one more page than necessary if PAGE is the actual page size, so that trimming still succeeds even if the system returns a mapping that has less than PAGE alignment. This allows compiling with e.g. 64 KiB "pages" on systems that actually use 4 KiB pages. Note that for e.g. --with-lg-page=21, it is also necessary to increase the chunk size (e.g. --with-malloc-conf=lg_chunk:22) so that there are at least two "pages" per chunk. In practice this isn't a particularly compelling configuration because so much (unusable) virtual memory is dedicated to chunk headers.
/external/jemalloc/src/chunk_mmap.c
|
c7a9a6c86b483d4aebb51bd62d902f4022a7367b |
|
25-Feb-2016 |
Jason Evans <je@fb.com> |
Attempt mmap-based in-place huge reallocation. Attempt mmap-based in-place huge reallocation by plumbing new_addr into chunk_alloc_mmap(). This can dramatically speed up incremental huge reallocation. This resolves #335.
/external/jemalloc/src/chunk_mmap.c
|
78ae1ac486ffd7953536786c9a5f9dc2bda78858 |
|
08-Sep-2015 |
Dmitry-Me <wipedout@yandex.ru> |
Reduce variable scope. This resolves #274.
/external/jemalloc/src/chunk_mmap.c
|
03bf5b67be92db3a49f81816dccb5c18c0f2a0c0 |
|
12-Aug-2015 |
Jason Evans <jasone@canonware.com> |
Try to decommit new chunks. Always leave decommit disabled on non-Windows systems.
/external/jemalloc/src/chunk_mmap.c
|
8fadb1a8c2d0219aded566bc5fac7d29cff9bb67 |
|
04-Aug-2015 |
Jason Evans <jasone@canonware.com> |
Implement chunk hook support for page run commit/decommit. Cascade from decommit to purge when purging unused dirty pages, so that it is possible to decommit cleaned memory rather than just purging. For non-Windows debug builds, decommit runs rather than purging them, since this causes access of deallocated runs to segfault. This resolves #251.
/external/jemalloc/src/chunk_mmap.c
|
b49a334a645b854dbb1649f15c38d646fee66738 |
|
28-Jul-2015 |
Jason Evans <je@fb.com> |
Generalize chunk management hooks. Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks allow control over chunk allocation/deallocation, decommit/commit, purging, and splitting/merging, such that the application can rely on jemalloc's internal chunk caching and retaining functionality, yet implement a variety of chunk management mechanisms and policies. Merge the chunks_[sz]ad_{mmap,dss} red-black trees into chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries to honor the dss precedence setting; prior to this change the precedence setting was also consulted when recycling chunks. Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead deallocate them in arena_unstash_purged(), so that the dirty memory linkage remains valid until after the last time it is used. This resolves #176 and #201.
/external/jemalloc/src/chunk_mmap.c
|
ef0a0cc3283ea561a40b33f4325d54bbc351de21 |
|
22-Mar-2015 |
Igor Podlesny <user.email@poige.ru> |
We have pages_unmap(ret, size), so we use it.
/external/jemalloc/src/chunk_mmap.c
|
551ebc43647521bdd0bc78558b106762b3388928 |
|
03-Oct-2014 |
Jason Evans <jasone@canonware.com> |
Convert to uniform style: cond == false --> !cond
/external/jemalloc/src/chunk_mmap.c
|
994fad9bdaaa18273f2089856c2637cfb0c307bd |
|
03-Jun-2014 |
Richard Diamond <wichard@vitalitystudios.com> |
Add check for madvise(2) to configure.ac. Some platforms, such as Google's Portable Native Client, use Newlib and thus lack access to madvise(2). In those instances, pages_purge() is transformed into a no-op.
/external/jemalloc/src/chunk_mmap.c
|
e2deab7a751c8080c2b2cdcfd7b11887332be1bb |
|
16-May-2014 |
Jason Evans <je@fb.com> |
Refactor huge allocation to be managed by arenas. Refactor huge allocation to be managed by arenas (though the global red-black tree of huge allocations remains for lookup during deallocation). This is the logical conclusion of recent changes that 1) made per arena dss precedence apply to huge allocation, and 2) made it possible to replace the per arena chunk allocation/deallocation functions. Remove the top level huge stats, and replace them with per arena huge stats. Normalize function names and types to *dalloc* (some were *dealloc*). Remove the --enable-mremap option. As jemalloc currently operates, this is a performance regression for some applications, but planned work to logarithmically space huge size classes should provide similar amortized performance. The motivation for this change was that mremap-based huge reallocation forced leaky abstractions that prevented refactoring.
/external/jemalloc/src/chunk_mmap.c
|
2a83ed0284e92c7ba4bd4efe9df149ac724b2f26 |
|
09-Dec-2013 |
Jason Evans <jasone@canonware.com> |
Refactor tests. Refactor tests to use explicit testing assertions, rather than diff'ing test output. This makes the test code a bit shorter, more explicitly encodes testing intent, and makes test failure diagnosis more straightforward.
/external/jemalloc/src/chunk_mmap.c
|
7de92767c20cb72c94609b9c78985526fb84a679 |
|
09-Oct-2012 |
Jason Evans <je@fb.com> |
Fix mlockall()/madvise() interaction. mlockall(2) can cause purging via madvise(2) to fail. Fix purging code to check whether madvise() succeeded, and base zeroed page metadata on the result. Reported by Olivier Lecomte.
/external/jemalloc/src/chunk_mmap.c
|
de6fbdb72c6e1401b36f8f2073404645bac6cd2b |
|
09-May-2012 |
Jason Evans <je@fb.com> |
Fix chunk_alloc_mmap() bugs. Simplify chunk_alloc_mmap() to no longer attempt map extension. The extra complexity isn't warranted, because although in the success case it saves one system call as compared to immediately falling back to chunk_alloc_mmap_slow(), it also makes the failure case even more expensive. This simplification removes two bugs: - For Windows platforms, pages_unmap() wasn't being called for unaligned mappings prior to falling back to chunk_alloc_mmap_slow(). This caused permanent virtual memory leaks. - For non-Windows platforms, alignment greater than chunksize caused pages_map() to be called with size 0 when attempting map extension. This always resulted in an mmap() error, and subsequent fallback to chunk_alloc_mmap_slow().
/external/jemalloc/src/chunk_mmap.c
|
a14bce85e885f83c96116cc5438ae52d740f3727 |
|
30-Apr-2012 |
Mike Hommey <mh@glandium.org> |
Use Get/SetLastError on Win32. Using errno on win32 doesn't quite work, because the value set in a shared library can't be read from e.g. an executable calling the function setting errno. At the same time, since buferror always uses errno/GetLastError, don't pass it.
/external/jemalloc/src/chunk_mmap.c
|
a19e87fbad020e8dd3d26682032929e8e5ae71c1 |
|
22-Apr-2012 |
Mike Hommey <mh@glandium.org> |
Add support for Mingw
/external/jemalloc/src/chunk_mmap.c
|
a8f8d7540d66ddee7337db80c92890916e1063ca |
|
22-Apr-2012 |
Jason Evans <jasone@canonware.com> |
Remove mmap_unaligned. Remove mmap_unaligned, which was used to heuristically decide whether to optimistically call mmap() in such a way that could reduce the total number of system calls. If I remember correctly, the intention of mmap_unaligned was to avoid always executing the slow path in the presence of ASLR. However, that reasoning seems to have been based on a flawed understanding of how ASLR actually works. Although ASLR apparently causes mmap() to ignore address requests, it does not cause total placement randomness, so there is a reasonable expectation that iterative mmap() calls will start returning chunk-aligned mappings once the first chunk has been properly aligned.
/external/jemalloc/src/chunk_mmap.c
|
8f0e0eb1c01d5d934586ea62e519ca8b8637aebc |
|
21-Apr-2012 |
Jason Evans <jasone@canonware.com> |
Fix a memory corruption bug in chunk_alloc_dss(). Fix a memory corruption bug in chunk_alloc_dss() that was due to claiming newly allocated memory is zeroed. Reverse order of preference between mmap() and sbrk() to prefer mmap(). Clean up management of 'zero' parameter in chunk_alloc*().
/external/jemalloc/src/chunk_mmap.c
|
666c5bf7a8baaa842da69cb402948411432a9d00 |
|
18-Apr-2012 |
Mike Hommey <mh@glandium.org> |
Add a pages_purge function to wrap madvise(JEMALLOC_MADV_PURGE) calls. This will be used to implement the feature on mingw, which doesn't have madvise.
/external/jemalloc/src/chunk_mmap.c
|
7ca0fdfb85b2a9fc7a112e158892c098e004385b |
|
13-Apr-2012 |
Jason Evans <je@fb.com> |
Disable munmap() if it causes VM map holes. Add a configure test to determine whether common mmap()/munmap() patterns cause VM map holes, and only use munmap() to discard unused chunks if the problem does not exist. Unify the chunk caching for mmap and dss. Fix options processing to limit lg_chunk to be large enough that redzones will always fit.
/external/jemalloc/src/chunk_mmap.c
|
5ff709c264e52651de25b788692c62ff1f6f389c |
|
12-Apr-2012 |
Jason Evans <jasone@canonware.com> |
Normalize aligned allocation algorithms. Normalize arena_palloc(), chunk_alloc_mmap_slow(), and chunk_recycle_dss() to use the same algorithm for trimming over-allocation. Add the ALIGNMENT_ADDR2BASE(), ALIGNMENT_ADDR2OFFSET(), and ALIGNMENT_CEILING() macros, and use them where appropriate. Remove the run_size_p parameter from sa2u(). Fix a potential deadlock in chunk_recycle_dss() that was introduced by eae269036c9f702d9fa9be497a1a2aa1be13a29e (Add alignment support to chunk_alloc()).
/external/jemalloc/src/chunk_mmap.c
|
eae269036c9f702d9fa9be497a1a2aa1be13a29e |
|
10-Apr-2012 |
Mike Hommey <mh@glandium.org> |
Add alignment support to chunk_alloc().
/external/jemalloc/src/chunk_mmap.c
|
c5851eaf6e0edb35a499d62d30199e336da5ccb6 |
|
10-Apr-2012 |
Mike Hommey <mh@glandium.org> |
Remove MAP_NORESERVE support. It was only used by the swap feature, and that is gone.
/external/jemalloc/src/chunk_mmap.c
|
cd9a1346e96f71bdecdc654ea50fc62d76371e74 |
|
22-Mar-2012 |
Jason Evans <je@fb.com> |
Implement tsd. Implement tsd, which is a TLS/TSD abstraction that uses one or both internally. Modify bootstrapping such that no tsd's are utilized until allocation is safe. Remove malloc_[v]tprintf(), and use malloc_snprintf() instead. Fix %p argument size handling in malloc_vsnprintf(). Fix a long-standing statistics-related bug in the "thread.arena" mallctl that could cause crashes due to linked list corruption.
/external/jemalloc/src/chunk_mmap.c
|
e24c7af35d1e9d24d02166ac98cfca7cf807ff13 |
|
19-Mar-2012 |
Jason Evans <je@fb.com> |
Invert NO_TLS to JEMALLOC_TLS.
/external/jemalloc/src/chunk_mmap.c
|
d81e4bdd5c991bd5642c8b859ef1f752b51cd9be |
|
06-Mar-2012 |
Jason Evans <je@fb.com> |
Implement malloc_vsnprintf(). Implement malloc_vsnprintf() (a subset of vsnprintf(3)) as well as several other printing functions based on it, so that formatted printing can be relied upon without concern for inducing a dependency on floating point runtime support. Replace malloc_write() calls with malloc_*printf() where doing so simplifies the code. Add name mangling for library-private symbols in the data and BSS sections. Adjust CONF_HANDLE_*() macros in malloc_conf_init() to expose all opt_* variable use to cpp so that proper mangling occurs.
/external/jemalloc/src/chunk_mmap.c
|
7427525c28d58c423a68930160e3b0fe577fe953 |
|
01-Apr-2011 |
Jason Evans <jasone@canonware.com> |
Move repo contents in jemalloc/ to top level.
/external/jemalloc/src/chunk_mmap.c
|