History log of /external/jemalloc/include/jemalloc/internal/chunk_mmap.h
Revision Date Author Comments
c7a9a6c86b483d4aebb51bd62d902f4022a7367b 25-Feb-2016 Jason Evans <je@fb.com> Attempt mmap-based in-place huge reallocation.

Attempt mmap-based in-place huge reallocation by plumbing new_addr into
chunk_alloc_mmap(). This can dramatically speed up incremental huge
reallocation.

This resolves #335.
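
A minimal sketch of the underlying technique, assuming ordinary POSIX
mmap() semantics: the new address hint asks the kernel to place fresh
pages immediately after the existing allocation, and the attempt is
abandoned if the hint is not honored. extend_in_place is an invented
name for illustration, not the actual chunk_alloc_mmap() code.

#include <sys/mman.h>
#include <stddef.h>

/* Try to grow an allocation in place by mapping pages at the address
 * immediately past its current end. Returns NULL if the kernel placed
 * the mapping elsewhere. */
static void *
extend_in_place(void *end_of_old, size_t grow_size)
{
    void *ret = mmap(end_of_old, grow_size, PROT_READ|PROT_WRITE,
        MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);

    if (ret == MAP_FAILED)
        return (NULL);
    if (ret != end_of_old) {
        /* The hint was ignored; undo and report failure. */
        munmap(ret, grow_size);
        return (NULL);
    }
    return (ret);
}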
8fadb1a8c2d0219aded566bc5fac7d29cff9bb67 04-Aug-2015 Jason Evans <jasone@canonware.com> Implement chunk hook support for page run commit/decommit.

Cascade from decommit to purge when purging unused dirty pages, so that
it is possible to decommit cleaned memory rather than just purging. For
non-Windows debug builds, decommit runs rather than purging them, since
this makes accesses to deallocated runs segfault.

This resolves #251.
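
A hedged sketch of the purge/decommit distinction on a POSIX system
(function names invented; not the actual jemalloc pages_* code):
purged memory stays mapped and readable, whereas decommitted memory
faults on any access, which is what lets the debug-build behavior
described above catch accesses to deallocated runs.

#include <sys/mman.h>
#include <stdbool.h>
#include <stddef.h>

/* Advise the kernel that contents may be discarded; the range stays
 * mapped read/write. Returns true on failure. */
static bool
purge_sketch(void *addr, size_t size)
{
    return (madvise(addr, size, MADV_DONTNEED) != 0);
}

/* Overlay the range with PROT_NONE so any later access segfaults.
 * Returns true on failure. */
static bool
decommit_sketch(void *addr, size_t size)
{
    void *ret = mmap(addr, size, PROT_NONE,
        MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED, -1, 0);
    return (ret == MAP_FAILED);
}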
b49a334a645b854dbb1649f15c38d646fee66738 28-Jul-2015 Jason Evans <je@fb.com> Generalize chunk management hooks.

Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on
the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks
allow control over chunk allocation/deallocation, decommit/commit,
purging, and splitting/merging, such that the application can rely on
jemalloc's internal chunk caching and retaining functionality, yet
implement a variety of chunk management mechanisms and policies.
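
A minimal sketch of installing a custom hook through the new mallctl,
assuming the jemalloc 4.x chunk_hooks_t interface: reading the current
hooks first preserves the defaults for every operation that is not
overridden. logging_chunk_alloc is an invented example hook.

#include <jemalloc/jemalloc.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

static chunk_hooks_t old_hooks;

/* Log each chunk allocation, then delegate to the saved default. */
static void *
logging_chunk_alloc(void *new_addr, size_t size, size_t alignment,
    bool *zero, bool *commit, unsigned arena_ind)
{
    fprintf(stderr, "chunk alloc: %zu bytes, arena %u\n", size,
        arena_ind);
    return (old_hooks.alloc(new_addr, size, alignment, zero, commit,
        arena_ind));
}

static void
install_hooks(void)
{
    chunk_hooks_t new_hooks;
    size_t sz = sizeof(old_hooks);

    /* Read-modify-write arena 0's hooks. */
    mallctl("arena.0.chunk_hooks", &old_hooks, &sz, NULL, 0);
    new_hooks = old_hooks;
    new_hooks.alloc = logging_chunk_alloc;
    mallctl("arena.0.chunk_hooks", NULL, NULL, &new_hooks,
        sizeof(new_hooks));
}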

Merge the chunks_[sz]ad_{mmap,dss} red-black trees into
chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries
to honor the dss precedence setting; prior to this change the precedence
setting was also consulted when recycling chunks.

Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead
deallocate them in arena_unstash_purged(), so that the dirty memory
linkage remains valid until after the last time it is used.

This resolves #176 and #201.
e2deab7a751c8080c2b2cdcfd7b11887332be1bb 16-May-2014 Jason Evans <je@fb.com> Refactor huge allocation to be managed by arenas.

Refactor huge allocation to be managed by arenas (though the global
red-black tree of huge allocations remains for lookup during
deallocation). This is the logical conclusion of recent changes that 1)
made per arena dss precedence apply to huge allocation, and 2) made it
possible to replace the per arena chunk allocation/deallocation
functions.

Remove the top level huge stats, and replace them with per arena huge
stats.

Normalize function names and types to *dalloc* (some were *dealloc*).

Remove the --enable-mremap option. As jemalloc currently operates, this
is a performance regression for some applications, but planned work to
logarithmically space huge size classes should provide similar amortized
performance. The motivation for this change was that mremap-based huge
reallocation forced leaky abstractions that prevented refactoring.
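
For context, a hedged sketch of the Linux-only mechanism the removed
option enabled: mremap(2) grows or relocates a mapping without copying
page contents. Illustrative only; grow_mapping is an invented name.

#define _GNU_SOURCE
#include <sys/mman.h>
#include <stddef.h>

/* Grow a mapping; MREMAP_MAYMOVE lets the kernel relocate the pages
 * if the region cannot be extended in place. */
static void *
grow_mapping(void *old_addr, size_t old_size, size_t new_size)
{
    void *ret = mremap(old_addr, old_size, new_size, MREMAP_MAYMOVE);
    return (ret == MAP_FAILED ? NULL : ret);
}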
7de92767c20cb72c94609b9c78985526fb84a679 09-Oct-2012 Jason Evans <je@fb.com> Fix mlockall()/madvise() interaction.

mlockall(2) can cause purging via madvise(2) to fail. Fix purging code
to check whether madvise() succeeded, and base zeroed page metadata on
the result.

Reported by Olivier Lecomte.
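
A hedged sketch of the shape of the fix (purge_and_report_zeroed is an
invented name): madvise(MADV_DONTNEED) can fail, e.g. with EINVAL when
the range contains pages locked by mlockall(2), so pages may only be
recorded as zeroed when the call actually succeeds.

#include <sys/mman.h>
#include <stdbool.h>
#include <stddef.h>

/* Returns true iff the pages were purged and will read back as
 * zero-filled (Linux MADV_DONTNEED semantics for anonymous memory). */
static bool
purge_and_report_zeroed(void *addr, size_t length)
{
    return (madvise(addr, length, MADV_DONTNEED) == 0);
}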
a8f8d7540d66ddee7337db80c92890916e1063ca 22-Apr-2012 Jason Evans <jasone@canonware.com> Remove mmap_unaligned.

Remove mmap_unaligned, which was used to heuristically decide whether to
optimistically call mmap() in such a way that could reduce the total
number of system calls. If I remember correctly, the intention of
mmap_unaligned was to avoid always executing the slow path in the
presence of ASLR. However, that reasoning seems to have been based on a
flawed understanding of how ASLR actually works. Although ASLR
apparently causes mmap() to ignore address requests, it does not cause
total placement randomness, so there is a reasonable expectation that
iterative mmap() calls will start returning chunk-aligned mappings once
the first chunk has been properly aligned.
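
The deterministic slow path alluded to above can be sketched as
follows, assuming a power-of-two alignment: over-allocate by the
alignment, then trim the unaligned head and tail with munmap().
mmap_aligned is an invented name, not jemalloc's actual code.

#include <sys/mman.h>
#include <stddef.h>
#include <stdint.h>

static void *
mmap_aligned(size_t size, size_t alignment)
{
    size_t alloc_size = size + alignment;
    char *addr = mmap(NULL, alloc_size, PROT_READ|PROT_WRITE,
        MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
    uintptr_t offset;

    if (addr == MAP_FAILED)
        return (NULL);
    offset = (uintptr_t)addr & (alignment - 1);
    if (offset != 0) {
        /* Trim the unaligned head, then the matching tail. */
        munmap(addr, alignment - offset);
        addr += alignment - offset;
        munmap(addr + size, offset);
    } else {
        /* Already aligned; trim the surplus tail. */
        munmap(addr + size, alignment);
    }
    return (addr);
}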
8f0e0eb1c01d5d934586ea62e519ca8b8637aebc 21-Apr-2012 Jason Evans <jasone@canonware.com> Fix a memory corruption bug in chunk_alloc_dss().

Fix a memory corruption bug in chunk_alloc_dss() that was due to
incorrectly claiming that newly allocated memory is zeroed.

Reverse order of preference between mmap() and sbrk() to prefer mmap().

Clean up management of 'zero' parameter in chunk_alloc*().
666c5bf7a8baaa842da69cb402948411432a9d00 18-Apr-2012 Mike Hommey <mh@glandium.org> Add a pages_purge function to wrap madvise(JEMALLOC_MADV_PURGE) calls

This will be used to implement the feature on mingw, which doesn't have
madvise.
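
A hedged sketch of what such a wrapper can look like
(pages_purge_sketch is an invented name, and JEMALLOC_MADV_PURGE is
hard-coded here as MADV_DONTNEED): POSIX systems advise the kernel via
madvise(), while Windows builds such as mingw can use VirtualAlloc()
with MEM_RESET instead.

#include <stddef.h>
#ifdef _WIN32
#include <windows.h>
#else
#include <sys/mman.h>
#endif

static void
pages_purge_sketch(void *addr, size_t length)
{
#ifdef _WIN32
    /* MEM_RESET marks the pages' contents as discardable. */
    VirtualAlloc(addr, length, MEM_RESET, PAGE_READWRITE);
#else
    madvise(addr, length, MADV_DONTNEED);
#endif
}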
7ca0fdfb85b2a9fc7a112e158892c098e004385b 13-Apr-2012 Jason Evans <je@fb.com> Disable munmap() if it causes VM map holes.

Add a configure test to determine whether common mmap()/munmap()
patterns cause VM map holes, and only use munmap() to discard unused
chunks if the problem does not exist.

Unify the chunk caching for mmap and dss.

Fix options processing to limit lg_chunk to be large enough that
redzones will always fit.
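
For illustration, a hand-written demonstration of the kind of pattern
the configure test probes for (not the actual test): unmapping the
middle of a mapping splits one kernel VM map entry into two and leaves
a hole between them.

#include <sys/mman.h>
#include <stddef.h>
#include <unistd.h>

int
main(void)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    char *addr = mmap(NULL, 3 * page, PROT_READ|PROT_WRITE,
        MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);

    if (addr == MAP_FAILED)
        return (1);
    /* Punch a hole: one mapping becomes two, plus a gap between. */
    munmap(addr + page, page);
    return (0);
}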
eae269036c9f702d9fa9be497a1a2aa1be13a29e 10-Apr-2012 Mike Hommey <mh@glandium.org> Add alignment support to chunk_alloc().
c5851eaf6e0edb35a499d62d30199e336da5ccb6 10-Apr-2012 Mike Hommey <mh@glandium.org> Remove MAP_NORESERVE support

It was only used by the swap feature, and that is gone.
7427525c28d58c423a68930160e3b0fe577fe953 01-Apr-2011 Jason Evans <jasone@canonware.com> Move repo contents in jemalloc/ to top level.