Lines Matching refs:chunks

70        ends of chunks) in code that calls malloc.  This malloc
83 executions, so externally crafted fake chunks cannot be
161 possibly fragmenting memory used only for large chunks.)
249 Controls the minimum alignment for malloc'ed chunks. It must be a
289 information in the footers of allocated chunks. This adds
469 more large chunks) the value should be high enough so that your
475 program undergoes phases where several large chunks are allocated
477 mixed with phases where there are no such chunks at all. The trim
491 segregates relatively large chunks of memory so that they can be
499 `locked' between other chunks, as can happen with normally allocated
500 chunks, which means that even trimming via malloc_trim would not
503 requests, as happens with normal chunks. The advantages of mmap
504 nearly always outweigh disadvantages for "large" chunks, but the
516 segments to the OS when freeing chunks that result in
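The trim and mmap behavior described in these lines is tunable at run time. A minimal sketch, assuming dlmalloc's mallopt entry point and its parameter values (M_TRIM_THRESHOLD = -1, M_MMAP_THRESHOLD = -3; with USE_DL_PREFIX the function is dlmallopt):

    /* Assumed to mirror dlmalloc's definitions; normally these come from
       its header rather than being repeated here. */
    #define M_TRIM_THRESHOLD (-1)
    #define M_MMAP_THRESHOLD (-3)
    extern int mallopt(int param_number, int value);

    static void tune_allocator(void) {
      /* Give memory back to the system once 1 MB at the top is unused. */
      mallopt(M_TRIM_THRESHOLD, 1024 * 1024);
      /* Serve requests of 256 KB and larger via direct mmap. */
      mallopt(M_MMAP_THRESHOLD, 256 * 1024);
    }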
765 MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
1024 purposes. Traversal does not include chunks that have been
1033 For example, to count the number of in-use chunks with size greater
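The example the matcher cut off here (around source line 1033) counts in-use chunks above a size threshold via malloc_inspect_all. A sketch of that idea, assuming MALLOC_INSPECT_ALL is enabled and the handler signature (start, end, used_bytes, callback_arg) documented for malloc_inspect_all; the 1000-byte threshold is illustrative:

    #include <stddef.h>

    /* Assumed prototype; normally provided by the dlmalloc header. */
    extern void malloc_inspect_all(void (*handler)(void* start, void* end,
                                                   size_t used_bytes,
                                                   void* callback_arg),
                                   void* arg);

    /* Called once per managed region; used_bytes == 0 means the region
       is free. Counting goes through the callback argument. */
    static void count_chunks(void* start, void* end,
                             size_t used_bytes, void* arg) {
      (void)start; (void)end;
      if (used_bytes > 1000)
        ++*(size_t*)arg;
    }

    size_t count_large_inuse(void) {
      size_t count = 0;
      malloc_inspect_all(count_chunks, &count);
      return count;
    }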
1055 ordblks: the number of free chunks
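A sketch of reading that counter, assuming NO_MALLINFO is not set, MALLINFO_FIELD_TYPE is left at its default of size_t, and a dlmalloc header declaring struct mallinfo is available:

    #include <stdio.h>
    #include "malloc.h"   /* assumed dlmalloc header declaring struct mallinfo */

    void report_free_chunks(void) {
      struct mallinfo mi = mallinfo();
      /* ordblks is the number of free chunks currently held in bins. */
      printf("free chunks: %zu\n", (size_t)mi.ordblks);
    }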
1076 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
1087 The "chunks" argument is optional (i.e., may be null, which is
1090 no longer needed. Otherwise, the chunks array must be of at least
1092 chunks.
1095 null if the allocation failed. If n_elements is zero and "chunks"
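Even truncated, these lines describe the independent_calloc contract: when the chunks argument is null, the function allocates the pointer array itself and the caller frees it later. A hedged sketch of the pool-style usage the full comment has in mind (the Node type and list-building logic are illustrative; with USE_DL_PREFIX the entry point is dlindependent_calloc):

    #include <stddef.h>
    #include <stdlib.h>

    /* Assumed prototype, matching the declaration quoted above. */
    extern void** independent_calloc(size_t n_elements, size_t element_size,
                                     void* chunks[]);

    struct Node { int item; struct Node* next; };

    /* Build a linked list of n zeroed nodes from one adjacent allocation. */
    struct Node* build_list(int n) {
      if (n <= 0)
        return 0;
      struct Node** pool =
        (struct Node**)independent_calloc((size_t)n, sizeof(struct Node), 0);
      if (pool == 0)
        return 0;                       /* allocation failed */
      for (int i = 0; i + 1 < n; ++i)
        pool[i]->next = pool[i + 1];
      struct Node* first = pool[0];
      free(pool);   /* the pointer array was malloc'ed for us; nodes remain */
      return first;
    }

Each node can later be freed individually (or all at once with bulk_free) when the list is torn down.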
1127 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
1130 chunks with sizes indicated in the "sizes" array. It returns
1137 The "chunks" argument is optional (i.e., may be null). If it is null
1139 be freed when it is no longer needed. Otherwise, the chunks array
1141 pointers to the chunks.
1144 null if the allocation failed. If n_elements is zero and chunks is
1165 void* chunks[3];
1166 if (independent_comalloc(3, sizes, chunks) == 0)
1168 struct Head* head = (struct Head*)(chunks[0]);
1169 char* body = (char*)(chunks[1]);
1170 struct Foot* foot = (struct Foot*)(chunks[2]);
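The fragment above is part of the header/body/trailer example from the comment. A self-contained sketch of the same pattern, with hypothetical Head and Foot structs and an assumed prototype added for illustration (with USE_DL_PREFIX the entry point is dlindependent_comalloc):

    #include <string.h>

    /* Assumed prototype, matching the declaration quoted above. */
    extern void** independent_comalloc(size_t n_elements, size_t sizes[],
                                       void* chunks[]);

    struct Head { size_t len; };           /* hypothetical header  */
    struct Foot { unsigned checksum; };    /* hypothetical trailer */

    int send_message(const char* msg) {
      size_t msglen = strlen(msg);
      size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
      void* chunks[3];
      /* Passing our own chunks[] array: only the three elements are
         heap-allocated; the pointer array itself stays on the stack. */
      if (independent_comalloc(3, sizes, chunks) == 0)
        return -1;                          /* allocation failed */
      struct Head* head = (struct Head*)(chunks[0]);
      char*        body = (char*)(chunks[1]);
      struct Foot* foot = (struct Foot*)(chunks[2]);
      head->len = msglen;
      memcpy(body, msg, msglen);
      foot->checksum = 0;
      /* ... transmit, then free head, body and foot individually ... */
      return 0;
    }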
1179 since it cannot reuse existing noncontiguous small chunks that
1211 memory will be locked between two used chunks, so they cannot be
1235 zero even when no user-level chunks are allocated.
1307 mspace_track_large_chunks controls whether requests for large chunks
1309 others in this mspace. By default large chunks are not tracked,
1310 which reduces fragmentation. However, such chunks are not
1331 free may be called instead of mspace_free because freed chunks from
1342 realloced chunks from any space are handled by their originating
1364 size_t elem_size, void* chunks[]);
1371 size_t sizes[], void* chunks[]);
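A minimal sketch of the mspace workflow these declarations belong to, assuming MSPACES is enabled; the prototypes are repeated here for illustration and normally come from the dlmalloc header:

    #include <stddef.h>

    typedef void* mspace;
    extern mspace create_mspace(size_t capacity, int locked);
    extern size_t destroy_mspace(mspace msp);
    extern void*  mspace_malloc(mspace msp, size_t bytes);
    extern void   mspace_free(mspace msp, void* mem);
    extern int    mspace_track_large_chunks(mspace msp, int enable);

    void demo_mspace(void) {
      /* capacity 0 = use defaults; locked 0 = no locking for this space. */
      mspace arena = create_mspace(0, 0);
      if (arena == 0)
        return;

      /* Keep large (mmapped) chunks tied to this space so destroy_mspace
         releases them as well, at some cost in fragmentation. */
      mspace_track_large_chunks(arena, 1);

      void* p = mspace_malloc(arena, 128);
      if (p != 0)
        mspace_free(arena, p);   /* with FOOTERS, plain free() also works */

      destroy_mspace(arena);     /* releases everything still allocated here */
    }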
2059 techniques.) Sizes of free chunks are stored both in the front of
2061 chunks into bigger chunks fast. The head fields also hold bits
2062 representing whether chunks are free or in use.
2118 Note that since we always merge adjacent free chunks, the chunks
2123 chunks are free, and if so, unlink them from the lists that they
2145 and consolidating chunks.
2153 inuse chunks or the ends of memory.
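For reference, the boundary-tag layout being described is roughly the following (field names follow dlmalloc's struct malloc_chunk; treat this as a sketch of the on-heap layout rather than a verbatim excerpt):

    #include <stddef.h>

    /* Each chunk carries its size at both ends while free, so adjacent
       free chunks can be coalesced in constant time; the low bits of
       'head' record whether this chunk and the previous one are in use. */
    struct malloc_chunk {
      size_t               prev_foot;  /* size of previous chunk, if free   */
      size_t               head;       /* size of this chunk + status bits  */
      struct malloc_chunk* fd;         /* forward link, used only when free */
      struct malloc_chunk* bk;         /* back link, used only when free    */
    };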
2193 typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
2208 /* MMapped chunks need a second word of overhead ... */
2305 When chunks are not in use, they are treated as nodes of either
2308 "Small" chunks are stored in circular doubly-linked lists, and look
2327 Larger chunks are kept in a form of bitwise digital trees (aka
2329 free chunks greater than 256 bytes, their size doesn't impose any
2361 work in the same way as fd/bk pointers of small chunks.
2365 tree level, with the chunks in the smaller half of the range (0x100
2387 bounded by the number of bits differentiating chunks within
2418 the space. Large chunks that are directly allocated by mmap are not
2437 used for chunks to reduce fragmentation -- new adjacent memory is
2505 An array of bin headers for free chunks. These bins hold chunks
2507 chunks of all the same size, spaced 8 bytes apart. To simplify
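The bitwise trie for large chunks described above (source lines 2305-2387) is built from nodes shaped roughly like this sketch of dlmalloc's struct malloc_tree_chunk; the first four fields overlay struct malloc_chunk, so a tree node can also be handled as an ordinary chunk:

    #include <stddef.h>

    struct malloc_tree_chunk {
      size_t                    prev_foot;
      size_t                    head;
      struct malloc_tree_chunk* fd;        /* ring of same-sized chunks, as in small bins */
      struct malloc_tree_chunk* bk;
      struct malloc_tree_chunk* child[2];  /* left/right trie children */
      struct malloc_tree_chunk* parent;
      unsigned int              index;     /* bindex_t: which treebin holds this node */
    };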
2994 verify footer fields of inuse chunks, which can be used to guarantee
3005 always dynamically check addresses of all offset chunks (previous,
3044 /* macros to set up inuse chunks with or without footers */
3050 /* Macros for setting head/foot of non-mmapped chunks */
3252 /* Check properties of (inuse) mmapped chunks */
3266 /* Check properties of inuse chunks */
3277 /* Check properties of free chunks */
3300 /* Check properties of malloced chunks at the point they are malloced */
3365 /* Check all the chunks in a treebin. */
3376 /* Check all the chunks in a smallbin. */
3829 /* ----------------------- Direct-mmapping chunks ----------------------- */
3832 Directly mmapped chunks are set up with an offset to the start of
4059 /* Directly map large chunks, but only if already initialized */
4269 /* Unmap and unlink any mmapped segments that don't contain used chunks */
4706 Consolidate freed chunks with preceding or succeeding bordering
4707 free chunks, if they exist, and then place in a bin. Intermixed
4708 with special cases for top, dv, mmapped chunks, and usage errors.
4952 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4997 void* chunks[]) {
5005 void** marray; /* either "chunks" or malloced ptr array */
5013 if (chunks != 0) {
5015 return chunks; /* nothing to do */
5016 marray = chunks;
5093 if (marray != chunks) {
5116 chunks before freeing, which will occur often if allocated
5335 void* chunks[]) {
5337 return ialloc(gm, n_elements, &sz, 3, chunks);
5341 void* chunks[]) {
5342 return ialloc(gm, n_elements, sizes, 0, chunks);
5855 size_t elem_size, void* chunks[]) {
5862 return ialloc(ms, n_elements, &sz, 3, chunks);
5866 size_t sizes[], void* chunks[]) {
5872 return ialloc(ms, n_elements, sizes, 0, chunks);
6022 addresses, it must be OK for malloc'ed chunks to span multiple
6202 * realloc: don't try to shift chunks backwards, since this
6272 * Use best fit for very large chunks to prevent some worst-cases.
6276 * Removed footers when chunks are in use. Thanks to
6291 * Occasionally bin return list chunks in first scan
6298 * Scan 2 returns chunks (not just 1)