Searched defs:cache (Results 1 - 2 of 2) sorted by relevance
/mm/
vmalloc.c
     267  /* The vmap cache globals are protected by vmap_area_lock */
     355   * Invalidate cache if we have more permissive parameters.
     463  struct vmap_area *cache;                                          (local)
     464  cache = rb_entry(free_vmap_cache, struct vmap_area, rb_node);
     465  if (va->va_start <= cache->va_start) {
    1211   * This function does NOT do any cache flushing. The caller is
    1234   * This function does NOT do any cache flushing. The caller is
    1245   * unmap_kernel_range - unmap kernel VM area and flush cache and TLB
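The hits at lines 463-465 show the cached-free-area lookup pattern: free_vmap_cache points at a struct rb_node embedded inside a struct vmap_area, and rb_entry() (in the kernel tree, an alias for container_of()) recovers the enclosing structure from the embedded node. Below is a minimal, self-contained userspace sketch of that pattern; the trimmed struct definitions and the main() driver are illustrative stand-ins, not the kernel's real layouts:

    #include <stddef.h>
    #include <stdio.h>

    /* Userspace restatement of the kernel's container_of(); rb_entry()
     * in <linux/rbtree.h> is exactly this macro under another name. */
    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    struct rb_node { int dummy; };      /* stand-in for the kernel type */

    struct vmap_area {
            unsigned long va_start;
            struct rb_node rb_node;     /* tree linkage embedded in the object */
    };

    int main(void)
    {
            struct vmap_area va = { .va_start = 0xdead000UL };

            /* The rb-tree hands back a pointer to the embedded node... */
            struct rb_node *free_vmap_cache = &va.rb_node;

            /* ...and rb_entry()/container_of() recovers the enclosing
             * vmap_area, as vmalloc.c does at line 464. */
            struct vmap_area *cache =
                    container_of(free_vmap_cache, struct vmap_area, rb_node);

            printf("va_start = %#lx\n", cache->va_start);
            return 0;
    }

Embedding the linkage node inside the object, rather than having the tree point at objects through separate wrappers, lets a vmap_area participate in the rb-tree without any extra allocation.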
slab.c
      22   * The memory is organized in caches, one cache for each object type.
      24   * Each cache consists out of many slabs (they are small (usually one
      32   * Each cache can only support one memory type (GFP_DMA, GFP_HIGHMEM,
      34   * cache for that memory type.
      44   * kmem_cache_destroy() CAN CRASH if you try to allocate from the cache
      47   * Each cache has a short per-cpu head array, most allocs
      49   * of the entries in the array are given back into the global cache.
      50   * The head array is strictly LIFO and should improve the cache hit rates.
      62   * The non-constant members are protected with a per-cache irq spinlock.
      71   * The global cache
     268  struct array_cache cache;                                         (member in struct arraycache_init)
     497  page_set_cache(struct page *page, struct kmem_cache *cache)       (argument)
     532  index_to_obj(struct kmem_cache *cache, struct slab *slab, unsigned int idx)  (argument)
     544  obj_to_index(const struct kmem_cache *cache, const struct slab *slab, void *obj)  (argument)
    2590  drain_freelist(struct kmem_cache *cache, struct kmem_list3 *l3, int tofree)  (argument)
    2859  slab_map_pages(struct kmem_cache *cache, struct slab *slab, void *addr)  (argument)
    2976  verify_redzone_free(struct kmem_cache *cache, void *obj)  (argument)
    3306  fallback_alloc(struct kmem_cache *cache, gfp_t flags)  (argument)
    3509  __do_cache_alloc(struct kmem_cache *cache, gfp_t flags)  (argument)
    [all...]
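The header comment quoted above describes the design the rest of the hits implement: one cache per object type, a per-cpu head array serving most allocations, LIFO ordering to keep just-freed objects cache-hot, and a hard rule (line 44) that nothing may allocate from a cache while it is being destroyed. Below is a minimal kernel-style sketch of that lifecycle; the public calls (kmem_cache_create/alloc/free/destroy) are the real API, while struct my_obj and the surrounding function are hypothetical:

    #include <linux/slab.h>
    #include <linux/errno.h>

    /* Hypothetical object type; any fixed-size structure works. */
    struct my_obj {
            int id;
            char name[32];
    };

    static struct kmem_cache *my_cache;

    static int my_cache_demo(void)
    {
            struct my_obj *obj;

            /* One cache per object type (line 22): every object handed
             * out by "my_obj" is sizeof(struct my_obj) bytes. */
            my_cache = kmem_cache_create("my_obj", sizeof(struct my_obj),
                                         0, SLAB_HWCACHE_ALIGN, NULL);
            if (!my_cache)
                    return -ENOMEM;

            /* Fast path: usually satisfied from the per-cpu head array. */
            obj = kmem_cache_alloc(my_cache, GFP_KERNEL);
            if (obj) {
                    obj->id = 1;
                    /* Freeing pushes the object back onto the LIFO head
                     * array, keeping it cache-hot for the next alloc. */
                    kmem_cache_free(my_cache, obj);
            }

            /* Per the line-44 warning: all objects must be freed and no
             * allocation may race with this call, or it can crash. */
            kmem_cache_destroy(my_cache);
            return 0;
    }

The LIFO head array means the most recently freed object is the first one handed back out, so it is likely still resident in the CPU's data cache; that is the hit-rate improvement the comment at line 50 refers to.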
Completed in 11 milliseconds