Searched refs:to (Results 1 - 10 of 10) sorted by relevance

/mm/
iov_iter.c
52 static size_t copy_from_iter_iovec(void *to, size_t bytes, struct iov_iter *i) argument
70 left = __copy_from_user(to, buf, copy);
73 to += copy;
79 left = __copy_from_user(to, buf, copy);
82 to += copy;
147 /* Too bad - revert to non-atomic kmap */
184 void *kaddr, *to; local
200 to = kaddr + offset;
203 left = __copy_from_user_inatomic(to, buf, copy);
206 to += copy;
537 memcpy_from_page(char *to, struct page *page, size_t offset, size_t len) argument
546 char *to = kmap_atomic(page); local
597 copy_from_iter_bvec(void *to, size_t bytes, struct iov_iter *i) argument
[all...]
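
The iov_iter.c hits above all follow the same pattern: walk an array of iovec segments, copy a chunk into the kernel buffer, then advance the destination with "to += copy;". Below is a hypothetical userspace sketch of that loop; the function name and parameters are mine, memcpy stands in for __copy_from_user, and the kernel's partial-copy ("left") handling is omitted.

#include <stddef.h>
#include <string.h>
#include <sys/uio.h>

/* Sketch only: copy up to 'bytes' from the iovec array into 'to', starting
 * 'iov_offset' bytes into the first segment. Returns the amount copied. */
size_t copy_from_iovec_sketch(void *to, size_t bytes,
                              const struct iovec *iov, unsigned long nr_segs,
                              size_t iov_offset)
{
	size_t copied = 0;

	while (bytes && nr_segs) {
		/* Bytes still available in the current segment. */
		size_t copy = iov->iov_len - iov_offset;

		if (copy > bytes)
			copy = bytes;
		memcpy(to, (char *)iov->iov_base + iov_offset, copy);

		to = (char *)to + copy;     /* mirrors "to += copy;" above */
		copied += copy;
		bytes -= copy;

		iov_offset = 0;             /* only the first segment starts mid-way */
		iov++;
		nr_segs--;
	}
	return copied;
}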
truncate.c
69 * @offset: start of the range to invalidate
70 * @length: length of the range to invalidate
75 * do_invalidatepage() does not have to release all buffers, but it must
78 * point. Because the caller is about to free (and possibly reuse) those
129 * We need to bale out if page->mapping is no longer equal to the original
152 * any time, and is not supposed to throw away dirty pages. But pages can
185 * Used to get rid of pages on hardware memory corruption.
221 * @mapping: mapping to truncate
222 * @lstart: offset from which to truncate
752 pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to) argument
[all...]
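
The truncate.c hits revolve around byte ranges handed to the page cache (@lstart, @offset/@length, and the from/to of pagecache_isize_extended). A hypothetical standalone sketch of the page arithmetic such a range implies follows; the function, the hard-coded 4096-byte PAGE_SIZE, and the printf reporting are assumptions of mine, not code from truncate.c.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE  4096UL
#define PAGE_SHIFT 12

/* Sketch: split a truncation starting at 'lstart' into a possible partial
 * page (zero its tail) plus whole pages dropped from index 'start' on. */
void truncate_range_sketch(uint64_t lstart)
{
	/* Index of the first page lying entirely past lstart. */
	uint64_t start = (lstart + PAGE_SIZE - 1) >> PAGE_SHIFT;
	/* Offset within the page that straddles lstart (0 means page-aligned). */
	unsigned partial = (unsigned)(lstart & (PAGE_SIZE - 1));

	if (partial)
		printf("zero page %llu from offset %u to the end\n",
		       (unsigned long long)(lstart >> PAGE_SHIFT), partial);
	printf("drop whole pages starting at index %llu\n",
	       (unsigned long long)start);
}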
memcontrol.c
127 * than using jiffies etc. to handle periodic memcg event.
168 unsigned long long usage_in_excess;/* Set to the value by which */
206 /* An array index points to threshold just below or equal to usage. */
219 * This is needed to make mem_cgroup_unregister_event() "never fail".
220 * It must be able to store at least primary->size - 1 entries.
232 * cgroup_event represents events which userspace want to receive.
236 * memcg which the event belongs to.
240 * eventfd to signal userspace about the event.
248 * register_event() callback will be used to add
432 struct mem_cgroup *to; member in struct:move_charge_struct
1565 struct mem_cgroup *to; local
3362 mem_cgroup_move_account(struct page *page, unsigned int nr_pages, struct page_cgroup *pc, struct mem_cgroup *from, struct mem_cgroup *to) argument
3522 mem_cgroup_move_swap_account(swp_entry_t entry, struct mem_cgroup *from, struct mem_cgroup *to) argument
3550 mem_cgroup_move_swap_account(swp_entry_t entry, struct mem_cgroup *from, struct mem_cgroup *to) argument
5899 struct mem_cgroup *to = mc.to; local
[all...]
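
The comment at memcontrol.c:206 states the invariant for the memcg threshold array: the saved index points to the threshold just below or equal to the current usage. A minimal sketch of that lookup, assuming a plain sorted array rather than the kernel's threshold structures:

#include <stddef.h>

/* Sketch: thresholds[] is sorted ascending. Return the index of the last
 * entry <= usage, or -1 if every threshold is above the current usage. */
int threshold_index_sketch(const unsigned long *thresholds,
                           size_t count, unsigned long usage)
{
	int i = -1;

	while ((size_t)(i + 1) < count && thresholds[i + 1] <= usage)
		i++;
	return i;
}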
hugetlb.c
52 * Protects updates to hugepage_freelists, hugepage_activelist, nr_huge_pages,
58 * Serializes faults on the same logical page. This is used to
70 /* If no pages are used, and no other handles to the subpool
127 /* If hugetlbfs_put_super couldn't free spool due to
152 long to; member in struct:file_region
163 if (f <= rg->to)
166 /* Round our left edge to the current segment if it encloses us. */
178 /* If this area reaches higher then extend our area to
180 * which we intend to reuse, free it. */
181 if (rg->to >
3383 hugetlb_reserve_pages(struct inode *inode, long from, long to, struct vm_area_struct *vma, vm_flags_t vm_flags) argument
[all...]
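
The hugetlb.c hits around struct file_region (the from/to members and the comments at 163-181) describe how reservation regions are coalesced: round the left edge down to an enclosing segment, extend the right edge over anything the new range reaches, and free the entries that were absorbed. A hypothetical sketch of that merge over a plain sorted array, not the kernel's linked list:

#include <stddef.h>

struct file_region_sketch {
	long from;
	long to;    /* mirrors "long to; member in struct:file_region" */
};

/* Sketch: merge the half-open range [f, t) into a sorted, non-overlapping
 * array of regions. The caller must leave room for one extra entry in rg[].
 * Returns the new number of regions. */
size_t region_add_sketch(struct file_region_sketch *rg, size_t n, long f, long t)
{
	size_t i = 0, j, k;

	/* Skip regions that end before the new range starts (cf. "f <= rg->to"). */
	while (i < n && rg[i].to < f)
		i++;

	/* Absorb every region that overlaps or abuts [f, t). */
	j = i;
	while (j < n && rg[j].from <= t) {
		if (rg[j].from < f)
			f = rg[j].from;   /* "round our left edge" down to it */
		if (rg[j].to > t)
			t = rg[j].to;     /* "extend our area to" include it completely */
		j++;
	}

	if (j == i) {
		/* Nothing absorbed: shift the tail right and insert at i. */
		for (k = n; k > i; k--)
			rg[k] = rg[k - 1];
		n++;
	} else if (j > i + 1) {
		/* Several regions absorbed: close the gap left by all but the first. */
		for (k = j; k < n; k++)
			rg[i + 1 + (k - j)] = rg[k];
		n -= (j - i) - 1;
	}
	rg[i].from = f;
	rg[i].to = t;
	return n;
}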
nommu.c
4 * Replacement code for mm functions to support CPU's that don't
79 * that can be used to drive ballooning decisions when Linux is hosted
106 * Doesn't have to be accurate, i.e. may have races.
143 * The ksize() function is only guaranteed to work for pointers
193 * get a list of pages in an address range belonging to the specified process
197 * - don't permit access to VMAs that don't support it, such as I/O mappings
220 * @pfn: location to store found PFN
312 * Allocate enough pages to cover @size from the page level
329 * Allocate enough pages to cover @size from the page level
331 * The memory allocated is set to zero.
628 free_page_series(unsigned long from, unsigned long to) argument
1612 shrink_vma(struct mm_struct *mm, struct vm_area_struct *vma, unsigned long from, unsigned long to) argument
[all...]
mempolicy.c
6 * Subject to the GNU Public License, version 2.
8 * NUMA policy allows the user to give hints in which node(s) memory should
25 * to the last. It would be better if bind would truly restrict
26 * the allocation to memory nodes instead
30 * on the local CPU. This is normally identical to default,
31 * but useful to set in a VMA when you have a non default
40 * try to allocate on the local CPU. The VMA policy is only applied for memory
57 fix mmap readahead to honour policy and enable policy for any page cache
148 * If read-side task has no lock to protect task->mempolicy, write-side
151 * disallowed nodes. In this way, we can avoid finding no node to alloc
1037 do_migrate_pages(struct mm_struct *mm, const nodemask_t *from, const nodemask_t *to, int flags) argument
1177 do_migrate_pages(struct mm_struct *mm, const nodemask_t *from, const nodemask_t *to, int flags) argument
[all...]
migrate.c
49 * migrate_prep() needs to be called before we start compiling a list of pages
50 * to be migrated using isolate_lru_page(). If scheduling work on other CPUs is
58 * drained them. Those pages will fail to migrate like other
103 * Restore a potential migration pte to a working pte entry
127 * Peek to check is_swap_pte() before taking ptlock? No, we
173 /* No need to invalidate - it was non-present before */
182 * Congratulations to trinity for discovering this bug.
183 * mm/fremap.c's remap_file_pages() accepts any range within a single vma to
184 * convert that vma to VM_NONLINEAR; and generic_file_remap_pages() will then
187 * by vma_interval_tree rather than lost to i_mmap_nonlinear
1536 migrate_vmas(struct mm_struct *mm, const nodemask_t *to, const nodemask_t *from, unsigned long flags) argument
[all...]
slab.c
29 * slabs and you must pass objects with the same initializations to
36 * In order to reduce fragmentation, the slabs are sorted in 3 groups:
44 * kmem_cache_destroy() CAN CRASH if you try to allocate from the cache
64 * Many thanks to Mark Hemment, who wrote another per-cpu slab patch
84 * Modified the slab allocator to be node aware on NUMA systems.
133 * DEBUG - 1 for kmem_cache_create() to honour; SLAB_RED_ZONE & SLAB_POISON.
136 * STATS - 1 to collect stats for /proc/slabinfo.
181 * - LIFO ordering, to hand out cache-warm objects from _alloc
185 * The limit is stored in the per-cpu structure to reduce the data cache
200 * entries belonging to slab
811 transfer_objects(struct array_cache *to, struct array_cache *from, unsigned int max) argument
[all...]
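
The transfer_objects(to, from, max) hit at slab.c:811 moves object pointers between per-cpu array caches, and the comment at 181 notes the LIFO ordering used to hand out cache-warm objects. Below is a simplified stand-in; the array_cache_sketch layout and its fixed-size entry[] are assumptions of mine, not the kernel's struct array_cache.

#include <stddef.h>

struct array_cache_sketch {
	unsigned int avail;   /* number of cached object pointers */
	unsigned int limit;   /* capacity in use, at most 64 here */
	void *entry[64];
};

/* Sketch: move up to 'max' object pointers from one cache to another,
 * taking the most recently freed (cache-warm) objects first. */
unsigned int transfer_objects_sketch(struct array_cache_sketch *to,
                                     struct array_cache_sketch *from,
                                     unsigned int max)
{
	unsigned int nr = from->avail;
	unsigned int i;

	if (nr > max)
		nr = max;
	if (nr > to->limit - to->avail)
		nr = to->limit - to->avail;

	for (i = 0; i < nr; i++)
		to->entry[to->avail++] = from->entry[--from->avail];

	return nr;
}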
shmem.c
41 * extends ramfs by the ability to use swap and honor resource limits
82 /* Symlink up to this size is kmalloc'ed instead of using a swappable page */
88 * a time): we would prefer not to enlarge the shmem inode just for that.
91 wait_queue_head_t *waitq; /* faults into hole wait for punch to end */
93 pgoff_t next; /* the next page offset to be fallocated */
95 pgoff_t nr_unswapped; /* how often writepage refused to swap out */
98 /* Flag allocation requirements to shmem_getpage */
170 * pages are allocated, in order to allow huge sparse files.
229 * @inode: inode to recalc
231 * We have to calculate
1522 shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to) argument
[all...]
slub.c
6 * and only uses a centralized lock to manage a pool of partial slabs.
49 * The role of the slab_mutex is to protect the list of all the slabs
50 * and to synchronize major metadata changes to slab cache structures.
53 * have the ability to do a cmpxchg_double. It only protects the second
72 * much as possible. As long as SLUB does not have to handle partial
76 * Interrupts are disabled during allocation and deallocation in order to
77 * make the slab allocator safe to use in the context of an irq. In addition
78 * interrupts are disabled to ensure that the processor does not change
79 * while handling per_cpu slabs, due to kernel
666 restore_bytes(struct kmem_cache *s, char *message, u8 data, void *from, void *to) argument
[all...]
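
The restore_bytes(s, message, data, from, to) hit at slub.c:666 belongs to SLUB's debug checking: conceptually it refills a byte range with the expected value once a mismatch has been reported. A hedged userspace sketch, dropping the kmem_cache argument and the kernel's reporting helpers:

#include <stdio.h>
#include <string.h>

/* Sketch: report the range being repaired, then overwrite [from, to)
 * with the expected byte value. */
void restore_bytes_sketch(const char *message, unsigned char data,
                          void *from, void *to)
{
	fprintf(stderr, "restoring %s: 0x%02x from %p to %p\n",
	        message, data, from, to);
	memset(from, data, (size_t)((char *)to - (char *)from));
}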

Completed in 135 milliseconds