Searched refs:user (Results 1 - 3 of 3) sorted by relevance

/mm/
mlock.c
846 int user_shm_lock(size_t size, struct user_struct *user)
858 locked + user->locked_shm > lock_limit && !capable(CAP_IPC_LOCK))
860 get_uid(user);
861 user->locked_shm += locked;
868 void user_shm_unlock(size_t size, struct user_struct *user)
871 user->locked_shm -= (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
873 free_uid(user);
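The mlock.c hits above show the core of the per-user locked-shared-memory accounting: the requested size is rounded up to whole pages, the request fails if it would push the user's `locked_shm` total past the `RLIMIT_MEMLOCK`-derived limit (unless the caller holds `CAP_IPC_LOCK`), and unlock subtracts the same page-rounded amount. A minimal user-space sketch of that arithmetic, with hypothetical `shm_account_*` names standing in for the kernel helpers:

```c
/* Sketch of the accounting scheme in user_shm_lock/user_shm_unlock:
 * sizes round up to whole pages; a lock succeeds only while the
 * per-user locked total stays within the limit.  The shm_account_*
 * names are illustrative, not kernel API, and the CAP_IPC_LOCK
 * bypass is omitted. */
#include <assert.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

struct shm_account {
    unsigned long locked_shm;   /* pages currently locked */
    unsigned long lock_limit;   /* RLIMIT_MEMLOCK, in pages */
};

/* Returns 1 on success, 0 if the limit would be exceeded. */
static int shm_account_lock(struct shm_account *a, size_t size)
{
    unsigned long locked = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;

    if (locked + a->locked_shm > a->lock_limit)
        return 0;
    a->locked_shm += locked;
    return 1;
}

static void shm_account_unlock(struct shm_account *a, size_t size)
{
    a->locked_shm -= (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
}
```

Note that because both paths use the same rounding expression, every successful lock is balanced exactly by the matching unlock, so the counter cannot drift.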
shmem.c
87 * inode->i_private (with i_mutex making sure that it has only one user at
712 * Charged back to the user (not to caller) when swap account is used.
1371 int shmem_lock(struct file *file, int lock, struct user_struct *user)
1379 if (!user_shm_lock(inode->i_size, user))
1384 if (!lock && (info->flags & VM_LOCKED) && user) {
1385 user_shm_unlock(inode->i_size, user);
1606 * now we can copy it to user space...
1874 * user-space mappings (eg., direct-IO, AIO). Therefore, we look at all pages
1877 * The caller must guarantee that no new user will acquire writable references
3295 int shmem_lock(struct file *file, int lock, struct user_struct *user)
[all...]
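The shmem.c hits at lines 1371-1385 show the charge/uncharge pattern: locking charges the whole file size (`inode->i_size`) to the owning user via `user_shm_lock` exactly once, guarded by a locked flag, and unlocking reverses it. A simplified sketch of that flow, assuming stand-in types (`shm_user`, `shmem_file`) rather than the real kernel structures:

```c
/* Sketch of the shmem_lock branches above: charge the file size to
 * the user once when locking, uncharge once when unlocking.  The
 * structures are simplified stand-ins; -1 stands in for -ENOMEM. */
#include <assert.h>
#include <stddef.h>

#define VM_LOCKED 0x1

struct shm_user { unsigned long locked_shm; unsigned long lock_limit; };

struct shmem_file {
    size_t i_size;          /* file size, in pages for simplicity */
    unsigned int flags;
};

static int user_shm_lock(size_t pages, struct shm_user *u)
{
    if (pages + u->locked_shm > u->lock_limit)
        return 0;
    u->locked_shm += pages;
    return 1;
}

static void user_shm_unlock(size_t pages, struct shm_user *u)
{
    u->locked_shm -= pages;
}

static int shmem_lock(struct shmem_file *f, int lock, struct shm_user *u)
{
    if (lock && !(f->flags & VM_LOCKED)) {
        if (!user_shm_lock(f->i_size, u))
            return -1;      /* accounting rejected the charge */
        f->flags |= VM_LOCKED;
    }
    if (!lock && (f->flags & VM_LOCKED) && u) {
        user_shm_unlock(f->i_size, u);
        f->flags &= ~VM_LOCKED;
    }
    return 0;
}
```

The flag check on both branches makes the operation idempotent: repeated locks charge the user only once, and an unlock without a prior lock uncharges nothing.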
mmap.c
219 * Don't let a single process grow so big a user can't recover
1432 struct user_struct *user = NULL;
1443 * A dummy user value is used because we are not locking
1448 &user, HUGETLB_ANONHUGE_INODE,
1628 * new file must not have been exposed to user-space, yet.
1678 * Otherwise user-space soft-dirty page tracker won't
2754 /* mm's last user has gone, and it's about to be pulled down */
3235 * This is intended to prevent a user from starting a single memory hogging
3275 * Reinitialise user and admin reserves if memory is added or removed.
3277 * The default user reserv
[all...]
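The mmap.c hits at lines 3235-3277 concern the user and admin reserves that are recomputed on memory hotplug, so that a single memory-hogging process cannot leave the system unrecoverable. A rough sketch of that sizing policy, assuming the kernel's documented defaults (roughly 3% of free memory, capped at 128 MiB for the user reserve and 8 MiB for the admin reserve); the helper names are illustrative, not kernel API:

```c
/* Sketch of the reserve sizing hinted at above: hold back ~3% of
 * free memory (free/32), capped so the reserve stays modest on
 * large machines.  Caps follow the documented sysctl defaults. */
#include <assert.h>

static unsigned long user_reserve_kbytes(unsigned long free_kbytes)
{
    unsigned long r = free_kbytes / 32;           /* ~3% of free memory */
    return r < (1UL << 17) ? r : (1UL << 17);     /* cap at 128 MiB */
}

static unsigned long admin_reserve_kbytes(unsigned long free_kbytes)
{
    unsigned long r = free_kbytes / 32;
    return r < (1UL << 13) ? r : (1UL << 13);     /* cap at 8 MiB */
}
```

On small machines the percentage dominates, so the reserve shrinks with available memory; on large machines the fixed cap keeps the reserve from wasting space.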

Completed in 748 milliseconds