/kernel/trace/
Kconfig
    90: Adds a very slight overhead to tracing when enabled.
    94: # This allows those options to appear when no other tracer is selected. But the
    96: # GENERIC_TRACER and TRACING to avoid circular dependencies to accomplish the
    115: # Minimum requirements an architecture has to meet for us to
    116: # be able to offer generic tracing facilities:
    121: # tracers anyway, they were tested to build and work. Note that new
    122: # exceptions to this list aren't welcomed, better implement the
    145: Enable the kernel to trace …
/kernel/
Kconfig.hz
    10: to have the timer interrupt run at 1000 Hz but 100 Hz may be more
    11: beneficial for servers and NUMA systems that do not need to have
    15: environment leading to NR_CPUS * HZ number of timer interrupts
    31: on SMP and NUMA systems. If you are going to be using NTSC video
    46: systems requiring fast interactive responses to events.
futex.c
    13: * Thanks to Thomas Gleixner for suggestions, analysis and fixes.
    24: * Thanks to Thomas Gleixner for conceptual design and careful reviews.
    26: * Thanks to Ben LaHaise for yelling "hashed waitqueues" loudly
    44: * along with this program; if not, write to the Free Software
    73: * READ this before attempting to hack on futexes!
    92: * optimization to work, ordering guarantees must exist so that the waiter
    93: * being added to the list is acknowledged when the list is concurrently being
    112: * This would cause the waiter on CPU 0 to wait forever because it
    113: * missed the transition of the user space value from val to newval
    146: * to fute…
    2167: struct hrtimer_sleeper timeout, *to = NULL;  (local)
    2264: struct hrtimer_sleeper timeout, *to = NULL;  (local)
    2549: struct hrtimer_sleeper timeout, *to = NULL;  (local)
signal.c
    9: * Changes to use preallocated sigqueue structures
    10: * to allow signals to be sent reliably.
    95: * Tracers may want to know about even ignored signals.
    149: * After recalculating TIF_SIGPENDING, we need to make sure the task wakes up.
    181: * synchronous signals that need to be dequeued first.
    210: /* Nothing to do */
    234: * @mask: pending bits to set
    268: * If JOBCTL_TRAPPING is set, a ptracer is waiting for us to enter TRACED.
    270: * locking. @task->siglock guarantees that @task->parent points to the …
    2703: copy_siginfo_to_user(siginfo_t __user *to, const siginfo_t *from)  (argument)
Makefile
    121: # have make canonicalise the pathnames and then sort them to discard the
    163: # supplied, then one will need to be generated to make sure the build does not
    168: $(error Could not determine digest type to use from kernel config)
    173: @echo "### Now generating an X.509 key pair to be used for signing modules."
    175: @echo "### If this takes a long time, you might wish to run rngd in the"
    176: @echo "### background to keep the supply of entropy topped up. It"
    177: @echo "### needs to be run as root, and uses a hardware random"
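The Makefile snippet above is the build generating an X.509 key pair for module signing when none is supplied. The kernel drives this through openssl with a config file; the one-liner below is a hypothetical standalone equivalent (file names and subject are illustrative, not the kernel's defaults):

```shell
# Generate a throwaway RSA-2048 key and self-signed certificate,
# roughly what the kernel build does when no signing key exists.
openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
    -subj "/CN=Example module signing key" \
    -keyout signing_key.pem -out signing_cert.pem
```

On a machine short on entropy this step can stall, which is why the build suggests running rngd in the background.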
cpuset.c
    16: * 2006 Rework by Paul Menage to use generic cgroups
    20: * This file is subject to the terms and conditions of the GNU General Public
    83: * The user-configured masks can only be changed by writing to
    87: * The effective masks is the real masks that apply to the tasks
    100: /* user-configured CPUs and Memory Nodes allow to tasks */
    104: /* effective CPUs and Memory Nodes allow to tasks */
    111: * - top_cpuset.old_mems_allowed is initialized to mems_allowed.
    116: * then old_mems_allowed is updated to mems_allowed.
    123: * Tasks are being attached to this cpuset. Used to prevent …
    969: cpuset_migrate_mm(struct mm_struct *mm, const nodemask_t *from, const nodemask_t *to)  (argument)
workqueue.c
    12: * Made to use alloc_percpu by Christoph Lameter.
    21: * pools for workqueues which are not bound to any specific CPU - the
    59: * While associated (!DISASSOCIATED), all workers are bound to the
    68: * attach_mutex to avoid changing binding state while
    76: WORKER_PREP = 1 << 3, /* preparing to run works */
    96: CREATE_COOLDOWN = HZ, /* time to breath after fail */
    122: * POOL_DISASSOCIATED is set, it's identical to L.
    173: * The current concurrency level. As it's likely to be accessed
    180: * Destruction of pool is sched-RCU protected to allow dereferences
    189: * point to the …
    3293: copy_workqueue_attrs(struct workqueue_attrs *to, const struct workqueue_attrs *from)  (argument)
cgroup.c
    24: * This file is subject to the terms and conditions of the GNU General Public
    75: * cgroup_mutex is the master lock. Any modification to cgroup or its
    115: * which may lead to deadlock.
    120: * pidlist destructions need to be flushed on cgroup destruction. Use a
    154: * ->dfl_files to use ->legacy_files on the default hierarchy.
    170: * Assign a monotonically increasing serial number to csses. It guarantees
    172: * Also, as csses are always appended to the parent's ->children list, it
    179: * check for fork/exit handlers to call. This avoids us having to do
    180: * extra work in the fork/exit path if none of the subsystems need to …
    3699: cgroup_transfer_tasks(struct cgroup *to, struct cgroup *from)  (argument)
/kernel/locking/
rtmutex_common.h
    20: * belong to the tester. That way we can delay the wakeup path of those
    21: * threads to provoke lock stealing and testing of complex boosting scenarios.
    43: * @tree_entry: pi node to enqueue into the mutex waiters tree
    44: * @pi_tree_entry: pi node to enqueue into the mutex owner waiters tree
    45: * @task: task reference to the blocked task
    61: * Various helpers to access the waiters-tree:
    109: * no further PI adjustments to be made.
    131: struct hrtimer_sleeper *to,
    133: extern int rt_mutex_timed_futex_lock(struct rt_mutex *l, struct hrtimer_sleeper *to);
rtmutex.c
    26: * is used to keep track of the "lock has waiters" state.
    31: * is going to take the lock*
    39: * with ->wait_lock is held. To prevent any fast path cmpxchg to the lock,
    40: * we need to set the bit0 before looking at the lock, and the owner may be
    45: * To prevent a cmpxchg of the owner releasing the lock, we need to
    74: * supports cmpxchg and if there's no debugging state to be set up
    91: * 3) Try to unlock the lock with cmpxchg
    248: * the waiter is not allowed to do priority boosting
    268: * Called by sched_setscheduler() to check whether the priority change
    296: * (Note: We do this outside of the protection of lock->wait_lock to …
    1619: rt_mutex_finish_proxy_lock(struct rt_mutex *lock, struct hrtimer_sleeper *to, struct rt_mutex_waiter *waiter)  (argument)
/kernel/rcu/
srcu.c
    42: * Initialize an rcu_batch structure to empty.
    69: * and return a pointer to it, or return NULL if the structure is empty.
    87: * Move all callbacks from the rcu_batch structure specified by "from" to
    88: * the structure specified by "to".
    90: static inline void rcu_batch_move(struct rcu_batch *to, struct rcu_batch *from)  (argument)
    93: *to->tail = from->head;
    94: to->tail = from->tail;
    129: * @sp: structure to initialize.
    132: * to any other function. Each srcu_struct represents a separate domain
    178: * Return true if the number of pre-existing readers is determined to …
/kernel/debug/kdb/
kdb_main.c
    4: * This file is subject to the terms and conditions of the GNU General Public
    59: * kdb_lock protects updates to kdb_initial_cpu. Used to
    131: * Initial environment. This is all kept static and local to
    132: * this file. We don't want to rely on the memory allocation
    135: * environment is limited to a fixed number of entries (add more
    136: * to __env[] if required) and a fixed amount of heap (add more to
    197: * char* Pointer to string value of environment variable.
    222: * kdballocenv - This function is used to allocate …
/kernel/time/
clocksource.c
    19: * along with this program; if not, write to the Free Software
    23: * o Allow clocksource drivers to be unregistered
    49: * @tc: Pointer to time counter
    55: * The first call to this function for a new time counter initializes
    69: /* convert to nanoseconds: */
    115: * @mult: pointer to mult variable
    116: * @shift: pointer to shift variable
    117: * @from: frequency to convert from
    118: * @to: frequency to convert to
    137: clocks_calc_mult_shift(u32 *mult, u32 *shift, u32 from, u32 to, u32 maxsec)  (argument)
/kernel/bpf/
core.c
    142: * fill a page, allow at least 128 extra bytes to insert a
    170: /* Base function for offset calculation. Needs to go into .text section,
    298: /* Registers used in classic BPF programs need to be reset first. */
    572: * BPF_R0 - 8/16/32-bit skb data converted to cpu endianness
    626: * try to JIT internal BPF program, if JIT is not available select interpreter
    662: int __weak skb_copy_bits(const struct sk_buff *skb, int offset, void *to, …  (argument)
/kernel/sched/
core.c
    8: * 1996-12-23 Modified by Dave Grothe to fix bugs in semaphores and
    23: * 2007-05-06 Interactivity improvements to CFS by Mike Galbraith
    280: * Number of tasks to iterate in a single balance run.
    302: * part of the period that we allow rt tasks to run in us.
    384: * Use HR-timers to deliver accurate preemption points.
    435: * Called to set the hrtick timer state.
    487: * Called to set the hrtick timer state.
    558: * If this returns true, then the idle task promises to call
    595: * resched_curr - mark rq's current task 'to be rescheduled now'.
    598: * might also involve a cross-CPU call to trigger …
    1286: ktime_t to = ktime_set(0, NSEC_PER_SEC/HZ);  (local)