54279552bd260532d90e7a59fbc931924bbb0f7b |
|
31-Oct-2014 |
Boris Ostrovsky <boris.ostrovsky@oracle.com> |
x86/core, x86/xen/smp: Use 'die_complete' completion when taking CPU down Commit 2ed53c0d6cc9 ("x86/smpboot: Speed up suspend/resume by avoiding 100ms sleep for CPU offline during S3") introduced completions to the CPU offlining process. These completions are not initialized on Xen kernels, causing a panic in play_dead_common(). Move the handling of die_complete into common routines to make it available to Xen guests. Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Reviewed-by: David Vrabel <david.vrabel@citrix.com> Cc: tianyu.lan@intel.com Cc: konrad.wilk@oracle.com Cc: xen-devel@lists.xenproject.org Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: http://lkml.kernel.org/r/1414770572-7950-1-git-send-email-boris.ostrovsky@oracle.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
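A minimal sketch of the completion-based handshake the common code provides (helper names are illustrative, not the exact kernel functions): the onlining side initializes the completion, the dying CPU signals it, and the CPU tearing it down waits with a timeout instead of a fixed 100ms sleep.

	static DEFINE_PER_CPU(struct completion, die_complete);

	void cpu_prepare_death(unsigned int cpu)	/* before play_dead() runs */
	{
		init_completion(&per_cpu(die_complete, cpu));
	}

	void cpu_report_death(void)			/* from the dying CPU's path */
	{
		complete(&per_cpu(die_complete, smp_processor_id()));
	}

	int cpu_wait_death(unsigned int cpu)		/* from the __cpu_die() side */
	{
		if (wait_for_completion_timeout(&per_cpu(die_complete, cpu), HZ * 5))
			return 0;
		return -ETIMEDOUT;
	}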
|
a2ef5dc2c7cbedbeb4c847039845afaea5e63745 |
|
11-Sep-2014 |
Mukesh Rathor <mukesh.rathor@oracle.com> |
x86/xen: Set EFER.NX and EFER.SCE in PVH guests This fixes two bugs in PVH guests: - Not setting EFER.NX means the NX bit in page table entries is ignored on Intel processors and causes reserved bit page faults on AMD processors. - After the Xen commit 7645640d6ff1 ("x86/PVH: don't set EFER_SCE for pvh guest") PVH guests are required to set EFER.SCE to enable the SYSCALL instruction. Secondary VCPUs are started with pagetables with the NX bit set so EFER.NX must be set before using any stack or data segment. xen_pvh_cpu_early_init() is the new secondary VCPU entry point that sets EFER before jumping to cpu_bringup_and_idle(). Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
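A C rendering of what the new entry point has to do before touching any stack or data segment (a sketch; the real xen_pvh_cpu_early_init() is assembly for exactly that reason, and MSR_EFER/EFER_NX/EFER_SCE are the usual msr-index.h constants):

	static void xen_pvh_early_set_efer(void)
	{
		u64 efer;

		rdmsrl(MSR_EFER, efer);
		efer |= EFER_SCE;	/* SYSCALL/SYSRET, no longer set by Xen */
		efer |= EFER_NX;	/* honor the NX bits already present in the
					 * page tables of secondary VCPUs */
		wrmsrl(MSR_EFER, efer);
	}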
|
ce4b1b16502b182368cda20a61de2995762c8bcc |
|
20-Jun-2014 |
Igor Mammedov <imammedo@redhat.com> |
x86/smpboot: Initialize secondary CPU only if master CPU will wait for it A hang is observed on virtual machines during CPU hotplug, especially in big guests with many CPUs. (It is reproducible more often if the host is over-committed.) It happens because the master CPU gives up waiting on the secondary CPU and allows it to run wild. As a result the AP causes the system to lock up or crash, as described for example here: https://lkml.org/lkml/2014/3/6/257 If the master CPU has sent the STARTUP IPI successfully, and the AP has signalled to the master CPU that it is ready to start initialization, make the master CPU wait indefinitely until the AP is onlined. To ensure that the AP won't ever run wild, make it wait at early startup until the master CPU confirms its intention to wait for the AP. If the AP doesn't respond within 10 seconds, the master CPU will time out and cancel the AP onlining. Signed-off-by: Igor Mammedov <imammedo@redhat.com> Acked-by: Toshi Kani <toshi.kani@hp.com> Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: xen-devel@lists.xenproject.org Link: http://lkml.kernel.org/r/1403266991-12233-1-git-send-email-imammedo@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
4461bbc05bf11fa4251acded60e4645863a4158a |
|
10-Apr-2014 |
Boris Ostrovsky <boris.ostrovsky@oracle.com> |
x86/xen: Fix 32-bit PV guests' usage of kernel_stack Commit 198d208df4371734ac4728f69cb585c284d20a15 ("x86: Keep thread_info on thread stack in x86_32") made 32-bit kernels use kernel_stack to point to thread_info. That change missed a couple of updates needed by Xen's 32-bit PV guests: 1. kernel_stack needs to be initialized for secondary CPUs 2. GET_THREAD_INFO() now uses the %fs register, which may not be the kernel's version when executing xen_iret(). With respect to the second issue, we don't need GET_THREAD_INFO() anymore: we used it as an intermediate step to get to the per_cpu xen_vcpu and avoid referencing %fs. Now that we are going to use %fs anyway, we may as well go directly to xen_vcpu. Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
|
c9f6e9977e38de15da96b732a8dec0ef56cbf977 |
|
20-Jan-2014 |
Roger Pau Monne <roger.pau@citrix.com> |
xen/pvh: Set X86_CR0_WP and others in CR0 (v2) Without this, some user-space applications that use 'clone' with CLONE_CHILD_SETTID | CLONE_CHILD_CLEARTID end up hitting an assert in glibc, manifested by: general protection ip:7f80720d364c sp:7fff98fd8a80 error:0 in libc-2.13.so[7f807209e000+180000] This is due to the nature of said operations, which set and clear the child TID. "In the successful one I can see that the page table of the parent process has been updated successfully to use a different physical page, so the write of the tid on that page only affects the child... On the other hand, in the failed case, the write seems to happen before the copy of the original page is done, so both the parent and the child end up with the same value (because the parent copies the page after the write of the child tid has already happened)." (Roger's analysis). The root cause is Xen's commit 51e2cac257ec8b4080d89f0855c498cbbd76a5e5 "x86/pvh: set only minimal cr0 and cr4 flags in order to use paging": CR0_WP was removed there, so the COW features of the Linux kernel were not operating properly. While fixing that, also update the rest of the CR0 flags to be in line with what a baremetal Linux kernel would set them to. 'secondary_startup_64' (baremetal Linux) sets: X86_CR0_PE | X86_CR0_MP | X86_CR0_ET | X86_CR0_NE | X86_CR0_WP | X86_CR0_AM | X86_CR0_PG The hypervisor for HVM type guests (which PVH partly is) sets: X86_CR0_PE | X86_CR0_ET | X86_CR0_TS For PVH it specifically sets: X86_CR0_PG Which means we need to set the rest: X86_CR0_MP | X86_CR0_NE | X86_CR0_WP | X86_CR0_AM to have full parity. Signed-off-by: Roger Pau Monne <roger.pau@citrix.com> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [v1: Took out the cr4 writes to be a separate patch] [v2: 0-DAY kernel found xen_setup_gdt to be missing a static]
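The resulting fixup is a simple read-modify-write of CR0 on each VCPU; a sketch under those assumptions:

	static void xen_pvh_set_cr0(void)
	{
		unsigned long cr0 = read_cr0();

		/* Xen gives us PE/ET (and PG for PVH); add the rest for
		 * baremetal parity, most importantly WP so COW works. */
		cr0 |= X86_CR0_MP | X86_CR0_NE | X86_CR0_WP | X86_CR0_AM;
		write_cr0(cr0);
	}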
|
5840c84b16aad223d5305d8a569ea55de4120d67 |
|
13-Dec-2013 |
Mukesh Rathor <mukesh.rathor@oracle.com> |
xen/pvh: Secondary VCPU bringup (non-boot CPUs) The VCPU bringup protocol follows the PV protocol with certain twists. From xen/include/public/arch-x86/xen.h: Also note that when calling DOMCTL_setvcpucontext and VCPU_initialise for HVM and PVH guests, not all information in this structure is updated: - For HVM guests, the structures read include: fpu_ctxt (if VGCT_I387_VALID is set), flags, user_regs, debugreg[*] - PVH guests are the same as HVM guests, but additionally use ctrlreg[3] to set cr3. All other fields not used should be set to 0. This is what we do. We piggyback on 'xen_setup_gdt' - modified a bit - as we need to call 'load_percpu_segment' so that 'switch_to_new_gdt' can load per-cpu data structures. It has no effect on VCPU0. We also piggyback on the %rdi register to pass in the CPU number, so that when we boot up a new CPU, cpu_bringup_and_idle receives the CPU number as its first parameter (via %rdi for 64-bit). Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
9d71cee66725a0a1333f02f315c06cc42f07650e |
|
07-Sep-2013 |
Michael Opdenacker <michael.opdenacker@free-electrons.com> |
x86/xen: remove deprecated IRQF_DISABLED This patch removes the IRQF_DISABLED flag from x86/xen code. It has been a NOOP since 2.6.35 and will be removed one day. Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
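The change is mechanical; for the per-cpu IPIs in xen/smp.c it amounts to dropping the flag from the bind calls, e.g. (illustrative hunk):

	-	rc = bind_ipi_to_irqhandler(XEN_RESCHEDULE_VECTOR, cpu,
	-				    xen_reschedule_interrupt,
	-				    IRQF_DISABLED | IRQF_PERCPU | IRQF_NOBALANCING,
	-				    resched_name, NULL);
	+	rc = bind_ipi_to_irqhandler(XEN_RESCHEDULE_VECTOR, cpu,
	+				    xen_reschedule_interrupt,
	+				    IRQF_PERCPU | IRQF_NOBALANCING,
	+				    resched_name, NULL);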
|
7cde9b27e7b3a2e09d647bb4f6d94e842698d2d5 |
|
10-Oct-2013 |
Frediano Ziglio <frediano.ziglio@citrix.com> |
xen: Fix possible user space selector corruption Due to the way the kernel is initialized under Xen, it is possible that the ring1 selector used by the kernel for the boot cpu ends up being copied to userspace, leading to a segmentation fault in userspace. The Xen code in the kernel initializes non-boot cpus with correct selectors (ds and es set to __USER_DS) but the boot one keeps the ring1 selectors (passed in by Xen). On task context switch (switch_to) we assume that ds, es and cs already point to __USER_DS and __KERNEL_CS, so these selectors are not changed. If the processor is an Intel one that supports the sysenter instruction, sysenter/sysexit are used, so ds and es are not restored when switching back from kernel to userspace. In the case where the selectors point to ring1 instead of __USER_DS, the userspace code will crash on the first memory access attempt (to be precise, Xen on the emulated iret used to do sysexit will detect this and set ds and es to zero, which leads to a GPF anyway). Now if a userspace process calls into the kernel using sysenter and gets rescheduled (for me it happened with a specific init calling wait4), it can happen that the ring1 selector ends up set in ds and es. This is quite hard to detect because after a while these selectors get fixed (__USER_DS seems sticky). Bisecting the code, commit 7076aada1040de4ed79a5977dbabdb5e5ea5e249 appears to be the first one that has this issue. Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
|
26a799952737de20626e8c5c51b24534f1c90536 |
|
16-Aug-2013 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
xen/smp: Update pv_lock_ops functions before alternative code starts under PVHVM Before this patch we would patch all of the pv_lock_ops sites using the alternative assembler, and then later in the bootup cycle change unlock_kick and lock_spinning to the Xen-specific ones - without re-patching. That meant that for the core of the kernel we would be running with the baremetal version of unlock_kick and lock_spinning, while modules would have the proper Xen-specific slowpaths. As most modules use some API from the core kernel, that ended up with slowpath lockers waiting forever to be kicked (because they would be using the Xen-specific slowpath logic) while the kick never came, because the unlock path that was taken was the baremetal one. On PV we do not have the problem, as we initialise before the alternative code kicks in. The fix is to update the pv_lock_ops functions before the alternative code starts patching. Note that this patch fixes issues discovered by commit f10cd522c5fbfec9ae3cc01967868c9c2401ed23 ("xen: disable PV spinlocks on HVM"), which mentioned that PV spinlocks cannot possibly work with the current code because they are enabled after pvops patching has already been done, and because PV spinlocks use a different data structure than native spinlocks, so we cannot switch between them dynamically. The first problem is solved by this patch. The second problem has been solved by commit 816434ec4a674fcdb3c2221a6dffdc8f34020550 (Merge branch 'x86-spinlocks-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip) P.S. There is still commit 70dd4998cb85f0ecd6ac892cc7232abefa432efb (xen/spinlock: Disable IRQ spinlock (PV) allocation on PVHVM) to revert, but that can be done later after all other bugs have been fixed. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: David Vrabel <david.vrabel@citrix.com>
|
1fb3a8b2cfb278f139d9ff7ca5fe06a65de64494 |
|
13-Aug-2013 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
xen/spinlock: Fix locking path engaging too soon under PVHVM. xen_lock_spinning has a check for the kicker interrupt, and if it is not initialized it will spin normally (not enter the slowpath). But in the PVHVM case we would initialize the kicker interrupt before the CPU came online. This meant that if the booting CPU used a spinlock and contended on it, it would enter the slowpath and block forever. The "forever" part is because during bootup the spinlock would be taken _before_ the CPU sets itself to be online (more on this further down), and we would then poll on the event channel forever. The bootup CPU (see commit fc78d343fa74514f6fd117b5ef4cd27e4ac30236 "xen/smp: initialize IPI vectors before marking CPU online" for details) and the CPU that started the bootup consult the cpu_online_mask to determine whether the booting CPU should get an IPI. The booting CPU has to set itself in this mask via: set_cpu_online(smp_processor_id(), true); However, if the spinlock is taken before this (and it is) and it polls on an event channel - it will never be woken up, as the kernel will never send an IPI to an offline CPU. Note that the PVHVM logic for sending IPIs uses the HVM path, which has numerous checks using the cpu_online_mask and cpu_active_mask. See the above-mentioned git commit for details. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: David Vrabel <david.vrabel@citrix.com>
|
fc78d343fa74514f6fd117b5ef4cd27e4ac30236 |
|
07-Aug-2013 |
Chuck Anderson <chuck.anderson@oracle.com> |
xen/smp: initialize IPI vectors before marking CPU online An older PVHVM guest (v3.0 based) crashed during vCPU hot-plug with: kernel BUG at drivers/xen/events.c:1328! RCU has detected that a CPU has not entered a quiescent state within the grace period. It needs to send the CPU a reschedule IPI if it is not offline. rcu_implicit_offline_qs() does this check: /* * If the CPU is offline, it is in a quiescent state. We can * trust its state not to change because interrupts are disabled. */ if (cpu_is_offline(rdp->cpu)) { rdp->offline_fqs++; return 1; } Else the CPU is online. Send it a reschedule IPI. The CPU is in the middle of being hot-plugged and has been marked online (!cpu_is_offline()). See start_secondary(): set_cpu_online(smp_processor_id(), true); ... per_cpu(cpu_state, smp_processor_id()) = CPU_ONLINE; start_secondary() then waits for the CPU bringing up the hot-plugged CPU to mark it as active: /* * Wait until the cpu which brought this one up marked it * online before enabling interrupts. If we don't do that then * we can end up waking up the softirq thread before this cpu * reached the active state, which makes the scheduler unhappy * and schedule the softirq thread on the wrong cpu. This is * only observable with forced threaded interrupts, but in * theory it could also happen w/o them. It's just way harder * to achieve. */ while (!cpumask_test_cpu(smp_processor_id(), cpu_active_mask)) cpu_relax(); /* enable local interrupts */ local_irq_enable(); The CPU being hot-plugged will be marked active after it has been fully initialized by the CPU managing the hot-plug. In the Xen PVHVM case xen_smp_intr_init() is called to set up the hot-plugged vCPU's XEN_RESCHEDULE_VECTOR. The hot-plugging CPU is marked online, not marked active and does not have its IPI vectors set up. rcu_implicit_offline_qs() sees the hot-plugging cpu is !cpu_is_offline() and tries to send it a reschedule IPI, which will lead to: kernel BUG at drivers/xen/events.c:1328! xen_send_IPI_one() xen_smp_send_reschedule() rcu_implicit_offline_qs() rcu_implicit_dynticks_qs() force_qs_rnp() force_quiescent_state() __rcu_process_callbacks() rcu_process_callbacks() __do_softirq() call_softirq() do_softirq() irq_exit() xen_evtchn_do_upcall() because xen_send_IPI_one() will attempt to use an uninitialized IRQ for the XEN_RESCHEDULE_VECTOR. There is at least one other place that has caused the same crash: xen_smp_send_reschedule() wake_up_idle_cpu() add_timer_on() clocksource_watchdog() call_timer_fn() run_timer_softirq() __do_softirq() call_softirq() do_softirq() irq_exit() xen_evtchn_do_upcall() xen_hvm_callback_vector() clocksource_watchdog() uses cpu_online_mask to pick the next CPU to handle a watchdog timer: /* * Cycle through CPUs to check if the CPUs stay synchronized * to each other. */ next_cpu = cpumask_next(raw_smp_processor_id(), cpu_online_mask); if (next_cpu >= nr_cpu_ids) next_cpu = cpumask_first(cpu_online_mask); watchdog_timer.expires += WATCHDOG_INTERVAL; add_timer_on(&watchdog_timer, next_cpu); This resulted in an attempt to send an IPI to a hot-plugging CPU that had not initialized its reschedule vector. One option would be to make the RCU code check not for CPU offline but for CPU active, as becoming active is done after a CPU is online (in older kernels). 
But Srivatsa pointed out that "the cpu_active vs cpu_online ordering has been completely reworked - in the online path, cpu_active is set *before* cpu_online, and also, in the cpu offline path, the cpu_active bit is reset in the CPU_DYING notification instead of CPU_DOWN_PREPARE." Drilling into the bring-up path: "[brought up CPU].. send out a CPU_STARTING notification, and in response to that, the scheduler sets the CPU in the cpu_active_mask. Again, this mask is better left to the scheduler alone, since it has the intelligence to use it judiciously." The conclusion was that: " 1. At the IPI sender side: It is incorrect to send an IPI to an offline CPU (cpu not present in the cpu_online_mask). There are numerous places where we check this and warn/complain. 2. At the IPI receiver side: It is incorrect to let the world know of our presence (by setting ourselves in global bitmasks) until our initialization steps are complete to such an extent that we can handle the consequences (such as receiving interrupts without crashing the sender etc.) " (from Srivatsa) As the native code enables the interrupts at some point we need to be able to service them. In other words, a CPU must have valid IPI vectors if it has been marked online. It doesn't need to handle the IPI (interrupts may be disabled) but needs to have valid IPI vectors because another CPU may find it in cpu_online_mask and attempt to send it an IPI. This patch will change the order of the Xen vCPU bring-up functions so that Xen vectors have been set up before start_secondary() is called. It also will not continue to bring up a Xen vCPU if xen_smp_intr_init() fails to initialize it. Orabug 13823853 Signed-off-by: Chuck Anderson <chuck.anderson@oracle.com> Acked-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
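A sketch of the reordered PVHVM bring-up (the real change hooks xen_smp_intr_init() into the hotplug path before the vCPU can reach start_secondary(); the helper shape below is illustrative):

	static int xen_hvm_cpu_up_prepare(unsigned int cpu)
	{
		int rc;

		/* IPI vectors must be valid before this vCPU can ever be
		 * visible in cpu_online_mask. */
		rc = xen_smp_intr_init(cpu);
		if (rc)
			return rc;	/* abort onlining; don't run a crippled vCPU */
		return 0;
	}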
|
6efa20e49b9cb1db1ab66870cc37323474a75a13 |
|
19-Jul-2013 |
Konrad Rzeszutek Wilk <konrad@kernel.org> |
xen: Support 64-bit PV guest receiving NMIs This is based on a patch that Zhenzhong Duan had sent - which was missing some of the remaining pieces. The kernel has the logic to handle Xen-type exceptions using the paravirt interface in the assembler code (see PARAVIRT_ADJUST_EXCEPTION_FRAME - pv_irq_ops.adjust_exception_frame and INTERRUPT_RETURN - pv_cpu_ops.iret). That means the NMI handler (and other exception handlers) use the hypervisor iret. The other changes that would be necessary for this would be to translate the NMI_VECTOR to one of the entries on the ipi_vector and make xen_send_IPI_mask_allbutself use different events. Fortunately for us commit 1db01b4903639fcfaec213701a494fe3fb2c490b (xen: Clean up apic ipi interface) implemented this and we piggyback on the cleanup such that the apic IPI interface will pass the right vector value for NMI. With this patch we can trigger NMIs within a PV guest (only tested on x86_64). For this to work with normal PV guests (not the initial domain) we need the domain to be able to use the APIC ops - they are already implemented to use the Xen event channels. For that to be turned on in a PV domU we need to remove the masking of X86_FEATURE_APIC. Incidentally that means kgdb will also now work within a PV guest without using the 'nokgdbroundup' workaround. Note that the 32-bit version is different and this patch does not enable that. CC: Lisa Nguyen <lisa@xenapiadmin.com> CC: Ben Guthro <benjamin.guthro@citrix.com> CC: Zhenzhong Duan <zhenzhong.duan@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [v1: Fixed up per David Vrabel comments] Reviewed-by: Ben Guthro <benjamin.guthro@citrix.com> Reviewed-by: David Vrabel <david.vrabel@citrix.com>
|
bf7aab3ad4b4364a293421d628a912a2153ee1ee |
|
09-Aug-2013 |
Jeremy Fitzhardinge <jeremy@goop.org> |
xen: Defer spinlock setup until boot CPU setup There's no need to do it at very early init, and doing it there makes it impossible to use the jump_label machinery. Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org> Link: http://lkml.kernel.org/r/1376058122-8248-5-git-send-email-raghavendra.kt@linux.vnet.ibm.com Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com> Acked-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
|
148f9bb87745ed45f7a11b2cbd3bc0f017d5d257 |
|
19-Jun-2013 |
Paul Gortmaker <paul.gortmaker@windriver.com> |
x86: delete __cpuinit usage from all x86 files The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes. After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h. Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c) are flagged as __cpuinit -- so if we remove the __cpuinit from arch specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless. This removes all the arch/x86 uses of the __cpuinit macros from all C files. x86 only had the one __CPUINIT used in assembly files, and it wasn't paired off with a .previous or a __FINIT, so we can delete it directly w/o any corresponding additional change there. [1] https://lkml.org/lkml/2013/5/20/589 Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: x86@kernel.org Acked-by: Ingo Molnar <mingo@kernel.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
|
b85fffec7f5ba1c43171c63c046a97bac30a4561 |
|
04-Jun-2013 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
xen/smp: Don't leak interrupt name when offlining. When the user does: echo 0 > /sys/devices/system/cpu/cpu1/online echo 1 > /sys/devices/system/cpu/cpu1/online kmemleak reports: kmemleak: 7 new suspected memory leaks (see /sys/kernel/debug/kmemleak) unreferenced object 0xffff88003fa51240 (size 32): comm "swapper/0", pid 1, jiffies 4294667339 (age 1027.789s) hex dump (first 32 bytes): 72 65 73 63 68 65 64 31 00 00 00 00 00 00 00 00 resched1........ 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ backtrace: [<ffffffff81660721>] kmemleak_alloc+0x21/0x50 [<ffffffff81190aac>] __kmalloc_track_caller+0xec/0x2a0 [<ffffffff812fe1bb>] kvasprintf+0x5b/0x90 [<ffffffff812fe228>] kasprintf+0x38/0x40 [<ffffffff81047ed1>] xen_smp_intr_init+0x41/0x2c0 [<ffffffff816636d3>] xen_cpu_up+0x393/0x3e8 [<ffffffff8166bbf5>] _cpu_up+0xd1/0x14b [<ffffffff8166bd48>] cpu_up+0xd9/0xec [<ffffffff81ae6e4a>] smp_init+0x4b/0xa3 [<ffffffff81ac4981>] kernel_init_freeable+0xdb/0x1e6 [<ffffffff8165ce39>] kernel_init+0x9/0xf0 [<ffffffff8167edfc>] ret_from_fork+0x7c/0xb0 [<ffffffffffffffff>] 0xffffffffffffffff This patch fixes some of it by using the 'struct xen_common_irq->name' field to stash away the name so that it can be freed when the interrupt line is destroyed. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
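With the xen_common_irq structure in place, the fix is to remember the kasprintf()'d name next to the irq and free it on teardown; a representative sketch for the resched IPI (illustrative, not the exact hunk):

	resched_name = kasprintf(GFP_KERNEL, "resched%d", cpu);
	rc = bind_ipi_to_irqhandler(XEN_RESCHEDULE_VECTOR, cpu,
				    xen_reschedule_interrupt,
				    IRQF_PERCPU | IRQF_NOBALANCING,
				    resched_name, NULL);
	if (rc < 0)
		goto fail;
	per_cpu(xen_resched_irq, cpu).irq = rc;
	per_cpu(xen_resched_irq, cpu).name = resched_name;

	/* ... and on teardown: */
	kfree(per_cpu(xen_resched_irq, cpu).name);
	per_cpu(xen_resched_irq, cpu).name = NULL;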
|
ee336e10d5650d408efb66f634d462b9eb39c191 |
|
04-Jun-2013 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
xen/smp: Set the per-cpu IRQ number to a valid default. When we free it we want to make sure to set it to a default value of -1 so that we don't double-free it (in case somebody calls us twice). Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
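A sketch of the freeing side: unbind, then park the line at -1 so a repeated teardown is a no-op rather than a double free.

	if (per_cpu(xen_resched_irq, cpu).irq >= 0) {
		unbind_from_irqhandler(per_cpu(xen_resched_irq, cpu).irq, NULL);
		per_cpu(xen_resched_irq, cpu).irq = -1;
	}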
|
9547689fcdf0b223967edcbbe588d9f0489ee5aa |
|
04-Jun-2013 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
xen/smp: Introduce a common structure to contain the IRQ name and interrupt line. This patch adds a new structure to contain the common two things that each of the per-cpu interrupts need: - an interrupt number, - and the name of the interrupt (to be added in 'xen/smp: Don't leak interrupt name when offlining'). This allows us to carry the tuple of the per-cpu interrupt data structure and expand it as we need in the future. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
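The structure itself is tiny (the -1 default for the irq field arrives with the companion 'valid default' patch above):

	struct xen_common_irq {
		int irq;
		char *name;
	};
	static DEFINE_PER_CPU(struct xen_common_irq, xen_resched_irq) = { .irq = -1 };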
|
53b94fdc8fa0ccd88f97b72a6149672d7ddc0c50 |
|
04-Jun-2013 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
xen/smp: Coalesce the free_irq calls in one function. There are two functions that do a bunch of 'free_irq' on the per_cpu IRQ. Instead of having duplicate code just move it to one function. This is just code movement. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
466318a87f28cb3ba0d08a3b7ef1a37ae73d5aa7 |
|
03-Jun-2013 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
xen/smp: Fixup NOHZ per cpu data when onlining an offline CPU. The xen_play_dead is an undead function. When the vCPU is told to go offline it ends up calling xen_play_dead, wherein it calls the VCPUOP_down hypercall which offlines the vCPU. However, when the vCPU is onlined back, it resumes execution right after the VCPUOP_down hypercall. That was OK (albeit the API for play_dead assumes that the CPU stays dead and never returns) but with commit 4b0c0f294 (tick: Cleanup NOHZ per cpu data on cpu down) that is no longer safe as said commit resets the ts->inidle which at the start of the cpu_idle loop was set. The net effect is that we get this warn: Broke affinity for irq 16 installing Xen timer for CPU 1 cpu 1 spinlock event irq 48 ------------[ cut here ]------------ WARNING: at /home/konrad/linux-linus/kernel/time/tick-sched.c:935 tick_nohz_idle_exit+0x195/0x1b0() Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs CPU: 1 PID: 0 Comm: swapper/1 Not tainted 3.10.0-rc3upstream-00068-gdcdbe33 #1 Hardware name: BIOSTAR Group N61PB-M2S/N61PB-M2S, BIOS 6.00 PG 09/03/2009 ffffffff8193b448 ffff880039da5e60 ffffffff816707c8 ffff880039da5ea0 ffffffff8108ce8b ffff880039da4010 ffff88003fa8e500 ffff880039da4010 0000000000000001 ffff880039da4000 ffff880039da4010 ffff880039da5eb0 Call Trace: [<ffffffff816707c8>] dump_stack+0x19/0x1b [<ffffffff8108ce8b>] warn_slowpath_common+0x6b/0xa0 [<ffffffff8108ced5>] warn_slowpath_null+0x15/0x20 [<ffffffff810e4745>] tick_nohz_idle_exit+0x195/0x1b0 [<ffffffff810da755>] cpu_startup_entry+0x205/0x250 [<ffffffff81661070>] cpu_bringup_and_idle+0x13/0x15 ---[ end trace 915c8c486004dda1 ]--- b/c ts->inidle is set to zero. Thomas suggested that we just add a workaround to call tick_nohz_idle_enter before returning from xen_play_dead() - and that is what this patch does and fixes the issue. We also add the stable part b/c git commit 4b0c0f294 is on the stable tree. CC: stable@vger.kernel.org Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
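A sketch of the fixed xen_play_dead(): when the offlined vCPU is onlined again, it resumes right after the VCPUOP_down hypercall, so the NOHZ idle state torn down on offline has to be re-entered before returning into the idle loop.

	static void xen_play_dead(void)
	{
		play_dead_common();
		HYPERVISOR_vcpu_op(VCPUOP_down, smp_processor_id(), NULL);
		cpu_bringup();
		/* Balance the ts->inidle reset done by commit 4b0c0f294 on
		 * cpu down; we return from the "dead" state into cpu_idle. */
		tick_nohz_idle_enter();
	}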
|
1db01b4903639fcfaec213701a494fe3fb2c490b |
|
08-May-2013 |
Stefan Bader <stefan.bader@canonical.com> |
xen: Clean up apic ipi interface Commit f447d56d36af18c5104ff29dcb1327c0c0ac3634 introduced the implementation of the PV apic ipi interface. But there were some odd things (none of which seem to cause any real issue, but they should be cleaned up anyway): - xen_send_IPI_mask_allbutself (and by that xen_send_IPI_allbutself) ignore the passed-in vector and only use the CALL_FUNCTION_SINGLE vector, while xen_send_IPI_all and xen_send_IPI_mask use the vector. - physflat_send_IPI_allbutself is declared unnecessarily. It is never used. This patch cleans up those things. Signed-off-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
b12abaa192c4340de50ddd86853b3583c255c449 |
|
09-Apr-2013 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
xen/smp: Unify some of the PV and PVHVM offline CPU paths The "xen_cpu_die" and "xen_hvm_cpu_die" functions are very similar. Let's coalesce them. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
27d8b207f0dbc19b35e504f5e631f00461dba7f9 |
|
16-Apr-2013 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
xen/smp/pvhvm: Don't initialize IRQ_WORKER as we are using the native one. There is no need to use the PV version of the IRQ_WORKER mechanism as under PVHVM we are using the native version. The native version is using the SMP API. They just sit around unused: 69: 0 0 xen-percpu-ipi irqwork0 83: 0 0 xen-percpu-ipi irqwork1 Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
66ff0fe9e7bda8aec99985b24daad03652f7304e |
|
16-Apr-2013 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
xen/smp/spinlock: Fix leakage of the spinlock interrupt line for every CPU online/offline While we don't use the spinlock interrupt line (see for details commit f10cd522c5fbfec9ae3cc01967868c9c2401ed23 - xen: disable PV spinlocks on HVM) - we should still do the proper init / deinit sequence. We did not do that correctly and for the CPU init for PVHVM guest we would allocate an interrupt line - but failed to deallocate the old interrupt line. This resulted in leakage of an irq_desc but more importantly this splat as we online an offlined CPU: genirq: Flags mismatch irq 71. 0002cc20 (spinlock1) vs. 0002cc20 (spinlock1) Pid: 2542, comm: init.late Not tainted 3.9.0-rc6upstream #1 Call Trace: [<ffffffff811156de>] __setup_irq+0x23e/0x4a0 [<ffffffff81194191>] ? kmem_cache_alloc_trace+0x221/0x250 [<ffffffff811161bb>] request_threaded_irq+0xfb/0x160 [<ffffffff8104c6f0>] ? xen_spin_trylock+0x20/0x20 [<ffffffff813a8423>] bind_ipi_to_irqhandler+0xa3/0x160 [<ffffffff81303758>] ? kasprintf+0x38/0x40 [<ffffffff8104c6f0>] ? xen_spin_trylock+0x20/0x20 [<ffffffff810cad35>] ? update_max_interval+0x15/0x40 [<ffffffff816605db>] xen_init_lock_cpu+0x3c/0x78 [<ffffffff81660029>] xen_hvm_cpu_notify+0x29/0x33 [<ffffffff81676bdd>] notifier_call_chain+0x4d/0x70 [<ffffffff810bb2a9>] __raw_notifier_call_chain+0x9/0x10 [<ffffffff8109402b>] __cpu_notify+0x1b/0x30 [<ffffffff8166834a>] _cpu_up+0xa0/0x14b [<ffffffff816684ce>] cpu_up+0xd9/0xec [<ffffffff8165f754>] store_online+0x94/0xd0 [<ffffffff8141d15b>] dev_attr_store+0x1b/0x20 [<ffffffff81218f44>] sysfs_write_file+0xf4/0x170 [<ffffffff811a2864>] vfs_write+0xb4/0x130 [<ffffffff811a302a>] sys_write+0x5a/0xa0 [<ffffffff8167ada9>] system_call_fastpath+0x16/0x1b cpu 1 spinlock event irq -16 smpboot: Booting Node 0 Processor 1 APIC 0x2 And if one looks at the /proc/interrupts right after offlining (CPU1): 70: 0 0 xen-percpu-ipi spinlock0 71: 0 0 xen-percpu-ipi spinlock1 77: 0 0 xen-percpu-ipi spinlock2 There is the oddity of the 'spinlock1' still being present. CC: stable@vger.kernel.org Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
888b65b4bc5e7fcbbb967023300cd5d44dba1950 |
|
16-Apr-2013 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
xen/smp: Fix leakage of timer interrupt line for every CPU online/offline. In the PVHVM path, when we go through the CPU online/offline path we would leak the timer%d IRQ line every time we do an offline event. The online path (xen_hvm_setup_cpu_clockevents via x86_cpuinit.setup_percpu_clockev) would allocate a new interrupt line for the timer%d. But we would still use the old interrupt line leading to: kernel BUG at /home/konrad/ssd/konrad/linux/kernel/hrtimer.c:1261! invalid opcode: 0000 [#1] SMP RIP: 0010:[<ffffffff810b9e21>] [<ffffffff810b9e21>] hrtimer_interrupt+0x261/0x270 .. snip.. <IRQ> [<ffffffff810445ef>] xen_timer_interrupt+0x2f/0x1b0 [<ffffffff81104825>] ? stop_machine_cpu_stop+0xb5/0xf0 [<ffffffff8111434c>] handle_irq_event_percpu+0x7c/0x240 [<ffffffff811175b9>] handle_percpu_irq+0x49/0x70 [<ffffffff813a74a3>] __xen_evtchn_do_upcall+0x1c3/0x2f0 [<ffffffff813a760a>] xen_evtchn_do_upcall+0x2a/0x40 [<ffffffff8167c26d>] xen_hvm_callback_vector+0x6d/0x80 <EOI> [<ffffffff81666d01>] ? start_secondary+0x193/0x1a8 [<ffffffff81666cfd>] ? start_secondary+0x18f/0x1a8 There is also the oddity (timer1) in the /proc/interrupts after offlining CPU1: 64: 1121 0 xen-percpu-virq timer0 78: 0 0 xen-percpu-virq timer1 84: 0 2483 xen-percpu-virq timer2 This patch fixes it. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> CC: stable@vger.kernel.org
|
7d1a941731fabf27e5fb6edbebb79fe856edb4e5 |
|
21-Mar-2013 |
Thomas Gleixner <tglx@linutronix.de> |
x86: Use generic idle loop Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Paul McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Cc: Magnus Damm <magnus.damm@gmail.com> Link: http://lkml.kernel.org/r/20130321215235.486594473@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: x86@kernel.org
|
dacd45f4e793e46e8299c9a580e400866ffe0770 |
|
22-Oct-2012 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
xen/smp: Move the common CPU init code a bit to prep for PVH patch. The PV and PVH CPU init code share some functionality. The PVH code ("xen/pvh: Extend vcpu_guest_context, p2m, event, and XenBus") sets some of these up, but not all. To make it easier to read, this patch moves the PV-specific code out of the generic path. No functional change - just code movement. Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> [v2: Fixed compile errors noticed by Fengguang Wu build system] Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
d55bf532d72b3cfdfe84e696ace995067324c96c |
|
16-Jan-2013 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
Revert "xen/smp: Fix CPU online/offline bug triggering a BUG: scheduling while atomic." This reverts commit 41bd956de3dfdc3a43708fe2e0c8096c69064a1e. The fix is incorrect and not appropiate for the latest kernels. In fact it _causes_ the BUG: scheduling while atomic while doing vCPU hotplug. Suggested-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
06d0b5d9edcecccab45588a472cd34af2608e665 |
|
18-Dec-2012 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
xen/smp: Use smp_store_boot_cpu_info() to store cpu info for BSP during boot time. Git commit 30106c174311b8cfaaa3186c7f6f9c36c62d17da ("x86, hotplug: Support functions for CPU0 online/offline") alters what the call to smp_store_cpu_info() does. For BSP we should use the smp_store_boot_cpu_info() and for secondary CPU's the old variant of smp_store_cpu_info() should be used. This fixes the regression introduced by said commit. Reported-and-Tested-by: Sander Eikelenboom <linux@eikelenboom.it> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
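The split is simply (illustrative placement):

	/* On the BSP, in xen_smp_prepare_cpus(): */
	smp_store_boot_cpu_info();

	/* On each secondary CPU, in cpu_bringup(): */
	smp_store_cpu_info(smp_processor_id());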
|
816afe4ff98ee10b1d30fd66361be132a0a5cee6 |
|
06-Aug-2012 |
Rusty Russell <rusty@rustcorp.com.au> |
x86/smp: Don't ever patch back to UP if we unplug cpus We still patch SMP instructions to UP variants if we boot with a single CPU, but not at any other time. In particular, not if we unplug CPUs to return to a single cpu. Paul McKenney points out: mean offline overhead is 6251/48=130.2 milliseconds. If I remove the alternatives_smp_switch() from the offline path [...] the mean offline overhead is 550/42=13.1 milliseconds Basically, we're never going to get those 120ms back, and the code is pretty messy. We get rid of: 1) The "smp-alt-once" boot option. It's actually "smp-alt-boot", the documentation is wrong. It's now the default. 2) The skip_smp_alternatives flag used by suspend. 3) arch_disable_nonboot_cpus_begin() and arch_disable_nonboot_cpus_end() which were only used to set this one flag. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: Paul McKenney <paul.mckenney@us.ibm.com> Cc: Suresh Siddha <suresh.b.siddha@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/87vcgwwive.fsf@rustcorp.com.au Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
3b6f70fd7dd4e19fc674ec99e389bf0da5589525 |
|
29-May-2012 |
Yong Zhang <yong.zhang0@gmail.com> |
x86/smp: Remove calls to ipi_call_lock()/ipi_call_unlock() ipi_call_lock() and ipi_call_unlock() lock and unlock, respectively, call_function.lock. This lock protects only the call_function data structure itself, but it's completely unrelated to cpu_online_mask. The mask to which the IPIs are sent is calculated before call_function.lock is taken in smp_call_function_many(), so the locking around set_cpu_online() is pointless and can be removed. [ tglx: Massaged changelog ] Signed-off-by: Yong Zhang <yong.zhang0@gmail.com> Cc: ralf@linux-mips.org Cc: sshtylyov@mvista.com Cc: david.daney@cavium.com Cc: nikunj@linux.vnet.ibm.com Cc: paulmck@linux.vnet.ibm.com Cc: axboe@kernel.dk Cc: peterz@infradead.org Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: http://lkml.kernel.org/r/1338275765-3217-7-git-send-email-yong.zhang0@gmail.com Acked-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Acked-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
2f1bd67d544d3c086fb5101513f4b6c8f4291b43 |
|
21-May-2012 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
xen/smp: unbind irqworkX when unplugging vCPUs. The git commit 1ff2b0c303698e486f1e0886b4d9876200ef8ca5 "xen: implement IRQ_WORK_VECTOR handler" added the functionality to have a per-cpu "irqworkX" for the IPI APIC functionality. However it missed the unbind when a vCPU is unplugged, resulting in an orphaned per-cpu interrupt line for the unplugged vCPU: 30: 216 0 xen-dyn-event hvc_console 31: 810 4 xen-dyn-event eth0 32: 29 0 xen-dyn-event blkif - 36: 0 0 xen-percpu-ipi irqwork2 - 37: 287 0 xen-dyn-event xenbus + 36: 287 0 xen-dyn-event xenbus NMI: 0 0 Non-maskable interrupts LOC: 0 0 Local timer interrupts SPU: 0 0 Spurious interrupts Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
1ff2b0c303698e486f1e0886b4d9876200ef8ca5 |
|
20-Apr-2012 |
Lin Ming <mlin@ss.pku.edu.cn> |
xen: implement IRQ_WORK_VECTOR handler Signed-off-by: Lin Ming <mlin@ss.pku.edu.cn> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
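The handler is minimal; its shape in the patch is essentially:

	static irqreturn_t xen_irq_work_interrupt(int irq, void *dev_id)
	{
		irq_work_run();
		inc_irq_stat(apic_irq_work_irqs);

		return IRQ_HANDLED;
	}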
|
f447d56d36af18c5104ff29dcb1327c0c0ac3634 |
|
20-Apr-2012 |
Ben Guthro <ben@guthro.net> |
xen: implement apic ipi interface Map native ipi vector to xen vector. Implement apic ipi interface with xen_send_IPI_one. Tested-by: Steven Noonan <steven@uplinklabs.net> Signed-off-by: Ben Guthro <ben@guthro.net> Signed-off-by: Lin Ming <mlin@ss.pku.edu.cn> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
cf405ae612b0f7e2358db7ff594c0e94846137aa |
|
26-Apr-2012 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
xen/smp: Fix crash when booting with ACPI hotplug CPUs. When we boot on a machine that can hotplug CPUs and we are using 'dom0_max_vcpus=X' on the Xen hypervisor line to clip the amount of CPUs available to the initial domain, we get this: (XEN) Command line: com1=115200,8n1 dom0_mem=8G noreboot dom0_max_vcpus=8 sync_console mce_verbosity=verbose console=com1,vga loglvl=all guest_loglvl=all .. snip.. DMI: Intel Corporation S2600CP/S2600CP, BIOS SE5C600.86B.99.99.x032.072520111118 07/25/2011 .. snip. SMP: Allowing 64 CPUs, 32 hotplug CPUs installing Xen timer for CPU 7 cpu 7 spinlock event irq 361 NMI watchdog: disabled (cpu7): hardware events not enabled Brought up 8 CPUs .. snip.. [acpi processor finds the CPUs are not initialized and starts calling arch_register_cpu, which creates /sys/devices/system/cpu/cpu8/online] CPU 8 got hotplugged CPU 9 got hotplugged CPU 10 got hotplugged .. snip.. initcall 1_acpi_battery_init_async+0x0/0x1b returned 0 after 406 usecs calling erst_init+0x0/0x2bb @ 1 [and the scheduler sticks newly started tasks on the new CPUs, but said CPUs cannot be initialized b/c the hypervisor has limited the amount of vCPUS to 8 - as per the dom0_max_vcpus=8 flag. The spinlock tries to kick the other CPU, but the structure for that is not initialized and we crash.] BUG: unable to handle kernel paging request at fffffffffffffed8 IP: [<ffffffff81035289>] xen_spin_lock+0x29/0x60 PGD 180d067 PUD 180e067 PMD 0 Oops: 0002 [#1] SMP CPU 7 Modules linked in: Pid: 1, comm: swapper/0 Not tainted 3.4.0-rc2upstream-00001-gf5154e8 #1 Intel Corporation S2600CP/S2600CP RIP: e030:[<ffffffff81035289>] [<ffffffff81035289>] xen_spin_lock+0x29/0x60 RSP: e02b:ffff8801fb9b3a70 EFLAGS: 00010282 With this patch, we cap the amount of vCPUS that the initial domain can run, to exactly what dom0_max_vcpus=X has specified. In the future, if there is a hypercall that will allow a running domain to expand past its initial set of vCPUS, this patch should be re-evaluated. CC: stable@kernel.org Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
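A sketch of the filtering the patch introduces: ask the hypervisor which VCPUs can actually run and clip the possible/present masks so ACPI hotplug cannot conjure up vCPUs past dom0_max_vcpus (close to the shape of the final helper, but not verbatim):

	static void __init xen_filter_cpu_maps(void)
	{
		int i, rc;

		if (!xen_initial_domain())
			return;

		for (i = 0; i < nr_cpu_ids; i++) {
			rc = HYPERVISOR_vcpu_op(VCPUOP_is_up, i, NULL);
			if (rc >= 0) {
				num_processors++;
				set_cpu_possible(i, true);
			} else {
				set_cpu_possible(i, false);
				set_cpu_present(i, false);
			}
		}
	}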
|
7eb43a6d232bfa46464b501cd1987ec2d705d8cf |
|
20-Apr-2012 |
Thomas Gleixner <tglx@linutronix.de> |
x86: Use generic idle thread allocation Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: x86@kernel.org Link: http://lkml.kernel.org/r/20120420124557.246929343@linutronix.de
|
5cdaf1834f43b0edc4a3aa683aa4ec98f6bfe8a7 |
|
20-Apr-2012 |
Thomas Gleixner <tglx@linutronix.de> |
x86: Add task_struct argument to smp_ops.cpu_up Preparatory patch to use the generic idle thread allocation. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: x86@kernel.org Link: http://lkml.kernel.org/r/20120420124557.176604405@linutronix.de
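The op's new shape (the idle task is now handed in rather than looked up by each implementation; the parameter name is as in the patch series):

	int (*cpu_up)(unsigned int cpu, struct task_struct *tidle);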
|
e8c9e788f493d3236809e955c9fc12625a461e09 |
|
22-Mar-2012 |
Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> |
xen/smp: Remove unnecessary call to smp_processor_id() There is an extra and unnecessary call to smp_processor_id() in cpu_bringup(). Remove it. Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
106b44388d8f76373149c4ea144f717b6d4d9a6d |
|
21-Mar-2012 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
xen/smp: Fix bringup bug in AP code. The CPU hotplug code has now a callback to help bring up the CPU. Without the call we end up getting: BUG: soft lockup - CPU#0 stuck for 29s! [migration/0:6] Modules linked in: CPU ] Pid: 6, comm: migration/0 Not tainted 3.3.0upstream-01180-ged378a5 #1 Dell Inc. PowerEdge T105 /0RR825 RIP: e030:[<ffffffff810d3b8b>] [<ffffffff810d3b8b>] stop_machine_cpu_stop+0x7b/0xf0 RSP: e02b:ffff8800ceaabdb0 EFLAGS: 00000293 .. snip.. Call Trace: [<ffffffff810d3b10>] ? stop_one_cpu_nowait+0x50/0x50 [<ffffffff810d3841>] cpu_stopper_thread+0xf1/0x1c0 [<ffffffff815a9776>] ? __schedule+0x3c6/0x760 [<ffffffff815aa749>] ? _raw_spin_unlock_irqrestore+0x19/0x30 [<ffffffff810d3750>] ? res_counter_charge+0x150/0x150 [<ffffffff8108dc76>] kthread+0x96/0xa0 [<ffffffff815b27e4>] kernel_thread_helper+0x4/0x10 [<ffffffff815aacbc>] ? retint_restore_ar This fixes it. Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
41bd956de3dfdc3a43708fe2e0c8096c69064a1e |
|
01-Feb-2012 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
xen/smp: Fix CPU online/offline bug triggering a BUG: scheduling while atomic. When a user offlines a VCPU and then onlines it, we get: NMI watchdog disabled (cpu2): hardware events not enabled BUG: scheduling while atomic: swapper/2/0/0x00000002 Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c radeon fbco ttm bitblit softcursor drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs [last unloaded: Pid: 0, comm: swapper/2 Tainted: G O 3.2.0phase15.1-00003-gd6f7f5b-dirty #4 Call Trace: [<ffffffff81070571>] __schedule_bug+0x61/0x70 [<ffffffff8158eb78>] __schedule+0x798/0x850 [<ffffffff8158ed6a>] schedule+0x3a/0x50 [<ffffffff810349be>] cpu_idle+0xbe/0xe0 [<ffffffff81583599>] cpu_bringup_and_idle+0xe/0x10 The reason for this should be obvious from this call-chain: cpu_bringup_and_idle: \- cpu_bringup | \-[preempt_disable] | |- cpu_idle \- play_dead [assuming the user offlined the VCPU] | \ | +- (xen_play_dead) | \- HYPERVISOR_VCPU_off [so VCPU is dead, once user | | onlines it starts from here] | \- cpu_bringup [preempt_disable] | +- preempt_enable_no_reschedule() +- schedule() \- preempt_enable() So we have two preempt_disable() calls and one preempt_enable(). Calling preempt_enable() after the cpu_bringup() in xen_play_dead fixes the imbalance. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
2113f4691663f033189bf43d7501c6d29cd685a5 |
|
13-Jan-2012 |
Alex Shi <alex.shi@intel.com> |
xen: use this_cpu_xxx to replace percpu_xxx funcs percpu_xxx funcs are duplicated with this_cpu_xxx funcs, so replace them for further code clean-up. I don't know much of the xen code, but since the code is in the x86 architecture, the percpu_xxx functions are exactly the same as the this_cpu_xxx series of functions. So, the change is safe. Signed-off-by: Alex Shi <alex.shi@intel.com> Acked-by: Christoph Lameter <cl@gentwo.org> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
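A representative hunk (illustrative; xen_vcpu is the per-cpu pointer used throughout arch/x86/xen):

	-	struct vcpu_info *vcpu = percpu_read(xen_vcpu);
	+	struct vcpu_info *vcpu = this_cpu_read(xen_vcpu);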
|
f10cd522c5fbfec9ae3cc01967868c9c2401ed23 |
|
06-Sep-2011 |
Stefano Stabellini <stefano.stabellini@eu.citrix.com> |
xen: disable PV spinlocks on HVM PV spinlocks cannot possibly work with the current code because they are enabled after pvops patching has already been done, and because PV spinlocks use a different data structure than native spinlocks so we cannot switch between them dynamically. A spinlock that has been taken once by the native code (__ticket_spin_lock) cannot be taken by __xen_spin_lock even after it has been released. Reported-and-Tested-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
ed467e69f16e6b480e2face7bc5963834d025f91 |
|
01-Sep-2011 |
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> |
xen/smp: Warn user why they keel over - nosmp or noapic and what to use instead. We have hit a couple of customer bugs where they would like to use those parameters to run a UP kernel - but both of those options turn off important sources of interrupt information so we end up not being able to boot. The correct way is to pass in 'dom0_max_vcpus=1' on the Xen hypervisor line and the kernel will patch itself to be a UP kernel. Fixes bug: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=637308 CC: stable@kernel.org Acked-by: Ian Campbell <Ian.Campbell@eu.citrix.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
3c05c4bed4ccce3f22f6d7899b308faae24ad198 |
|
17-Aug-2011 |
Stefano Stabellini <stefano.stabellini@eu.citrix.com> |
xen: Do not enable PV IPIs when vector callback not present Fix regression for HVM case on older (<4.1.1) hypervisors caused by commit 99bbb3a84a99cd04ab16b998b20f01a72cfa9f4f Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Date: Thu Dec 2 17:55:10 2010 +0000 xen: PV on HVM: support PV spinlocks and IPIs This change replaced the SMP operations with event based handlers without taking into account that this only works when the hypervisor supports callback vectors. This causes unexplainable hangs early on boot for HVM guests with more than one CPU. BugLink: http://bugs.launchpad.net/bugs/791850 CC: stable@kernel.org Signed-off-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Tested-and-Reported-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
900cba8881b39dfbc7c8062098504ab93f5387a8 |
|
18-Dec-2009 |
Andrew Jones <drjones@redhat.com> |
xen: support CONFIG_MAXSMP The MAXSMP config option requires CPUMASK_OFFSTACK, which in turn requires we init the memory for the maps while we bring up the cpus. MAXSMP also increases NR_CPUS to 4096. This increase in size exposed an issue in the argument construction for multicalls from xen_flush_tlb_others. The args should only need space for the actual number of cpus. Also in 2.6.39 it exposes a bootup problem. BUG: unable to handle kernel NULL pointer dereference at (null) IP: [<ffffffff8157a1d3>] set_cpu_sibling_map+0x123/0x30d ... Call Trace: [<ffffffff81039a3f>] ? xen_restore_fl_direct_reloc+0x4/0x4 [<ffffffff819dc4db>] xen_smp_prepare_cpus+0x36/0x135 .. CC: stable@kernel.org Signed-off-by: Andrew Jones <drjones@redhat.com> [v2: Updated to compile on 3.0] [v3: Updated to compile when CONFIG_SMP is not defined] Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
b53cedebd74918237176520f9157deb7ae066b71 |
|
04-May-2011 |
Daniel Kiper <dkiper@net-space.pl> |
arch/x86/xen/smp: Cleanup code/data sections definitions Cleanup code/data sections definitions accordingly to include/linux/init.h. Signed-off-by: Daniel Kiper <dkiper@net-space.pl> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
184748cc50b2dceb8287f9fb657eda48ff8fcfe7 |
|
05-Apr-2011 |
Peter Zijlstra <a.p.zijlstra@chello.nl> |
sched: Provide scheduler_ipi() callback in response to smp_send_reschedule() For future rework of try_to_wake_up() we'd like to push part of that function onto the CPU the task is actually going to run on. In order to do so we need a generic callback from the existing scheduler IPI. This patch introduces such a generic callback: scheduler_ipi() and implements it as a NOP. BenH notes: PowerPC might use this IPI on offline CPUs under rare conditions! Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Acked-by: Chris Metcalf <cmetcalf@tilera.com> Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Reviewed-by: Frank Rowand <frank.rowand@am.sony.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Nick Piggin <npiggin@kernel.dk> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20110405152728.744338123@chello.nl
|
99bbb3a84a99cd04ab16b998b20f01a72cfa9f4f |
|
02-Dec-2010 |
Stefano Stabellini <stefano.stabellini@eu.citrix.com> |
xen: PV on HVM: support PV spinlocks and IPIs Initialize PV spinlocks on the boot CPU right after native_smp_prepare_cpus (which switches to APIC mode and initializes APIC routing); on secondary CPUs, on CPU_UP_PREPARE. Enable the usage of event channels to send and receive IPIs when running as a PV on HVM guest. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
|
ea5b8f73933e34d2b47a65284c46d26d49e7edb9 |
|
26-Oct-2010 |
Stefano Stabellini <stefano.stabellini@eu.citrix.com> |
xen: initialize cpu masks for pv guests in xen_smp_init PV guests don't have ACPI and need the cpu masks to be set correctly as early as possible, so we call xen_fill_possible_map from xen_smp_init. On the other hand the initial domain supports ACPI, so in this case we skip xen_fill_possible_map and rely on ACPI. However Xen might limit the number of cpus usable by the domain, so we filter those masks during smp initialization using the VCPUOP_is_up hypercall. It is important that the filtering is done before xen_setup_vcpu_info_placement. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
|
801fd14a725ef7757d33f07b83415cdd2165e50a |
|
23-Sep-2010 |
Stefano Stabellini <stefano.stabellini@eu.citrix.com> |
xen: use vcpu_ops to setup cpu masks Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
76fac077db6b34e2c6383a7b4f3f4f7b7d06d8ce |
|
11-Oct-2010 |
Alok Kataria <akataria@vmware.com> |
x86, kexec: Make sure to stop all CPUs before exiting the kernel x86 smp_ops now has a new op, stop_other_cpus, which takes a parameter "wait"; this allows the caller to specify whether it wants to wait until all the cpus have processed the stop IPI. This is required specifically for the kexec case, where we should wait for all the cpus to be stopped before starting the new kernel. We now wait for the cpus to stop in all cases except for panic/kdump, where we expect things to be broken and we are doing our best to make things work anyway. This patch fixes a legitimate regression, which was introduced during 2.6.30, by commit id 4ef702c10b5df18ab04921fc252c26421d4d6c75. Signed-off-by: Alok N Kataria <akataria@vmware.com> LKML-Reference: <1286833028.1372.20.camel@ank32.eng.vmware.com> Cc: Eric W. Biederman <ebiederm@xmission.com> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: <stable@kernel.org> v2.6.30-36 Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
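For the Xen flavor the op reduces to a one-liner, with the new parameter forwarded to smp_call_function() as its wait flag (stop_self being xen/smp.c's local shutdown helper):

	static void xen_stop_other_cpus(int wait)
	{
		smp_call_function(stop_self, NULL, wait);
	}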
|
086748e52fb072ff0935ba4512e29c421bd5b716 |
|
03-Aug-2010 |
Ian Campbell <ian.campbell@citrix.com> |
xen/panic: use xen_reboot and fix smp_send_stop Offline vcpu when using stop_self. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
|
5a0e3ad6af8660be21ca98a971cd00f331318c05 |
|
24-Mar-2010 |
Tejun Heo <tj@kernel.org> |
include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h which in turn includes gfp.h making everything defined by the two files universally available and complicating inclusion dependencies. The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of conversion. http://userweb.kernel.org/~tj/misc/slabh-sweep.py The script does the following. * Scan files for gfp and slab usages and update includes such that only the necessary includes are there. i.e. if only gfp is used, gfp.h; if slab is used, slab.h. * When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree or at the end if there doesn't seem to be any matching order. * If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file. The conversion was done in the following steps. 1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files. 2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition while adding it to implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files. 3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind. 4. Several build tests were done and a couple of problems were fixed. e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs requiring slab.h to be added manually. 5. The script was run on all .h files but without automatically editing them as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary. 6. percpu.h was updated not to include slab.h. 7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64 which failed due to missing writeq). * x86 and x86_64 UP and SMP allmodconfig and a custom test config. * powerpc and powerpc64 SMP allmodconfig * sparc and sparc64 SMP allmodconfig * ia64 SMP allmodconfig * s390 SMP allmodconfig * alpha SMP allmodconfig * um on x86_64 SMP allmodconfig 8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as bisection point. 
Given that I had only a couple of failures from the tests in step 6, I'm fairly confident about the coverage of this conversion patch. If there is breakage, it's likely to be something in one of the arch headers, which should be easily discoverable on most builds of the specific arch. Signed-off-by: Tejun Heo <tj@kernel.org> Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
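The conversion pattern in miniature, as a hedged illustration (the file contents and symbol names here are invented, not taken from the patch):

    /* Before this sweep, a file like this compiled only because percpu.h
     * dragged in slab.h and gfp.h implicitly.  After it, the dependencies
     * are stated directly. */
    #include <linux/percpu.h>
    #include <linux/slab.h>         /* kmalloc(), kfree() -- now explicit */
    #include <linux/gfp.h>          /* GFP_KERNEL         -- now explicit */
    #include <linux/errno.h>

    static void *example_buf;       /* illustrative symbol */

    static int example_init(void)
    {
            example_buf = kmalloc(128, GFP_KERNEL);
            return example_buf ? 0 : -ENOMEM;
    }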
|
71709247aa852b5c4a01e70a9186590800d15575 |
|
28-Dec-2009 |
Robert P. J. Day <rpjday@crashcourse.ca> |
xen: Fix misspelled CONFIG variable in comment. Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
028896721ac04f6fa0697f3ecac3f98761746363 |
|
24-Nov-2009 |
Ian Campbell <ian.campbell@citrix.com> |
xen: register runstate on secondary CPUs The commit "xen: re-register runstate area earlier on resume" caused us to never try to set up the runstate area for secondary CPUs. Ensure that we do this... Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: Stable Kernel <stable@kernel.org>
|
d7d3756c5b1277fafd132ce7a2211b388c3b5bd2 |
|
03-Nov-2009 |
Rusty Russell <rusty@rustcorp.com.au> |
cpumask: Use modern cpumask style in Xen Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> LKML-Reference: <200911031458.38406.rusty@rustcorp.com.au> Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
c6e22f9e3e99cc221fe01a0cacf94a9da8a59c31 |
|
29-Oct-2009 |
Tejun Heo <tj@kernel.org> |
percpu: make percpu symbols in xen unique This patch updates percpu related symbols in xen such that percpu symbols are unique and don't clash with local symbols. This serves two purposes of decreasing the possibility of global percpu symbol collision and allowing dropping per_cpu__ prefix from percpu symbols. * arch/x86/xen/smp.c, arch/x86/xen/time.c, arch/ia64/xen/irq_xen.c: add xen_ prefix to percpu variables * arch/ia64/xen/time.c: add xen_ prefix to percpu variables, drop processed_ prefix and make them static Partly based on Rusty Russell's "alloc_percpu: rename percpu vars which cause name clashes" patch. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Chris Wright <chrisw@sous-sol.org>
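The renaming pattern, sketched with one of the arch/x86/xen/smp.c variables the commit touches (before/after per the description above):

    /* before: a generic percpu name that can collide with a local symbol
     * once the per_cpu__ prefix is dropped */
    static DEFINE_PER_CPU(int, resched_irq);

    /* after: the xen_ prefix keeps the percpu symbol globally unique */
    static DEFINE_PER_CPU(int, xen_resched_irq);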
|
577eebeae34d340685d8985dfdb7dfe337c511e8 |
|
27-Aug-2009 |
Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> |
xen: make -fstack-protector work under Xen -fstack-protector uses a special per-cpu "stack canary" value. gcc generates special code in each function to test the canary to make sure that the function's stack hasn't been overrun. On x86-64, this is simply an offset of %gs, which is the usual per-cpu base segment register, so setting it up simply requires loading %gs's base as normal. On i386, the stack protector segment is %gs (rather than the usual kernel percpu %fs segment register). This requires setting up the full kernel GDT and then loading %gs accordingly. We also need to make sure %gs is initialized when bringing up secondary cpus. To keep things consistent, we do the full GDT/segment register setup on both architectures. Because we need to avoid -fstack-protected code before setting up the GDT and because there's no way to disable it on a per-function basis, several files need to have stack-protector inhibited. [ Impact: allow Xen booting with stack-protector enabled ] Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
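A rough sketch of what -fstack-protector arranges per function, showing why %gs must be usable before any protected code runs. read_gs_canary() is a hypothetical accessor standing in for the %gs-relative load; __stack_chk_fail() is gcc's real failure hook:

    extern void __stack_chk_fail(void);
    extern unsigned long read_gs_canary(void);  /* hypothetical %gs-relative load */

    /* Conceptually, gcc wraps each protected function like this.  The
     * canary reads go through %gs, so on i386 Xen must install a full
     * GDT and load %gs before any such function can run. */
    void example(void)
    {
            unsigned long canary = read_gs_canary();
            char buf[64];

            /* ... body that might overrun buf ... */

            if (canary != read_gs_canary())
                    __stack_chk_fail();
    }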
|
1207cf8eb99d8c699919e352292bdf1f519fbba5 |
|
05-Mar-2009 |
Hannes Eder <hannes@hanneseder.net> |
NULL noise: arch/x86/xen/smp.c Fix these sparse warnings: arch/x86/xen/smp.c:316:52: warning: Using plain integer as NULL pointer arch/x86/xen/smp.c:421:60: warning: Using plain integer as NULL pointer Signed-off-by: Hannes Eder <hannes@hanneseder.net> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
|
e9e2d1ffcfdb38bed11a3064aa74bea9ee38ed80 |
|
05-Mar-2009 |
Hannes Eder <hannes@hanneseder.net> |
NULL noise: arch/x86/xen/smp.c Fix these sparse warnings: arch/x86/xen/smp.c:316:52: warning: Using plain integer as NULL pointer arch/x86/xen/smp.c:421:60: warning: Using plain integer as NULL pointer Signed-off-by: Hannes Eder <hannes@hanneseder.net> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
|
4f0628963c86d2f97b8cb9acc024a7fe288a6a57 |
|
13-Mar-2009 |
Rusty Russell <rusty@rustcorp.com.au> |
cpumask: use new cpumask functions throughout x86 Impact: cleanup 1) &cpu_online_map -> cpu_online_mask 2) first_cpu/next_cpu_nr -> cpumask_first/cpumask_next 3) cpu_*_map manipulation -> init_cpu_* / set_cpu_* Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
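The three substitutions above, gathered into one compilable sketch (the function itself is illustrative):

    #include <linux/cpumask.h>

    static void cpumask_style_sketch(unsigned int cpu)
    {
            /* 1) &cpu_online_map -> cpu_online_mask */
            bool online = cpumask_test_cpu(cpu, cpu_online_mask);

            /* 2) first_cpu()/next_cpu_nr() -> cpumask_first()/cpumask_next() */
            unsigned int first = cpumask_first(cpu_online_mask);
            unsigned int next = cpumask_next(first, cpu_online_mask);

            /* 3) cpu_*_map manipulation -> init_cpu_*()/set_cpu_*() */
            set_cpu_present(cpu, true);

            (void)online; (void)next;
    }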
|
9976b39b5031bbf76f715893cf080b6a17683881 |
|
27-Feb-2009 |
Jeremy Fitzhardinge <jeremy@goop.org> |
xen: deal with virtually mapped percpu data The virtually mapped percpu space causes us two problems: - for hypercalls which take an mfn, we need to do a full pagetable walk to convert the percpu va into an mfn, and - when a hypercall requires a page to be mapped RO via all its aliases, we need to make sure it's RO in both the percpu mapping and in the linear mapping This primarily affects the gdt and the vcpu info structure. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: Xen-devel <xen-devel@lists.xensource.com> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Tejun Heo <htejun@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
383414322b3b3ced0cbc146801e0cc6c60a6c5f4 |
|
02-Feb-2009 |
Jeremy Fitzhardinge <jeremy@goop.org> |
xen: setup percpu data pointers We need to access percpu data fairly early, so set up the percpu registers as soon as possible. We only need to load the appropriate segment register. We already have a GDT, but it's hard to change it early because we need to manipulate the pagetable to do so, and that hasn't been set up yet. Also, set the kernel stack when bringing up secondary CPUs. If we don't, they all end up sharing the same stack... Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
|
795f99b61d20c34cb04d17d8906b32f745a635ec |
|
30-Jan-2009 |
Jeremy Fitzhardinge <jeremy@goop.org> |
xen: setup percpu data pointers Impact: fix xen booting We need to access percpu data fairly early, so set up the percpu registers as soon as possible. We only need to load the appropriate segment register. We already have a GDT, but it's hard to change it early because we need to manipulate the pagetable to do so, and that hasn't been set up yet. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Tejun Heo <tj@kernel.org>
|
b2d2f4312b117a6cc647c8521e2643a88771f757 |
|
26-Jan-2009 |
Brian Gerst <brgerst@gmail.com> |
x86: initialize per-cpu GDT segment in per-cpu setup Impact: cleanup Rename init_gdt() to setup_percpu_segment(), and move it to setup_percpu.c. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org>
|
c6f5e0acd5d12ee23f701f15889872e67b47caa6 |
|
18-Jan-2009 |
Brian Gerst <brgerst@gmail.com> |
x86-64: Move current task from PDA to per-cpu and consolidate with 32-bit. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org>
|
1b437c8c73a36daa471dd54a63c426d72af5723d |
|
18-Jan-2009 |
Brian Gerst <brgerst@gmail.com> |
x86-64: Move irq stats from PDA to per-cpu and consolidate with 32-bit. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org>
|
6dbde3530850d4d8bfc1b6bd4006d92786a2787f |
|
15-Jan-2009 |
Ingo Molnar <mingo@elte.hu> |
percpu: add optimized generic percpu accessors It is an optimization and a cleanup, and adds the following new generic percpu methods: percpu_read() percpu_write() percpu_add() percpu_sub() percpu_and() percpu_or() percpu_xor() and implements support for them on x86. (other architectures will fall back to a default implementation) The advantage is that for example to read a local percpu variable, instead of this sequence: return __get_cpu_var(var); ffffffff8102ca2b: 48 8b 14 fd 80 09 74 mov -0x7e8bf680(,%rdi,8),%rdx ffffffff8102ca32: 81 ffffffff8102ca33: 48 c7 c0 d8 59 00 00 mov $0x59d8,%rax ffffffff8102ca3a: 48 8b 04 10 mov (%rax,%rdx,1),%rax We can get a single instruction by using the optimized variants: return percpu_read(var); ffffffff8102ca3f: 65 48 8b 05 91 8f fd mov %gs:0x7efd8f91(%rip),%rax I also cleaned up the x86-specific APIs and made the x86 code use these new generic percpu primitives. tj: * fixed generic percpu_sub() definition as Roel Kluin pointed out * added percpu_and() for completeness's sake * made generic percpu ops atomic against preemption Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Tejun Heo <tj@kernel.org>
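A usage sketch of the accessors this commit adds (the percpu variable is invented):

    #include <linux/percpu.h>

    DEFINE_PER_CPU(int, demo_counter);      /* illustrative variable */

    static int demo_read(void)
    {
            /* on x86 this lowers to a single %gs-relative mov, as in the
             * disassembly quoted above */
            return percpu_read(demo_counter);
    }

    static void demo_bump(void)
    {
            /* the generic ops are atomic against preemption */
            percpu_add(demo_counter, 1);
    }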
|
1a51e3a0aed18767cf2762e95456ecfeb0bca5e6 |
|
13-Jan-2009 |
Tejun Heo <tj@kernel.org> |
x86: fold pda into percpu area on SMP [ Based on original patch from Christoph Lameter and Mike Travis. ] Currently pdas and percpu areas are allocated separately. %gs points to local pda and percpu area can be reached using pda->data_offset. This patch folds pda into percpu area. Due to a strange gcc requirement, pda needs to be at the beginning of the percpu area so that pda->stack_canary is at %gs:40. To achieve this, a new percpu output section macro - PERCPU_VADDR_PREALLOC() - is added and used to reserve a pda-sized chunk at the start of the percpu area. After this change, for boot cpu, %gs first points to pda in the data.init area and later during setup_per_cpu_areas() gets updated to point to the actual pda. This means that setup_per_cpu_areas() needs to reload %gs for CPU0 while clearing pda area for other cpus as cpu0 has already modified it by the time control reaches setup_per_cpu_areas(). This patch also removes the now unnecessary get_local_pda() and its call sites. A lot of this patch is taken from Mike Travis' "x86_64: Fold pda into per cpu area" patch. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
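The layout constraint behind pda-at-the-front, as a standalone sketch (the struct is illustrative, not the real pda definition):

    #include <stddef.h>

    /* gcc hard-wires the stack-protector canary to %gs:40, so whatever
     * object starts the per-cpu area must keep a field at offset 40 --
     * hence PERCPU_VADDR_PREALLOC() reserving the pda there */
    struct pda_sketch {
            void *pad[5];                   /* offsets 0..39 on x86-64 */
            unsigned long stack_canary;     /* must land at offset 40 */
    };

    _Static_assert(offsetof(struct pda_sketch, stack_canary) == 40,
                   "canary must sit at %gs:40");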
|
bcda016eddd7a8b374bb371473c821a91ff1d8cc |
|
17-Dec-2008 |
Mike Travis <travis@sgi.com> |
x86: cosmetic changes apic-related files. This patch simply changes cpumask_t to struct cpumask and similar trivial modernizations. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com>
|
b78936e14ee47b6b2d628501a0eab5270db80132 |
|
17-Dec-2008 |
Mike Travis <travis@sgi.com> |
xen: convert to cpumask_var_t and new cpumask primitives. Simple change, and eventual space saving when NR_CPUS >> nr_cpu_ids. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com> Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
|
e7986739a76cde5079da08809d8bbc6878387ae0 |
|
17-Dec-2008 |
Mike Travis <travis@sgi.com> |
x86 smp: modify send_IPI_mask interface to accept cpumask_t pointers Impact: cleanup, change parameter passing * Change genapic interfaces to accept cpumask_t pointers where possible. * Modify external callers to use cpumask_t pointers in function calls. * Create new send_IPI_mask_allbutself which is the same as the send_IPI_mask functions but removes smp_processor_id() from list. This removes another common need for a temporary cpumask_t variable. * Functions that used a temp cpumask_t variable for: cpumask_t allbutme = cpu_online_map; cpu_clear(smp_processor_id(), allbutme); if (!cpus_empty(allbutme)) ... become: if (!cpus_equal(cpu_online_map, cpumask_of_cpu(cpu))) ... * Other minor code optimizations (like using cpus_clear instead of CPU_MASK_NONE, etc.) Applies to linux-2.6.tip/master. Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Acked-by: Ingo Molnar <mingo@elte.hu>
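The temporary-mask elimination quoted above, laid out as before/after (the do_send_ipi* callees are stand-ins for the real send routines):

    /* before: a cpumask_t temporary on the stack */
    static void send_before(void)
    {
            cpumask_t allbutme = cpu_online_map;

            cpu_clear(smp_processor_id(), allbutme);
            if (!cpus_empty(allbutme))
                    do_send_ipi(&allbutme);
    }

    /* after: no temporary needed */
    static void send_after(void)
    {
            if (!cpus_equal(cpu_online_map, cpumask_of_cpu(smp_processor_id())))
                    do_send_ipi_allbutself();
    }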
|
df6b07949b6cab9d119363d02ef63379160f6c82 |
|
22-Nov-2008 |
Al Viro <viro@ftp.linux.org.uk> |
xen_play_dead() is __cpuinit Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
26fd10517e810dd59ea050b052de24a75ee6dc07 |
|
08-Sep-2008 |
Alex Nixon <alex.nixon@citrix.com> |
xen: make CPU hotplug functions static There's no need for these functions to be accessed from outside of xen/smp.c Signed-off-by: Alex Nixon <alex.nixon@citrix.com> Acked-by: Jeremy Fitzhardinge <jeremy@goop.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
2737146b3aa2cf8b5d5ae87a18c49fe1c374528b |
|
08-Sep-2008 |
Alex Nixon <alex.nixon@citrix.com> |
x86, xen: fix build when !CONFIG_HOTPLUG_CPU Signed-off-by: Alex Nixon <alex.nixon@citrix.com> Acked-by: Jeremy Fitzhardinge <jeremy@goop.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
d68d82afd4c88e25763b23cd9cd4974573a3706f |
|
22-Aug-2008 |
Alex Nixon <alex.nixon@citrix.com> |
xen: implement CPU hotplugging Note the changes from 2.6.18-xen CPU hotplugging: A vcpu_down request from the remote admin via Xenbus both hotunplugs the CPU and disables it, removing it from the cpu_present map and removing its entry in /sys. A vcpu_up request from the remote admin only re-enables the CPU, and does not immediately bring the CPU up. A udev event is emitted, which can be caught by the user if he wishes to automatically re-up CPUs when available, or implement a more complex policy. Signed-off-by: Alex Nixon <alex.nixon@citrix.com> Acked-by: Jeremy Fitzhardinge <jeremy@goop.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
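A hedged userspace sketch of the "automatically re-up" policy: on the udev event, write to the standard sysfs CPU hotplug file (error handling kept minimal):

    #include <stdio.h>

    static int online_cpu(int cpu)
    {
            char path[64];
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu%d/online", cpu);
            f = fopen(path, "w");
            if (!f)
                    return -1;
            fputs("1", f);
            fclose(f);
            return 0;
    }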
|
d5de8841355a48f7f634a04507185eaf1f9755e3 |
|
23-Jul-2008 |
Jeremy Fitzhardinge <jeremy@goop.org> |
x86: split spinlock implementations out into their own files ftrace requires certain low-level code, like spinlocks and timestamps, to be compiled without -pg in order to avoid infinite recursion. This patch splits out the core paravirt spinlocks and the Xen spinlocks into separate files which can be compiled without -pg. Also do xen/time.c while we're about it. As a result, we can now use ftrace within a Xen domain. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
2d9e1e2f58b5612aa4eab0ab54c84308a29dbd79 |
|
07-Jul-2008 |
Jeremy Fitzhardinge <jeremy@goop.org> |
xen: implement Xen-specific spinlocks The standard ticket spinlocks are very expensive in a virtual environment, because their performance depends on Xen's scheduler giving vcpus time in the order that they're supposed to take the spinlock. This implements a Xen-specific spinlock, which should be much more efficient. The fast-path is essentially the old Linux-x86 locks, using a single lock byte. The locker decrements the byte; if the result is 0, then they have the lock. If the lock is negative, then the locker must spin until the lock is positive again. When there's contention, the locker spins for 2^16[*] iterations waiting to get the lock. If it fails to get the lock in that time, it adds itself to the contention count in the lock and blocks on a per-cpu event channel. When unlocking the spinlock, the locker looks to see if there's anyone blocked waiting for the lock by checking for a non-zero waiter count. If there's a waiter, it traverses the per-cpu "lock_spinners" variable, which contains which lock each CPU is waiting on. It picks one CPU waiting on the lock and sends it an event to wake it up. This allows efficient fast-path spinlock operation, while allowing spinning vcpus to give up their processor time while waiting for a contended lock. [*] 2^16 iterations is the threshold at which 98% of locks have been taken according to Thomas Friebel's Xen Summit talk "Preventing Guests from Spinning Around". Therefore, we'd expect the lock and unlock slow paths will only be entered 2% of the time. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Christoph Lameter <clameter@linux-foundation.org> Cc: Petr Tesarik <ptesarik@suse.cz> Cc: Virtualization <virtualization@lists.linux-foundation.org> Cc: Xen devel <xen-devel@lists.xensource.com> Cc: Thomas Friebel <thomas.friebel@amd.com> Cc: Nick Piggin <nickpiggin@yahoo.com.au> Signed-off-by: Ingo Molnar <mingo@elte.hu>
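The lock algorithm condensed into a sketch. Every helper name here is an assumption rather than one of the kernel's symbols, and the real code uses careful atomics and ordering that this sketch omits:

    struct byte_lock_sketch {
            signed char lock;               /* fast-path lock byte */
            unsigned short spinners;        /* blocked-waiter count */
    };

    static int try_take_lock_byte(signed char *byte);   /* assumed helper */
    static int spin_briefly(signed char *byte);         /* assumed: ~2^16 tries */
    static void block_on_event_channel(void);           /* assumed helper */

    static void byte_lock_sketch(struct byte_lock_sketch *xl)
    {
            for (;;) {
                    /* fast path: decrement the byte; 0 means we own it */
                    if (try_take_lock_byte(&xl->lock))
                            return;

                    /* spin a while hoping the holder releases */
                    if (spin_briefly(&xl->lock))
                            return;

                    /* slow path: record interest, then sleep on this
                     * vcpu's event channel until an unlocker kicks us */
                    xl->spinners++;
                    block_on_event_channel();
                    xl->spinners--;
            }
    }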
|
56397f8dadb40055479a8ffff23f21a890098a31 |
|
07-Jul-2008 |
Jeremy Fitzhardinge <jeremy@goop.org> |
xen: use lock-byte spinlock implementation Switch to using the lock-byte spinlock implementation, to avoid the worst of the performance hit from ticket locks. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Christoph Lameter <clameter@linux-foundation.org> Cc: Petr Tesarik <ptesarik@suse.cz> Cc: Virtualization <virtualization@lists.linux-foundation.org> Cc: Xen devel <xen-devel@lists.xensource.com> Cc: Thomas Friebel <thomas.friebel@amd.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
6fcac6d305e8238939e169f4c52e8ec8a552a31f |
|
09-Jul-2008 |
Jeremy Fitzhardinge <jeremy@goop.org> |
xen64: set up syscall and sysenter entrypoints for 64-bit We set up entrypoints for syscall and sysenter. sysenter is only used for 32-bit compat processes, whereas syscall can be used by both 32- and 64-bit processes. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: Stephen Tweedie <sct@redhat.com> Cc: Eduardo Habkost <ehabkost@redhat.com> Cc: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
4560a2947e32670fc6ede108c2b032c396180649 |
|
09-Jul-2008 |
Jeremy Fitzhardinge <jeremy@goop.org> |
xen: set num_processors Someone's got to do it. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: Stephen Tweedie <sct@redhat.com> Cc: Eduardo Habkost <ehabkost@redhat.com> Cc: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
c7b75947f89d45493562ede6d9ee7311dfa5c4ce |
|
09-Jul-2008 |
Jeremy Fitzhardinge <jeremy@goop.org> |
xen64: smp.c compile hacking A number of random changes to make xen/smp.c compile in 64-bit mode. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: Stephen Tweedie <sct@redhat.com> Cc: Eduardo Habkost <ehabkost@redhat.com> Cc: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
a9e7062d7339f1a1df2b6d7e5d595c7d55b56bfb |
|
09-Jul-2008 |
Jeremy Fitzhardinge <jeremy@goop.org> |
xen: move smp setup into smp.c Move all the smp_ops setup into smp.c, allowing a lot of things to become static. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: Stephen Tweedie <sct@redhat.com> Cc: Eduardo Habkost <ehabkost@redhat.com> Cc: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
8691e5a8f691cc2a4fda0651e8d307aaba0e7d68 |
|
06-Jun-2008 |
Jens Axboe <jens.axboe@oracle.com> |
smp_call_function: get rid of the unused nonatomic/retry argument It's never used and the comments refer to nonatomic and retry interchangeably. So get rid of it. Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
|
3b16cf874861436725c43ba0b68bdd799297be7c |
|
26-Jun-2008 |
Jens Axboe <jens.axboe@oracle.com> |
x86: convert to generic helpers for IPI function calls This converts x86, x86-64, and xen to use the new helpers for smp_call_function() and friends, and adds support for smp_call_function_single(). Acked-by: Ingo Molnar <mingo@elte.hu> Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
|
0e91398f2a5d4eb6b07df8115917d0d1cf3e9b58 |
|
27-May-2008 |
Jeremy Fitzhardinge <jeremy@goop.org> |
xen: implement save/restore This patch implements Xen save/restore and migration. Saving is triggered via xenbus, which is polled in drivers/xen/manage.c. When a suspend request comes in, the kernel prepares itself for saving by: 1 - Freeze all processes. This is primarily to prevent any partially-completed pagetable updates from confusing the suspend process. If CONFIG_PREEMPT isn't defined, then this isn't necessary. 2 - Suspend xenbus and other devices 3 - Stop_machine, to make sure all the other vcpus are quiescent. The Xen tools require the domain to run its save off vcpu0. 4 - Within the stop_machine state, it pins any unpinned pgds (under construction or destruction), canonicalizes various other pieces of state (mostly converting mfns to pfns), and finally 5 - Suspend the domain Restore reverses the steps used to save the domain, ending when all the frozen processes are thawed. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
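Step 3 in sketch form; stop_machine()'s modern signature is shown here, and the 2008-era API (stop_machine_run()) differed slightly:

    #include <linux/stop_machine.h>
    #include <linux/cpumask.h>

    static int xen_suspend_sketch(void *data)
    {
            /* runs with every other vcpu quiesced: pin stray pgds,
             * canonicalize mfns to pfns, then suspend the domain */
            return 0;
    }

    static int do_suspend_sketch(void)
    {
            /* the Xen tools require the save to run on vcpu0 */
            return stop_machine(xen_suspend_sketch, NULL, cpumask_of(0));
    }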
|
38bb5ab4179572f4d24d3ca7188172a31ca51a69 |
|
27-May-2008 |
Jeremy Fitzhardinge <jeremy@goop.org> |
xen: count resched interrupts properly Make sure resched interrupts appear in /proc/interrupts in the proper place. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
334ef7a7ab8f80b689a2be95d5e62d2167900865 |
|
12-May-2008 |
Mike Travis <travis@sgi.com> |
x86: use performance variant for_each_cpu_mask_nr Change references from for_each_cpu_mask to for_each_cpu_mask_nr where appropriate Reviewed-by: Paul Jackson <pj@sgi.com> Reviewed-by: Christoph Lameter <clameter@sgi.com> Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> commit 2d474871e2fb092eb46a0930aba5442e10eb96cc Author: Mike Travis <travis@sgi.com> Date: Mon May 12 21:21:13 2008 +0200
|
7c04e64a1b43b4c8fea281ce1f82df30ed9bab4e |
|
19-Apr-2008 |
Akinobu Mita <akinobu.mita@gmail.com> |
x86: use cpumask function for present, possible, and online cpus cpu_online(), cpu_present(), for_each_possible_cpu(), num_possible_cpus() Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
ee523ca1e456d754d66be6deab910131e4e1dbf8 |
|
18-Mar-2008 |
Jeremy Fitzhardinge <jeremy@goop.org> |
xen: implement a debug-interrupt handler Xen supports the notion of a debug interrupt which can be triggered from the console. For now this is implemented to show pending events, masks and each CPU's pending event set. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
e2a81baf6604a2e08e10c7405b0349106f77c8af |
|
18-Mar-2008 |
Jeremy Fitzhardinge <jeremy@goop.org> |
xen: support sysenter/sysexit if hypervisor does 64-bit Xen supports sysenter for 32-bit guests, so support its use. (sysenter is faster than int $0x80 in 32-on-64.) sysexit is still not supported, so we fake it up using iret. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
ecaa6c9de759259c5ba517e5442e26452d49107e |
|
27-Mar-2008 |
Glauber Costa <gcosta@redhat.com> |
x86: change naming of cpu_initialized_mask for xen xen does not use the global cpu_initialized mask, but rather, a specific one. So we change its name so it won't conflict with the upcoming movement of cpu_initialized_mask from smp_64.h to smp_32.h. Signed-off-by: Glauber Costa <gcosta@redhat.com> CC: Jeremy Fitzhardinge <jeremy@goop.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
faca62273b602ab482fb7d3d940dbf41ef08b00e |
|
30-Jan-2008 |
H. Peter Anvin <hpa@zytor.com> |
x86: use generic register name in the thread and tss structures This changes size-specific register names (eip/rip, esp/rsp, etc.) to generic names in the thread and tss structures. Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
7bf0c23ed24b0d95a2a717f86dce1f210e16f8a5 |
|
30-Jan-2008 |
Mike Travis <travis@sgi.com> |
x86: prevent dereferencing non-allocated per_cpu variables Use 'for_each_possible_cpu(i)' when there's a _remote possibility_ of dereferencing a non-allocated per_cpu variable. All files except mm/vmstat.c are x86 arch. Thanks to pageexec@freemail.hu for pointing this out. Signed-off-by: Mike Travis <travis@sgi.com> Cc: Christoph Lameter <clameter@sgi.com> Cc: <pageexec@freemail.hu> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
38e760a1335ffaca5a08624a9aed6fe2055c2c98 |
|
17-Oct-2007 |
Joe Korty <joe.korty@ccur.com> |
x86: expand /proc/interrupts to include missing vectors, v2 Add missing IRQs and IRQ descriptions to /proc/interrupts. /proc/interrupts is most useful when it displays every IRQ vector in use by the system, not just those somebody thought would be interesting. This patch inserts the following vector displays to the i386 and x86_64 platforms, as appropriate: rescheduling interrupts TLB flush interrupts function call interrupts thermal event interrupts threshold interrupts spurious interrupts A threshold interrupt occurs when ECC memory correction is occurring at too high a frequency. Thresholds are used by the ECC hardware as occasional ECC failures are part of normal operation, but long sequences of ECC failures usually indicate a memory chip that is about to fail. Thermal event interrupts occur when a temperature threshold has been exceeded for some CPU chip. IIRC, a thermal interrupt is also generated when the temperature drops back to a normal level. A spurious interrupt is an interrupt that was raised then lowered by the device before it could be fully processed by the APIC. Hence the apic sees the interrupt but does not know what device it came from. For this case the APIC hardware will assume a vector of 0xff. Rescheduling, call, and TLB flush interrupts are sent from one CPU to another per the needs of the OS. Typically, their statistics would be used to discover if an interrupt flood of the given type has been occurring. AK: merged v2 and v4 which had some more tweaks AK: replace Local interrupts with Local timer interrupts AK: Fixed description of interrupt types. [ tglx: arch/x86 adaptation ] [ mingo: small cleanup ] Signed-off-by: Joe Korty <joe.korty@ccur.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Tim Hockin <thockin@hockin.org> Cc: Andi Kleen <ak@suse.de> Cc: Ingo Molnar <mingo@elte.hu> Cc: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
f0d733942750c1ee6358c3a4a1a5d7ba73b7122f |
|
16-Oct-2007 |
Jeremy Fitzhardinge <jeremy@xensource.com> |
xen: yield to IPI target if necessary When sending a call-function IPI to a vcpu, yield if the vcpu isn't running. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
|
d5a7430ddcdb598261d70f7eb1bf450b5be52085 |
|
16-Oct-2007 |
Mike Travis <travis@sgi.com> |
Convert cpu_sibling_map to be a per cpu variable Convert cpu_sibling_map from a static array sized by NR_CPUS to a per_cpu variable. This saves sizeof(cpumask_t) * NR unused cpus. Access is mostly from startup and CPU HOTPLUG functions. Signed-off-by: Mike Travis <travis@sgi.com> Cc: Andi Kleen <ak@suse.de> Cc: Christoph Lameter <clameter@sgi.com> Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Paul Mackerras <paulus@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
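The conversion in one declaration each (a sketch; NR_CPUS comes from the kernel config):

    /* before: NR_CPUS-sized static array, allocated even for absent cpus */
    cpumask_t cpu_sibling_map[NR_CPUS];

    /* after: one mask per cpu, allocated as each cpu comes online */
    DEFINE_PER_CPU(cpumask_t, cpu_sibling_map);
    /* access becomes: per_cpu(cpu_sibling_map, cpu) */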
|
083576112940fda783d716fd5ccc744f81667b2f |
|
16-Oct-2007 |
Mike Travis <travis@sgi.com> |
x86: Convert cpu_core_map to be a per cpu variable This is from an earlier message from 'Christoph Lameter': cpu_core_map is currently an array defined using NR_CPUS. This means that we overallocate since we will rarely really use the maximum number of configured cpus. If we put the cpu_core_map into the per cpu area then it will be allocated for each processor as it comes online. This means that the core map cannot be accessed until the per cpu area has been allocated. Xen does a weird thing here, looping over all processors and zeroing the masks that are not yet allocated and that will be zeroed when they are allocated. I commented the code out. Signed-off-by: Christoph Lameter <clameter@sgi.com> Signed-off-by: Mike Travis <travis@sgi.com> Cc: Andi Kleen <ak@suse.de> Cc: Christoph Lameter <clameter@sgi.com> Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
9702785a747aa27baf46ff504beab6528f21f2dd |
|
11-Oct-2007 |
Thomas Gleixner <tglx@linutronix.de> |
i386: move xen Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu>
|