408cddd96e3b155337f9e3aba2198e92e94c6068 |
|
30-Sep-2014 |
Hari Bathini <hbathini@linux.vnet.ibm.com> |
powerpc/fadump: Fix endianness issues in firmware assisted dump handling Firmware-assisted dump (fadump) kernel code is not endian safe. The patch below fixes this issue. Tested this patch with an upstream kernel. The output below shows the crash tool successfully opening an LE fadump vmcore. # crash vmlinux vmcore GNU gdb (GDB) 7.6 This GDB was configured as "powerpc64le-unknown-linux-gnu"... KERNEL: vmlinux DUMPFILE: vmcore CPUS: 16 DATE: Wed Dec 31 19:00:00 1969 UPTIME: 00:03:28 LOAD AVERAGE: 0.46, 0.86, 0.41 TASKS: 268 NODENAME: linux-dhr2 RELEASE: 3.17.0-rc5-7-default VERSION: #6 SMP Tue Sep 30 01:06:34 EDT 2014 MACHINE: ppc64le (4116 Mhz) MEMORY: 40 GB PANIC: "Oops: Kernel access of bad area, sig: 11 [#1]" (check log for details) PID: 6223 COMMAND: "bash" TASK: c0000009661b2500 [THREAD_INFO: c000000967ac0000] CPU: 2 STATE: TASK_RUNNING (PANIC) Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com> [mpe: Make the comment in pSeries_lpar_hptab_clear() clearer] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
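As background (not the actual patch), making such code endian-safe generally means keeping firmware-provided fields in __be64 form and byte-swapping on access; the structure and field names below are hypothetical, only the be64_to_cpu()/cpu_to_be64() helpers are standard kernel API.

    #include <linux/types.h>
    #include <asm/byteorder.h>

    /* Hypothetical firmware-provided region descriptor: fields stay big endian in memory. */
    struct fw_dump_region {
            __be64 source_address;
            __be64 source_len;
    };

    static unsigned long region_len(const struct fw_dump_region *r)
    {
            /* Swap on access so the same code works on LE and BE kernels. */
            return be64_to_cpu(r->source_len);
    }

    static void set_region_addr(struct fw_dump_region *r, unsigned long addr)
    {
            r->source_address = cpu_to_be64(addr);
    }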
|
d4fe0965e20820f3dd05bcc4d89de3da29bb83aa |
|
21-Aug-2014 |
Zhouyi Zhou <zhouzhouyi@gmail.com> |
powerpc/jump_label: use HAVE_JUMP_LABEL? CONFIG_JUMP_LABEL doesn't ensure HAVE_JUMP_LABEL; if that is not the case, use the maintainer's own mutex to guard the modification of global values. Signed-off-by: Zhouyi Zhou <yizhouzhou@ict.ac.cn> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
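A minimal sketch of the fallback pattern being described, with hypothetical names; when HAVE_JUMP_LABEL is not defined, updates to the shared reference count are serialized with an ordinary mutex instead.

    #include <linux/jump_label.h>
    #include <linux/mutex.h>

    #ifdef HAVE_JUMP_LABEL
    /* jump-label based fast path would be used here */
    #else
    /* No jump labels: serialize updates to the shared state ourselves. */
    static DEFINE_MUTEX(hcall_tracepoint_lock);          /* hypothetical name */
    static unsigned long hcall_tracepoint_refcount;

    static void hcall_tracepoint_regfunc(void)            /* hypothetical name */
    {
            mutex_lock(&hcall_tracepoint_lock);
            hcall_tracepoint_refcount++;
            mutex_unlock(&hcall_tracepoint_lock);
    }
    #endif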
|
a38efcea56988761f89a3134145f0d5f9ea68076 |
|
20-Aug-2014 |
Anton Blanchard <anton@samba.org> |
powerpc: Remove stale function prototypes There were a number of prototypes for functions that no longer exist. Remove them. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
23f66e2d661b4d3226d16e25910a9e9472ce2410 |
|
27-Aug-2014 |
Tejun Heo <tj@kernel.org> |
Revert "powerpc: Replace __get_cpu_var uses" This reverts commit 5828f666c069af74e00db21559f1535103c9f79a due to build failure after merging with pending powerpc changes. Link: http://lkml.kernel.org/g/20140827142243.6277eaff@canb.auug.org.au Signed-off-by: Tejun Heo <tj@kernel.org> Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Christoph Lameter <cl@linux-foundation.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
5828f666c069af74e00db21559f1535103c9f79a |
|
17-Aug-2014 |
Christoph Lameter <cl@linux.com> |
powerpc: Replace __get_cpu_var uses __get_cpu_var() is used for multiple purposes in the kernel source. One of them is address calculation via the form &__get_cpu_var(x). This calculates the address for the instance of the percpu variable of the current processor based on an offset. Other use cases are for storing and retrieving data from the current processor's percpu area. __get_cpu_var() can be used as an lvalue when writing data or on the right side of an assignment. __get_cpu_var() is defined as: #define __get_cpu_var(var) (*this_cpu_ptr(&(var))) __get_cpu_var() always only does an address determination. However, store and retrieve operations could use a segment prefix (or global register on other platforms) to avoid the address calculation. this_cpu_write() and this_cpu_read() can directly take an offset into a percpu area and use optimized assembly code to read and write per cpu variables. This patch converts __get_cpu_var into either an explicit address calculation using this_cpu_ptr() or into a use of this_cpu operations that use the offset. Thereby address calculations are avoided and fewer registers are used when code is generated. At the end of the patch set all uses of __get_cpu_var have been removed so the macro is removed too. The patch set includes passes over all arches as well. Once these operations are used throughout then specialized macros can be defined in non-x86 arches as well in order to optimize per cpu access by, e.g., using a global register that may be set to the per cpu base. Transformations done to __get_cpu_var() 1. Determine the address of the percpu instance of the current processor. DEFINE_PER_CPU(int, y); int *x = &__get_cpu_var(y); Converts to int *x = this_cpu_ptr(&y); 2. Same as #1 but this time an array structure is involved. DEFINE_PER_CPU(int, y[20]); int *x = __get_cpu_var(y); Converts to int *x = this_cpu_ptr(y); 3. Retrieve the content of the current processor's instance of a per cpu variable. DEFINE_PER_CPU(int, y); int x = __get_cpu_var(y) Converts to int x = __this_cpu_read(y); 4. Retrieve the content of a percpu struct DEFINE_PER_CPU(struct mystruct, y); struct mystruct x = __get_cpu_var(y); Converts to memcpy(&x, this_cpu_ptr(&y), sizeof(x)); 5. Assignment to a per cpu variable DEFINE_PER_CPU(int, y) __get_cpu_var(y) = x; Converts to __this_cpu_write(y, x); 6. Increment/Decrement etc of a per cpu variable DEFINE_PER_CPU(int, y); __get_cpu_var(y)++ Converts to __this_cpu_inc(y) tj: Folded a fix patch. http://lkml.kernel.org/g/alpine.DEB.2.11.1408172143020.9652@gentwo.org Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> CC: Paul Mackerras <paulus@samba.org> Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Tejun Heo <tj@kernel.org>
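An illustrative before/after for two of the rules above, using a made-up per-CPU counter (this is not code from the patch); the old forms are shown as comments since __get_cpu_var() itself is removed by the end of the series, and the caller is assumed to run with preemption disabled.

    #include <linux/percpu.h>

    static DEFINE_PER_CPU(int, hcall_count);         /* hypothetical per-CPU variable */

    static void touch_counter(void)
    {
            /* Before (rule 1):  int *p = &__get_cpu_var(hcall_count);  */
            int *p = this_cpu_ptr(&hcall_count);

            /* Before (rule 6):  __get_cpu_var(hcall_count)++;          */
            __this_cpu_inc(hcall_count);

            (void)*p;
    }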
|
fa1f8ae80f8bb996594167ff4750a0b0a5a5bb5d |
|
12-Aug-2014 |
Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> |
powerpc/thp: Don't recompute vsid and ssize in loop on invalidate The segment identifier and segment size remain the same across the loop, so we can compute them outside it. We also change the hugepage_invalidate interface so that we can use it in a later patch. CC: <stable@vger.kernel.org> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
cc1adb5f32557f10f48e8febbef7278a2db9d593 |
|
03-Jul-2014 |
Anton Blanchard <anton@samba.org> |
powerpc/pseries: Use jump labels for hcall tracepoints hcall tracepoints add quite a few instructions to our hcall path: plpar_hcall: mr r2,r2 mfcr r0 stw r0,8(r1) b 164 <---- start ld r12,0(r2) std r12,32(r1) cmpdi r12,0 beq 164 <---- end ... We have an unconditional branch that gets noped out during boot and a load/compare/branch. We also store the tracepoint value to the stack for the hcall_exit path to use. By using jump labels we can simplify this to just a single nop that gets replaced with a branch when the tracepoint is enabled: plpar_hcall: mr r2,r2 mfcr r0 stw r0,8(r1) nop <---- ... If jump labels are not enabled, we fall back to the old method. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
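A rough sketch of the jump-label pattern described above, using the static_key API of that era; the key name, the helper functions and the pr_info() stand-in for the tracepoint are all hypothetical, and the real patch wires the check into the hcall entry/exit assembly rather than C like this.

    #include <linux/jump_label.h>
    #include <linux/printk.h>

    static struct static_key hcall_trace_key = STATIC_KEY_INIT_FALSE;  /* hypothetical */

    static void maybe_trace_hcall(unsigned long opcode)
    {
            /* Compiles to a single nop until the key is enabled. */
            if (static_key_false(&hcall_trace_key))
                    pr_info("hcall %lu\n", opcode);    /* stand-in for the tracepoint */
    }

    /* Flipped from the tracepoint registration/unregistration callbacks. */
    static void hcall_trace_on(void)  { static_key_slow_inc(&hcall_trace_key); }
    static void hcall_trace_off(void) { static_key_slow_dec(&hcall_trace_key); }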
|
92c08a0d522c7e62c01a63e42597f0c2b02c4245 |
|
18-Nov-2013 |
Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> |
powerpc/mm: Use HPTE constants when updating hpte bits Even though the Linux PTE bits and the hash PTE bits have the same values, use the hash PTE constants when updating hash PTE bits. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
1a8f6f97ea4dbaaa21b05cae2dacea47e4aea37b |
|
05-Dec-2013 |
Jeremy Kerr <jk@ozlabs.org> |
powerpc: Make slb_shadow a local The only external user of slb_shadow is the pseries lpar code, and it can access through the paca array instead. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
e844b1eeae42dc93bf13e67812a95ee7b58be8c7 |
|
20-Nov-2013 |
Anton Blanchard <anton@samba.org> |
pseries: Add H_SET_MODE to change exception endianness On little endian builds call H_SET_MODE so exceptions have the correct endianness. We need to reset the endianness during kexec, so do that in the MMU hashtable clear callback. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
b89bdfb8deb0cac5141f54806e406e5888175c80 |
|
15-Aug-2013 |
Michael Ellerman <michael@ellerman.id.au> |
powerpc/pseries: Add a warning in the case of cross-cpu VPA registration The spec says it "may be problematic" if CPU x registers the VPA of CPU y. Add a warning in case we ever do that. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
212bebb4097837ec0b601c42be839c1314994dc2 |
|
22-Aug-2013 |
Deepthi Dharwar <deepthi@linux.vnet.ibm.com> |
pseries: Move plpar_wrapper.h to powerpc common include/asm location. As a part of pseries_idle backend driver cleanup to make the code common to both pseries and powernv platforms, it is necessary to move the backend-driver code to drivers/cpuidle. As a pre-requisite for that, it is essential to move plpar_wrapper.h to include/asm. Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
7ffcf8ec26f4b94b95b1297131d223b121d951e5 |
|
06-Aug-2013 |
Anton Blanchard <anton@samba.org> |
powerpc: Fix little endian lppaca, slb_shadow and dtl_entry The lppaca, slb_shadow and dtl_entry hypervisor structures are big endian, so we have to byte swap them in little endian builds. LE KVM hosts will also need to be fixed but for now add an #error to remind us. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
b0d436c739b0d4afcdfe2e97d4d1ee41ea2db62e |
|
06-Aug-2013 |
Anton Blanchard <anton@samba.org> |
powerpc: Fix a number of sparse warnings Address some of the trivial sparse warnings in arch/powerpc. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
ad92c615975a57c4b206c5c99f77a0bf22b373c4 |
|
23-Jul-2013 |
Denis Kirjanov <kda@linux-powerpc.org> |
powerpc/pseries: Fix a typo in pSeries_lpar_hpte_insert() Commit 801eb73f45371accc78ca9d6d22d647eeb722c11 introduced a bug while checking PTE flags. We have to drop the _PAGE_COHERENT flag when _PAGE_NO_CACHE is set and the cache update policy is not write-through (i.e. _PAGE_WRITETHRU is not set). Signed-off-by: Denis Kirjanov <kda@linux-powerpc.org> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> CC: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
6e0b8bc965d25a8e0701eaca3fca5941b4f4b2b2 |
|
28-Jun-2013 |
Michael Ellerman <michael@ellerman.id.au> |
powerpc/pseries: Inform the hypervisor we are using EBB regs On LPAR systems we need to inform the hypervisor that we are using the EBB registers. We do this by setting a bit in the Virtual Processor Area (VPA) - formerly known as the lppaca. For now we do this always, ie. we do not dynamically enable/disable. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
1a5272866f87d7fbf04dc8060f8da3e8456490ab |
|
20-Jun-2013 |
Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> |
powerpc: Optimize hugepage invalidate Hugepage invalidate involves invalidating multiple hpte entries. Optimize the operation using H_BULK_REMOVE on lpar platforms. On native, reduce the number of TLB flushes. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
db3d8534903c8a9617142975bec6db95acaba753 |
|
20-Jun-2013 |
Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> |
powerpc/mm: handle hugepage size correctly when invalidating hpte entries If a hash bucket gets full, we "evict" a more/less random entry from it. When we do that we don't invalidate the TLB (hpte_remove) because we assume the old translation is still technically "valid". This implies that when we are invalidating or updating a PTE, we should do a TLB invalidate even if the HPTE entry is not valid. With hugepages, we need to pass the correct actual page size value for TLB invalidation. This change updates patch 0608d692463598c1d6e826d9dd7283381b4f246c "powerpc/mm: Always invalidate tlb on hpte invalidate and update" to handle transparent hugepages correctly. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
b1022fbd293564de91596b8775340cf41ad5214c |
|
28-Apr-2013 |
Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> |
powerpc: Decode the pte-lp-encoding bits correctly. We look at both the segment base page size and actual page size and store the pte-lp-encodings in an array per base page size. We also update all relevant functions to take actual page size argument so that we can use the correct PTE LP encoding in HPTE. This should also get the basic Multiple Page Size per Segment (MPSS) support. This is needed to enable THP on ppc64. [Fixed PR KVM build --BenH] Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
4b8f63d92e30ffd33bd77e028918919be2d926e6 |
|
28-Apr-2013 |
Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> |
powerpc: Use signed formatting when printing error PAPR defines these errors as negative values. So print them accordingly for easy debugging. Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
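The point in miniature, with a hypothetical message string: PAPR return codes such as H_HARDWARE are negative, so %ld keeps them readable where %lu would print a huge unsigned value.

    #include <linux/printk.h>
    #include <asm/hvcall.h>

    static void report_hcall_error(long lpar_rc)
    {
            /* PAPR errors (e.g. H_HARDWARE == -1) are negative: use %ld, not %lu. */
            if (lpar_rc != H_SUCCESS)
                    pr_err("hcall failed: rc=%ld\n", lpar_rc);
    }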
|
9fb2640159f9d4f5a2a9d60e490482d4cbecafdb |
|
05-Apr-2013 |
Michael Wolf <mjw@linux.vnet.ibm.com> |
powerpc: pSeries_lpar_hpte_remove fails when the adjunct partition test is performed before the ANDCOND test Some versions of pHyp will perform the adjunct partition test before the ANDCOND test. The result of this is that H_RESOURCE can be returned and cause the BUG_ON condition to occur. The HPTE is not removed. So add a check for H_RESOURCE: it is OK if this HPTE is not removed, since pSeries_lpar_hpte_remove is looking for any HPTE to remove rather than a specific one, so it is fine to just move on to the next slot and try again. Cc: stable@vger.kernel.org Signed-off-by: Michael Wolf <mjw@linux.vnet.ibm.com> Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
|
5524a27d39b68770f203d8d42eb5a95dde4933bc |
|
10-Sep-2012 |
Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> |
powerpc/mm: Convert virtual address to vpn This patch converts various functions to take a virtual page number instead of a virtual address. The virtual page number is the virtual address shifted right by VPN_SHIFT (12) bits. This enables us to support an address range of up to 76 bits. Reviewed-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
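A sketch of just the shift described above, with a local copy of the constant for illustration; the real conversion helpers also fold in the VSID and live in the hash MMU headers.

    /* Illustrative only: mirrors the definition given in the commit message. */
    #define VPN_SHIFT 12

    static inline unsigned long ea_to_vpn(unsigned long ea)
    {
            /* Dropping the low VPN_SHIFT bits is what leaves room for a 76-bit range. */
            return ea >> VPN_SHIFT;
    }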
|
beacc6da8649f5c0841ac9b326dcf0c4dad823cd |
|
25-Jul-2012 |
Michael Ellerman <michael@ellerman.id.au> |
powerpc: Remove all includes of <asm/abs_addr.h> It's empty now, apart from other includes. Fixup a few files that were getting things via this header. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
f5339277eb8d3aed37f12a27988366f68ab68930 |
|
15-Mar-2012 |
Stephen Rothwell <sfr@canb.auug.org.au> |
powerpc: Remove FW_FEATURE ISERIES from arch code This is no longer selectable, so just remove all the dependent code. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
a5ccfee05a439b803640e94584056204501db31c |
|
09-Jan-2012 |
Anton Blanchard <anton@samba.org> |
powerpc: Fix RCU idle and hcall tracing Tracepoints should not be called inside an rcu_idle_enter/rcu_idle_exit region. Since pSeries calls H_CEDE in the idle loop, we were violating this rule. commit a7b152d5342c (powerpc: Tell RCU about idle after hcall tracing) tried to work around it by delaying the rcu_idle_enter until after we called the hcall tracepoint, but there are a number of issues with it. The hcall tracepoint trampoline code is called conditionally when the tracepoint is enabled. If the tracepoint is not enabled we never call rcu_idle_enter. The idle_uses_rcu check was also done at compile time which breaks multiplatform builds. The simple fix is to avoid tracing H_CEDE and rely on other tracepoints and the hypervisor dispatch trace log to work out if we called H_CEDE. This fixes a hang during boot on pSeries. Signed-off-by: Anton Blanchard <anton@samba.org> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
e4f387d8db3ba3c2dae4d8bdfe7bb5f4fe1bcb0d |
|
18-Dec-2011 |
Li Zhong <zhong@linux.vnet.ibm.com> |
powerpc: Fix unpaired probe_hcall_entry and probe_hcall_exit Unpaired calling of probe_hcall_entry and probe_hcall_exit might happen as follows, which could cause an incorrect preempt count: __trace_hcall_entry => trace_hcall_entry -> probe_hcall_entry => get_cpu_var => preempt_disable __trace_hcall_exit => trace_hcall_exit -> probe_hcall_exit => put_cpu_var => preempt_enable Here both A => B and A -> B mean A calls B, but => means A calls B by name, so B will definitely be called, while -> means A calls B through a function pointer, so B might not be called if the pointer is not set. So an error happens when only one of probe_hcall_entry and probe_hcall_exit gets called during an hcall. This patch moves the preempt count operations from probe_hcall_entry and probe_hcall_exit to their callers. Reported-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com> Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> CC: stable@kernel.org [v2.6.32+] Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
a7b152d5342c06e81ab0cf7d18345f69fa7e56b5 |
|
14-Oct-2011 |
Paul E. McKenney <paul.mckenney@linaro.org> |
powerpc: Tell RCU about idle after hcall tracing The PowerPC pSeries platform (CONFIG_PPC_PSERIES=y) enables hypervisor-call tracing for CONFIG_TRACEPOINTS=y kernels. One of the hypervisor calls that is traced is the H_CEDE call in the idle loop that tells the hypervisor that this OS instance no longer needs the current CPU. However, tracing uses RCU, so this combination of kernel configuration variables needs to avoid telling RCU about the current CPU's idleness until after the H_CEDE-entry tracing completes on the one hand, and must tell RCU that the current CPU is no longer idle before the H_CEDE-exit tracing starts. In all other cases, it suffices to inform RCU of CPU idleness upon idle-loop entry and exit. This commit makes the required adjustments. Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
|
66b15db69c2553036cc25f6e2e74fe7e3aa2761e |
|
27-May-2011 |
Paul Gortmaker <paul.gortmaker@windriver.com> |
powerpc: add export.h to files making use of EXPORT_SYMBOL With module.h being implicitly everywhere via device.h, the absence of explicitly including something for EXPORT_SYMBOL went unnoticed. Since we are heading to fix things up and clean module.h from the device.h file, we need to explicitly include these files now. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
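The mechanical change this refers to looks like the following, with a hypothetical symbol name: files using EXPORT_SYMBOL now include <linux/export.h> explicitly rather than getting module.h indirectly.

    #include <linux/export.h>

    int pseries_example_helper(void)          /* hypothetical exported symbol */
    {
            return 0;
    }
    EXPORT_SYMBOL(pseries_example_helper);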
|
711ef84e80ec6f937ad59c7a00490421a5c92867 |
|
25-Jul-2011 |
Anton Blanchard <anton@samba.org> |
powerpc/pseries: Cleanup VPA registration and deregistration errors Make the VPA, SLB shadow and DTL registration and deregistration functions print consistent messages on error. I needed the firmware error code while chasing a kexec bug but we weren't printing it. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
bed9a31527af8ff3dfbad62a1a42815cef4baab7 |
|
26-Jul-2011 |
Anton Blanchard <anton@samba.org> |
powerpc: pseries: Fix kexec on machines with more than 4TB of RAM On a box with 8TB of RAM the MMU hashtable is 64GB in size. That means we have 4G PTEs. pSeries_lpar_hptab_clear was using a signed int to store the index which will overflow at 2G. Signed-off-by: Anton Blanchard <anton@samba.org> Cc: <stable@kernel.org> Acked-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
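The class of bug in miniature, with hypothetical names: with 4G hash PTEs a signed 32-bit index wraps negative at 2G entries, so the loop counter must be an unsigned long.

    /* Hypothetical sizes: a 64GB hash table with 16-byte HPTEs has 4G entries. */
    static void clear_hptes(unsigned long hpte_count)
    {
            unsigned long i;            /* was "int i": overflows at 2G entries */

            for (i = 0; i < hpte_count; i++)
                    ;                   /* invalidate slot i here */
    }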
|
4d2bb3f5003617cb42b89faefd0009c505c3abd5 |
|
12-May-2011 |
Benjamin Herrenschmidt <benh@kernel.crashing.org> |
powerpc/pseries: Re-implement HVSI as part of hvc_vio On pseries machines, consoles are provided by the hypervisor using a low level get_chars/put_chars type interface. However, this is really just a transport to the service processor which implements them either as "raw" console (networked consoles, HMC, ...) or as "hvsi" serial ports. The latter is a simple packet protocol on top of the raw character interface that is supposed to convey additional "serial port" style semantics. In practice however, all it does is provide a way to read the CD line and set/clear our DTR line, that's it. We currently implement the "raw" protocol as an hvc console backend (/dev/hvcN) and the "hvsi" protocol using a separate tty driver (/dev/hvsi0). However this is quite impractical. The arbitrary difference between the two types of devices has been a major source of user (and distro) confusion. Additionally, there's a mini-hvsi implementation in the pseries platform code for our low level debug console and early boot kernel messages, which means code duplication, though that low level variant is impractical as it's incapable of doing the initial protocol negotiation to establish the link to the FSP. This essentially replaces the dedicated hvsi driver and the platform udbg code completely by extending the existing hvc_vio backend used in "raw" mode so that: - It now supports HVSI as well - We add support for hvc backend providing tiocm{get,set} - It also provides a udbg interface for early debug and boot console This is overall less code, though this will only be obvious once we remove the old "hvsi" driver, which is still available for now. When the old driver is enabled, the new code still kicks in for the low level udbg console, replacing the old mini implementation in the platform code, it just doesn't provide the higher level "hvc" interface. In addition to producing generally simpler code, this has several benefits over our current situation: - The user/distro only has to deal with /dev/hvcN for the hypervisor console, avoiding all sorts of confusion that has plagued us in the past - The tty, kernel and low level debug console all use the same code base which supports the full protocol establishment process, thus the console is now available much earlier than it used to be with the old HVSI driver. The kernel console works much earlier and udbg is available much earlier too. Hackers can enable a hard coded very-early debug console as well that works with HVSI (previously that was only supported for the "raw" mode). I've tried to keep the same semantics as hvsi relative to how I react to things like CD changes, with some subtle differences though: - I clear DTR on close if HUPCL is set - Current hvsi triggers a hangup if it detects an up->down transition on CD (you can still open a console with CD down). My new implementation triggers a hangup if the link to the FSP is severed, and severs it upon detecting an up->down transition on CD. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
dd2e356a3dd1fea6d911798044532304c3ef4050 |
|
16-Jun-2011 |
Benjamin Herrenschmidt <benh@kernel.crashing.org> |
powerpc/udbg: Register udbg console generically When CONFIG_PPC_EARLY_DEBUG is set, call register_early_udbg_console() early from generic code. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
9ee820fa005254dfc816330f6654f14dcb2beee1 |
|
04-May-2011 |
Brian King <brking@linux.vnet.ibm.com> |
powerpc/pseries: Add page coalescing support Adds support for page coalescing, which is a feature on IBM Power servers which allows for coalescing identical pages between logical partitions. Hint text pages as coalesce candidates, since they are the most likely pages to be able to be coalesced between partitions. This patch also exports some page coalescing statistics available from firmware via lparcfg. [BenH: Moved a couple of things around to fix compile problems] Signed-off-by: Brian King <brking@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
44ae3ab3358e962039c36ad4ae461ae9fb29596c |
|
06-Apr-2011 |
Matt Evans <matt@ozlabs.org> |
powerpc: Free up some CPU feature bits by moving out MMU-related features Some of the 64bit PPC CPU features are MMU-related, so this patch moves them to MMU_FTR_ bits. All cpu_has_feature()-style tests are moved to mmu_has_feature(), and seven feature bits are freed as a result. Signed-off-by: Matt Evans <matt@ozlabs.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
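After this change, an MMU-related check reads like the hypothetical call site below: the test moves from cpu_has_feature() with a CPU_FTR_ bit to mmu_has_feature() with an MMU_FTR_ bit.

    #include <linux/types.h>
    #include <asm/mmu.h>

    static bool kernel_uses_1t_segments(void)
    {
            /* Formerly a CPU feature bit tested with cpu_has_feature(). */
            return mmu_has_feature(MMU_FTR_1T_SEGMENT);
    }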
|
57cdfdf829a850a317425ed93c6a576c9ee6329c |
|
21-Oct-2010 |
Anton Blanchard <anton@samba.org> |
powerpc: Fix hcall tracepoint recursion Spinlocks on shared processor partitions use H_YIELD to notify the hypervisor we are waiting on another virtual CPU. Unfortunately this means the hcall tracepoints can recurse. The patch below adds a percpu depth and checks it on both the entry and exit hcall tracepoints. Signed-off-by: Anton Blanchard <anton@samba.org> Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> CC: stable@kernel.org
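A condensed sketch of the guard described above, with hypothetical function and variable names; the real code lives in the hcall trace wrappers and runs with interrupts disabled around the hcall, which keeps the per-CPU access stable.

    #include <linux/percpu.h>

    static DEFINE_PER_CPU(unsigned int, hcall_trace_depth);   /* hypothetical */

    static void trace_hcall_entry_guarded(unsigned long opcode)
    {
            unsigned int *depth = this_cpu_ptr(&hcall_trace_depth);

            if (*depth)             /* already tracing: this hcall came from the tracer */
                    return;

            (*depth)++;
            /* ... invoke the real hcall_entry tracepoint here ... */
            (*depth)--;
    }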
|
4e89a2d8e2d5ab33d73b76f16c10fdf515faabef |
|
28-Sep-2010 |
Will Schmidt <will_schmidt@vnet.ibm.com> |
powerpc/pseries: Add kernel parameter to disable batched hcalls This introduces a pair of kernel parameters that can be used to disable the MULTITCE and BULK_REMOVE h-calls. By default, those hcalls are enabled, active, and good for throughput and performance. The ability to disable them will be useful for some of the PREEMPT_RT related investigation and work occurring on Power. Signed-off-by: Will Schmidt <will_schmidt@vnet.ibm.com> cc: Olof Johansson <olof@lixom.net> cc: Anton Blanchard <anton@samba.org> cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
cf9efce0ce3136fa076f53e53154e98455229514 |
|
26-Aug-2010 |
Paul Mackerras <paulus@samba.org> |
powerpc: Account time using timebase rather than PURR Currently, when CONFIG_VIRT_CPU_ACCOUNTING is enabled, we use the PURR register for measuring the user and system time used by processes, as well as other related times such as hardirq and softirq times. This turns out to be quite confusing for users because it means that a program will often be measured as taking less time when run on a multi-threaded processor (SMT2 or SMT4 mode) than it does when run on a single-threaded processor (ST mode), even though the program takes longer to finish. The discrepancy is accounted for as stolen time, which is also confusing, particularly when there are no other partitions running. This changes the accounting to use the timebase instead, meaning that the reported user and system times are the actual number of real-time seconds that the program was executing on the processor thread, regardless of which SMT mode the processor is in. Thus a program will generally show greater user and system times when run on a multi-threaded processor than on a single-threaded processor. On pSeries systems on POWER5 or later processors, we measure the stolen time (time when this partition wasn't running) using the hypervisor dispatch trace log. We check for new entries in the log on every entry from user mode and on every transition from kernel process context to soft or hard IRQ context (i.e. when account_system_vtime() gets called). So that we can correctly distinguish time stolen from user time and time stolen from system time, without having to check the log on every exit to user mode, we store separate timestamps for exit to user mode and entry from user mode. On systems that have a SPURR (POWER6 and POWER7), we read the SPURR in account_system_vtime() (as before), and then apportion the SPURR ticks since the last time we read it between scaled user time and scaled system time according to the relative proportions of user time and system time over the same interval. This avoids having to read the SPURR on every kernel entry and exit. On systems that have PURR but not SPURR (i.e., POWER5), we do the same using the PURR rather than the SPURR. This disables the DTL user interface in /sys/debug/kernel/powerpc/dtl for now since it conflicts with the use of the dispatch trace log by the time accounting code. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
8154c5d22d91cd16bd9985b0638c8957e4688d0e |
|
12-Aug-2010 |
Paul Mackerras <paulus@samba.org> |
powerpc: Abstract indexing of lppaca structs Currently we have the lppaca structs as a simple array of NR_CPUS entries, taking up space in the data section of the kernel image. In future we would like to allocate them dynamically, so this abstracts out the accesses to the array, making it easier to change how we locate the lppaca for a given cpu in future. Specifically, lppaca[cpu] changes to lppaca_of(cpu). Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
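The abstraction amounts to something like the sketch below; the structure, field and array here are stand-ins (the real struct lppaca lives in asm/lppaca.h), and only the lppaca_of() name comes from the commit.

    /* Stand-in structure and array so the sketch is self-contained. */
    struct lppaca_sketch { unsigned int yield_count; };
    static struct lppaca_sketch lppaca[64];     /* stand-in for the real NR_CPUS array */

    /* One accessor so the backing storage can change later without touching callers. */
    #define lppaca_of(cpu)  (lppaca[cpu])

    static unsigned int cpu_yield_count(int cpu)
    {
            return lppaca_of(cpu).yield_count;  /* was: lppaca[cpu].yield_count */
    }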
|
d504bed676caad29a3dba3d3727298c560628f5c |
|
10-May-2010 |
Michael Neuling <mikey@neuling.org> |
powerpc/kexec: Speedup kexec hash PTE tear down Currently for kexec the PTE tear down on 1TB segment systems normally requires 3 hcalls for each PTE removal. On a machine with 32GB of memory it can take around a minute to remove all the PTEs. This optimises the path so that we only remove PTEs that are valid. It also uses the read-4-PTEs-at-once HCALL. For the common case where a PTE is invalid in a 1TB segment, this turns the 3 HCALLs per PTE down to 1 HCALL per 4 PTEs. This gives a >10x speedup in kexec times on PHYP, taking a 32GB machine from around 1 minute down to a few seconds. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
6f26353ca29e96475208bce673efb6a2c58b73f2 |
|
26-Oct-2009 |
Anton Blanchard <anton@samba.org> |
powerpc: tracing: Give hypervisor call tracepoints access to arguments While most users of the hcall tracepoints will only want the opcode and return code, some will want all the arguments. To avoid the complexity of using varargs we pass a pointer to the register save area, which contains all the arguments. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
c8cd093a6e9f96ea6b871576fd4e46d7c818bb89 |
|
26-Oct-2009 |
Anton Blanchard <anton@samba.org> |
powerpc: tracing: Add hypervisor call tracepoints Add hcall_entry and hcall_exit tracepoints. This replaces the inline assembly HCALL_STATS code and converts it to use the new tracepoints. To keep the disabled case as quick as possible, we embed a status word in the TOC so we can get at it with a single load. By doing so we keep the overhead at a minimum. Time taken for a null hcall: No tracepoint code: 135.79 cycles Disabled tracepoints: 137.95 cycles For reference, before this patch enabling HCALL_STATS resulted in a null hcall of 201.44 cycles! Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
551a232c87b7781712c57c31f3e0851303d9f591 |
|
17-Jun-2009 |
Michael Ellerman <michael@ellerman.id.au> |
powerpc/pseries: Use pr_devel() in pseries LPAR HPTE routines pr_debug() can now result in code being generated even when DEBUG is not defined. That's not really desirable in some places. In particular, pSeries_lpar_hpte_insert() goes from 185 instructions to 77 instructions as a result of this patch. Luckily that code isn't called very often ... With CONFIG_DYNAMIC_DEBUG=y: size before: text data bss dec hex filename 7284 1552 296 9132 23ac platforms/pseries/lpar.o size after: text data bss dec hex filename 5806 1096 296 7198 1c1e platforms/pseries/lpar.o Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
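The distinction being relied on, sketched with a hypothetical message: pr_devel() compiles away entirely unless the file is built with DEBUG, while pr_debug() keeps its call site alive under CONFIG_DYNAMIC_DEBUG.

    #include <linux/printk.h>

    static void hpte_insert_debug(unsigned long slot)
    {
            /* No code is generated here unless the file is built with -DDEBUG. */
            pr_devel("inserted hpte at slot %lu\n", slot);

            /* By contrast, pr_debug() would keep this call site alive under
             * CONFIG_DYNAMIC_DEBUG, which is what bloated the hot path. */
    }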
|
14f966e79445015cd89d0fa0ceb6b33702e951b6 |
|
15-Apr-2009 |
Robert Jennings <rcj@linux.vnet.ibm.com> |
powerpc/pseries: CMO unused page hinting Adds support for the "unused" page hint which can be used in shared memory partitions to flag pages not in use, which will then be stolen before active pages by the hypervisor when memory needs to be moved to LPARs in need of additional memory. Failure to mark pages as 'unused' makes the LPAR slower to give up unused memory to other partitions. This adds the kernel parameter 'cmo_free_hint' to disable this functionality. Signed-off-by: Brian King <brking@linux.vnet.ibm.com> Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
443dcac4d89622cbfc61f53523007979879d6f8e |
|
09-Jul-2008 |
Dave Kleikamp <shaggy@linux.vnet.ibm.com> |
powerpc: Remove unnecessary condition when sanity-checking WIMG bits It is okay for both _PAGE_GUARDED and _PAGE_COHERENT (G and M) to be set in the same pte. In fact, even if that were not the case, there doesn't seem to be any place where G is set without also setting I (_PAGE_NO_CACHE), so the test for I is sufficient as a condition to clear _PAGE_COHERENT when filling the hash table. Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
801eb73f45371accc78ca9d6d22d647eeb722c11 |
|
07-Jul-2008 |
Dave Kleikamp <shaggy@linux.vnet.ibm.com> |
powerpc/mm: Don't clear _PAGE_COHERENT when _PAGE_SAO is set The current low level hash code on LPAR configurations clears _PAGE_COHERENT (M) when either _PAGE_GUARDED (G) or _PAGE_NO_CACHE (I) is set. This conflicts with _PAGE_SAO which has M, I and W bits sets at once (normally invalid combo) to indicate the new SAO attribute. This changes the code to allow that case. Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
541b2755c2ef7dd2242ac606c115daa11e43ef69 |
|
08-May-2008 |
Michael Ellerman <michael@ellerman.id.au> |
[POWERPC] Fix sparse warnings in arch/powerpc/platforms/pseries Don't return void in pseries/iommu.c Make mce_data_buf static in pseries/ras.c Make things static in pseries/rtasd.c Make things static in pseries/setup.c vtermno may as well be static in platforms/pseries/lpar.c Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
f7ebf352b2e04ee89efb426e33dd450d8f1cfcd5 |
|
24-Apr-2008 |
Michael Ellerman <michael@ellerman.id.au> |
[POWERPC] Convert from DBG() to pr_debug() in platforms/pseries/ In pseries/lpar.c, fix some printf specifier mismatches, and add a newline to one printk. In pseries/rtasd.c add "rtasd" to some messages to make it clear where they're coming from. In pseries/scanlog.c remove the hand-rolled runtime debugging support in there. This file has been largely unchanged for eons, if we need to debug it in future we can recompile. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
cb1e2ab45a92b31114dfe6e34832a084f9b0b263 |
|
24-Apr-2008 |
Michael Ellerman <michael@ellerman.id.au> |
[POWERPC] Register udbg console early on pseries LPAR On pseries LPAR we can call the udbg routines, and the udbg console very early. So mark the udbg console as safe to call early in boot, and register the udbg console as soon as the udbg routines are hooked up. This allows platforms/pseries code to use printk() and pr_debug() rather than needing to call udbg_printf() directly for early debugging. This is nice because a) it's standard, b) it goes via the printk buffer, and c) you can get printk time stamps. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
5faae2e5d1f53df9dce482032c8486bc3a1feffc |
|
16-Apr-2008 |
Michael Ellerman <michael@ellerman.id.au> |
[POWERPC] Always add preferred consoles in platforms/pseries/lpar.c There is logic in platforms/pseries/lpar.c which checks if the user has specified a console on the command line, and refrains from adding a preferred console entry for the hvc/hvsi console if they have. This trips up if you use "netconsole=foo" on the command line, and has the result that you get _only_ the netconsole, because the hvc device is never added as a preferred console. Worse still, if you get the netconsole configuration wrong somehow, you end up with no console at all. As it turns out we don't need to worry about checking the command line. If the user has specified "console=foo", then foo will be set as the preferred console when the command line is parsed in start_kernel(), much later than the pseries code, and so the latter setting will take effect. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
21cf91338fabe649ae3744429e13b61da2a17a6a |
|
16-Apr-2008 |
Michael Ellerman <michael@ellerman.id.au> |
[POWERPC] Move prototype for find_udbg_vterm() into a header file Move the prototype for find_udbg_vterm() into pseries.h, removing it from setup.c. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
f8c8803bda4db47cbbdadb9b27b024e903e1d645 |
|
28-Jan-2008 |
Badari Pulavarty <pbadari@us.ibm.com> |
[POWERPC] Add code for removing HPTEs for parts of the linear mapping For memory remove, we need to clean up htab mappings for the section of the memory we are removing. This implements support for removing htab bolted mappings for pSeries logical partitions. Other sub-archs may need to implement similar functionality for hotplug memory remove to work on them. Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
dfbe0d3b6be52596b5694b1bb75b19562e769021 |
|
15-Jan-2008 |
Paul Mackerras <paulus@samba.org> |
[POWERPC] Fix boot failure on POWER6 Commit 473980a99316c0e788bca50996375a2815124ce1 added a call to clear the SLB shadow buffer before registering it. Unfortunately this means that we clear out the entries that slb_initialize has previously set in there. On POWER6, the hypervisor uses the SLB shadow buffer when doing partition switches, and that means that after the next partition switch, each non-boot CPU has no SLB entries to map the kernel text and data, which causes it to crash. This fixes it by reverting most of 473980a9 and instead clearing the 3rd entry explicitly in slb_initialize. This fixes the problem that 473980a9 was trying to solve, but without breaking POWER6. Signed-off-by: Paul Mackerras <paulus@samba.org>
|
473980a99316c0e788bca50996375a2815124ce1 |
|
11-Jan-2008 |
Michael Neuling <mikey@neuling.org> |
[POWERPC] Fix CPU hotplug when using the SLB shadow buffer Before we register the SLB shadow buffer, we need to invalidate the entries in the buffer, otherwise we can end up with stale entries from when we previously offlined the CPU. This does this invalidation, as well as unregistering the buffer with PHYP, before we offline the CPU. Tested, and fixes crashes seen on 970MP (thanks to tonyb) and POWER5. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
1189be6508d45183013ddb82b18f4934193de274 |
|
11-Oct-2007 |
Paul Mackerras <paulus@samba.org> |
[POWERPC] Use 1TB segments This makes the kernel use 1TB segments for all kernel mappings and for user addresses of 1TB and above, on machines which support them (currently POWER5+, POWER6 and PA6T). We detect that the machine supports 1TB segments by looking at the ibm,processor-segment-sizes property in the device tree. We don't currently use 1TB segments for user addresses < 1T, since that would effectively prevent 32-bit processes from using huge pages unless we also had a way to revert to using 256MB segments. That would be possible but would involve extra complications (such as keeping track of which segment size was used when HPTEs were inserted) and is not addressed here. Parts of this patch were originally written by Ben Herrenschmidt. Signed-off-by: Paul Mackerras <paulus@samba.org>
|
9420dc65ff9e6b67c032286efde823aeb8684670 |
|
30-Jul-2007 |
Jesper Juhl <jesper.juhl@gmail.com> |
[POWERPC] Clean out a bunch of duplicate includes This removes several duplicate includes from arch/powerpc/. Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
b7abc5c53e3c65b8e931bd96db2d08ba670e111a |
|
14-Jun-2007 |
Sachin P. Sant <sachinp@in.ibm.com> |
[POWERPC] Fix Kexec/Kdump for power6 On Power machines supporting VRMA, Kexec/Kdump does not work. VRMA (virtual real-mode area) means that accesses with IR/DR = 0 (i.e. the MMU "off") actually still go through the hash table, using entries put there by the hypervisor. This means that when we clear out the hash table on kexec, we need to make sure these entries are left untouched. This also adds plpar_pte_read_raw() on the lines of plpar_pte_remove_raw(). Signed-off-by : Sachin Sant <sachinp@in.ibm.com> Signed-off-by : Mohan Kumar M <mohan@in.ibm.com> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
55b61fec22caa3e7872caea6c4100fc75cb8f49b |
|
03-May-2007 |
Stephen Rothwell <sfr@canb.auug.org.au> |
[POWERPC] Rename device_is_compatible to of_device_is_compatible for consistency with other Open Firmware interfaces (and Sparc). This is just a straight replacement. This leaves the compatibility define in place. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
e2eb63927bfcb54232163bfec32440246fd44457 |
|
03-Apr-2007 |
Stephen Rothwell <sfr@canb.auug.org.au> |
[POWERPC] Rename get_property to of_get_property: arch/powerpc Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
b4aea36b7956eeebfc56314ce0944db1441255ce |
|
21-Mar-2007 |
Mohan Kumar M <mohan@in.ibm.com> |
[POWERPC] Avoid hypervisor statistics calculation in real mode kexec invokes plpar_hcall hypervisor call in real mode. plpar_hcall refers to per cpu variables for accounting hypervisor statistics. These variables may not be in the RMO region, so accesses to them in real mode may result in a data storage exception. This fixes this problem by using a new plpar_hcall_raw function which does not update the hypervisor call statistics. Thanks to Anton for suggesting this idea. Signed-off-by: Mohan Kumar M <mohan@in.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
12e86f92fcfe4f0bcab0ad7fa4088a64c60d9b38 |
|
08-Feb-2007 |
Paul Mackerras <paulus@samba.org> |
[POWERPC] Only use H_BULK_REMOVE if the firmware supports it The previous patch changing pSeries to use H_BULK_REMOVE broke the JS20 blade, where the firmware doesn't support H_BULK_REMOVE. This adds a firmware check so that on machines that don't have H_BULK_REMOVE, we just use the H_REMOVE call as before. Signed-off-by: Paul Mackerras <paulus@samba.org>
|
f03e64f2ca6ee3d0b7824536b1940497701fe766 |
|
06-Feb-2007 |
Paul Mackerras <paulus@samba.org> |
[POWERPC] Make pSeries use the H_BULK_REMOVE hypervisor call H_BULK_REMOVE lets us remove 4 entries from the MMU hash table with one hypervisor call. This uses it in pSeries_lpar_hpte_invalidate so we can tear down mappings with fewer hypervisor calls. Signed-off-by: Paul Mackerras <paulus@samba.org>
|
035223fb28791f0eb0d5719727355d3f6817d228 |
|
05-Oct-2006 |
Geoff Levand <geoffrey.levand@am.sony.com> |
[POWERPC] Make pSeries_lpar_hpte_insert static Change the powerpc hpte_insert routines now called through ppc_md to static scope. Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
2f6093c84730b4bad65bcd0f2f904a5769b1dfc5 |
|
07-Aug-2006 |
Michael Neuling <mikey@neuling.org> |
[POWERPC] Implement SLB shadow buffer This adds a shadow buffer for the SLBs and registers it with PHYP. Only the bolted SLB entries (top 3) are shadowed. The SLB shadow buffer tells the hypervisor what the kernel needs to have in the SLB for the kernel to be able to function. The hypervisor can use this information to speed up partition context switches. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
b9377ffc3a03cde558d76349a262a1adbb6d3112 |
|
19-Jul-2006 |
Anton Blanchard <anton@samba.org> |
[POWERPC] clean up pseries hcall interfaces Our pseries hcall interfaces are out of control: plpar_hcall_norets plpar_hcall plpar_hcall_8arg_2ret plpar_hcall_4out plpar_hcall_7arg_7ret plpar_hcall_9arg_9ret Create 3 interfaces to cover all cases: plpar_hcall_norets: 7 arguments no returns plpar_hcall: 6 arguments 4 returns plpar_hcall9: 9 arguments 9 returns There are only 2 cases in the kernel that need plpar_hcall9, hopefully we can keep it that way. Pass in a buffer to stash return parameters so we avoid the &dummy1, &dummy2 madness. Signed-off-by: Anton Blanchard <anton@samba.org> -- Signed-off-by: Paul Mackerras <paulus@samba.org>
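Call sites of the consolidated interface end up looking roughly like this sketch; the opcode choice and the surrounding function are placeholders, but plpar_hcall() and PLPAR_HCALL_BUFSIZE are the real names from asm/hvcall.h.

    #include <linux/printk.h>
    #include <asm/hvcall.h>

    static long example_hcall(unsigned long arg)
    {
            unsigned long retbuf[PLPAR_HCALL_BUFSIZE];   /* 4 return slots */
            long rc;

            /* H_GET_TERM_CHAR is just a stand-in opcode for illustration. */
            rc = plpar_hcall(H_GET_TERM_CHAR, retbuf, arg);

            if (rc == H_SUCCESS)
                    pr_info("ret0=%lx ret1=%lx\n", retbuf[0], retbuf[1]);
            return rc;
    }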
|
954a46e2d5aec6f59976ddeb1d232b486e59b54a |
|
12-Jul-2006 |
Jeremy Kerr <jk@ozlabs.org> |
[POWERPC] pseries: Constify & voidify get_property() Now that get_property() returns a void *, there's no need to cast its return value. Also, treat the return value as const, so we can constify get_property later. pseries platform changes. Built for pseries_defconfig Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
6ab3d5624e172c553004ecc862bfeac16d9d68b7 |
|
30-Jun-2006 |
Jörn Engel <joern@wohnheim.fh-wedel.de> |
Remove obsolete #include <linux/config.h> Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de> Signed-off-by: Adrian Bunk <bunk@stusta.de>
|
7d0daae4ae1a3e80d78b83cddf414a3b98a962f4 |
|
23-Jun-2006 |
Michael Ellerman <michael@ellerman.id.au> |
[POWERPC] powerpc: Initialise ppc_md htab pointers earlier Initialise the ppc_md htab callbacks earlier, in the probe routines. This allows us to call htab_finish_init() from htab_initialize(), and makes it private to hash_utils_64.c. Move htab_finish_init() and make_bl() above htab_initialize() to avoid forward declarations. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
b13a96cfb055fd4b9c61463f87534a6f406b174b |
|
30-Mar-2006 |
Heiko J Schick <schihei@de.ibm.com> |
[PATCH] powerpc: Extends HCALL interface for InfiniBand usage This extends the HCALL interface for InfiniBand usage. I've made the patch against the linux-2.6 git tree and Segher's patch: [PATCH] Change H_StudlyCaps to H_SHOUTING_CAPS We moved this into the common powerpc code based on comments we got after posting the first eHCA InfiniBand device driver patch. Signed-off-by: Heiko j Schick <schickhj@de.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
706c8c93ba4865a19e981b9770151a7a63c15794 |
|
30-Mar-2006 |
Segher Boessenkool <segher@kernel.crashing.org> |
[PATCH] powerpc/pseries: Change H_StudlyCaps to H_SHOUTING_CAPS Also cleans up some nearby whitespace problems. Signed-off-by: Segher Boessenkool <segher@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
3356bb9f7ba378a6e2709f9df95f4ea52111f4df |
|
13-Jan-2006 |
David Gibson <david@gibson.dropbear.id.au> |
[PATCH] powerpc: Remove lppaca structure from the PACA At present the lppaca - the structure shared with the iSeries hypervisor and phyp - is contained within the PACA, our own low-level per-cpu structure. This doesn't have to be so, the patch below removes it, making a separate array of lppaca structures. This saves approximately 500*NR_CPUS bytes of image size and kernel memory, because we don't need aligning gap between the Linux and hypervisor portions of every PACA. On the other hand it means an extra level of dereference in many accesses to the lppaca. The patch also gets rid of several places where we assign the paca address to a local variable for no particular reason. Signed-off-by: David Gibson <dwg@au1.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
bb6b9b28d6847bc71f910e2e82c9040ff4b97ec0 |
|
30-Nov-2005 |
Benjamin Herrenschmidt <benh@kernel.crashing.org> |
[PATCH] powerpc: udbg updates The udbg low level io layer has an issue with udbg_getc() returning a char (unsigned on ppc) instead of an int, thus the -1 if you had no available input device could end up turned into 0xff, filling your display with bogus characters. This fixes it, along with adding a little blob to xmon to do a delay before exiting when getting an EOF and fixing the detection of ADB keyboards in udbg_adb.c Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
51d3082fe6e55aecfa17113dbe98077c749f724c |
|
23-Nov-2005 |
Benjamin Herrenschmidt <benh@kernel.crashing.org> |
[PATCH] powerpc: Unify udbg (#2) This patch unifies udbg for both ppc32 and ppc64 when building the merged architecture. xmon now has a single "back end". The powermac udbg stuff gets enriched with some ADB capabilities and btext output. In addition, the early_init callback is now called on ppc32 as well, approx. in the same order as ppc64 regarding device-tree manipulations. The init sequences of ppc32 and ppc64 are getting closer; I'll unify them in a later patch. For now, you can force udbg to the scc using "sccdbg" or to btext using "btextdbg" on powermacs. I'll implement a cleaner way of forcing udbg output to something other than the autodetected OF output device in a later patch. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
463ce0e103f419f51b1769111e73fe8bb305d0ec |
|
23-Nov-2005 |
Benjamin Herrenschmidt <benh@kernel.crashing.org> |
[PATCH] powerpc: serial port discovery (#2) This moves the discovery of legacy serial ports to a separate file, makes it common to ppc32 and ppc64, and reworks it to use the new OF address translators to get to the ports early. This new version can also detect some PCI serial cards using legacy chips and will probably match those discovered ports with the default console choice. Only ppc64 gets udbg for now, as unifying udbg isn't finished yet. It also adds some speed-probing code to udbg so that the default console can come up at the same speed it was set to by the firmware. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
6184b723876cd1a374e2d1094b3c73765d4c31c1 |
|
08-Dec-2005 |
Benjamin Herrenschmidt <benh@kernel.crashing.org> |
[PATCH] powerpc: Remove debug code in hash path Some debug code wasn't properly removed from the initial 64k pages patch, and while it's harmless, it's also slowing down significantly a very hot code path, thus it should really be removed. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
2249ca9d60d3a8a1f6f223f0f0a0283fcb7ce33e |
|
07-Nov-2005 |
Paul Mackerras <paulus@samba.org> |
powerpc: Various UP build fixes Mostly this involves adding #include <asm/smp.h>, since that defines things like boot_cpuid[_phys] and [gs]et_hard_smp_processor_id, which are SMP-related but still needed on UP. This incorporates fixes posted by Olof Johansson and Heikki Lindholm. Signed-off-by: Paul Mackerras <paulus@samba.org>
|
dcad47fc423ac9f4934579af814fa2dad5c8081b |
|
06-Nov-2005 |
David Gibson <david@gibson.dropbear.id.au> |
[PATCH] powerpc: Kill ppcdebug The ancient ppcdebug/PPCDBG mechanism is now only used in two places. First, in the hash setup code, one of the bits allows the size of the hash table to be reduced by a factor of 8 - which would be better accomplished with a command line option for that purpose. The other was a bunch of bus walking related messages in the iSeries code, which would seem to be insufficient reason to keep the mechanism. This patch removes the last traces of this mechanism. Built and booted on iSeries and pSeries POWER5 LPAR (ARCH=powerpc). Signed-off-by: David Gibson <dwg@au1.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
|
3c726f8dee6f55e96475574e9f645327e461884c |
|
07-Nov-2005 |
Benjamin Herrenschmidt <benh@kernel.crashing.org> |
[PATCH] ppc64: support 64k pages Adds a new CONFIG_PPC_64K_PAGES which, when enabled, changes the kernel base page size to 64K. The resulting kernel still boots on any hardware. On current machines with 4K page support only, the kernel will maintain 16 "subpages" for each 64K page transparently. Note that while real 64K capable HW has been tested, the current patch will not enable it yet as such hardware is not released yet, and I'm still verifying with the firmware architects the proper way to get the information from the newer hypervisors. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
40765d2b8b86446b4ac8ec880cf4fdf56ce4ae7e |
|
03-Nov-2005 |
Michael Ellerman <michael@ellerman.id.au> |
powerpc: Cleanup vpa code register_vpa() doesn't actually do a VPA register call; it just uses the flags you pass it, so rename it to vpa_call() to be clearer. We can then define register_vpa() and unregister_vpa(), which are both simple wrappers around vpa_call(). (We'll need unregister_vpa() for kexec soon.) We can then clean up vpa_init(), and because vpa_init() is only called from platforms/pseries we remove the definition in asm-ppc64/smp.h. Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
|
a1218720321d778134914cf90ef54cf0d1d8477c |
|
03-Nov-2005 |
Michael Ellerman <michael@ellerman.id.au> |
powerpc: Move plpar_wrappers.h into arch/powerpc/platforms/pseries Move plpar_wrappers.h into arch/powerpc/platforms/pseries, fixup white space, and update callers. Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
|
69a80d3f69d0b2d7fae5a73c6e034d402d434d8a |
|
10-Oct-2005 |
Paul Mackerras <paulus@samba.org> |
powerpc: move pSeries files to arch/powerpc/platforms/pseries Signed-off-by: Paul Mackerras <paulus@samba.org>
|