History log of /arch/powerpc/mm/init_32.c
Revision Date Author Comments
94966b712c6875939fcdd83cb2707a797e131a43 29-Oct-2014 Fabian Frederick <fabf@skynet.be> powerpc: Fix section mismatch warning

Add __init to MMU_setup(), which uses the __initdata boot_command_line.
MMU_setup() is also only called from MMU_init(), which is itself __init.

Warning appeared since commit 3e47d1474c2b.
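
The change itself amounts to one annotation; a minimal sketch
reconstructed from the description above (not quoted from the patch):

    -static void MMU_setup(void)
    +static void __init MMU_setup(void)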

Fixes: 3e47d1474c2b ("powerpc: Remove powerpc specific cmd_line")
Signed-off-by: Fabian Frederick <fabf@skynet.be>
[mpe: Update changelog]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
3e47d1474c2b4099f0fadd12a6553fdb2e8feaae 17-Sep-2014 Anton Blanchard <anton@samba.org> powerpc: Remove powerpc specific cmd_line

There is no need for yet another copy of the command line; just
use boot_command_line like everyone else.
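
A sketch of the pattern being removed (the buffer name comes from the
subject line; the copy shown is illustrative, not quoted from the tree):

    /* before: powerpc kept its own copy of the command line */
    char cmd_line[COMMAND_LINE_SIZE];
    strlcpy(cmd_line, boot_command_line, COMMAND_LINE_SIZE);

    /* after: read boot_command_line directly, like everyone else */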

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5f2da0f1e8fb074482118f04d0db0614951c3bb8 11-Oct-2013 LEROY Christophe <christophe.leroy@c-s.fr> powerpc/8xx: Fixing memory init issue with CONFIG_PIN_TLB

Activating CONFIG_PIN_TLB allows access to the first 24 Mbytes of
memory at bootup instead of 8. It is needed for "big" kernels, for
instance when activating CONFIG_LOCKDEP_SUPPORT. This needs to be
taken into account in init_32 too, otherwise memory allocation fails
soon after startup.
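
A hedged sketch of the kind of adjustment this implies in init_32 (the
limit values follow from the 8/24 Mbyte figures above; the setter name
is an assumption based on the memblock API of that era):

    /* 8xx early boot can only reach what the pinned TLB entries map */
    #ifdef CONFIG_PIN_TLB
    memblock_set_current_limit(0x01800000);	/* 24 Mbytes */
    #else
    memblock_set_current_limit(0x00800000);	/* 8 Mbytes */
    #endif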

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Acked-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: Scott Wood <scottwood@freescale.com>
b615f4b3f962ffedb8d4a7122ca049abdb420547 29-May-2013 Paul Bolle <pebolle@tiscali.nl> ppc: init_32: Fix error typo "CONFIG_START_KERNEL"

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
ae3a197e3d0bfe3f4bf1693723e82dc018c096f3 28-Mar-2012 David Howells <dhowells@redhat.com> Disintegrate asm/system.h for PowerPC

Disintegrate asm/system.h for PowerPC.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
cc: linuxppc-dev@lists.ozlabs.org
368ff8f14d6ed8e9fd3b7c2156f2607719bf5a7a 14-Dec-2011 Suzuki Poulose <suzuki@in.ibm.com> powerpc: Define virtual-physical translations for RELOCATABLE

We find the runtime address of _stext and relocate ourselves based
on the following calculation:

virtual_base = ALIGN(KERNELBASE, KERNEL_TLB_PIN_SIZE) +
               MODULO(_stext.run, KERNEL_TLB_PIN_SIZE)

relocate() is called with the Effective Virtual Base Address (as
shown below)

             | Phys. Addr | Virt. Addr |
Page         |------------|------------|
Boundary     |            |            |
             |            |            |
             |            |            |
Kernel Load  |____________|_ __ _ _ _ _|<- Effective
Addr(_stext) |            |     ^      |   Virt. Base Addr
             |            |     |      |
             |            |     |      |
             |            |reloc_offset|
             |            |     |      |
             |            |     |      |
             |            |_____v______|<- (KERNELBASE)%TLB_SIZE
             |            |            |
             |            |            |
             |            |            |
Page         |------------|------------|
Boundary     |            |            |

On BookE, we need __va() & __pa() early in the boot process to access
the device tree.

Currently this is defined as:

#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - \
                 PHYSICAL_START + KERNELBASE))

where:
PHYSICAL_START is kernstart_addr - a variable updated at runtime.
KERNELBASE is the compile-time virtual base address of the kernel.

This won't work for us, as kernstart_addr is dynamic and will yield
different results for __va()/__pa() for the same mapping.

e.g.,

Let the kernel be loaded at 64MB and KERNELBASE be 0xc0000000 (same as
PAGE_OFFSET).

In this case, we would be mapping 0 to 0xc0000000, and kernstart_addr = 64M

Now __va(1MB) = (0x100000) - (0x4000000) + 0xc0000000
              = 0xbc100000, which is wrong.

It should be: 0xc0000000 + 0x100000 = 0xc0100000.

On platforms which support AMP, like PPC_47x (based on 44x), the kernel
could be loaded in highmem. Hence we cannot always depend on compile-time
constants for the mapping.

Here are the possible solutions:

1) Update kernstart_addr (PHYSICAL_START) to match the physical address
of the compile-time KERNELBASE value, instead of the actual physical
address of _stext.

The disadvantage is that we may break other users of PHYSICAL_START. They
could be replaced with __pa(_stext).

2) Redefine __va() & __pa() with relocation offset

#ifdef CONFIG_RELOCATABLE_PPC32
#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - PHYSICAL_START + (KERNELBASE + RELOC_OFFSET)))
#define __pa(x) ((unsigned long)(x) + PHYSICAL_START - (KERNELBASE + RELOC_OFFSET))
#endif

where, RELOC_OFFSET could be

a) A variable, say relocation_offset (like kernstart_addr), updated
at boot time. This impacts performance, as we have to load an additional
variable from memory.

OR

b) #define RELOC_OFFSET ((PHYSICAL_START & PPC_PIN_SIZE_OFFSET_MASK) - \
(KERNELBASE & PPC_PIN_SIZE_OFFSET_MASK))

This introduces more calculations for doing the translation.

3) Redefine __va() & __pa() with a new variable

i.e.,

#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))

where VIRT_PHYS_OFFSET :

#ifdef CONFIG_RELOCATABLE_PPC32
#define VIRT_PHYS_OFFSET virt_phys_offset
#else
#define VIRT_PHYS_OFFSET (KERNELBASE - PHYSICAL_START)
#endif /* CONFIG_RELOCATABLE_PPC32 */

where virt_phys_offset is updated at runtime to:

Effective KERNELBASE - kernstart_addr.

Taking our example, above:

virt_phys_offset = effective_kernelstart_vaddr - kernstart_addr
= 0xc0400000 - 0x400000
= 0xc0000000
and

__va(0x100000) = 0xc0000000 + 0x100000 = 0xc0100000
which is what we want.
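
Putting the pieces together, a hedged sketch of the boot-time
computation (names taken from the text above; runtime_phys_addr_of()
is a hypothetical stand-in for however the load address is obtained):

    kernstart_addr   = runtime_phys_addr_of(_stext);
    virtual_base     = ALIGN(KERNELBASE, KERNEL_TLB_PIN_SIZE) +
                       kernstart_addr % KERNEL_TLB_PIN_SIZE;
    virt_phys_offset = virtual_base - kernstart_addr; /* feeds __va()/__pa() */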

I have implemented (3) in the following patch, which has the same cost
of operation as the existing one.

I have tested the patches on 440x platforms only. However, this should
work fine for PPC_47x also, as we only depend on the runtime address
and the current TLB XLAT entry for the startup code, which is available
in r25. I don't have access to a 47x board yet, so it would be great if
somebody could test this on 47x.

Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
Signed-off-by: Josh Boyer <jwboyer@gmail.com>
1aadc0560f46530f8a0f11055285b876a8a31770 08-Dec-2011 Tejun Heo <tj@kernel.org> memblock: s/memblock_analyze()/memblock_allow_resize()/ and update users

The only function of memblock_analyze() is now allowing resize of
memblock region arrays. Rename it to memblock_allow_resize() and
update its users.

* The following users remain the same other than renaming.

arm/mm/init.c::arm_memblock_init()
microblaze/kernel/prom.c::early_init_devtree()
powerpc/kernel/prom.c::early_init_devtree()
openrisc/kernel/prom.c::early_init_devtree()
sh/mm/init.c::paging_init()
sparc/mm/init_64.c::paging_init()
unicore32/mm/init.c::uc32_memblock_init()

* In the following users, analyze was used to update the total size,
which is no longer necessary.

powerpc/kernel/machine_kexec.c::reserve_crashkernel()
powerpc/kernel/prom.c::early_init_devtree()
powerpc/mm/init_32.c::MMU_init()
powerpc/mm/tlb_nohash.c::__early_init_mmu()
powerpc/platforms/ps3/mm.c::ps3_mm_add_memory()
powerpc/platforms/embedded6xx/wii.c::wii_memory_fixups()
sh/kernel/machine_kexec.c::reserve_crashkernel()

* x86/kernel/e820.c::memblock_x86_fill() was directly setting
memblock_can_resize before populating memblock and calling analyze
afterwards. Call memblock_allow_resize() before starting to populate.

memblock_can_resize is now static inside memblock.c.
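
For the users that "remain the same other than renaming", the change
reduces to something like this sketch (reconstructed, not quoted from
the patch):

    /* before: */
    memblock_add(base, size);
    memblock_analyze();        /* only remaining effect: allow resizing */

    /* after: */
    memblock_add(base, size);
    memblock_allow_resize();   /* same effect, clearer name */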

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: "H. Peter Anvin" <hpa@zytor.com>
6fbef13c4feaf0c5576e2315f4d2999c4b670c88 08-Dec-2011 Tejun Heo <tj@kernel.org> powerpc: Cleanup memblock usage

* early_init_devtree(): Total memory size is aligned to PAGE_SIZE;
however, alignment isn't enforced if memory_limit is explicitly
specified. Simplify the logic and always apply PAGE_SIZE alignment.

* MMU_init(): memblock regions are truncated by directly modifying
memblock.memory.cnt. This is incomplete (the reserved array is not
truncated) and unnecessarily low level, hindering further memblock
improvements. Use memblock_enforce_memory_limit() instead (see the
sketch after this list).

* wii_memory_fixups(): Unnecessarily low level direct manipulation of
memblock regions. The same result can be achieved using properly
abstracted operations. Reimplement using memblock API.
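
A sketch of the MMU_init() change described in the second item (the
total_lowmem argument is an assumption based on the ppc32 context, not
quoted from the patch):

    /* before: low-level truncation; the reserved array is left untouched */
    memblock.memory.cnt = new_cnt;

    /* after: let the memblock API truncate both arrays consistently */
    memblock_enforce_memory_limit(total_lowmem);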

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Yinghai Lu <yinghai@kernel.org>
41151e77a4d96ea138cede6d84c955aa4769ce74 28-Jun-2011 Becky Bruce <beckyb@kernel.crashing.org> powerpc: Hugetlb for BookE

Enable hugepages on Freescale BookE processors. This allows the kernel to
use huge TLB entries to map pages, which can greatly reduce the number of
TLB misses and the amount of TLB thrashing experienced by applications with
large memory footprints. Care should be taken when using this on FSL
processors, as the number of large TLB entries supported by the core is low
(16-64) on current processors.

The supported set of hugepage sizes includes 4m, 16m, 64m, 256m, and 1g.
Page sizes larger than the max zone size are called "gigantic" pages and
must be allocated on the command line (and cannot be deallocated).
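
For example, pools of these sizes can be requested with the standard
hugetlb boot parameters (sizes taken from the list above):

    hugepagesz=256m hugepages=8
    hugepagesz=1g hugepages=2	/* gigantic: boot-time only, not freeable */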

This is currently only fully implemented for Freescale 32-bit BookE
processors, but there is some infrastructure in the code for
64-bit BookE.

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2773fcc8c48b947c997ff345f3e453917883cdb5 18-Jun-2011 Dave Carroll <dcarroll@astekcorp.com> powerpc: Move free_initmem to common code

The free_initmem() function is basically duplicated in mm/init_32.c
and init_64.c, and is moved to the common 32/64-bit mm/mem.c.

All other sections except init were removed in v2.6.15 by commit
6c45ab992e4299c869fb26427944a8f8ea177024 ("powerpc: Remove section
free() and linker script bits"), and therefore the bulk of the executed
code is identical.

This patch also removes updating ppc_md.progress to NULL in the powermac
late_initcall.

Suggested-by: Milton Miller <miltonm@bga.com>
Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Dave Carroll <dcarroll@astekcorp.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
307cfe715344e15eda12dad3bb14f794115ca823 09-Jun-2011 Benjamin Herrenschmidt <benh@kernel.crashing.org> powerpc: Force page alignment for initrd reserved memory

When using 64K pages with a separate cpio rootfs, U-Boot will align
the rootfs on a 4K page boundary. When that memory is reserved and a
subsequent early memblock_alloc() is called, it will allocate memory
between the 64K page alignment and the reserved memory. When the
reserved memory is later freed, this is done by pages, causing the
early memblock_alloc() requests to be reused, which in my case caused
the device tree to be clobbered.

This patch forces the reserved memory for initrd to be kernel page
aligned, and will move the device tree if it overlaps with the range
extension of initrd. This patch will also consolidate the identical
function free_initrd_mem() from mm/init_32.c, init_64.c to mm/mem.c,
and adds the same range extension when freeing initrd. free_initrd_mem()
is also moved to the __init section.
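
A sketch of the page-aligned reservation this implies (_ALIGN_DOWN/
_ALIGN_UP are the powerpc helpers of that era, used here as an
assumption about the actual patch):

    phys_addr_t base = _ALIGN_DOWN(initrd_start, PAGE_SIZE);
    phys_addr_t end  = _ALIGN_UP(initrd_end, PAGE_SIZE);

    memblock_reserve(base, end - base);	/* whole kernel pages only */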

Many thanks to Milton Miller for his input on this patch.

[BenH: Fixed build without CONFIG_BLK_DEV_INITRD]

Signed-off-by: Dave Carroll <dcarroll@astekcorp.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
6dd227002972be910c6191f38f8641e01796557f 27-Jan-2011 Scott Wood <scottwood@freescale.com> powerpc: Fix memory limits when starting at a non-zero address

memblock_enforce_memory_limit() takes the desired maximum quantity of memory
to end up with, not an address above which memory will not be used.
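
A plausible reconstruction of the fix, borrowing the size/address
distinction (total_lowmem vs. lowmem_end_addr) established earlier in
this log:

    /* before: an end address passed where a size is expected --
     * wrong as soon as memory starts above physical 0 */
    memblock_enforce_memory_limit(lowmem_end_addr);

    /* after: pass the desired amount of memory */
    memblock_enforce_memory_limit(total_lowmem);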

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
cd3db0c4ca3d237e7ad20f7107216e575705d2b0 07-Jul-2010 Benjamin Herrenschmidt <benh@kernel.crashing.org> memblock: Remove rmo_size, bury it in arch/powerpc where it belongs

The RMA (RMO is a misnomer) is a concept specific to ppc64 (in fact
server ppc64, though I hijack it on embedded ppc64 for similar purposes)
and represents the area of memory that can be accessed in real mode
(i.e. with the MMU off), or, on embedded, from the exception vectors
(which are bolted in the TLB), which pretty much boils down to the same
thing.

We take that out of the generic MEMBLOCK data structure and move it into
arch/powerpc where it belongs, renaming it to "RMA" while at it.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
e63075a3c9377536d085bc013cd3fe6323162449 07-Jul-2010 Benjamin Herrenschmidt <benh@kernel.crashing.org> memblock: Introduce default allocation limit and use it to replace explicit ones

This introduce memblock.current_limit which is used to limit allocations
from memblock_alloc() or memblock_alloc_base(..., MEMBLOCK_ALLOC_ACCESSIBLE).

The old MEMBLOCK_ALLOC_ANYWHERE changes value from 0 to ~(u64)0 and can still
be used with memblock_alloc_base() to allocate really anywhere.

It is -no-longer- cropped to MEMBLOCK_REAL_LIMIT which disappears.

Note to archs: I'm leaving the default limit at MEMBLOCK_ALLOC_ANYWHERE.
I strongly recommend that you ensure you set an appropriate limit during
boot in order to guarantee that a memblock_alloc() at any time results
in something that is accessible with a simple __va().
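
A sketch of the recommended arch-side setup (assuming the setter is
named memblock_set_current_limit(); lowmem_end_addr is borrowed from
the ppc32 naming used elsewhere in this log):

    /* cap early allocations to lowmem ... */
    memblock_set_current_limit(lowmem_end_addr);
    /* ... so that __va() works on anything memblock_alloc() returns */
    ptr = __va(memblock_alloc(size, align));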

The reason is that a subsequent patch will introduce the ability for
the region arrays to resize themselves by reallocation. The MEMBLOCK
core will honor the current limit when performing those allocations.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
95f72d1ed41a66f1c1c29c24d479de81a0bea36f 12-Jul-2010 Yinghai Lu <yinghai@kernel.org> lmb: rename to memblock

via the following script:

FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')

sed -i \
    -e 's/lmb/memblock/g' \
    -e 's/LMB/MEMBLOCK/g' \
    $FILES

for N in $(find . -name lmb.[ch]); do
    M=$(echo $N | sed 's/lmb/memblock/g')
    mv $N $M
done

and manually remove some wrong changes, e.g. to lmbench and dlmb.

Also move memblock.c from lib/ to mm/.

Suggested-by: Ingo Molnar <mingo@elte.hu>
Acked-by: "H. Peter Anvin" <hpa@zytor.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5a0e3ad6af8660be21ca98a971cd00f331318c05 24-Mar-2010 Tejun Heo <tj@kernel.org> include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h

percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.

percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include
those headers directly instead of assuming availability. As this
conversion needs to touch a large number of source files, the
following script is used as the basis of conversion.

http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there, i.e. if only gfp is used,
gfp.h; if slab is used, slab.h.

* When the script inserts a new include, it looks at the include
blocks and tries to place the new include so that its order conforms
to its surroundings. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered:
alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
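
A typical edit the script makes looks like this (illustrative file
content, not taken from a real conversion):

     #include <linux/kernel.h>
    +#include <linux/slab.h>	/* file calls kmalloc()/kfree() directly */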

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.

2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition, while for others adding it to an
implementation .h or embedding .c file was more appropriate. This
step added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed,
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs, requiring slab.h to be added manually.

5. The script was run on all .h files, but without automatically
editing them, as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored, as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on the arch to make
things build (like ipr on powerpc/64, which failed due to a missing
writeq).

* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.

Given that I had only a couple of failures from the tests in step 6,
I'm fairly confident about the coverage of this conversion patch. If
there is a breakage, it's likely to be something in one of the arch
headers, which should be easily discoverable on most builds of the
specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
9ddc5b6f18fbac07d2746566b73b89e89fdd4e6a 20-Jan-2010 Uwe Kleine-König <u.kleine-koenig@pengutronix.de> tree-wide: fix typos "ammount" -> "amount"

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
c5df7f775148723de39274537a886e9502eef336 12-Dec-2009 Albert Herranz <albert_herranz@yahoo.es> powerpc: allow ioremap within reserved memory regions

Add a flag to let a platform ioremap memory regions marked as reserved.

This flag will be used later by the Nintendo Wii support code to allow
ioremapping the I/O region sitting between MEM1 and MEM2 and marked
as reserved RAM in the patch "wii: use both mem1 and mem2 as ram".

This will no longer be needed when proper discontig memory support
for 32-bit PowerPC is added to the kernel.

Signed-off-by: Albert Herranz <albert_herranz@yahoo.es>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
de32400dd26e743c5d500aa42d8d6818b79edb73 12-Dec-2009 Albert Herranz <albert_herranz@yahoo.es> wii: use both mem1 and mem2 as ram

The Nintendo Wii video game console has two discontiguous RAM regions:
- MEM1: 24MB @ 0x00000000
- MEM2: 64MB @ 0x10000000

Unfortunately, the kernel currently does not support discontiguous RAM
memory regions on 32-bit PowerPC platforms.

This patch adds a series of workarounds to allow the use of the second
memory region (MEM2) as RAM by the kernel.
Basically, a single range of memory from the beginning of MEM1 to the
end of MEM2 is reported to the kernel, and a memory reservation is
created for the hole between MEM1 and MEM2.

With this patch the system is able to use all the available RAM and not
just ~27% of it.

This will no longer be needed when proper discontig memory support
for 32-bit PowerPC is added to the kernel.

Signed-off-by: Albert Herranz <albert_herranz@yahoo.es>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
3089aa1b0c07fb7c48f9829c619f50198307789d 23-Sep-2009 KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> kcore: use registered physmem information

For /proc/kcore, each arch registers its memory range by kclist_add().
Usually,

- range of physical memory
- range of vmalloc area
- text, etc...

are registered, but the "range of physical memory" part has some
problems. It isn't updated at memory hotplug, and it tends to include
unnecessary memory holes. Now, /proc/iomem (kernel/resource.c) includes
the required physical memory range information and is properly updated
at memory hotplug. So it's good to avoid duplicating that information
in arch code and to rebuild the kclist for physical memory based on
/proc/iomem.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
a0614da88b67ffa3dbcc0d40b817e682c7c4a0ee 23-Sep-2009 KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> kcore: register vmalloc area in generic way

For /proc/kcore, vmalloc areas are registered per arch. But all of
them register the same range, [VMALLOC_START...VMALLOC_END). This
patch unifies them. With this, archs which have no kclist_add() hooks
can see the vmalloc area correctly.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
c30bb2a25fcfde6157e6154a32c14686fb0bedbe 23-Sep-2009 KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> kcore: add kclist types

Presently, kclist_add() only takes a start address and size as its
arguments. To make kclists dynamically reconfigurable, it's necessary
to know which kclists are for System RAM and which are not.

This patch adds kclist types:
KCORE_RAM
KCORE_VMALLOC
KCORE_TEXT
KCORE_OTHER

This "type" is used in a patch following this for detecting KCORE_RAM.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
a8f7758c1c52a13e031266483efd5525157e43e9 24-Jul-2009 Benjamin Herrenschmidt <benh@kernel.crashing.org> powerpc/mm: Move around mmu_gathers definition on 64-bit

The global structure mmu_gathers, used by generic code, is currently
defined in multiple places, none of which covers 64-bit Book3E. Change
this by moving the definition to one place common to all processors.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
f637a49e507c88354ab32b5d914e06acfb7ee00d 27-May-2009 Benjamin Herrenschmidt <benh@kernel.crashing.org> powerpc: Minor cleanups of kernel virt address space definitions

Make FIXADDR_TOP a compile-time constant and clean up a
couple of definitions related to the layout of the kernel
address space on ppc32. We also print out that layout at
boot time for debugging purposes.

This is a prerequisite for properly fixing non-coherent
DMA allocations.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
ccdcef72c249c289898b164eada89a61855b9287 17-Dec-2008 Dale Farnsworth <dale@farnsworth.org> powerpc/32: Add the ability for a classic ppc kernel to be loaded at 32M

Add the ability for a classic ppc kernel to be loaded at an address
of 32MB. This is done by fixing a few places that assumed we were
loaded at address 0, and by changing several uses of KERNELBASE to
use PAGE_OFFSET instead.

Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
77520351805cc19ba37394ae33f862ef6d3c2a23 18-Dec-2008 Benjamin Herrenschmidt <benh@kernel.crashing.org> powerpc/mm: Runtime allocation of mmu context maps for nohash CPUs

This makes the MMU context code used for CPUs with no hash table
(except 603) dynamically allocate the various maps used to track
the state of contexts.

Only the main free map and CPU 0 stale map are allocated at boot
time. Other CPU maps are allocated when those CPUs are brought up
and freed if they are unplugged.

This also moves the initialization of the MMU context management
slightly later during the boot process, which should be fine as
it's really only needed when userland is first started anyway.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2bf3016f89344d4cd8b2c96bbec2b642a2bde413 09-Jul-2008 Stefan Roese <sr@denx.de> powerpc: Fix problems with 32bit PPC's running with >= 4GB of RAM

This patch enables 32-bit PPCs (with a 36-bit physical address space,
e.g. IBM/AMCC PPC44x) to run with >= 4GB of RAM. Mostly it's just
replacing types (unsigned long -> phys_addr_t).

Tested on an AMCC Katmai with 4GB of DDR2.

Signed-off-by: Stefan Roese <sr@denx.de>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
5f25f06529ecb4b20efc7ba00de599f5b9f4b63c 08-May-2008 Michael Ellerman <michael@ellerman.id.au> [POWERPC] Move declaration of init_bootmem_done into system.h

... instead of having an extern declaration in a .c file.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2c419bdeca1d958bb02228b5141695f312d8c633 23-Apr-2008 Kumar Gala <galak@kernel.crashing.org> [POWERPC] Port fixmap from x86 and use for kmap_atomic

The fixmap code from x86 allows us to have compile-time virtual
addresses whose physical addresses can be changed at run time.

This is useful for applications like kmap_atomic, PCI config access
done via direct memory map, and kexec/kdump.

We got rid of CONFIG_HIGHMEM_START, as we can now determine a more
optimal location for PKMAP_BASE based on where the fixmap addresses
start, working back from there.

Additionally, the kmap code in asm-powerpc/highmem.h always had debug
enabled. Move to using CONFIG_DEBUG_HIGHMEM to determine whether we
should have the extra debug checking.
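
The fixmap idea in one sketch (generic fixmap API names, assumed here
to match the ported code):

    /* the virtual slot is a compile-time constant... */
    unsigned long vaddr = fix_to_virt(FIX_KMAP_BEGIN + idx);
    /* ...only the physical page behind it changes at run time */
    set_fixmap(FIX_KMAP_BEGIN + idx, page_to_phys(page));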

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
37dd2badcfcec35f5e21a0926968d77a404f03c3 21-Apr-2008 Kumar Gala <galak@kernel.crashing.org> [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero)

Added support to allow an 85xx kernel to be run from a non-zero physical
address (useful for cooperative asymmetric multiprocessing situations and
kdump). The support can be configured at compile time by setting
CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as
desired.

Alternatively, the kernel build can set CONFIG_RELOCATABLE. Setting this
config option causes the kernel to determine at runtime the physical
addresses of CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START. If
CONFIG_RELOCATABLE is set, then CONFIG_PHYSICAL_START has no meaning.
However, CONFIG_PHYSICAL_START will always be used to set the LOAD program
header physical address field in the resulting ELF image.
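
A hypothetical configuration for booting at 256M, using the options
named above:

    CONFIG_PAGE_OFFSET=0xc0000000
    CONFIG_KERNEL_START=0xc0000000
    CONFIG_PHYSICAL_START=0x10000000	# fixed 256M load address
    # or instead, resolve the physical address at runtime:
    CONFIG_RELOCATABLE=y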

Currently we are limited to running at a physical address that is a
multiple of 256M. This is due to how we map TLBs to cover lowmem; it
should be fixed to allow 64M or maybe even 16M alignment in the
future. It is considered an error to try to run a kernel at a
non-aligned physical address.

All the magic for this support is accomplished by proper initialization
of the kernel memory subsystem and use of ARCH_PFN_OFFSET.

The use of ARCH_PFN_OFFSET only affects normal memory and not IO mappings.
ioremap uses map_page and isn't affected by ARCH_PFN_OFFSET.

/dev/mem continues to allow access to any physical address in the system
regardless of how CONFIG_PHYSICAL_START is set.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
771168494719b90621ac61f9ae68c4af494e418f 16-Apr-2008 Kumar Gala <galak@kernel.crashing.org> [POWERPC] Remove unused machine call outs

When we moved to arch/powerpc we actively tried to avoid using
ppc_md.setup_io_mappings(). Currently no board ports use it, so let's
remove it to prevent any new boards from using it.

Also, remove early_serial_map() since we don't even have a call out for
it in arch/powerpc.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
09b5e63f827016732d956abb7a4c74a312d20521 15-Apr-2008 Kumar Gala <galak@kernel.crashing.org> [POWERPC] Rename __initial_memory_limit to __initial_memory_limit_addr

We always use __initial_memory_limit as an address, so rename it
to make that clear.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
d7917ba7051e3fd12ebe2d5a09b29fb3a2b38190 15-Apr-2008 Kumar Gala <galak@kernel.crashing.org> [POWERPC] Introduce lowmem_end_addr to distinguish from total_lowmem

total_lowmem represents the amount of low memory, not the physical
address that low memory ends at. If the start of memory is at 0 it
happens that total_lowmem can be used as both the size and the address
that lowmem ends at (or more specifically one byte beyond the end).

To make the code a bit clearer and to deal with the case when the
start of memory isn't at physical 0, we introduce lowmem_end_addr,
which represents one byte beyond the last physical address in the
lowmem region.
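
In other words, presumably (a sketch consistent with the definitions
above and the memstart_addr variable introduced alongside it):

    lowmem_end_addr = memstart_addr + total_lowmem;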

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
99c62dd773797b68f3b1ca6bb3274725d1852fa2 15-Apr-2008 Kumar Gala <galak@kernel.crashing.org> [POWERPC] Remove and replace uses of PPC_MEMSTART with memstart_addr

A number of users of PPC_MEMSTART (40x, ppc_mmu_32) can just always
use 0 as we don't support booting these kernels at non-zero physical
addresses since their exception vectors must be at 0 (or 0xfffx_xxxx).

For the sub-arches that support relocatable interrupt vectors
(book-e), it's reasonable to have memory start at a non-zero physical
address. For those cases use the variable memstart_addr instead of
the #define PPC_MEMSTART since the only uses of PPC_MEMSTART are for
initialization and in the future we can set memstart_addr at runtime
to have a relocatable kernel.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
e48b1b452ff630288c930fd8e0c2d808bc15f7ad 28-Mar-2008 Harvey Harrison <harvey.harrison@gmail.com> [POWERPC] Replace remaining __FUNCTION__ occurrences

__FUNCTION__ is gcc-specific, use __func__

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
d9b2b2a277219d4812311d995054ce4f95067725 14-Feb-2008 David S. Miller <davem@davemloft.net> [LIB]: Make PowerPC LMB code generic so sparc64 can use it too.

Signed-off-by: David S. Miller <davem@davemloft.net>
544cdabe642e5508e784de709530a74d0775d070 17-Jul-2007 John Traill <john.traill@freescale.com> [POWERPC] 8xx: Set initial memory limit.

The 8xx can only support a max of 8M during early boot (it seems a lot of
8xx boards only have 8M so the bug was never triggered), but the early
allocator isn't aware of this. The following change makes it able to run
with larger memory.

Signed-off-by: John Traill <john.traill@freescale.com>
Signed-off-by: Vitaly Bordug <vitb@kernel.crashing.org>
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
df174e3be88d4352bfcfe20d11adc671d2961c79 20-Sep-2007 Ed Swarthout <Ed.Swarthout@freescale.com> [POWERPC] Add memory regions to the kcore list for 32-bit machines

The entries are only 32-bit, so restrict the virtual address to stay
below 0xffff_ffff. With KERNELBASE set to 0xc000_0000, this in effect
restricts access to the first 1GB of real memory.

Make setup_kcore conditional on CONFIG_PROC_KCORE for both 32/64.

Signed-off-by: Ed Swarthout <Ed.Swarthout@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
9420dc65ff9e6b67c032286efde823aeb8684670 30-Jul-2007 Jesper Juhl <jesper.juhl@gmail.com> [POWERPC] Clean out a bunch of duplicate includes

This removes several duplicate includes from arch/powerpc/.

Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
f21f49ea639ac3f24824177dac1268af75a2d373 13-Jun-2007 David Gibson <david@gibson.dropbear.id.au> [POWERPC] Remove the dregs of APUS support from arch/powerpc

APUS (the Amiga Power-Up System) is not supported under arch/powerpc
and it's unlikely it ever will be. Therefore, this patch removes the
fragments of APUS support code from arch/powerpc which have been
copied from arch/ppc.

A few APUS references are left in asm-powerpc in .h files which are
still used from arch/ppc.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
88df6e90fa9782dbf44d936e44649afe271e4790 12-Apr-2007 Benjamin Herrenschmidt <benh@kernel.crashing.org> [POWERPC] DEBUG_PAGEALLOC for 32-bit

Here's an implementation of DEBUG_PAGEALLOC for ppc32. It disables
BAT mapping and has only been tested with hash-table-based processors,
though it shouldn't be too hard to adapt it to others.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

 arch/powerpc/Kconfig.debug       |    9 ++++++
 arch/powerpc/mm/init_32.c        |    4 +++
 arch/powerpc/mm/pgtable_32.c     |   52 +++++++++++++++++++++++++++++++++++++++
 arch/powerpc/mm/ppc_mmu_32.c     |    4 ++-
 include/asm-powerpc/cacheflush.h |    6 ++++
 5 files changed, 74 insertions(+), 1 deletion(-)
Signed-off-by: Paul Mackerras <paulus@samba.org>
6ab3d5624e172c553004ecc862bfeac16d9d68b7 30-Jun-2006 Jörn Engel <joern@wohnheim.fh-wedel.de> Remove obsolete #include <linux/config.h>

Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
7835e98b2e3c66dba79cb0ff8ebb90a2fe030c29 22-Mar-2006 Nick Piggin <npiggin@suse.de> [PATCH] remove set_page_count() outside mm/

set_page_count usage outside mm/ is limited to setting the refcount to 1.
Remove set_page_count from outside mm/, and replace those users with
init_page_count() and set_page_refcounted().

This allows more debug checking, and tighter control on how code is allowed
to play around with page->_count.
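
Concretely, the substitution this implies for the common case
(reconstructed from the description, not quoted from the patch):

    -	set_page_count(page, 1);
    +	init_page_count(page);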

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
51d3082fe6e55aecfa17113dbe98077c749f724c 23-Nov-2005 Benjamin Herrenschmidt <benh@kernel.crashing.org> [PATCH] powerpc: Unify udbg (#2)

This patch unifies udbg for both ppc32 and ppc64 when building the
merged architecture. xmon now has a single "back end". The powermac
udbg stuff gets enriched with some ADB capabilities and btext output.
In addition, the early_init callback is now called on ppc32 as well,
approx. in the same order as on ppc64 regarding device-tree
manipulations. The init sequences of ppc32 and ppc64 are getting
closer; I'll unify them in a later patch.

For now, you can force udbg to the scc using "sccdbg" or to btext using
"btextdbg" on powermacs. I'll implement a cleaner way of forcing udbg
output to something else than the autodetected OF output device in a
later patch.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
49b09853df1a303876b82a6480efb2f7b45ef041 10-Nov-2005 Paul Mackerras <paulus@samba.org> powerpc: Move some extern declarations from C code into headers

This also makes klimit have the same type on 32-bit as on 64-bit,
namely unsigned long, and defines and initializes it in one place.

Signed-off-by: Paul Mackerras <paulus@samba.org>
e37bc5df8e96c72f27ec3579499726b656e4e641 24-Oct-2005 David Gibson <david@gibson.dropbear.id.au> [PATCH] powerpc: Purge bootinfo.h

With ARCH=powerpc we assume the presence of a device tree, so we don't
require any support for the old bi_recs method of passing boot
parameters. Likewise, we've never needed it for ppc64, but we still
had an include/asm-ppc64/bootinfo.h from which nothing was used. This
patch removes that file, and all references to it in arch/ppc64 and
arch/powerpc. A related, unused variable 'boot_mem_size' is also
removed from setup_32.c. The bootinfo stuff remains in ARCH=ppc for
the time being.

Built and booted on Power5 (ARCH=ppc64 and ARCH=powerpc), built for
32-bit powermac (ARCH=powerpc and ARCH=ppc).

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
fa39dc437a41733adaba241fd9036760283a516a 26-Oct-2005 Paul Mackerras <paulus@samba.org> powerpc32: Limit memory to lowmem if !CONFIG_HIGHMEM.

This trims off the extra unusable memory from the lmb structure,
so we don't try to use it.

Signed-off-by: Paul Mackerras <paulus@samba.org>
5c8c56ebdfb290e4feaac406518903f58714d874 22-Oct-2005 Paul Mackerras <paulus@samba.org> powerpc: Move agp_special_page export to where it is defined

... instead of exporting it in arch/*/kernel/ppc_ksyms.c.

Signed-off-by: Paul Mackerras <paulus@samba.org>
30cd4a4e9c25e154ba087848a839bd0c6d024092 17-Oct-2005 Paul Mackerras <paulus@samba.org> powerpc: Initialize btext subsystem later, after prom_init

We were initializing the btext stuff from prom_init(), thus breaking
the rule that all communication between prom_init() and the rest of
the kernel has to be via the flattened device tree. This removes
the btext initialization calls from prom_init() and initializes it
instead after the device tree is unflattened. It would be nice to
do it earlier, but that needs some more infrastructure to find the
properties we need in the flattened device tree.

Signed-off-by: Paul Mackerras <paulus@samba.org>
70d64ceaa1a84d2502405422a4dfd3f87786a347 10-Oct-2005 Paul Mackerras <paulus@samba.org> powerpc: Rename files to have consistent _32/_64 suffixes

This doesn't change any code, just renames things so we consistently
have foo_32.c and foo_64.c where we have separate 32- and 64-bit
versions.

Signed-off-by: Paul Mackerras <paulus@samba.org>