History log of /arch/powerpc/mm/fsl_booke_mmu.c
Revision Date Author Comments
28efc35fe68dacbddc4b12c2fa8f2df1593a4ad3 12-Oct-2013 Scott Wood <scottwood@freescale.com> powerpc/e6500: TLB miss handler with hardware tablewalk support

There are a few things that make the existing hw tablewalk handlers
unsuitable for e6500:

- Indirect entries go in TLB1 (though the resulting direct entries go in
TLB0).

- It has threads, but no "tlbsrx." -- so we need a spinlock and
a normal "tlbsx". Because we need this lock, hardware tablewalk
is mandatory on e6500 unless we want to add spinlock+tlbsx to
the normal bolted TLB miss handler.

- TLB1 has no HES (nor next-victim hint), so we need software round robin
(TODO: integrate this round-robin data with hugetlb/KVM); see the
sketch after this list.

- The existing tablewalk handlers map half of a page table at a time,
because IBM hardware has a fixed 1MiB indirect page size. e6500
has variable size indirect entries, with a minimum of 2MiB.
So we can't do the half-page indirect mapping, and even if we
could it would be less efficient than mapping the full page.

- Like on e5500, the linear mapping is bolted, so we don't need the
overhead of supporting nested tlb misses.
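
A minimal C rendering of the software round robin, for illustration
only (the real selection happens in the asm TLB miss handler, and the
per-core structure and field names here are assumed):

	/* Serialize victim selection: e6500 has no tlbsrx. to atomically
	 * search and reserve, so take the per-core lock around tlbsx. */
	spin_lock(&tcd->lock);
	esel = tcd->esel_next;			/* TLB1 way to evict */
	if (++tcd->esel_next >= tcd->esel_max)
		tcd->esel_next = tcd->esel_first;	/* wrap: round robin */
	spin_unlock(&tcd->lock);
	/* write the indirect entry into TLB1 way 'esel' via the MAS registers */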

Note that hardware tablewalk does not work in rev1 of e6500.
We do not expect to support e6500 rev1 in mainline Linux.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Cc: Mihai Caraman <mihai.caraman@freescale.com>
0be7d969b0efef085ed6497d462ba16a875ca737 24-Dec-2013 Kevin Hao <haokexin@gmail.com> powerpc/fsl_booke: smp support for booting a relocatable kernel above 64M

When booting a secondary cpu above 64M, we face the same issue as the
boot cpu: PAGE_OFFSET maps to two different physical addresses, one
for the init TLB and one for the final map. So we have to use
switch_to_as1/restore_to_as0 to convert between these two maps. When
restoring to AS0 for a secondary cpu, we only need to return to the
caller, so add a new parameter to restore_to_as0 for this purpose.

Use LOAD_REG_ADDR_PIC to get the address of variables which may be
used before we set the final map in the CAMs for the secondary cpu.
Also move the setting of the CAMs a bit earlier in order to avoid
unnecessary uses of LOAD_REG_ADDR_PIC.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
7d2471f9fa85089beb1cb9436ffc28f9e11e518d 24-Dec-2013 Kevin Hao <haokexin@gmail.com> powerpc/fsl_booke: make sure PAGE_OFFSET maps to memstart_addr for relocatable kernel

This is always true for a non-relocatable kernel; otherwise the kernel
would get stuck. But for a relocatable kernel it is a little more
complicated. When booting a relocatable kernel, we just align the
kernel start address to 64M and map PAGE_OFFSET from there. The
relocation is based on this virtual address. But if this address is
not the same as memstart_addr, we will have to change the mapping of
PAGE_OFFSET to the real memstart_addr and perform the relocation
again.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
[scottwood@freescale.com: make offset long and non-negative in simple case]
Signed-off-by: Scott Wood <scottwood@freescale.com>
813125d83372e19edecaba811d4d0dc115d36819 24-Dec-2013 Kevin Hao <haokexin@gmail.com> powerpc/fsl_booke: introduce map_mem_in_cams_addr

Introduce this function so we can set both the physical and virtual
address for the CAM mapping. This will be used by the relocation code.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
78a235efdc42ff363de81fdbc171385e8b86b69b 24-Dec-2013 Kevin Hao <haokexin@gmail.com> powerpc/fsl_booke: set the tlb entry for the kernel address in AS1

We use the TLB1 entries to map lowmem into the kernel space. The
current code assumes that the first TLB entry covers the kernel
image, but this is not true in some special cases, such as when we
run a relocatable kernel above 64M or set CONFIG_KERNEL_START above
64M. So we choose to switch to address space 1 before setting these
TLB entries.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
dd189692d40948d6445bbaeb8cb9bf9d15f54dc6 24-Dec-2013 Kevin Hao <haokexin@gmail.com> powerpc: enable the relocatable support for the fsl booke 32bit kernel

This is based on the code in head_44x.S. The difference is that the
init TLB size we use is 64M. With this patch we can only load the
kernel at an address between memstart_addr and memstart_addr + 64M.
We will lift this restriction in the following patches.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
f0b8b3417d836f89d873f6d5de43d54f02cb11e2 05-Jan-2012 Kumar Gala <galak@kernel.crashing.org> powerpc/fsl-booke: Fixup calc_cam_sz to support MMU v2

The registers that describe the page sizes supported by the TLB are
different on MMU v2, which also supports power-of-two page sizes. For
now we continue to assume that the FSL variable-size array supports
all page sizes up to the maximum one reported in TLB1PS.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
4559424a0c34f0cb22fa31bc24015a06dc064b32 12-Oct-2011 Becky Bruce <beckyb@kernel.crashing.org> powerpc/fsl-booke: Fix settlbcam for 64-bit

Currently, it does a cntlzd on the size and then subtracts it from
21. This doesn't take into account the varying size of a "long".
Just use __ilog2 instead (and subtract the 10 we have to subtract to
get to the tsize encoding).

Also correct the comment about page sizes supported.
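
A sketch of the fixed computation, assuming the Book3E encoding in
which TSIZE is log2 of the page size in KB:

	unsigned int tsize = __ilog2(size) - 10;  /* e.g. 256 MB: 28 - 10 = 18 */

Unlike the cntlzd-based version, this is independent of sizeof(long).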

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
1dc91c3eb374ca01ec99dc0ca2a38babc509beb3 16-Sep-2011 Kumar Gala <galak@kernel.crashing.org> powerpc/fsl-booke: Fix setup_initial_memory_limit to not blindly map

On FSL Book-E devices we support multiple large TLB sizes, and so we
can get into situations in which the initial 1G TLB size is too big
and we're asked for a size that is not mappable by a single entry
(like 512M). The single entry is important because when we bring up
secondary cores they need to ensure any data structure they need to
access (e.g. PACA or stack) is always mapped.

So we really need to determine what size will actually be mapped by
the first TLB entry to ensure we limit early memory references to that
region. We refactor the map_mem_in_cams() code to provide a helper
function that we can use to determine the size of the first TLB entry
while taking size and alignment constraints into account.
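
A hedged sketch of how such a helper could be used; calc_cam_sz is
the name the helper carries in a later commit in this log, but the
body below is illustrative rather than the exact kernel code:

	void __init setup_initial_memory_limit(phys_addr_t first_memblock_base,
					       phys_addr_t first_memblock_size)
	{
		/* Limit early allocations to what the first TLB entry maps,
		 * so secondary cores can always reach them. */
		phys_addr_t limit = calc_cam_sz(first_memblock_size, PAGE_OFFSET,
						first_memblock_base);

		memblock_set_current_limit(first_memblock_base + limit);
	}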

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
55fd766b5fad8240b7a6e994b5779a46d28f73d4 17-Oct-2009 Kumar Gala <galak@kernel.crashing.org> powerpc/fsl-booke64: Use TLB CAMs to cover linear mapping on FSL 64-bit chips

Freescale parts typically have a TLB array for large mappings into
which we can bolt the linear mapping. We reuse the code that already
exists on PPC32 to set up the linear mapping on the 64-bit side so
that it is covered by bolted TLB entries. We use a quarter of the
variable-size TLB array for this purpose.

Additionally, we limit the amount of memory to what we can cover via
bolted entries so we don't get secondary faults in the TLB miss
handlers. We should fix this limitation in the future.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
92437d41374bf59b1914b53bd10ca69d31b1b581 24-Sep-2010 Paul Gortmaker <paul.gortmaker@windriver.com> powerpc: Fix invalid page flags in create TLB CAM path for PTE_64BIT

There exists a four-line chunk of code which, when configured for a
64-bit address space, can incorrectly set certain page flags during
TLB creation. It turns out that this code isn't used, but it might
still serve a purpose. Since it isn't obvious why it exists or why it
causes problems, the description below covers both in detail.

For powerpc bootstrap, the physical memory (at most 768M) is mapped
into the kernel space via the following path:

MMU_init()
  |
  + adjust_total_lowmem()
      |
      + map_mem_in_cams()
          |
          + settlbcam(i, virt, phys, cam_sz, PAGE_KERNEL_X, 0);

In settlbcam(), the kernel will create TLB entries according to the
flag PAGE_KERNEL_X:

settlbcam()
{
	...
	TLBCAM[index].MAS1 = MAS1_VALID
		| MAS1_IPROT | MAS1_TSIZE(tsize) | MAS1_TID(pid);
		  ^
		  These entries cannot be invalidated by the
		  kernel since MAS1_IPROT is set in the TLB entry.
	...
	if (flags & _PAGE_USER) {
		TLBCAM[index].MAS3 |= MAS3_UX | MAS3_UR;
		TLBCAM[index].MAS3 |= ((flags & _PAGE_RW) ? MAS3_UW : 0);
	}

For classic BookE, (flags & _PAGE_USER) is 'zero', so it's fine.
But on boards like the Freescale P4080, we want to support 36-bit
physical addresses. So the following options may be set:

CONFIG_FSL_BOOKE=y
CONFIG_PTE_64BIT=y
CONFIG_PHYS_64BIT=y

As a result, boards like the P4080 will use the Book3E PTE format,
as per the file arch/powerpc/include/asm/pgtable-ppc32.h:

* #elif defined(CONFIG_FSL_BOOKE) && defined(CONFIG_PTE_64BIT)
* #include <asm/pte-book3e.h>

So PAGE_KERNEL_X is __pgprot(_PAGE_BASE | _PAGE_KERNEL_RWX) and the
book3E version of _PAGE_KERNEL_RWX is defined with:

(_PAGE_BAP_SW | _PAGE_BAP_SR | _PAGE_DIRTY | _PAGE_BAP_SX)

Note the _PAGE_BAP_SR, which is also defined in the book3E _PAGE_USER:

#define _PAGE_USER (_PAGE_BAP_UR | _PAGE_BAP_SR) /* Can be read */

So the possibility exists to wrongly assign the user MAS3_U<RWX> bits
to kernel (PAGE_KERNEL_X) address space via the following code fragment:

	if (flags & _PAGE_USER) {
		TLBCAM[index].MAS3 |= MAS3_UX | MAS3_UR;
		TLBCAM[index].MAS3 |= ((flags & _PAGE_RW) ? MAS3_UW : 0);
	}

Here is a dump of the TLB info from Simics with the above code present:
------
L2 TLB1
GT SSS UUU V I
Row Logical Physical SS TLPID TID WIMGE XWR XWR F P V
----- ----------------- ------------------- -- ----- ----- ----- --- --- - - -
0 c0000000-cfffffff 000000000-00fffffff 00 0 0 M XWR XWR 0 1 1
1 d0000000-dfffffff 010000000-01fffffff 00 0 0 M XWR XWR 0 1 1
2 e0000000-efffffff 020000000-02fffffff 00 0 0 M XWR XWR 0 1 1

Actually this conditional code was used for two legacy purposes:

1: supporting KGDB breakpoint insertion.
KGDB has since dropped this; it now uses its core write mechanism to
set breakpoints.

2: io_block_mapping(), which created TLB entries of segment size
(not PAGE_SIZE) for device IO space.
This use case has also been removed from the latest PowerPC kernel.

However, there may still be a use case for it in the future, like
large user pages, so we can't remove it entirely. As an alternative,
we match on all bits of _PAGE_USER instead of just any bits, so the
case where just _PAGE_BAP_SR is set can't sneak through.
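
A sketch of the tightened test (the fragment above, with the any-bits
check made an all-bits check):

	/* Require every _PAGE_USER bit, so a kernel mapping that happens
	 * to carry _PAGE_BAP_SR alone no longer matches. */
	if ((flags & _PAGE_USER) == _PAGE_USER) {
		TLBCAM[index].MAS3 |= MAS3_UX | MAS3_UR;
		TLBCAM[index].MAS3 |= ((flags & _PAGE_RW) ? MAS3_UW : 0);
	}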

With this done, the TLB dump no longer shows XWR set in the U (user)
columns, as below:

-------
L2 TLB1
GT SSS UUU V I
Row Logical Physical SS TLPID TID WIMGE XWR XWR F P V
----- ----------------- ------------------- -- ----- ----- ----- --- --- - - -
0 c0000000-cfffffff 000000000-00fffffff 00 0 0 M XWR 0 1 1
1 d0000000-dfffffff 010000000-01fffffff 00 0 0 M XWR 0 1 1
2 e0000000-efffffff 020000000-02fffffff 00 0 0 M XWR 0 1 1

Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
cd3db0c4ca3d237e7ad20f7107216e575705d2b0 07-Jul-2010 Benjamin Herrenschmidt <benh@kernel.crashing.org> memblock: Remove rmo_size, bury it in arch/powerpc where it belongs

The RMA (RMO is a misnomer) is a concept specific to ppc64 (in fact,
server ppc64, though I hijack it on embedded ppc64 for similar
purposes). It represents the area of memory that can be accessed in
real mode (aka with the MMU off), or, on embedded, from the exception
vectors (which are bolted in the TLB), which pretty much boils down
to the same thing.

We take that out of the generic MEMBLOCK data structure and move it
into arch/powerpc where it belongs, renaming it to "RMA" while at it.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
e63075a3c9377536d085bc013cd3fe6323162449 07-Jul-2010 Benjamin Herrenschmidt <benh@kernel.crashing.org> memblock: Introduce default allocation limit and use it to replace explicit ones

This introduces memblock.current_limit, which is used to limit
allocations from memblock_alloc() or memblock_alloc_base(...,
MEMBLOCK_ALLOC_ACCESSIBLE).

The old MEMBLOCK_ALLOC_ANYWHERE changes value from 0 to ~(u64)0 and can still
be used with memblock_alloc_base() to allocate really anywhere.

It is -no-longer- cropped to MEMBLOCK_REAL_LIMIT which disappears.

Note to archs: I'm leaving the default limit at MEMBLOCK_ALLOC_ANYWHERE.
I strongly recommend that you set an appropriate limit during boot in
order to guarantee that a memblock_alloc() at any time results in
something that is accessible with a simple __va().
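
For example, an arch boot path might cap early allocations at the end
of its early linear mapping (lowmem_end_addr here is an assumed arch
variable):

	/* Anything memblock hands out below this limit is reachable
	 * through a simple __va(). */
	memblock_set_current_limit(lowmem_end_addr);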

The reason is that a subsequent patch will introduce the ability for
the array to resize itself by reallocating itself. The MEMBLOCK core will
honor the current limit when performing those allocations.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
d10ac3734d07bee675384d22d06883b3c57b1524 30-Jun-2010 Becky Bruce <beckyb@kernel.crashing.org> powerpc/fsl-booke: Fix comments in mmu code that mention BATS

There are no BATs on BookE - we have the TLBCAM instead. Also correct
the page size information to include extended sizes. We don't actually
allow a 4G page size to be used, so comment on that as well.

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
78f622377f7d31d988db350a43c5689dd5f31876 13-May-2010 Kumar Gala <galak@kernel.crashing.org> powerpc/fsl-booke: Move loadcam_entry back to asm code to fix SMP ftrace

When we build with ftrace enabled it's possible that loadcam_entry
would use the stack pointer (even though the code doesn't need it).
We call loadcam_entry in __secondary_start before the stack is set
up. To ensure that loadcam_entry doesn't use the stack pointer, the
easiest solution is to just have it in asm code.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
500a0e56c36dabb8cb0d8f3c93aac900058ef7af 13-May-2010 Kumar Gala <galak@kernel.crashing.org> powerpc/fsl-booke: Move loadcam_entry back to asm code to fix SMP ftrace

When we build with ftrace enabled it's possible that loadcam_entry
would use the stack pointer (even though the code doesn't need it).
We call loadcam_entry in __secondary_start before the stack is set
up. To ensure that loadcam_entry doesn't use the stack pointer, the
easiest solution is to just have it in asm code.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
56151e753468e34aeb322af4b0309ab727c97d2e 28-Apr-2010 Wufei <fei.wu@windriver.com> kgdb: don't needlessly skip PAGE_USER test for Fsl booke

The bypassing of this test is a leftover from 2.4-vintage kernels
and is no longer appropriate, or even used, by KGDB. KGDB currently
uses probe_kernel_write() for all access to memory via the KGDB
core, so the bypass can simply be deleted.

This fixes CVE-2010-1446.

CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
CC: Paul Mackerras <paulus@samba.org>
CC: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Wufei <fei.wu@windriver.com>
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
e8137341b1bb9bbdc29d9fd8980485ec7dcb4109 12-Apr-2010 Becky Bruce <beckyb@kernel.crashing.org> powerpc/fsl_booke: Correct test for MMU_FTR_BIG_PHYS

The code was looking for this in cpu_features, not mmu_features. Fix this.

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
ae4cec4736969ec2196a6bbce4ab263ff7cb7eef 14-Dec-2009 Stephen Rothwell <sfr@canb.auug.org.au> powerpc: fix up for mmu_mapin_ram api change

Today's linux-next build (powerpc ppc44x_defconfig) failed like this:

arch/powerpc/mm/pgtable_32.c: In function 'mapin_ram':
arch/powerpc/mm/pgtable_32.c:318: error: too many arguments to function 'mmu_mapin_ram'

Caused by commit de32400dd26e743c5d500aa42d8d6818b79edb73 ("wii: use both
mem1 and mem2 as ram").

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
8b27f0b61db57f5555fc2d3fc95c3ea9fd1a9d6c 15-Oct-2009 Kumar Gala <galak@kernel.crashing.org> powerpc/fsl-booke: Rework TLB CAM code

Re-write the code so it's more standalone and fix some issues:
* Bump the number of CAM entries to 64 to support e500mc
* Make the code handle MAS7 properly
* Use pr_cont instead of creating a string as we go

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
8dcd038a13b8e322c49fe0d3e31a0deaba4fd5fd 07-Aug-2009 Roel Kluin <roel.kluin@gmail.com> powerpc/fsl-booke: read buffer overflow

cam[tlbcam_index] was dereferenced before checking that tlbcam_index <
ARRAY_SIZE(cam), so the loop could read past the end of the array.
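
The shape of the fix, sketched (illustrative, not the exact hunk):

	/* before: cam[i] is read even when i == ARRAY_SIZE(cam) */
	for (i = 0; cam[i] && i < ARRAY_SIZE(cam); i++)
		...
	/* after: the bounds test short-circuits the out-of-bounds read */
	for (i = 0; i < ARRAY_SIZE(cam) && cam[i]; i++)
		...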

Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
8d1cf34e7ad5c7738ce20d20bd7f002f562cb8b5 19-Mar-2009 Benjamin Herrenschmidt <benh@kernel.crashing.org> powerpc/mm: Tweak PTE bit combination definitions

This patch tweaks the way some PTE bit combinations are defined, in such a
way that the 32 and 64-bit variant become almost identical and that will
make it easier to bring in a new common pte-* file for the new variant
of the Book3-E support.

The combination of bits defining access to kernel pages are now clearly
separated from the combination used by userspace and the core VM. The
resulting generated code should remain identical unless I made a mistake.

Note: While at it, I removed a nonsensical statement related to
CONFIG_KGDB in ppc_mmu_32.c which could cause kernel mappings to be
made user accessible when that option is enabled. Probably something
that had bitrotted.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
96a8bac5895a41b0fb05a6aa7c3fa1ea631a91fe 12-Feb-2009 Kumar Gala <galak@kernel.crashing.org> powerpc/fsl-booke: Fix compile warning

arch/powerpc/mm/fsl_booke_mmu.c: In function 'adjust_total_lowmem':
arch/powerpc/mm/fsl_booke_mmu.c:221: warning: format '%ld' expects type 'long int', but argument 3 has type 'phys_addr_t'
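
The usual fix for this class of warning, sketched: phys_addr_t is u32
or u64 depending on CONFIG_PHYS_64BIT, so cast explicitly and print
with %llu (the variable name here is illustrative):

	printk(KERN_INFO "Memory CAM mapping: residual %lluMb\n",
	       (unsigned long long)residual >> 20);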

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
d66c82ea456853a71d88359b0c19a92ac1d393ff 11-Feb-2009 Kumar Gala <galak@kernel.crashing.org> powerpc/fsl-booke: Add new ISA 2.06 page sizes and MAS defines

The Power ISA 2.06 added power-of-two page sizes to the embedded MMU
architecture. It's done in such a way as to be code compatible with
the existing HW. Made the minor code changes to support both
power-of-two and power-of-four page sizes. Also added some new MAS
bits and macros that are defined as part of the 2.06 ISA. Renamed
some things to use the 'Book-3e' concept to convey the new MMU that
is based on the Freescale Book-E MMU programming model.

Note: it's still invalid to try to use a page size that isn't
supported by the cpu.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
6c24b17453c8dc444a746e45b8a404498fc9fcf7 10-Feb-2009 Kumar Gala <galak@kernel.crashing.org> powerpc/fsl-booke: Fix mapping functions to use phys_addr_t

Fixed v_mapped_by_tlbcam() and p_mapped_by_tlbcam() to use phys_addr_t
instead of unsigned long. In 36-bit physical mode we really need these
functions to deal with phys_addr_t when trying to match a physical
address or when returning one.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
96051465fdc29e00dd14b484a45daac089c657f8 09-Dec-2008 Trent Piepho <tpiepho@freescale.com> powerpc/fsl-booke: Make CAM entries used for lowmem configurable

On booke processors, the code that maps low memory only uses up to three
CAM entries, even though there are sixteen and nothing else uses them.

Make this number configurable in the advanced options menu along with max
low memory size. If one wants 1 GB of lowmem, then it's typically
necessary to have four CAM entries.

Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
c8f3570b7e2dd070ba6da41f3ed4ffb4e1d296af 09-Dec-2008 Trent Piepho <tpiepho@freescale.com> powerpc/fsl-booke: Allow larger CAM sizes than 256 MB

The code that maps kernel low memory would only use page sizes up to 256
MB. On E500v2 pages up to 4 GB are supported.

However, a page must be aligned to a multiple of the page's size, i.e.
256 MB pages must be aligned to a 256 MB boundary. This was enforced
by a requirement that the physical and virtual addresses of the start
of lowmem be aligned to 256 MB. Clearly, requiring 1 GB or 4 GB
alignment to allow pages of that size isn't acceptable.

To solve this, I simply have adjust_total_lowmem() take alignment into
account when it decides what size pages to use. Give it PAGE_OFFSET =
0x7000_0000, PHYSICAL_START = 0x3000_0000, and 2GB of RAM, and it will map
pages like this:
PA 0x3000_0000 VA 0x7000_0000 Size 256 MB
PA 0x4000_0000 VA 0x8000_0000 Size 1 GB
PA 0x8000_0000 VA 0xC000_0000 Size 256 MB
PA 0x9000_0000 VA 0xD000_0000 Size 256 MB
PA 0xA000_0000 VA 0xE000_0000 Size 256 MB

Because the lowmem mapping code now takes alignment into account,
PHYSICAL_ALIGN can be lowered from 256 MB to 64 MB. Even lower might be
possible. The lowmem code will work down to 4 kB but it's possible some of
the boot code will fail before then. Poor alignment will force small pages
to be used, which combined with the limited number of TLB1 pages available,
will result in very little memory getting mapped. So alignments less than
64 MB probably aren't very useful anyway.
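
A sketch of the alignment-aware size selection (max_cam, the cap from
the largest supported page size, is assumed):

	unsigned int camsize = __ilog2(ram) & ~1U;     /* size rounded down to a power of 4 */
	unsigned int align = __ffs(virt | phys) & ~1U; /* largest power-of-4 alignment */

	if (camsize > align)
		camsize = align;
	if (camsize > max_cam)
		camsize = max_cam;
	cam_sz = 1UL << camsize;  /* map this much here, advance, repeat */

Checking it against the example above: at PA 0x4000_0000 / VA
0x8000_0000 with 1.75 GB left to map, both bounds come out to 2^30,
giving the 1 GB entry.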

Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
f88747e7f68866f2f82cef1363c5b8e7aa13b0a3 09-Dec-2008 Trent Piepho <tpiepho@freescale.com> powerpc/fsl-booke: Remove code duplication in lowmem mapping

The code to map lowmem uses three CAM aka TLB[1] entries to cover it. The
size of each is stored in three globals named __cam0, __cam1, and __cam2.
All the code that uses them is duplicated three times for each of the three
variables.

We have these things called arrays and loops....

Once converted to use an array, it will be easier to make the number of
CAMs configurable.
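
A sketch of the converted form (following the commit's description;
settlbcam's signature is as quoted elsewhere in this log):

	static unsigned long cam[3];  /* replaces __cam0, __cam1, __cam2 */

	for (i = 0; i < ARRAY_SIZE(cam) && cam[i]; i++) {
		settlbcam(i, virt, phys, cam[i], PAGE_KERNEL_X, 0);
		virt += cam[i];
		phys += cam[i];
	}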

Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
6fd8be4bf72879b3039654388e985cabf8449af5 09-Dec-2008 Trent Piepho <tpiepho@freescale.com> powerpc/fsl-booke: Remove num_tlbcam_entries

This is a global variable defined in fsl_booke_mmu.c with a value that gets
initialized in assembly code in head_fsl_booke.S.

It's never used.

If some code ever does want to know the number of entries in TLB1,
then "numcams = mfspr(SPRN_TLB1CFG) & 0xfff" is a whole lot simpler
than a global initialized from assembly during kernel boot.

Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
19f5465e823858a2f0b0e9a92e52816ba3ee70bb 09-Dec-2008 Trent Piepho <tpiepho@freescale.com> powerpc/fsl-booke: Don't hard-code size of struct tlbcam

Some assembly code in head_fsl_booke.S hard-coded the size of struct tlbcam
to 20 when it indexed the TLBCAM table. Anyone changing the size of struct
tlbcam would not know to expect that.

The kernel already has a system to get the size of C structures into
assembly language files, asm-offsets, so let's use it.

The definition of the struct gets moved to a header, so that asm-offsets.c
can include it.
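
A sketch of the asm-offsets hookup (the constant name is assumed):

	/* arch/powerpc/kernel/asm-offsets.c */
	DEFINE(TLBCAM_SIZE, sizeof(struct tlbcam));

head_fsl_booke.S can then index the TLBCAM table with TLBCAM_SIZE
instead of a hard-coded 20.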

Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
82331ab15f14786f3d8e874efb76462685e3bfa0 21-Aug-2008 Becky Bruce <becky.bruce@freescale.com> powerpc/85xx: fix build warning, remove silly cast

This fixes a build warning when PHYS_64BIT is enabled, and removes an
unnecessary cast to phys_addr_t (the variable being cast is already
a phys_addr_t).

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
37dd2badcfcec35f5e21a0926968d77a404f03c3 21-Apr-2008 Kumar Gala <galak@kernel.crashing.org> [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero)

Added support to allow an 85xx kernel to be run from a non-zero physical
address (useful for cooperative asymmetric multiprocessing situations and
kdump). The support can be configured at compile time by setting
CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as
desired.

Alternatively, the kernel build can set CONFIG_RELOCATABLE. Setting this
config option causes the kernel to determine at runtime the physical
addresses of CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START. If
CONFIG_RELOCATABLE is set, then CONFIG_PHYSICAL_START has no meaning.
However, CONFIG_PHYSICAL_START will always be used to set the LOAD program
header physical address field in the resulting ELF image.

Currently we are limited to running at a physical address that is a
multiple of 256M. This is due to how we map TLBs to cover
lowmem. This should be fixed to allow 64M or maybe even 16M alignment
in the future. It is considered an error to try and run a kernel at a
non-aligned physical address.

All the magic for this support is accomplished by proper initialization
of the kernel memory subsystem and use of ARCH_PFN_OFFSET.

The use of ARCH_PFN_OFFSET only affects normal memory and not IO mappings.
ioremap uses map_page and isn't affected by ARCH_PFN_OFFSET.

/dev/mem continues to allow access to any physical address in the system
regardless of how CONFIG_PHYSICAL_START is set.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
09b5e63f827016732d956abb7a4c74a312d20521 15-Apr-2008 Kumar Gala <galak@kernel.crashing.org> [POWERPC] Rename __initial_memory_limit to __initial_memory_limit_addr

We always use __initial_memory_limit as an address so rename it
to be clear.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
0aef996b37d08757562ecf0bb0c1f6998e634c8b 15-Apr-2008 Kumar Gala <galak@kernel.crashing.org> [POWERPC] 85xx: Cleanup TLB initialization

* Determine at runtime the RPN the kernel is running at, rather than
using a compile-time constant for the initial TLB

* Clean up adjust_total_lowmem() to respect memstart_addr and to be
a bit clearer about which variables are sizes vs. addresses.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
99c62dd773797b68f3b1ca6bb3274725d1852fa2 15-Apr-2008 Kumar Gala <galak@kernel.crashing.org> [POWERPC] Remove and replace uses of PPC_MEMSTART with memstart_addr

A number of users of PPC_MEMSTART (40x, ppc_mmu_32) can just always
use 0 as we don't support booting these kernels at non-zero physical
addresses since their exception vectors must be at 0 (or 0xfffx_xxxx).

For the sub-arches that support relocatable interrupt vectors
(book-e), it's reasonable to have memory start at a non-zero physical
address. For those cases use the variable memstart_addr instead of
the #define PPC_MEMSTART since the only uses of PPC_MEMSTART are for
initialization and in the future we can set memstart_addr at runtime
to have a relocatable kernel.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
e8b63761554aca641bd9020447d487bfd85111bf 22-Nov-2007 Dale Farnsworth <dale@farnsworth.org> [POWERPC] 85xx: Respect KERNELBASE, PAGE_OFFSET, and PHYSICAL_START on e500

The e500 MMU init code previously assumed KERNELBASE always equaled
PAGE_OFFSET and PHYSICAL_START was 0. Removing that assumption is
useful for kdump support as well as asymmetric multicore.

For the initial kdump support, the secondary kernel will run at 32M
but needs access to all of memory, so we bump the initial TLB up to
64M. This also matches the forthcoming ePAPR spec.

Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
873553b3d6b3b19f187a5630300ece20bbf74afd 03-Oct-2007 Dale Farnsworth <dale@farnsworth.org> [POWERPC] 85xx: Failure with odd memory sizes and CONFIG_HIGHMEM

The CONFIG_FSL_BOOKE mmu setup code fails when CONFIG_HIGHMEM=y
and the 3 fixed TLB entries cannot exactly map the lowmem size.
Each TLB entry can map 4MB, 16MB, 64MB or 256MB, so the failure
is observed when the kernel lowmem size is not equal to the
sum of up to 3 of those values.

Normally, memory is sized in nice numbers, but I observed this
problem while testing a crash dump kernel. The failure can
also be observed by artificially reducing the kernel's main
memory via the mem= kernel command line parameter.

This commit fixes the problem by setting __initial_memory_limit
in adjust_total_lowmem().

Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
f21f49ea639ac3f24824177dac1268af75a2d373 13-Jun-2007 David Gibson <david@gibson.dropbear.id.au> [POWERPC] Remove the dregs of APUS support from arch/powerpc

APUS (the Amiga Power-Up System) is not supported under arch/powerpc
and it's unlikely it ever will be. Therefore, this patch removes the
fragments of APUS support code from arch/powerpc which have been
copied from arch/ppc.

A few APUS references are left in asm-powerpc in .h files which are
still used from arch/ppc.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
6ab3d5624e172c553004ecc862bfeac16d9d68b7 30-Jun-2006 Jörn Engel <joern@wohnheim.fh-wedel.de> Remove obsolete #include <linux/config.h>

Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
4c8d3d997ef3c0594350fba716529905b314287e 14-Nov-2005 Kumar Gala <galak@gate.crashing.org> [PATCH] Update email address for Kumar

Changed jobs and the Freescale address is no longer valid.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
14cf11af6cf608eb8c23e989ddb17a715ddce109 26-Sep-2005 Paul Mackerras <paulus@samba.org> powerpc: Merge enough to start building in arch/powerpc.

This creates the directory structure under arch/powerpc and a bunch
of Kconfig files. It does a first-cut merge of arch/powerpc/mm,
arch/powerpc/lib and arch/powerpc/platforms/powermac. This is enough
to build a 32-bit powermac kernel with ARCH=powerpc.

For now we are getting some unmerged files from arch/ppc/kernel and
arch/ppc/syslib, or arch/ppc64/kernel. This makes some minor changes
to files in those directories and files outside arch/powerpc.

The boot directory is still not merged. That's going to be interesting.

Signed-off-by: Paul Mackerras <paulus@samba.org>