History log of /arch/arm/include/asm/pgtable.h
Revision Date Author Comments
903ed3a54df2f6234c50f696b8a3db78c26ea119 17-Sep-2014 Ard Biesheuvel <ard.biesheuvel@linaro.org> ARM: kvm: define PAGE_S2_DEVICE as read-only by default

Now that we support read-only memslots, we need to make sure that
pass-through device mappings are not mapped writable if the guest
has requested them to be read-only. The existing implementation
already honours this by calling kvm_set_s2pte_writable() on the new
pte in case of writable mappings, so all we need to do is define
the default pgprot_t value used for devices to be PTE_S2_RDONLY.
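
For illustration, roughly what the resulting define looks like (a sketch;
the _MOD_PROT() helper and pgprot_s2_device base are the ones used for the
other stage-2 protections in this header, and the exact form may differ):

  /* default stage-2 device mapping: read-only unless explicitly made writable */
  #define PAGE_S2_DEVICE	_MOD_PROT(pgprot_s2_device, L_PTE_S2_RDONLY)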

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
bd951303be5b4df578c7f30ef78839f1a9d6658c 10-Oct-2014 Steve Capper <steve.capper@linaro.org> arm: mm: introduce special ptes for LPAE

We need a mechanism to tag ptes as being special; this indicates that no
attempt should be made to access the underlying struct page * associated
with the pte. This is used by fast_gup when operating on ptes, as it
has no means to access VMAs (which also contain this information)
locklessly.

The L_PTE_SPECIAL bit is already allocated for LPAE, this patch modifies
pte_special and pte_mkspecial to make use of it, and defines
__HAVE_ARCH_PTE_SPECIAL.

This patch also excludes special ptes from the icache/dcache sync logic.
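
A minimal sketch of what the LPAE accessors end up looking like, assuming
the pte_isset() helper from commit 8108/1 further down this log (the exact
in-tree form may differ slightly):

  #define __HAVE_ARCH_PTE_SPECIAL

  #define pte_special(pte)	(pte_isset((pte), L_PTE_SPECIAL))

  static inline pte_t pte_mkspecial(pte_t pte)
  {
          pte_val(pte) |= L_PTE_SPECIAL;
          return pte;
  }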

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dann Frazier <dann.frazier@canonical.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
f2950706871c4b6e8c0f0d7c3f62d35930b8de63 18-Jul-2014 Steven Capper <steve.capper@linaro.org> ARM: 8108/1: mm: Introduce {pte,pmd}_isset and {pte,pmd}_isclear

Long descriptors on ARM are 64 bits, and some pte functions such as
pte_dirty return a bitwise-and of a flag with the pte value. If the
flag to be tested resides in the upper 32 bits of the pte, then we run
into the danger of the result being dropped if downcast.

For example:
gather_stats(page, md, pte_dirty(*pte), 1);
where pte_dirty(*pte) is downcast to an int.

This patch introduces a new macro, pte_isset, which performs the bitwise
AND and then a double logical invert (where needed) to ensure
predictable downcasting. Its logical inverse, pte_isclear, is also
introduced.

Equivalent pmd functions for Transparent HugePages have also been
added.
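
A sketch of the shape of these macros for a 64-bit pteval_t under LPAE
(the exact upstream definition may differ):

  /* If the flag fits in 32 bits, the plain AND result survives a downcast;
   * otherwise collapse it to 0/1 with a double logical invert. */
  #define pte_isset(pte, val)	((u32)(val) == (val) ? pte_val(pte) & (val) \
					: !!(pte_val(pte) & (val)))
  #define pte_isclear(pte, val)	(!(pte_val(pte) & (val)))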

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
1971188aa19651d8f447211c6535fb68661d77c5 21-Feb-2014 Will Deacon <will.deacon@arm.com> ARM: 7985/1: mm: implement pte_accessible for faulting mappings

The pte_accessible macro can be used to identify page table entries
capable of being cached by a TLB. In principle, this differs from
pte_present, since PROT_NONE mappings are mapped using invalid entries
identified as present and ptes designated as `old' can use either
invalid entries or those with the access flag cleared (guaranteed not to
be in the TLB). However, there is a race to take care of, as described
in 20841405940e ("mm: fix TLB flush race between migration, and
change_protection_range"), between a page being migrated and mprotected
at the same time. In this case, we can check whether a TLB invalidation
is pending for the mm and if so, temporarily consider PROT_NONE mappings
as valid.

This patch implements a quick pte_accessible macro for ARM by simply
checking if the pte is valid/present depending on the mm. For classic
MMU, these checks are identical and will generate some false positives
for PROT_NONE mappings, but this is better than the current asm-generic
definition of ((void)(pte),1).

Finally, pte_present_user is moved to use pte_valid (and renamed
appropriately) since we don't care about cache flushing for faulting
mappings.
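
A sketch of the resulting definitions, following the description above
(the exact upstream form may differ):

  #define pte_valid(pte)	(pte_val(pte) & L_PTE_VALID)
  #define pte_valid_user(pte)	\
	(pte_valid(pte) && (pte_val(pte) & L_PTE_USER) && pte_young(pte))

  /* While a TLB invalidation is pending for the mm, fall back to
   * pte_present() so PROT_NONE entries are treated as potentially cached. */
  #define pte_accessible(mm, pte)	(mm_tlb_flush_pending(mm) ? \
					 pte_present(pte) : pte_valid(pte))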

Acked-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
27ec8da4ace46900a71b8462157aa2bc88ff5d2c 17-Jun-2013 Laura Abbott <lauraa@codeaurora.org> ARM: add definitions for pte_mkexec/pte_mknexec

Other architectures define pte_mkexec to mark a pte as executable.
Add pte_mkexec for ARM to get the same functionality. Although no
other architectures currently define it, also add pte_mknexec to
explicitly allow a pte to be marked as non-executable.
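
A minimal sketch in terms of the L_PTE_XN software bit (written out as
plain inline helpers here; the header may express these via its
PTE_BIT_FUNC() macro instead):

  static inline pte_t pte_mkexec(pte_t pte)
  {
          pte_val(pte) &= ~L_PTE_XN;	/* clear execute-never: executable */
          return pte;
  }

  static inline pte_t pte_mknexec(pte_t pte)
  {
          pte_val(pte) |= L_PTE_XN;	/* set execute-never: non-executable */
          return pte;
  }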

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
d8aa712c30148ba26fd89a5dc14de95d4c375184 28-Nov-2013 Russell King <rmk+kernel@arm.linux.org.uk> ARM: fix booting low-vectors machines

Commit f6f91b0d9fd9 (ARM: allow kuser helpers to be removed from the
vector page) required two pages for the vectors code. Although the
code setting up the initial page tables was updated, the code which
allocates page tables for new processes wasn't, and neither was the code
which tears down the mappings. Fix this.

Fixes: f6f91b0d9fd9 ("ARM: allow kuser helpers to be removed from the vector page")
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: <stable@vger.kernel.org>
8947c09d05da9f0436f423518f449beaa5ea1bdc 06-Aug-2013 Christoffer Dall <christoffer.dall@linaro.org> ARM: 7808/1: KVM: mm: Get rid of L_PTE_USER ref from PAGE_S2_DEVICE

The L_PTE_USER bit actually has nothing to do with stage 2 mappings; the
L_PTE_S2_RDWR value sets the readable bit, which is what L_PTE_USER
was being used for before proper handling of the stage 2 memory defines.

Changelog:
[v3]: Drop call to kvm_set_s2pte_writable in mmu.c
[v2]: Change default mappings to be r/w instead of r/o, as per Marc
Zyngier's suggestion.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
40d158e61840fbbe23be3f37302a3ca237c15491 11-May-2013 Al Viro <viro@zeniv.linux.org.uk> consolidate io_remap_pfn_range definitions

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
8d962507007357d6fbbcbdd1647faa389a9aed6d 25-Jul-2012 Catalin Marinas <catalin.marinas@arm.com> ARM: mm: Transparent huge page support for LPAE systems.

The patch adds support for THP (transparent huge pages) to LPAE
systems. When this feature is enabled, the kernel tries to map
anonymous pages as 2MB sections where possible.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[steve.capper@linaro.org: symbolic constants used, value of
PMD_SECT_SPLITTING adjusted, tlbflush.h included in pgtable.h,
added PROT_NONE support.]
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
104ad3b32d7a71941c8ab2dee78eea38e8a23309 30-Apr-2013 Catalin Marinas <catalin.marinas@arm.com> arm: set the page table freeing ceiling to TASK_SIZE

ARM processors with LPAE enabled use 3 levels of page tables, with an
entry in the top level (pgd) covering 1GB of virtual space. Because of
the branch relocation limitations on ARM, the loadable modules are
mapped 16MB below PAGE_OFFSET, making the corresponding 1GB pgd shared
between kernel modules and user space.

If free_pgtables() is called with the default ceiling 0,
free_pgd_range() (and subsequently called functions) also frees the page
table shared between user space and kernel modules (which is normally
handled by the ARM-specific pgd_free() function). This patch defines
the ARM USER_PGTABLES_CEILING as TASK_SIZE when CONFIG_ARM_LPAE
is enabled.

Note that the pgd_free() function already checks the presence of the
shared pmd page allocated by pgd_alloc() and frees it, though with
ceiling 0 this wasn't necessary.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Hugh Dickins <hughd@google.com>
Cc: <stable@vger.kernel.org> [3.3+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
6aaa189f8712471a250bfdf8fc8d08277258b8ab 23-Apr-2013 Catalin Marinas <catalin.marinas@arm.com> ARM: 7702/1: Set the page table freeing ceiling to TASK_SIZE

ARM processors with LPAE enabled use 3 levels of page tables, with an
entry in the top level (pgd) covering 1GB of virtual space. Because of
the branch relocation limitations on ARM, the loadable modules are
mapped 16MB below PAGE_OFFSET, making the corresponding 1GB pgd shared
between kernel modules and user space.

If free_pgtables() is called with the default ceiling 0,
free_pgd_range() (and subsequently called functions) also frees the page
table shared between user space and kernel modules (which is normally
handled by the ARM-specific pgd_free() function). This patch defines
the ARM USER_PGTABLES_CEILING as TASK_SIZE when CONFIG_ARM_LPAE
is enabled.

Note that the pgd_free() function already checks the presence of the
shared pmd page allocated by pgd_alloc() and frees it, though with
ceiling 0 this wasn't necessary.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <stable@vger.kernel.org> # 3.3+
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
69dde4c52dbac2891b49ff9723d9c84efc5baf6f 18-Feb-2013 Catalin Marinas <catalin.marinas@arm.com> ARM: 7654/1: Preserve L_PTE_VALID in pte_modify()

Following commit 26ffd0d4 (ARM: mm: introduce present, faulting entries
for PAGE_NONE), if a page has been mapped as PROT_NONE, the L_PTE_VALID
bit is cleared by the set_pte_ext() code. With LPAE the software and
hardware pte share the same location, and subsequent modifications of the
pte range (via change_protection()) will leave the L_PTE_VALID bit cleared.

This patch adds the L_PTE_VALID bit to the newprot mask in pte_modify().
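
A sketch of pte_modify() with L_PTE_VALID included in the preserved mask
(the other mask bits follow the then-current header; the exact form may
differ):

  static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
  {
          const pteval_t mask = L_PTE_XN | L_PTE_RDONLY | L_PTE_USER |
                                L_PTE_NONE | L_PTE_VALID;

          pte_val(pte) = (pte_val(pte) & ~mask) | (pgprot_val(newprot) & mask);
          return pte;
  }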

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Subash Patel <subash.rp@samsung.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 3.8.x
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
cc577c26e2e9740b046591a72e77213c556bff19 21-Jan-2013 Christoffer Dall <c.dall@virtualopensystems.com> ARM: Add page table and page defines needed by KVM

KVM uses the stage-2 page tables and the Hyp page table format,
so we define the fields and page protection flags needed by KVM.

The nomenclature is this:
- page_hyp: PL2 code/data mappings
- page_hyp_device: PL2 device mappings (vgic access)
- page_s2: Stage-2 code/data page mappings
- page_s2_device: Stage-2 device mappings (vgic access)
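
A rough sketch of the kind of defines this adds (the pgprot_* bases shown
are assumptions modelled on the existing pgprot_kernel; the exact
protection bits differ and are reworked by the two KVM commits further up
this log):

  #define PAGE_HYP		_MOD_PROT(pgprot_kernel, L_PTE_HYP)
  #define PAGE_HYP_DEVICE	_MOD_PROT(pgprot_hyp_device, L_PTE_HYP)
  #define PAGE_S2		_MOD_PROT(pgprot_s2, L_PTE_S2_RDONLY)
  /* PAGE_S2_DEVICE is built the same way from a pgprot_s2_device base */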

Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
26ffd0d43b186b0d5186354da8714a1c2d360df0 01-Sep-2012 Will Deacon <will.deacon@arm.com> ARM: mm: introduce present, faulting entries for PAGE_NONE

PROT_NONE mappings apply the page protection attributes defined by _P000
which translate to PAGE_NONE for ARM. These attributes specify an XN,
RDONLY pte that is inaccessible to userspace. However, on kernels
configured without support for domains, such a pte *is* accessible to
the kernel and can be read via get_user, allowing tasks to read
PROT_NONE pages via syscalls such as read/write over a pipe.

This patch introduces a new software pte flag, L_PTE_NONE, that is set
to identify faulting, present entries.
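
A sketch of how the new software bit feeds into the PROT_NONE protection
(the bit position shown is an assumption; it is a spare software bit in
both the 2-level and LPAE layouts):

  #define L_PTE_NONE	(_AT(pteval_t, 1) << 11)	/* spare software bit (assumed position) */

  /* XN + RDONLY + NONE: present to the core VM, but set_pte_ext() writes a
   * faulting hardware entry when L_PTE_NONE is set. */
  #define PAGE_NONE	_MOD_PROT(pgprot_user, L_PTE_XN | L_PTE_RDONLY | L_PTE_NONE)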

Signed-off-by: Will Deacon <will.deacon@arm.com>
dbf62d50067e55a782583fe53c3d2a3d98b1f6f3 19-Jul-2012 Will Deacon <will.deacon@arm.com> ARM: mm: introduce L_PTE_VALID for page table entries

For long-descriptor translation table formats, the ARMv7 architecture
defines the last two bits of the second- and third-level descriptors to
be:

x0b - Invalid
01b - Block (second-level), Reserved (third-level)
11b - Table (second-level), Page (third-level)

This allows us to define L_PTE_PRESENT as (3 << 0) and use this value to
create ptes directly. However, when determining whether a given pte
value is present in the low-level page table accessors, we only need to
check the least significant bit of the descriptor, allowing us to write
faulting, present entries which are required for PROT_NONE mappings.

This patch introduces L_PTE_VALID, which can be used to test whether a
pte should fault, and updates the low-level page table accessors
accordingly.
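
A sketch of the resulting LPAE bit definitions, following the encoding
described above:

  #define L_PTE_VALID	(_AT(pteval_t, 1) << 0)	/* bit tested by the low-level accessors */
  #define L_PTE_PRESENT	(_AT(pteval_t, 3) << 0)	/* valid + present; used to build ptes */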

Signed-off-by: Will Deacon <will.deacon@arm.com>
a1ce39288e6fbefdd8d607021d02384eb4a20b99 02-Oct-2012 David Howells <dhowells@redhat.com> UAPI: (Scripted) Convert #include "..." to #include <path/...> in kernel system headers

Convert #include "..." to #include <path/...> in kernel system headers.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Dave Jones <davej@redhat.com>
f5f2025ef3e2cdb593707cbf87378761f17befbe 10-Aug-2012 Will Deacon <will.deacon@arm.com> ARM: 7488/1: mm: use 5 bits for swapfile type encoding

Page migration encodes the pfn in the offset field of a swp_entry_t.
For LPAE, we support physical addresses of up to 36 bits (due to
sparsemem limitations with the size of page flags), requiring 24 bits
to represent a pfn. A further 3 bits are used to encode a swp_entry into
a pte, leaving 5 bits for the type field. Furthermore, the core code
defines MAX_SWAPFILES_SHIFT as 5, so the additional type bit does not
get used.

This patch reduces the width of the type field to 5 bits, allowing us
to create up to 31 swapfiles of 64GB each.
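
A sketch of the corresponding encoding macros (values follow the
arithmetic above: 3 low bits reserved, a 5-bit type, then the offset;
exact in-tree names may differ):

  #define __SWP_TYPE_SHIFT	3
  #define __SWP_TYPE_BITS	5
  #define __SWP_TYPE_MASK	((1 << __SWP_TYPE_BITS) - 1)
  #define __SWP_OFFSET_SHIFT	(__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)

  #define __swp_type(x)		(((x).val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK)
  #define __swp_offset(x)	((x).val >> __SWP_OFFSET_SHIFT)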

Cc: <stable@vger.kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
47f1204329237a0f8655f5a9f14a38ac81946ca1 10-Aug-2012 Will Deacon <will.deacon@arm.com> ARM: 7487/1: mm: avoid setting nG bit for user mappings that aren't present

Swap entries are encoded in ptes such that !pte_present(pte) and
pte_file(pte). The remaining bits of the descriptor are used to identify
the swapfile and the offset within it for the swap entry.

When writing such a pte for a user virtual address, set_pte_at
unconditionally sets the nG bit, which (in the case of LPAE) will
corrupt the swapfile offset and lead to a BUG:

[ 140.494067] swap_free: Unused swap offset entry 000763b4
[ 140.509989] BUG: Bad page map in process rs:main Q:Reg pte:0ec76800 pmd:8f92e003

This patch fixes the problem by only setting the nG bit for user
mappings that are actually present.
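
A sketch of the fixed set_pte_at(), tagging only present user mappings as
not-global (the exact in-tree form may differ):

  static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
                                pte_t *ptep, pte_t pteval)
  {
          unsigned long ext = 0;

          if (addr < TASK_SIZE && pte_present_user(pteval)) {
                  __sync_icache_dcache(pteval);
                  ext |= PTE_EXT_NG;	/* nG only for present user mappings */
          }

          set_pte_ext(ptep, pteval, ext);
  }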

Cc: <stable@vger.kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
9561f4e052a06167694e110d76ce3a5e38b59522 03-Jan-2012 Nicolas Pitre <nicolas.pitre@linaro.org> Revert "ARM: move VMALLOC_END down temporarily for shmobile"

This reverts commit 0af362f8440a78b970d5f215e234420fa87d0f3f as shmobile
is not using a non-standard memory layout anymore.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
dcfdae04bd92e8a2ea155db0e21e3bddc09e0a89 22-Nov-2011 Catalin Marinas <catalin.marinas@arm.com> ARM: LPAE: Introduce the 3-level page table format definitions

This patch introduces the pgtable-3level*.h files with definitions
specific to the LPAE page table format (3 levels of page tables).

Each table is 4KB and has 512 64-bit entries. An entry can point to a
40-bit physical address. The young, write and exec software bits share
the corresponding hardware bits (negated). Other software bits use spare
bits in the PTE.

The patch also changes some variable types from unsigned long or int to
pteval_t or pgprot_t.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
e0c0313bd720977a7ed01dc48f0762a3ddec607f 22-Nov-2011 Catalin Marinas <catalin.marinas@arm.com> ARM: LPAE: Move page table maintenance macros to pgtable-2level.h

The page table maintenance macros need to be duplicated between the
classic and the LPAE MMU so this patch moves those that are not common
to the pgtable-2level.h file.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
a32618d28dbe6e9bf8ec508ccbc3561a7d7d32f0 22-Nov-2011 Russell King <rmk+kernel@arm.linux.org.uk> ARM: pgtable: switch to use pgtable-nopud.h

Nick Piggin noted upon introducing 4level-fixup.h:

| Add a temporary "fallback" header so architectures can run with
| the 4level pagetables patch without modification. All architectures
| should be converted to use the folding headers (include/asm-generic/
| pgtable-nop?d.h) as soon as possible, and the fallback header removed.

This makes ARM compliant with this statement.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8903826d0cd99aed9267e792d38284cf3092042b 30-Sep-2011 Will Deacon <will.deacon@arm.com> ARM: idmap: populate identity map pgd at init time using .init.text

When disabling and re-enabling the MMU, it is necessary to take out an
identity mapping for the code that manipulates the SCTLR in order to
avoid it disappearing from under our feet. This is useful when soft
rebooting and returning from CPU suspend.

This patch allocates a set of page tables during boot and populates them
with an identity mapping for the .idmap.text section. This means that
users of the identity map do not need to manage their own pgd and can
instead annotate their functions with __idmap or, in the case of assembly
code, place them in the correct section.

Acked-by: Dave Martin <dave.martin@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
7dbaa466780a754154531b44c2086f6618cee3a8 22-Nov-2011 Rob Herring <rob.herring@calxeda.com> ARM: 7169/1: topdown mmap support

Similar to other architectures, this adds topdown mmap support in user
process address space allocation policy. This allows mmap sizes greater
than 2GB. This support is largely copied from MIPS and the generic
implementations.

The address space randomization is moved into arch_pick_mmap_layout.

Tested on V-Express with ubuntu and a mmap test from here:
https://bugs.launchpad.net/bugs/861296

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
0af362f8440a78b970d5f215e234420fa87d0f3f 19-Sep-2011 Nicolas Pitre <nicolas.pitre@linaro.org> ARM: move VMALLOC_END down temporarily for shmobile

THIS IS A TEMPORARY HACK. The purpose of this is _only_ to avoid a
regression on an existing machine while a better fix is implemented.

On shmobile the consistent DMA memory area was set to 158MB in commit
28f0721a79 with no explanation. The documented size for this area should
vary between 2MB and 14MB, and none of the other ARM targets exceed that.

The included #warning is therefore deliberately noisy, to get the
shmobile maintainers' attention, and this commit should be reverted once
this consistent DMA size conflict is resolved.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: Magnus Damm <damm@opensource.se>
Cc: Paul Mundt <lethal@linux-sh.org>
0536bdf33faff4d940ac094c77998cfac368cfff 25-Aug-2011 Nicolas Pitre <nicolas.pitre@linaro.org> ARM: move iotable mappings within the vmalloc region

In order to remove the build time variation between different SOCs with
regards to VMALLOC_END, the iotable mappings are now allocated inside
the vmalloc region. This allows for VMALLOC_END to be identical across
all machines.

The value for VMALLOC_END is now set to 0xff000000 which is right where
the consistent DMA area starts.

To accommodate all static mappings on machines with possible highmem usage,
the default vmalloc area size is changed to 240 MB so that VMALLOC_START
is no higher than 0xf0000000 by default.
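
In header terms the end result is a single fixed value (sketch):

  #define VMALLOC_END	0xff000000UL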

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Tested-by: Jamie Iles <jamie@jamieiles.com>
d7c5d0dcffb3b5702d9477faceff4b8398e6fed0 05-Sep-2011 Catalin Marinas <catalin.marinas@arm.com> ARM: 7077/1: LPAE: Use a mask for physical addresses in page table entries

With LPAE, the physical address mask is 40-bit while the page table
entry is 64-bit. This patch introduces PHYS_MASK for the 2-level page
table format, defined as ~0UL.
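
A sketch of the two variants this distinguishes (the 40-bit LPAE value
follows the text above; its exact expression in the -3level header may
differ):

  /* 2-level (classic) page tables: 32-bit entries, nothing to mask off */
  #define PHYS_MASK	(~0UL)

  /* 3-level (LPAE) page tables: 40-bit output address in a 64-bit entry */
  #define PHYS_MASK	((1ULL << 40) - 1)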

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
17f57211969bddca2e922299a2530b1c65ccabfa 05-Sep-2011 Catalin Marinas <catalin.marinas@arm.com> ARM: 7075/1: LPAE: Factor out 2-level page table definitions into separate files

This patch moves page table definitions from asm/page.h, asm/pgtable.h
and asm/pgtable-hwdef.h into corresponding *-2level* files.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
8fb54284ba6aa1f0d832ec015fde64ecf4bb0f4f 28-Jun-2011 Santosh Shilimkar <santosh.shilimkar@ti.com> ARM: mm: Add strongly ordered descriptor support.

On certain architectures, there might be a need to mark certain
addresses with strongly ordered memory attributes to avoid ordering
issues at the interconnect level.

On OMAP4, the asynchronous bridge buffers can only be drained
with strongly ordered accesses and hence the need to mark the
memory strongly ordered.

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Woodruff Richard <r-woodruff2@ti.com>
Tested-by: Vishwanath BS <vishwanath.bs@ti.com>
516295e5ab4bf986865cfff856d484ec678e3b0b 21-Nov-2010 Russell King <rmk+kernel@arm.linux.org.uk> ARM: pgtable: add pud-level code

Add pud_offset() et al. between the pgd and pmd code in preparation for
using pgtable-nopud.h rather than 4level-fixup.h.

This incorporates a fix from Jamie Iles <jamie@jamieiles.com> for
uaccess_with_memcpy.c.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
cae6292b653f5e3308bf2787a54b7dcd2cc7e2b3 15-Feb-2011 Will Deacon <will.deacon@arm.com> ARM: 6672/1: LPAE: use phys_addr_t instead of unsigned long in mapping functions

The unsigned long datatype is not sufficient for mapping physical addresses
>= 4GB.

This patch ensures that the phys_addr_t datatype is used to represent physical
addresses when converting from a PFN.
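
A sketch of the kind of conversion this fixes, widening to phys_addr_t
before shifting rather than staying in unsigned long (the macro form is an
assumption of how the headers express it):

  #define __pfn_to_phys(pfn)	((phys_addr_t)(pfn) << PAGE_SHIFT)
  #define pfn_pte(pfn, prot)	__pte(__pfn_to_phys(pfn) | pgprot_val(prot))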

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
36bb94ba36f332de767cfaa3af6a5136435a3a9c 16-Nov-2010 Russell King <rmk+kernel@arm.linux.org.uk> ARM: pgtable: provide RDONLY page table bit rather than WRITE bit

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
9522d7e4cb5e0858122fc55d33a2c07728f0b10d 16-Nov-2010 Russell King <rmk+kernel@arm.linux.org.uk> ARM: pgtable: invert L_PTE_EXEC to L_PTE_XN

The hardware page tables use an XN bit 'execute never'. Historically,
we've had a Linux 'execute allow' bit, in the positive sense. Get rid
of this artifact as future hardware will continue to have the XN sense.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
e926f4495e202500a6265987277fab217e235f08 21-Nov-2010 Russell King <rmk+kernel@arm.linux.org.uk> ARM: pgtable: remove FIRST_USER_PGD_NR

FIRST_USER_PGD_NR is now unnecessary, as this has been replaced by
FIRST_USER_ADDRESS except in the architecture code. Fix up the last
usage of FIRST_USER_PGD_NR, and remove the definition.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
614dd0585f376a25c638abbed9c5fbd21d7baece 21-Nov-2010 Russell King <rmk+kernel@arm.linux.org.uk> ARM: pgtable: collect up identity mapping functions

We have two places where we create identity mappings - one when we bring
secondary CPUs online, and one where we set up some mappings for
soft reboot. Combine these two into a single implementation. Also collect
the identity mapping deletion function.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
d30e45eeabefadc6039d7f876a59e5f5f6cb11c6 16-Nov-2010 Russell King <rmk+kernel@arm.linux.org.uk> ARM: pgtable: switch order of Linux vs hardware page tables

This switches the ordering of the Linux vs hardware page tables in
each page, thereby eliminating some of the arithmetic in the page
table walks. As we now place the Linux page table at the beginning
of the page, we can deal with the offset in the pgt by simply masking
it away, along with the other control bits.

This also makes the arithmetic all be positive, rather than a mixture.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
f6e3354d02aa1f30672e3671098c12cb49c7da25 16-Nov-2010 Russell King <rmk+kernel@arm.linux.org.uk> ARM: pgtable: introduce pteval_t to represent a pte value

This makes everywhere dealing with pte values use the same type.
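
A sketch of the idea (the 32-bit classic case; the exact home of these
typedefs in the ARM headers is an assumption, and LPAE later widens the
type):

  typedef unsigned long pteval_t;		/* one type for every raw pte value */

  typedef struct { pteval_t pte; } pte_t;
  #define pte_val(x)	((x).pte)
  #define __pte(x)	((pte_t) { (x) })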

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
97092e0c56830457af0639f6bd904537a150ea4a 16-Nov-2010 Russell King <rmk+kernel@arm.linux.org.uk> ARM: pgtable: use phys_addr_t for physical addresses

Ensure that physical addresses are typed as phys_addr_t

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
69529c0eb76469168f1dd5851f363dbab17ce8fd 16-Nov-2010 Russell King <rmk+kernel@arm.linux.org.uk> ARM: pgtable: directly pass pgd/pmd/pte to their error functions

Rather than passing the pte value to __pte_error, pass the raw pte_t
cookie instead. Do the same for pmd and pgd functions.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
b510b049b549500816280f7ceaa087cfefdec581 26-Nov-2010 Russell King <rmk+kernel@arm.linux.org.uk> ARM: pgtable: group pte functions together

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
4eec4b1396ac6a6a602b4521d40e9cf596ab776d 26-Nov-2010 Russell King <rmk+kernel@arm.linux.org.uk> ARM: pgtable: group pgd functions and data together

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
eb9b2b69d3bdfe9cd98cd9b2c5715346a0f0140d 26-Nov-2010 Russell King <rmk+kernel@arm.linux.org.uk> ARM: pgtable: move pgprot functions to one place

Rather than scattering them throughout the file, group them together.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
c0ba10b512eb2e2a3888b6e6cc0e089f5e7a191b 21-Nov-2010 Russell King <rmk+kernel@arm.linux.org.uk> ARM: improve compiler's ability to optimize page tables

Allow the compiler to better optimize the page table walking code
by avoiding over-complex pmd_addr_end() calculations. These
calculations prevent the compiler from spotting that we'll never iterate
over the PMD table, causing it to create doubly nested loops where
a single loop will do.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ece0e2b6406a995c371e0311190631ea34ad851a 26-Oct-2010 Peter Zijlstra <a.p.zijlstra@chello.nl> mm: remove pte_*map_nested()

Since we no longer need to provide KM_type, the whole pte_*map_nested()
API is now redundant; remove it.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
d907387c42e9e39261629890e45a08ef4c3ed3fe 13-Sep-2010 Catalin Marinas <catalin.marinas@arm.com> ARM: 6383/1: Implement phys_mem_access_prot() to avoid attributes aliasing

ARMv7 onwards requires that there are no aliases to the same physical
location using different memory types (i.e. Normal vs Strongly Ordered).
Access to SO mappings when the unaligned accesses are handled in
hardware is also Unpredictable (pgprot_noncached() mappings in user
space).

The /dev/mem driver requires uncached mappings with O_SYNC. The patch
implements the phys_mem_access_prot() function which generates Strongly
Ordered memory attributes if !pfn_valid() (independent of O_SYNC) and
Normal Noncacheable (writecombine) if O_SYNC.
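
A sketch of the decision described above (an approximation; the real
implementation lives in the ARM mm code rather than this header):

  pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
                                unsigned long size, pgprot_t vma_prot)
  {
          if (!pfn_valid(pfn))
                  return pgprot_noncached(vma_prot);	/* Strongly Ordered */
          else if (file->f_flags & O_SYNC)
                  return pgprot_writecombine(vma_prot);	/* Normal Noncacheable */
          return vma_prot;				/* default cacheable mapping */
  }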

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
6012191aa9c6ffff3a23b81162298318b56d7cb3 13-Sep-2010 Catalin Marinas <catalin.marinas@arm.com> ARM: 6380/1: Introduce __sync_icache_dcache() for VIPT caches

On SMP systems, there is a small chance of a PTE becoming visible to a
different CPU before the current cache maintenance operations in
update_mmu_cache(). To avoid this, cache maintenance must be handled in
set_pte_at() (similar to IA-64 and PowerPC).

This patch provides a unified VIPT cache handling mechanism and
implements the __sync_icache_dcache() function for ARMv6 onwards
architectures. It is called from set_pte_at() and replaces the
update_mmu_cache(). The latter is still used on VIVT hardware where a
vm_area_struct is required.

Tested-by: Rabin Vincent <rabin.vincent@stericsson.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
47ab0dee661dbd5aca67abe44a333e471134fbf9 15-May-2010 Russell King <rmk+kernel@arm.linux.org.uk> ARM: Optionally allow ARMv6 to use 'normal, bufferable' memory for DMA

Provide a configuration option to allow the ARMv6 to use normal
bufferable memory for coherent DMA. This option is forced to 'y'
for ARMv7, and offered as a configuration option on ARMv6.

Enabling this option requires drivers to have the necessary barriers
to ensure that data in DMA coherent memory is visible prior to the
DMA operation commencing.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
26a26d329688ab018e068b412b03d43d7c299f0a 20-Nov-2009 Russell King <rmk+kernel@arm.linux.org.uk> ARM: dma-mapping: switch ARMv7 DMA mappings to retain 'memory' attribute

On ARMv7, it is invalid to map the same physical address multiple times
with different memory types. Since system RAM is already mapped as
'memory', subsequent remapping of it must retain this attribute.

However, DMA memory maps it as "strongly ordered". Fix this by introducing
'pgprot_dmacoherent()' which provides the necessary page table bits for
DMA mappings.
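
A sketch of the shape of the new macro, switching the memory-type field to
bufferable 'memory' instead of Strongly Ordered (the exact mask, and
whether XN is also forced, are assumptions here):

  #define pgprot_dmacoherent(prot) \
          __pgprot_modify(prot, L_PTE_MT_MASK, L_PTE_MT_BUFFERABLE)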

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
65cec8e3db606608fd1f8dfc4a1c7c37bfba9173 17-Aug-2009 Russell King <rmk@dyn-67.arm.linux.org.uk> ARM: implement highpte

Add the ARM implementation of highpte, which allows PTE tables to be
placed in highmem. Unfortunately, we do not offer highpte support
when support for L2 cache is enabled.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
6a00cded91532f3d58e07729ba56269339281d8e 11-Jul-2009 Russell King <rmk@dyn-67.arm.linux.org.uk> [ARM] pgtable: rearrange file PTE bit allocation

For future compatibility, we need to ensure that swap and file Linux
PTEs conform with the hardware PTEs "fault" encoding. Swap PTEs
already fit in with this, but file PTEs do not. Shift them by one
bit to ensure that they conform, using bit 2 to distinguish between
swap and file PTEs.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
f7a55fa6ecef8be6d15bd79a803e44a3187ce9d6 11-Jul-2009 Russell King <rmk@dyn-67.arm.linux.org.uk> [ARM] remove L_PTE_BUFFERABLE and L_PTE_CACHEABLE

These old symbols are meaningless now that we have memory type
support implemented. The entire memory type field needs to be
modified rather than just a few bits twiddled.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
65b1bfc13e8f50034187e339aa12b81cd6785bd5 05-Jul-2009 Russell King <rmk@dyn-67.arm.linux.org.uk> [ARM] pgtable: file pte layout documentation

Document the layout of our file PTE entries.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
fb93a1c75eb646fde35985e9af23da936775ae52 05-Jul-2009 Russell King <rmk@dyn-67.arm.linux.org.uk> [ARM] pgtable: swp pte layout documentation, definitions, and check

Document the layout of our swp PTE entries, adding definitions for
the bit masks/shifts/sizes, and implement MAX_SWAPFILES_CHECK()
such that we fail to build if we are unable to properly encode the
swp type field.
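
A sketch of the build-time check described (the macro names follow the swp
encoding entries elsewhere in this log):

  /* fail the build if the core swap-type width no longer fits our encoding */
  #define MAX_SWAPFILES_CHECK()	BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > __SWP_TYPE_BITS)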

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
924a158a12c7e732179dd85ddd20848039e7bd71 26-Apr-2009 Russell King <rmk@dyn-67.arm.linux.org.uk> [ARM] Convert pmd_page() to be highmem safe

In the long run, we may want to place page tables in highmem. However,
pmd_page() has traditionally been coded to convert the physical address
to a virtual one, which won't work with highmem pages. Instead,
translate the physical address to a PFN, and then convert the PFN to a
struct page instead.
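
A sketch of the highmem-safe conversion described above:

  /* phys -> pfn -> struct page, so highmem page tables also resolve correctly */
  #define pmd_page(pmd)	(pfn_to_page(__phys_to_pfn(pmd_val(pmd))))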

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
8ec53663d2698076468b3e1edc4e1b418bd54de3 07-Sep-2008 Russell King <rmk@dyn-67.arm.linux.org.uk> [ARM] Improve non-executable support

Add support for detecting non-executable stack binaries, and adjust
permissions to prevent execution from data and stack areas. Also,
ensure that READ_IMPLIES_EXEC is enabled for older CPUs where that
is true, and for any executable-stack binary.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
db5b7169474882fabbd811a4cf5c1bae3157e677 07-Sep-2008 Russell King <rmk@dyn-67.arm.linux.org.uk> [ARM] Remove MT_DEVICE_IXP2000 and associated definitions

As of the previous commit, MT_DEVICE_IXP2000 encodes to the same
PTE bit encoding as MT_DEVICE, so it's now redundant. Convert
MT_DEVICE_IXP2000 to use MT_DEVICE instead, and remove its aliases.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
639b0ae7f5bcd645862a9c3ea2d4321475c71d7a 06-Sep-2008 Russell King <rmk@dyn-67.arm.linux.org.uk> [ARM] Convert ARMv6 and ARMv7 to use new memory types

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
9e8b5199a753a2583a8ef8360e6428304a242283 06-Sep-2008 Russell King <rmk@dyn-67.arm.linux.org.uk> [ARM] Convert Xscale and Xscale3 to use new memory types

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
bb30f36f9b71c31dc8fe3483bba4c9884fc86080 06-Sep-2008 Russell King <rmk@dyn-67.arm.linux.org.uk> [ARM] Introduce new PTE memory type bits

Provide L_PTE_MT_xxx definitions to describe the memory types that we
use in Linux/ARM. These definitions are carefully picked such that:

1. their LSBs match what is required for pre-ARMv6 CPUs.
2. they all have a unique encoding, including after modification
by build_mem_type_table() (the result being that some have more
than one combination.)

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
9cff96e5bfc8e366166bfb07610604c7604ac48c 06-Sep-2008 Russell King <rmk@dyn-67.arm.linux.org.uk> [ARM] Re-jig Linux PTE bits to allow room for 4 memory type bits

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
dfcc64497cbbf942cdd5af4b7eb17542b62aa759 30-Sep-2008 Nicolas Pitre <nico@cam.org> [ARM] 5271/1: get rid of pages_to_mb()

There is no use of this in the whole tree.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
a09e64fbc0094e3073dbb09c3b4bfe4ab669244b 05-Aug-2008 Russell King <rmk@dyn-67.arm.linux.org.uk> [ARM] Move include/asm-arm/arch-* to arch/arm/*/include/mach

This just leaves include/asm-arm/plat-* to deal with.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
4baa9922430662431231ac637adedddbb0cfb2d7 02-Aug-2008 Russell King <rmk@dyn-67.arm.linux.org.uk> [ARM] move include/asm-arm to arch/arm/include/asm

Move platform independent header files to arch/arm/include/asm, leaving
those in asm/arch* and asm/plat* alone.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>