History log of /arch/sh/mm/cache-sh4.c
Revision Date Author Comments
f03c4866d31e913a8dbc84f7d1459abdaf0bd326 30-Mar-2012 Paul Mundt <lethal@linux-sh.org> sh: fix up fallout from system.h disintegration.

Quite a bit of fallout all over the place, nothing terribly exciting.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
bc3e11be88010e09692ed1d214407d56caa90075 25-Nov-2011 Cong Wang <amwang@redhat.com> sh: remove the second argument of k[un]map_atomic()

Signed-off-by: Cong Wang <amwang@redhat.com>
55661fc1f105ed75852e937bf8ea408270eb0cca 01-Dec-2010 Paul Mundt <lethal@linux-sh.org> sh: Assume new page cache pages have dirty dcache lines.

This follows the ARM change c01778001a4f5ad9c62d882776235f3f31922fdd
("ARM: 6379/1: Assume new page cache pages have dirty D-cache") for the
same rationale:

There are places in Linux where writes to newly allocated page
cache pages happen without a subsequent call to flush_dcache_page()
(several PIO drivers including USB HCD). This patch changes the
meaning of PG_arch_1 to be PG_dcache_clean and always flush the
D-cache for a newly mapped page in update_mmu_cache().

This addresses issues seen with executing binaries from MMC, in
addition to some of the other HCDs that don't explicitly do cache
management for their pipe-in buffers.

Requested-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
9d56dd3b083a3bec56e9da35ce07baca81030b03 25-Jan-2010 Paul Mundt <lethal@linux-sh.org> sh: Mass ctrl_in/outX to __raw_read/writeX conversion.

The old ctrl in/out routines are non-portable and unsuitable for
cross-platform use. While drivers/sh has already been sanitized, there
is still quite a lot of code that is not. This converts the arch/sh/ bits
over, which permits us to flag the routines as deprecated whilst still
building with -Werror for the architecture code, and to ensure that
future users are not added.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2dc2f8e0c46864e2a3722c84eaa96513d4cf8b2f 21-Jan-2010 Paul Mundt <lethal@linux-sh.org> sh: Kill off the special uncached section and fixmap.

Now that cached_to_uncached works as advertised in 32-bit mode and we're
never going to be able to map < 16MB anyway, there's no need for the
special uncached section. Kill it off.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
b4c892762373c5e59c7e8db35f5f9a7658602bda 24-Dec-2009 Matt Fleming <matt@console-pimps.org> sh: Optimise flush_dcache_page() on SH4

If the page is not mapped into any process's address space then aliases
cannot exist in the cache. So reduce the amount of flushing we perform.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
e717cc6c07f006be36e35189aacb28be4e30ad14 08-Dec-2009 Matt Fleming <matt@console-pimps.org> sh: Can't compare physical and virtual addresses for aliases

It does not make sense to compare virtual and physical addresses for
aliasing; only virtual addresses can be compared for aliases.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
a781d1e5ff6277f80ff3c9503775521bc64cf131 04-Dec-2009 Matt Fleming <matt@console-pimps.org> sh: Drop associative writes for SH-4 cache flushes.

When flushing/invalidating the icache/dcache via the memory-mapped IC/OC
address arrays, the associative bit should only be used in conjunction with
virtual addresses. However, we currently flush cache lines based on physical
address, so stop using the associative bit.

It is a better strategy to use non-associative writes (and physical tags) for
flushing the caches anyway, because flushing by virtual address (as with the
A-bit set) requires a valid TLB entry for that virtual address. If one does not
exist in the TLB no exception is generated and the flush is silently ignored.

This is also future-proofing for SH-4A parts which are gradually phasing out
associative writes to the cache array due to the aforementioned case of certain
flushes silently turning into nops.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
a9d244a2ff163247b607c4bb64803230ca8f8acb 06-Nov-2009 Matt Fleming <matt@console-pimps.org> sh: Account for cache aliases in flush_icache_range()

The icache may also contain aliases so we must account for them just
like we do when manipulating the dcache. We usually get away with
aliases in the icache because the instructions that are read from memory
are read-only, i.e. they never change. However, the place where this
bites us is when the code has been modified.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
eb3118f652ea7751ecf6a7e467bb637895e3be3b 29-Oct-2009 Matt Fleming <matt@console-pimps.org> sh: Do not apply virt_to_phys() to a physical address

The variable 'phys' already contains the physical address to flush. It
is not a virtual address and should not be passed to virt_to_phys().

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
a7a7c0e1d12bcfb4a96cae439951232b08c91841 16-Oct-2009 Valentin Sitdikov <valentin.sitdikov@siemens.com> sh: Fix up single page flushing to use PAGE_SIZE.

Presently the SH-4 cache flushing code uses flush_cache_4096() for most
of the real flushing work, which breaks down to a fixed 4096 unroll and
increment. Not only is this sub-optimal for larger page sizes, it's also
uncovered a bug in sh4_flush_dcache_page() when large page sizes are used
and we have no cache aliases -- resulting in only a part of the page's
D-cache lines being written back.

Signed-off-by: Valentin Sitdikov <valentin.sitdikov@siemens.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
1f69b6af9171f50135cce8023c84d82fbf42a8f5 06-Oct-2009 Matt Fleming <matt@console-pimps.org> sh: Prepare for dynamic PMB support

To allow the MMU to be switched between 29bit and 32bit mode at runtime
some constants need to be swapped for functions that return a runtime
value.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
8bd642b17bea31f8361b61c16c8d154638414df4 06-Oct-2009 Matt Fleming <matt@console-pimps.org> sh: Obliterate the P1 area macros

Replace the use of PHYSADDR() with __pa(). PHYSADDR() is based on the
idea that all addresses in P1SEG are untranslated, so we can access an
address's physical page as an offset from P1SEG. This doesn't work for
CONFIG_PMB/CONFIG_PMB_FIXED because pages in P1SEG and P2SEG are used
for PMB mappings and so can be translated to any physical address.

Likewise, replace a P1SEGADDR() use with virt_to_phys().

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
a6325247f50628c7e53a483807d0ef2c24a7aa90 06-Oct-2009 Matt Fleming <matt@console-pimps.org> sh: Sprinkle __uses_jump_to_uncached

Fix some callers of jump_to_uncached() and back_to_cached() that were
not annotated with __uses_jump_to_uncached.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
c4845a4b2288a9e5d96a0558e474809028c8aff3 09-Sep-2009 Paul Mundt <lethal@linux-sh.org> sh: Fix up redundant cache flushing for PAGE_SIZE > 4k.

If PAGE_SIZE is presently over 4k we do a lot of extra flushing given
that we purge the cache 4k at a time. Make it explicitly 4k per
iteration, rather than iterating for PAGE_SIZE before looping over again.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
deaef20e9789d93c06d2d3b5ffc99939814802ca 09-Sep-2009 Paul Mundt <lethal@linux-sh.org> sh: Rework sh4_flush_cache_page() for coherent kmap mapping.

This builds on top of the MIPS r4k code that does roughly the same thing.
This permits the use of kmap_coherent() for mapped pages with dirty
dcache lines and falls back on kmap_atomic() otherwise.

This also fixes up a problem with the alias check and defers to
shm_align_mask directly.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
bd6df57481b329dfeeb4889068848ee4f4761561 09-Sep-2009 Paul Mundt <lethal@linux-sh.org> sh: Kill off segment-based d-cache flushing on SH-4.

This kills off the unrolled segment based flushers on SH-4 and switches
over to a generic unrolled approach derived from the writethrough segment
flusher.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
31c9efde786252112cc3d04a1ed3513b6ec63a7b 09-Sep-2009 Paul Mundt <lethal@linux-sh.org> sh: Kill off broken PHYSADDR() usage in sh4_flush_dcache_page().

PHYSADDR() runs in to issues in 32-bit mode when we do not have the
legacy P1/P2 areas mapped, as such, we need to use page_to_phys()
directly, which also happens to do the right thing in legacy 29-bit mode.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
654d364e26c797e8a5f9e2a1393607e6ca0106eb 09-Sep-2009 Paul Mundt <lethal@linux-sh.org> sh: sh4_flush_cache_mm() optimizations.

The i-cache flush in the case of VM_EXEC was added way back when as a
sanity measure, and in practice we only care about evicting aliases from
the d-cache. As a result, it's possible to drop the i-cache flush
completely here.

After careful profiling it's also come up that all of the work associated
with hunting down aliases and doing ranged flushing ends up generating
more overhead than simply blasting away the entire dcache, particularly
if there are many mm's that need to be iterated over. As a result of
that, just move back to flush_dcache_all() in these cases, which restores
the old behaviour, and vastly simplifies the path.

Additionally, on platforms without aliases at all, this can simply be
nopped out. Presently we have the alias check in the SH-4 specific
version, but this is true for all of the platforms, so move the check up
to a generic location. This cuts down quite a bit on superfluous cacheop
IPIs.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
682f88ab74e55dae55ea3bf30b46f56f71b793bd 09-Sep-2009 Paul Mundt <lethal@linux-sh.org> sh: Cleanup whitespace damage in sh4_flush_icache_range().

There was quite a lot of tab->space damage done here from a former patch;
clean it up once and for all.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
983f4c514c4c9ddac1077a2c805fd16cbe3f7487 01-Sep-2009 Paul Mundt <lethal@linux-sh.org> Revert "sh: Kill off now redundant local irq disabling."

This reverts commit 64a6d72213dd810dd55bd0a503c36150af41c3c3.

Unfortunately we can't use on_each_cpu() for all of the cache ops, as
some of them only require preempt disabling. This seems to be the same
issue that impacts the mips r4k caches, where this code was based on.
This fixes up a deadlock that showed up in some IRQ context cases.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
ce3f7cb96e67d6518c7fc7b361a76409c3817d64 01-Sep-2009 Matt Fleming <matt@console-pimps.org> sh: Fix dcache flushing for N-way write-through caches.

This adopts the special-cased 2-way write-through dcache flusher for
N-ways and moves it in to the generic path. Assignment is done at runtime
via the check for the CCR_CACHE_WT bit in the same path as the per-way
writeback flushers.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
e76a0136a3cf1859fbc07f122e42293d22229558 27-Aug-2009 Paul Mundt <lethal@linux-sh.org> sh: Fix up sh4_flush_dcache_page() build on UP.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
ffad9d7a54a5e809007135595c778715aa0fb07a 24-Aug-2009 Stuart Menefy <stuart.menefy@st.com> sh: Fix problems with cache flushing when cache is in write-through mode

Change the method used to flush the cache in write-through mode to
avoid corrupted data being written back to memory.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
a5cf9e2444ec15de5407696ff21c32dd21ca0a8d 24-Aug-2009 Stuart Menefy <stuart.menefy@st.com> sh: Improve comments in SH4 cache flushing code

This is a pure documentation change, to try to explain why the cache
flushing code for the SH4 is implemented the way it is.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
64a6d72213dd810dd55bd0a503c36150af41c3c3 21-Aug-2009 Paul Mundt <lethal@linux-sh.org> sh: Kill off now redundant local irq disabling.

on_each_cpu() takes care of IRQ and preempt handling, the localized
handling in each of the called functions can be killed off.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
f26b2a562b46ab186c8383993ab1332673ac4a47 21-Aug-2009 Paul Mundt <lethal@linux-sh.org> sh: Make cache flushers SMP-aware.

This does a bit of rework for making the cache flushers SMP-aware. The
function pointer-based flushers are renamed to local variants with the
exported interface being commonly implemented and wrapping as necessary.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
c139a595878b0e8156476668e3d5c27b6aca7624 20-Aug-2009 Paul Mundt <lethal@linux-sh.org> sh: Fix up cache-sh4 build on SMP.

mapping is unused on the SMP build, triggering a build error. Move it under
the ifdef.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
37443ef3f0406e855e169c87ae3f4ffb4b6ff635 14-Aug-2009 Paul Mundt <lethal@linux-sh.org> sh: Migrate SH-4 cacheflush ops to function pointers.

This paves the way for allowing individual CPUs to overload the
individual flushing routines that they care about without having to
depend on weak aliases. SH-4 is converted over initially, as it wires
up pretty much everything. The majority of the other CPUs will simply use
the default no-op implementation with their own region flushers wired up.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
916e97834e023f89b31f796b53cc9c7956e7fe17 15-Aug-2009 Paul Mundt <lethal@linux-sh.org> sh: Kill off unused flush_icache_user_range().

We use flush_cache_page() outright in copy_to_user_page(), and nothing
else needs it, so just kill it off. SH-5 still defines its own version,
but that too will go away in the same fashion once it converts over.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
0b445dcaf3adda5bec5cc494925bc689fcc59a0e 15-Aug-2009 Paul Mundt <lethal@linux-sh.org> sh: Don't export flush_dcache_all().

flush_dcache_all() is used internally by the SH-4 cache code, it is not
part of the exported cache API, so make it static and don't export it.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
27d59ec1709817a90aa3ab7169f60994a89ad2f5 15-Aug-2009 Paul Mundt <lethal@linux-sh.org> sh: Move alias computation to shared cache init.

This migrates the alias computation and printing of probed cache
parameters from the SH-4 code to the shared cpu_cache_init().

This permits other platforms with aliases to make use of the same
probe logic without having to roll their own, and also produces
consistent output regardless of platform.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
ecba1060583635ab55092072441ff903b5e9a659 15-Aug-2009 Paul Mundt <lethal@linux-sh.org> sh: Centralize the CPU cache initialization routines.

This provides a central point for CPU cache initialization routines.
This replaces the antiquated p3_cache_init() method, which the vast
majority of CPUs never cared about.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
e7b8b7f16edc9b363573eadf2ab2683473626071 14-Aug-2009 Paul Mundt <lethal@linux-sh.org> sh: NO_CONTEXT ASID optimizations for SH-4 cache flush.

This optimizes for the cases when a CPU does not yet have a valid ASID
context associated with it, as in this case there is no work for any of
flush_cache_mm()/flush_cache_page()/flush_cache_range() to do. Based on
the MIPS implementation.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
817425275271f2514f0dc6952182aa057ce80973 04-Aug-2009 Paul Mundt <lethal@linux-sh.org> sh: Split out SH-4 __flush_xxx_region() ops.

This splits out the SH-4 __flush_xxx_region() functions and defines them
as weak symbols. This allows us to provide optimized versions without
having to ifdef cache-sh4.c to death.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2277ab4a1df50e05bc732fe9488d4e902bb8399a 22-Jul-2009 Paul Mundt <lethal@linux-sh.org> sh: Migrate from PG_mapped to PG_dcache_dirty.

This inverts the delayed dcache flush a bit to be more in line with other
platforms. At the same time this also gives us the ability to do some
more optimizations and cleanup. Now that the update_mmu_cache() callsite
only tests for the bit, the implementation can gradually be split out and
made generic, rather than relying on special implementations for each of
the peculiar CPU types.

SH7705 in 32kB mode and SH-4 still need slightly different handling, but
this is something that can remain isolated in the varying page copy/clear
routines. On top of that, SH-X3 is dcache coherent, so there is no need
to bother with any of these tests in the PTEAEX version of
update_mmu_cache(), so we kill that off too.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
205a3b4328de1c8ddd99ddd5092bed1344068213 05-Sep-2008 Paul Mundt <lethal@linux-sh.org> sh: uninline flush_icache_all().

This uses jump_to_uncached() which is now given the noinline attribute
due to the special section mapping. Kill off the inline attribute to
fix up compilation failure.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
09b5a10c1944214a6008712bfa92b29f00b84a1a 02-Jul-2008 Chris Smith <chris.smith@st.com> sh: Optimized flush_icache_range() implementation.

Add implementation of flush_icache_range() suitable for signal handler
and kprobes. Remove flush_cache_sigtramp() and change signal.c to use
flush_icache_range().

Signed-off-by: Chris Smith <chris.smith@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
cbaa118ecfd99fc5ed7adbd9c34a30e1c05e3c93 30-Nov-2007 Stuart Menefy <stuart.menefy@st.com> sh: Preparation for uncached jumps through PMB.

Presently most of the 29-bit physical parts do P1/P2 segmentation
with a 1:1 cached/uncached mapping, jumping between the two to
control the caching behaviour. This provides the basic infrastructure
to maintain this behaviour on 32-bit physical parts that don't map
P1/P2 at all, using a shiny new linker section and corresponding
fixmap entry.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
ab27f62002f4dc8f759c1ec069024d8173e5dea0 24-Sep-2007 Paul Mundt <lethal@linux-sh.org> sh: Calculate cache aliases on L2 caches.

Calculate the number of cache aliases on probed L2 caches, and while
we're at it, print out the detected statistics at boot time for these
also.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
d10040f7eb808cd984b563d1cf727a1020990a2e 24-Sep-2007 Paul Mundt <lethal@linux-sh.org> sh: Fix alias calculation for non-aliasing cases.

There was an off-by-1 on the cache alias detection logic on SH-4,
which caused n_aliases to always be 1 even when the page size
precluded the existence of aliases.

With this corrected, 64KB pages happily reports n_aliases == 0, and
hits the appropriate fast paths in the flushing routines.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
7ec9d6f8c0e6932d380da1964021fbebf2311f04 21-Sep-2007 Paul Mundt <lethal@linux-sh.org> sh: Avoid smp_processor_id() in cache desc paths.

current_cpu_data uses smp_processor_id() in order to find the
corresponding cpu_data. As the cache descs are all currently
identical, just have this look at probed results from the boot
CPU.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
f0b859e3d63a07995f0db294864c2f3c9228f1e4 25-Jul-2007 Paul Mundt <lethal@linux-sh.org> sh: Reclaim beginning of P3 space for vmalloc area.

The first 1MB of P3 space was reserved and used for page colouring,
as we've reworked that to use fixmaps, we can reclaim the space and
hand it back to VMALLOC_START.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
8cf1a74305688c85fc8d23ab7432a0c447ee6413 24-Jul-2007 Paul Mundt <lethal@linux-sh.org> sh: Add kmap_coherent()/kunmap_coherent() interface for SH-4.

This wires up kmap_coherent() and kunmap_coherent() on SH-4, and
moves away from the p3map_mutex and reserved P3 space, opting to
use fixmaps for colouring instead.

The copy_user_page()/clear_user_page() implementations are moved
to this, which fixes the nasty blowups with spinlock debugging
as a result of having some of these calls nested under the page
table lock.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
39e688a94b94eaba768b1494e19e96f828fc2688 05-Mar-2007 Paul Mundt <lethal@linux-sh.org> sh: Revert lazy dcache writeback changes.

These ended up causing too many problems on older parts,
revert for now..

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
11c1965687b0a472add948d4240dfe65a2fcb298 25-Dec-2006 Paul Mundt <lethal@linux-sh.org> sh: Fixup cpu_data references for the non-boot CPUs.

There are a lot of bogus cpu_data-> references that only end up working
for the boot CPU, convert these to current_cpu_data to fixup SMP.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
26b7a78c55fbc0e23a7dc19e89fd50f200efc002 28-Dec-2006 Paul Mundt <lethal@linux-sh.org> sh: Lazy dcache writeback optimizations.

This converts the lazy dcache handling to the model described in
Documentation/cachetlb.txt and drops the ptep_get_and_clear() hacks
used for the aliasing dcaches on SH-4 and SH7705 in 32kB mode. As a
bonus, this slightly cuts down on the cache flushing frequency.

With that and the PTEA handling out of the way, the update_mmu_cache()
implementations can be consolidated, and we no longer have to worry
about which configuration the cache is in for the SH7705 case.

And finally, explicitly disable the lazy writeback on SMP (SH-4A).

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
37bda1da4570c2e9c6dd34e77d2120218e384950 09-Dec-2006 Paul Mundt <lethal@linux-sh.org> sh: Convert remaining remap_area_pages() users to ioremap_page_range().

A couple of these were missed.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
510c72ad2dd4e05e6908755f51ac89482c6eb987 26-Nov-2006 Paul Mundt <lethal@linux-sh.org> sh: Fixup various PAGE_SIZE == 4096 assumptions.

There were a number of places that made evil PAGE_SIZE == 4k
assumptions that ended up breaking when trying to play with
8k and 64k page sizes, this fixes those up.

The most significant change is the way we load THREAD_SIZE; previously
this was done via:

mov #(THREAD_SIZE >> 8), reg
shll8 reg

to avoid a memory access and allow the immediate load. With
a 64k PAGE_SIZE, we're out of range for the immediate load
size without resorting to special instructions available in
later ISAs (movi20s and so on). The "workaround" for this is
to bump up the shift to 10 and insert a shll2, which gives a
bit more flexibility while still being much cheaper than a
memory access.
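
For illustration, the reworked load described above would look roughly
like this (a sketch only; "reg" is the same placeholder register used in
the original snippet):

mov	#(THREAD_SIZE >> 10), reg	! immediate now fits the 8-bit mov field
shll8	reg				! reg <<= 8
shll2	reg				! reg <<= 2, for a net shift of 10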

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
52e27782e1c4afa1feca0fdf194d279595e0431c 21-Nov-2006 Paul Mundt <lethal@linux-sh.org> sh: p3map_sem sem2mutex conversion.

Simple sem2mutex conversion for the p3map semaphores.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
33573c0e3243aaa38b6ad96942de85a1b713c2ff 27-Sep-2006 Paul Mundt <lethal@linux-sh.org> sh: Fix occasional flush_cache_4096() stack corruption.

Disable IRQs in flush_cache_4096 for cache purge. Under certain
workloads we would get an IRQ in the middle of a purge operation,
and the cachelines would remain in an inconsistent state, leading
to occasional stack corruption.

Signed-off-by: Takeo Takahashi <takahashi.takeo@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
28ccf7f91b1ac42ee1f18480a69d2a7486b625ce 27-Sep-2006 Paul Mundt <lethal@linux-sh.org> sh: Selective flush_cache_mm() flushing.

flush_cache_mm() wraps in to flush_cache_all(), which is rather
excessive given that the number of PTEs within the specified context
are generally quite low. Optimize for walking the mm's VMA list and
selectively flushing the VMA ranges from the dcache. Invalidate the
icache only if a VMA sets VM_EXEC.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
298476220d1f793ca0ac6c9e5dc817e1ad3e9851 27-Sep-2006 Paul Mundt <lethal@linux-sh.org> sh: Add control register barriers.

Currently when making changes to control registers, we
typically need some time for changes to take effect (8
nops, generally). However, for sh4a we simply need to
do an icbi..

This is a simple patch for implementing a general purpose
ctrl_barrier() which functions as a control register write
barrier. There's some additional documentation in the patch
itself, but it's pretty self explanatory.

There were also some places where we were not doing the
barrier, which didn't seem to have any adverse effects on
legacy parts, but certainly did on sh4a. It's safer to have
the barrier in place for legacy parts as well in these cases,
though this does make flush_tlb_all() more expensive (by an
order of 8 nops). We can ifdef around the flush_tlb_all()
case for now if it's clear that all legacy parts won't have
a problem with this.
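
As a rough sketch of the two expansions described above (not the exact
patch; the icbi operand register is a placeholder):

! legacy parts: allow the control register change time to take effect
nop
nop
nop
nop
nop
nop
nop
nop

! sh4a: a single icbi acts as the control register write barrier
icbi	@r1	! r1 = placeholder for a valid address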

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
b638d0b921dc95229af0dfd09cd24850336a2f75 27-Sep-2006 Richard Curnow <richard.curnow@st.com> sh: Optimized cache handling for SH-4/SH-4A caches.

This reworks some of the SH-4 cache handling code to more easily
accommodate newer-style caches (particularly for the > direct-mapped
case), as well as optimizing some of the old code.

Signed-off-by: Richard Curnow <richard.curnow@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
fdfc74f9fcebdda14609159d5010b758a9409acf 27-Sep-2006 Paul Mundt <lethal@linux-sh.org> sh: Support for SH-4A memory barriers.

SH-4A supports 'synco' as a barrier, sprinkle it around
the cache ops as necessary..

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
a252710fc5b63b24934905ca47ecf661702d7f00 27-Sep-2006 Paul Mundt <lethal@linux-sh.org> sh: flush_cache_range() cleanup and optimizations.

flush_cache_range() wasn't page aligning the end of the range,
we can't assume that it will always be page aligned, and we
ended up getting unaligned faults in some rare call paths.

Additionally, we add a small optimization to just purge the
dcache entirely if the range is large enough that the page
table walking will take longer. We use an arbitrary value of
64 pages for the large range size, as per sh64.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
6ab3d5624e172c553004ecc862bfeac16d9d68b7 30-Jun-2006 Jörn Engel <joern@wohnheim.fh-wedel.de> Remove obsolete #include <linux/config.h>

Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
1da177e4c3f41524e886b7f1b8a0c1fc7321cac2 17-Apr-2005 Linus Torvalds <torvalds@ppc970.osdl.org> Linux-2.6.12-rc2

Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!