History log of /arch/s390/lib/spinlock.c
Revision Date Author Comments
bbae71bf9c2fe90dc5642d4cddbbc1994861fd92 22-Sep-2014 Martin Schwidefsky <schwidefsky@de.ibm.com> s390/rwlock: use the interlocked-access facility 1 instructions

Make use of the load-and-add, load-and-or and load-and-and instructions
to atomically update the read-write lock without a compare-and-swap loop.
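
Schematically, the difference amounts to the following userspace sketch
(GCC __atomic builtins standing in for the z196 load-and-add instruction;
the lock layout and all names are illustrative, not the kernel's):

#include <stdint.h>
#include <stdbool.h>

/* Illustrative rwlock word: sign bit = writer, low bits = reader count. */
typedef struct { int32_t lock; } my_rwlock_t;

/* Old style: the reader count is bumped with a compare-and-swap; the
 * lock path repeats this attempt until it succeeds. */
static bool read_trylock_cs(my_rwlock_t *rw)
{
	int32_t old = __atomic_load_n(&rw->lock, __ATOMIC_RELAXED);

	/* Fail if a writer holds the lock (sign bit set). */
	return old >= 0 &&
	       __atomic_compare_exchange_n(&rw->lock, &old, old + 1, false,
					   __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}

/* New style: one unconditional load-and-add (laa on z196 and newer),
 * no compare-and-swap retry.  The add itself always succeeds; only
 * the writer check can still fail. */
static bool read_trylock_laa(my_rwlock_t *rw)
{
	int32_t old = __atomic_fetch_add(&rw->lock, 1, __ATOMIC_ACQUIRE);

	if (old >= 0)
		return true;
	/* A writer was active: take back the speculative reader count. */
	__atomic_fetch_add(&rw->lock, -1, __ATOMIC_RELAXED);
	return false;
}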

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
94232a4332de3bc210e7067fd43521b3eb12336a 22-Sep-2014 Martin Schwidefsky <schwidefsky@de.ibm.com> s390/rwlock: improve writer fairness

Set the write-lock bit in the out-of-line rwlock code to indicate that
a writer is waiting. Additional readers will not be able to get the lock
until at least one writer has acquired the lock. Additional writers have
to wait for the first writer to release the lock again.
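
A rough model of the fairness scheme, with hypothetical names (the sign
bit doubles as the "writer pending" flag; the real code differs in
detail):

#include <stdint.h>
#include <stdbool.h>

#define WRITER_BIT 0x80000000u

typedef struct { uint32_t lock; } my_rwlock_t;

extern void cpu_relax(void);	/* stand-in for the kernel primitive */

/* Readers back off whenever the writer bit is set, even though the
 * writer has not actually acquired the lock yet. */
static bool read_can_enter(uint32_t v)
{
	return (v & WRITER_BIT) == 0;
}

/* Writer slow path: first publish the intent to write by setting
 * WRITER_BIT, then wait for the reader count to drain to zero.
 * Further writers spin until the first one has come and gone. */
static void write_lock_wait(my_rwlock_t *rw)
{
	uint32_t old;

	for (;;) {
		old = __atomic_load_n(&rw->lock, __ATOMIC_RELAXED);
		if (old & WRITER_BIT) {
			cpu_relax();	/* another writer is already queued */
			continue;
		}
		if (__atomic_compare_exchange_n(&rw->lock, &old,
						old | WRITER_BIT, false,
						__ATOMIC_RELAXED,
						__ATOMIC_RELAXED))
			break;
	}
	/* New readers are locked out now, so the count must drain. */
	while (__atomic_load_n(&rw->lock, __ATOMIC_ACQUIRE) != WRITER_BIT)
		cpu_relax();
}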

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2684e73a861fe7b2ab763f442207025a1d9bb6a6 22-Sep-2014 Martin Schwidefsky <schwidefsky@de.ibm.com> s390/rwlock: remove interrupt-enabling rwlock variant.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
d59b93da5e572703e1a7311c13dd3472a4e56e30 19-Sep-2014 Martin Schwidefsky <schwidefsky@de.ibm.com> s390/rwlock: use directed yield for write-locked rwlocks

Add an owner field to the arch_rwlock_t to be able to pass the timeslice
of a virtual CPU with diagnose 0x9c to the lock owner in case the rwlock
is write-locked. The undirected yield that was done when the rwlock was
to be acquired for writing while read-locked is removed.
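
A sketch of the idea with illustrative names: the lock word gains an
owner field recording which virtual cpu write-locked it, so a waiter can
hand over its timeslice in a directed way:

#include <stdbool.h>

typedef struct {
	unsigned int lock;	/* sign bit = write-locked, rest = readers */
	unsigned int owner;	/* cpu id + 1 of the writer, 0 if none */
} my_arch_rwlock_t;

extern bool read_trylock(my_arch_rwlock_t *rw);	/* hypothetical fast path */
extern void yield_to(unsigned int cpu);		/* diagnose 0x9c wrapper */
extern void cpu_relax(void);

/* Out-of-line reader slow path: if a writer owns the lock, hand our
 * timeslice directly to that virtual cpu instead of yielding blindly. */
static void read_lock_wait(my_arch_rwlock_t *rw)
{
	while (!read_trylock(rw)) {
		unsigned int owner =
			__atomic_load_n(&rw->owner, __ATOMIC_RELAXED);
		if (owner)
			yield_to(owner - 1);	/* directed yield */
		else
			cpu_relax();		/* read-locked: just spin */
	}
}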

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
470ada6b1a1d80a173586c036f84e2c3a486ebf9 16-May-2014 Martin Schwidefsky <schwidefsky@de.ibm.com> s390/spinlock: refactor arch_spin_lock_wait[_flags]

Reorder the spinlock wait code to make it more readable.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
939c5ae4029e1679bb93f7d09afb8c831db985bd 16-May-2014 Martin Schwidefsky <schwidefsky@de.ibm.com> s390/rwlock: add missing local_irq_restore calls

The out-of-line _raw_read_lock_wait_flags/_raw_write_lock_wait_flags
functions for the arch_read_lock_flags/arch_write_lock_flags calls
fail to re-enable interrupts after another unsuccessful try to
get the lock with compare-and-swap. The subsequent wait would be
done with interrupts disabled, which is suboptimal.
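
Schematically, the fix adds the restore on the retry path (the helper
names are hypothetical; this is not the literal kernel source):

#include <stdbool.h>

extern void local_irq_restore(unsigned long flags);
extern void local_irq_disable(void);
extern bool lock_is_free(void *rw);
extern bool trylock_cas(void *rw);
extern void cpu_relax(void);

static void read_lock_wait_flags(void *rw, unsigned long flags)
{
	local_irq_restore(flags);		/* entered with irqs disabled */
	for (;;) {
		while (!lock_is_free(rw))
			cpu_relax();		/* wait with interrupts enabled */
		local_irq_disable();
		if (trylock_cas(rw))
			return;			/* lock taken, irqs stay off */
		local_irq_restore(flags);	/* the missing call: without it
						 * the next wait round ran with
						 * interrupts disabled */
	}
}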

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
bae8f567344a7cb6a23ca6e13096ba785c69eb42 15-May-2014 Martin Schwidefsky <schwidefsky@de.ibm.com> s390/spinlock,rwlock: always do a load-and-test first

In case a lock is contended it is better to do a load-and-test first
before trying to get the lock with compare-and-swap. This helps to avoid
unnecessary cache invalidations of the cacheline for the lock if the
CPU has to wait for the lock. For an uncontended lock, doing the
compare-and-swap directly is a bit better: if the CPU does not have the
cacheline in its cache yet, the compare-and-swap will get it read-write
immediately, while a load-and-test would get it read-only first.

Always do the load-and-test first, as avoiding the cacheline
invalidations in the contended case outweighs the potential read-only
to read-write cacheline upgrade in the uncontended case.
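
As a userspace model (GCC atomics standing in for the s390 instructions,
all names illustrative):

#include <stdbool.h>

typedef struct { unsigned int lock; } my_spinlock_t;	/* 0 = free */

extern void cpu_relax(void);

static bool trylock_cas(my_spinlock_t *lp, unsigned int cpu)
{
	unsigned int expected = 0;
	return __atomic_compare_exchange_n(&lp->lock, &expected, cpu,
					   false, __ATOMIC_ACQUIRE,
					   __ATOMIC_RELAXED);
}

static void spin_lock_wait(my_spinlock_t *lp, unsigned int cpu)
{
	for (;;) {
		/* Load-and-test first: a plain read keeps the cacheline
		 * shared, so waiters do not invalidate each other. */
		while (__atomic_load_n(&lp->lock, __ATOMIC_RELAXED) != 0)
			cpu_relax();
		/* Lock looks free: only now pay for the exclusive fetch. */
		if (trylock_cas(lp, cpu))
			return;
	}
}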

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2e4006b34d06681ed95d55510d4450f29a13c417 06-May-2014 Gerald Schaefer <gerald.schaefer@de.ibm.com> s390/spinlock: fix system hang with spin_retry <= 0

On LPAR, when spin_retry is set to <= 0, arch_spin_lock_wait() and
arch_spin_lock_wait_flags() may end up in a while(1) loop without doing
any compare-and-swap operation. To fix this, use a do/while loop instead
of a for loop.
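
The shape of the bug, schematically (trylock_cas is a hypothetical
stand-in for the compare-and-swap attempt):

#include <stdbool.h>

extern int spin_retry;			/* sysctl-style tunable, may be <= 0 */
extern bool trylock_cas(void *lp);	/* hypothetical CAS attempt */

/* Broken shape: if spin_retry <= 0 the body never runs and the caller's
 * outer loop spins forever without ever issuing a compare-and-swap. */
static bool one_round_broken(void *lp)
{
	for (int count = spin_retry; count > 0; count--)
		if (trylock_cas(lp))
			return true;
	return false;
}

/* Fixed shape: a do/while guarantees at least one CAS per round. */
static bool one_round_fixed(void *lp)
{
	int count = spin_retry;
	do {
		if (trylock_cas(lp))
			return true;
	} while (--count > 0);
	return false;
}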

Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
6c8cd5bbda7e6be166cf2e2dd4be5890193e17ac 07-Apr-2014 Philipp Hachtmann <phacht@linux.vnet.ibm.com> s390/spinlock: optimize spinlock code sequence

Use lowcore constant to improve the code generated for spinlocks.

[ Martin Schwidefsky: patch breakdown and code beautification ]

Signed-off-by: Philipp Hachtmann <phacht@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
5b3f683e694a835f5dfdab06102be1a50604c3b7 07-Apr-2014 Philipp Hachtmann <phacht@linux.vnet.ibm.com> s390/spinlock: cleanup spinlock code

Improve the spinlock code in several aspects:
- Have _raw_compare_and_swap return true if the operation has been
successful instead of returning the old value (see the sketch after
this list).
- Remove the "volatile" from arch_spinlock_t and arch_rwlock_t
- Rename 'owner_cpu' to 'lock'
- Add helper functions arch_spin_trylock_once / arch_spin_tryrelease_once
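
The first item corresponds to a change along these lines, sketched with
the GCC builtin rather than the kernel's inline assembly:

#include <stdbool.h>

/* Before: returns the old value, callers compare it themselves,
 * e.g. if (cas_old(&lp->lock, 0, cpu) == 0) return; */
static unsigned int cas_old(unsigned int *lock,
			    unsigned int old, unsigned int new)
{
	__atomic_compare_exchange_n(lock, &old, new, false,
				    __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
	return old;	/* on failure, updated to the observed value */
}

/* After: returns success directly, so call sites read naturally,
 * e.g. if (cas_bool(&lp->lock, 0, cpu)) return; */
static bool cas_bool(unsigned int *lock,
		     unsigned int old, unsigned int new)
{
	return __atomic_compare_exchange_n(lock, &old, new, false,
					   __ATOMIC_ACQUIRE,
					   __ATOMIC_RELAXED);
}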

[ Martin Schwidefsky: patch breakdown and code beautification ]

Signed-off-by: Philipp Hachtmann <phacht@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
a53c8fab3f87c995c30ac226a03af95361243144 20-Jul-2012 Heiko Carstens <heiko.carstens@de.ibm.com> s390/comments: unify copyright messages and remove file names

Remove the file name from the comment at top of many files. In most
cases the file name was wrong anyway, so it's rather pointless.

Also unify the IBM copyright statement. We did have a lot of slightly
different statements and wanted to change them one after another
whenever a file got touched. However, that never happened. Instead
people started to take the old/"wrong" statements as templates
for new files.
So unify all of them in one go.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
8b646bd759086f6090fe27acf414c0b5faa737f4 11-Mar-2012 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] rework smp code

Define struct pcpu and merge some of the NR_CPUS arrays into it, including
__cpu_logical_map, current_set and smp_cpu_state. Split the smp related
functions into those operating on physical cpus and those operating on a
logical cpu number. Make the functions for physical cpus use a pointer
to a struct pcpu. This confines the knowledge about cpu addresses to
smp.c, entry[64].S and swsusp_asm64.S, which allows the sigp.h header
to be removed.

The PSW restart mechanism is used to start secondary cpus, to call a
function on an online cpu or on the ipl cpu, and for the nmi signal.
Replace the different assembler functions with a single function,
restart_int_handler. The new entry point calls a function whose pointer
is stored in the lowcore of the target cpu, and it can wait for the
source cpu to stop. This covers all existing use cases.
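
Schematically (all names hypothetical; the real entry point is
assembler and the real request lives in the lowcore):

struct restart_request {
	void (*func)(void *data);	/* what the target cpu should run */
	void *data;
};

/* Single restart entry point: fetch the request stored in this cpu's
 * lowcore and call through it, covering cpu start, remote function
 * calls and the nmi signal with one mechanism. */
static void restart_int_handler(struct restart_request *rq)
{
	rq->func(rq->data);
}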

Overall the code is now simpler and there are ~380 lines less code.

Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
59b697874529f5c3cbcaf5816b3d6c584af521e8 26-Feb-2010 Gerald Schaefer <gerald.schaefer@de.ibm.com> [S390] spinlock: check virtual cpu running status

This patch introduces a new function that checks the running status
of a cpu in a hypervisor. This status is not virtualized, so the check
is only correct if running in an LPAR. On acquiring a spinlock, if the
cpu holding the lock is scheduled by the hypervisor, we do a busy wait
on the lock. If it is not scheduled, we yield over to that cpu.
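
The resulting policy as a sketch, with hypothetical helpers (per the
above, the running status check is only meaningful in an LPAR):

#include <stdbool.h>

extern unsigned int lock_owner(void *lp);	/* 0 if unlocked */
extern bool cpu_is_running(unsigned int cpu);	/* hypervisor status */
extern void yield_to(unsigned int cpu);		/* diagnose 0x9c */
extern void cpu_relax(void);
extern bool trylock_cas(void *lp);

/* Not the literal kernel code, just the decision it describes. */
static void spin_lock_slowpath(void *lp)
{
	while (!trylock_cas(lp)) {
		unsigned int owner = lock_owner(lp);

		if (owner && !cpu_is_running(owner))
			yield_to(owner);	/* holder is preempted: help it */
		else
			cpu_relax();		/* holder runs: busy wait */
	}
}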

Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
fb380aadfe34e8d3ce628cb3e386882351940874 13-Jan-2010 Heiko Carstens <heiko.carstens@de.ibm.com> [S390] Move __cpu_logical_map to smp.c

Finally move it to the place where it belongs and get rid of
it for !CONFIG_SMP.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
e5931943d02bf751b1ec849c0d2ade23d76a8d41 03-Dec-2009 Thomas Gleixner <tglx@linutronix.de> locking: Convert raw_rwlock functions to arch_rwlock

Name space cleanup for rwlock functions. No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: linux-arch@vger.kernel.org
fb3a6bbc912b12347614e5742c7c61416cdb0ca0 03-Dec-2009 Thomas Gleixner <tglx@linutronix.de> locking: Convert raw_rwlock to arch_rwlock

Not strictly necessary for -rt, as -rt does not have non-sleeping
rwlocks, but it's odd not to have a consistent naming convention.

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: linux-arch@vger.kernel.org
0199c4e68d1f02894bdefe4b5d9e9ee4aedd8d62 02-Dec-2009 Thomas Gleixner <tglx@linutronix.de> locking: Convert __raw_spin* functions to arch_spin*

Name space cleanup. No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: linux-arch@vger.kernel.org
445c89514be242b1b0080056d50bdc1b72adeb5c 02-Dec-2009 Thomas Gleixner <tglx@linutronix.de> locking: Convert raw_spinlock to arch_spinlock

The raw_spin* namespace was taken by lockdep for the architecture
specific implementations. raw_spin_* would be the ideal name space for
the spinlocks which are not converted to sleeping locks in preempt-rt.

Linus suggested converting the raw_ locks to arch_ locks and cleaning
up the name space instead of using an artificial name like core_spin,
atomic_spin or whatever.

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: linux-arch@vger.kernel.org
ce58ae6f7f6bdd48c87472ff830a6d586ff076b2 12-Jun-2009 Heiko Carstens <heiko.carstens@de.ibm.com> [S390] implement interrupt-enabling rwlocks

arch backend for f5f7eac41db827a47b2163330eecd7bb55ae9f12
"Allow rwlocks to re-enable interrupts".

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
894cdde26b538c77b9943bc72f0570abf6e58e37 26-Jan-2008 Hisashi Hifumi <hifumi.hisashi@oss.ntt.co.jp> [S390] do local_irq_restore while spinning in spin_lock_irqsave.

In s390's spin_lock_irqsave, interrupts remain disabled while
spinning. In other architectures like x86 and powerpc, interrupts are
re-enabled while spinning if IRQ is not masked before spin_lock_irqsave
is called.

The following patch re-enables interrupts through local_irq_restore
while spinning for a lock acquisition.
This can improve system response.

[heiko.carstens@de.ibm.com: removed saving of pc]
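
The pattern this introduces, sketched with hypothetical helpers:
interrupts, disabled by the spin_lock_irqsave caller, are re-enabled for
the duration of the busy wait and disabled again just before the
acquisition attempt.

extern void local_irq_restore(unsigned long flags);
extern void local_irq_disable(void);
extern bool lock_is_free(void *lp);
extern void cpu_relax(void);

/* Before: the whole wait ran with interrupts disabled. */
static void wait_old(void *lp, unsigned long flags)
{
	(void)flags;			/* interrupts simply stay off */
	while (!lock_is_free(lp))
		cpu_relax();
}

/* After: interrupts are taken while spinning and closed again just
 * before the lock is actually tried (compare the rwlock sketch above). */
static void wait_new(void *lp, unsigned long flags)
{
	local_irq_restore(flags);
	while (!lock_is_free(lp))
		cpu_relax();
	local_irq_disable();
}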

Signed-off-by: Hisashi Hifumi <hifumi.hisashi@oss.ntt.co.jp>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
3b4beb31759765efdda9f9431aebfedf828bbfe0 26-Jan-2008 Heiko Carstens <heiko.carstens@de.ibm.com> [S390] Remove owner_pc member from raw_spinlock_t.

Used to contain the address of the holder of the lock. But since the
spinlock code is not inlined anymore, all locks contain the same address
anyway. And since, in addition, nobody complained about that for ages,
it's obviously unused. So remove it.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
3c1fcfe229e99752c74efb945a4a3f560be04204 01-Oct-2006 Martin Schwidefsky <schwidefsky@de.ibm.com> [PATCH] Directed yield: direct yield of spinlocks for s390.

Use the new diagnose 0x9c in the spinlock implementation for s390. It
yields the remaining timeslice of the virtual cpu that tries to acquire a
lock to the virtual cpu that is the current holder of the lock.
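
The yield itself is a single instruction; a wrapper would look roughly
like this (the function name is made up, and cpu_addr is assumed to be
the physical cpu address of the current holder):

/* Hand the remaining timeslice of this virtual cpu to the holder. */
static inline void yield_to_cpu(unsigned short cpu_addr)
{
	asm volatile("diag %0,0,0x9c" : : "d" (cpu_addr));
}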

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
96567161de0ceed45cd2eb0e5380e3c797f5c0f4 10-Mar-2006 Christian Ehrhardt <ehrhardt@de.ibm.com> [PATCH] s390: Increase spinlock retry code performance

Currently the code tries up to spin_retry times to grab a lock using the cs
instruction. The cs instruction has exclusive access to a memory region
and therefore invalidates the appropriate cache line of all other cpus. If
there is contention on a lock this leads to cache line thrashing. This can
be avoided if we first check whether a cs instruction is likely to succeed
before the instruction actually gets executed.

Signed-off-by: Christian Ehrhardt <ehrhardt@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
0152fb37603e3a776768794030b809ae77027aa4 14-Jan-2006 Martin Schwidefsky <schwidefsky@de.ibm.com> [PATCH] s390: spinlock fixes

Remove useless spin_retry_counter and fix compilation for UP kernels.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
347a8dc3b815f0c0fa62a1df075184ffe4cbdcf1 06-Jan-2006 Martin Schwidefsky <schwidefsky@de.ibm.com> [PATCH] s390: cleanup Kconfig

Sanitize some s390 Kconfig options. We have ARCH_S390, ARCH_S390X,
ARCH_S390_31, 64BIT, S390_SUPPORT and COMPAT. Replace these 6 options by
S390, 64BIT and COMPAT.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
fb1c8f93d869b34cacb8b8932e2b83d96a19d720 10-Sep-2005 Ingo Molnar <mingo@elte.hu> [PATCH] spinlock consolidation

This patch (written by me and also containing many suggestions of Arjan van
de Ven) does a major cleanup of the spinlock code. It does the following
things:

- consolidates and enhances the spinlock/rwlock debugging code

- simplifies the asm/spinlock.h files

- encapsulates the raw spinlock type and moves generic spinlock
features (such as ->break_lock) into the generic code.

- cleans up the spinlock code hierarchy to get rid of the spaghetti.

Most notably there's now only a single variant of the debugging code,
located in lib/spinlock_debug.c. (previously we had one SMP debugging
variant per architecture, plus a separate generic one for UP builds)

Also, i've enhanced the rwlock debugging facility, it will now track
write-owners. There is new spinlock-owner/CPU-tracking on SMP builds too.
All locks have lockup detection now, which will work for both soft and hard
spin/rwlock lockups.

The arch-level include files now only contain the minimally necessary
subset of the spinlock code - all the rest that can be generalized now
lives in the generic headers:

include/asm-i386/spinlock_types.h | 16
include/asm-x86_64/spinlock_types.h | 16

I have also split up the various spinlock variants into separate files,
making it easier to see which does what. The new layout is:

SMP                        | UP
---------------------------|-----------------------------------
asm/spinlock_types_smp.h   | linux/spinlock_types_up.h
linux/spinlock_types.h     | linux/spinlock_types.h
asm/spinlock_smp.h         | linux/spinlock_up.h
linux/spinlock_api_smp.h   | linux/spinlock_api_up.h
linux/spinlock.h           | linux/spinlock.h

/*
* here's the role of the various spinlock/rwlock related include files:
*
* on SMP builds:
*
* asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the
* initializers
*
* linux/spinlock_types.h:
* defines the generic type and initializers
*
* asm/spinlock.h: contains the __raw_spin_*()/etc. lowlevel
* implementations, mostly inline assembly code
*
* (also included on UP-debug builds:)
*
* linux/spinlock_api_smp.h:
* contains the prototypes for the _spin_*() APIs.
*
* linux/spinlock.h: builds the final spin_*() APIs.
*
* on UP builds:
*
* linux/spinlock_types_up.h:
* contains the generic, simplified UP spinlock type.
* (which is an empty structure on non-debug builds)
*
* linux/spinlock_types.h:
* defines the generic type and initializers
*
* linux/spinlock_up.h:
* contains the __raw_spin_*()/etc. version of UP
* builds. (which are NOPs on non-debug, non-preempt
* builds)
*
* (included on UP-non-debug builds:)
*
* linux/spinlock_api_up.h:
* builds the _spin_*() APIs.
*
* linux/spinlock.h: builds the final spin_*() APIs.
*/

All SMP and UP architectures are converted by this patch.

arm, i386, ia64, ppc, ppc64, s390/s390x and x64 were build-tested via
crosscompilers. m32r, mips, sh and sparc have not been tested yet, but
should be mostly fine.

From: Grant Grundler <grundler@parisc-linux.org>

Booted and lightly tested on a500-44 (64-bit, SMP kernel, dual CPU).
Builds 32-bit SMP kernel (not booted or tested). I did not try to build
non-SMP kernels. That should be trivial to fix up later if necessary.

I converted bit ops atomic_hash lock to raw_spinlock_t. Doing so avoids
some ugly nesting of linux/*.h and asm/*.h files. Those particular locks
are well tested and contained entirely inside arch specific code. I do NOT
expect any new issues to arise with them.

If someone does ever need to use debug/metrics with them, then they will
need to unravel this hairball between spinlocks, atomic ops, and bit ops
that exist only because parisc has exactly one atomic instruction: LDCW
(load and clear word).

From: "Luck, Tony" <tony.luck@intel.com>

ia64 fix

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjanv@infradead.org>
Signed-off-by: Grant Grundler <grundler@parisc-linux.org>
Cc: Matthew Wilcox <willy@debian.org>
Signed-off-by: Hirokazu Takata <takata@linux-m32r.org>
Signed-off-by: Mikael Pettersson <mikpe@csd.uu.se>
Signed-off-by: Benoit Boissinot <benoit.boissinot@ens-lyon.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
951f22d5b1f0eaae35dafc669e3774a0c2084d10 27-Jul-2005 Martin Schwidefsky <schwidefsky@de.ibm.com> [PATCH] s390: spin lock retry

Split the spin lock and r/w lock implementations into a single try,
which is done inline, and an out-of-line function that repeatedly tries
to get the lock before doing the cpu_relax(). Add a system control to
set the number of retries before a cpu is yielded.

The reason for the spin lock retry is that the diagnose 0x44 that is used
to give up the virtual cpu is quite expensive. For spin locks that are
held only for a short period of time the cost of the diagnose outweighs
the savings; it only pays off for spin locks that are held for a longer
time. The default retry count is 1000.
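
The resulting structure, sketched with hypothetical names: a cheap
inline first try, and an out-of-line slow path that retries spin_retry
times before paying for the diagnose:

#include <stdbool.h>

extern int spin_retry;			/* default 1000, settable via sysctl */
extern bool trylock_cas(void *lp);
extern void yield_cpu(void);		/* diagnose 0x44: give up the vcpu */

static void my_spin_lock_slow(void *lp);

/* Inline fast path: a single try, fall into the slow path on contention. */
static inline void my_spin_lock(void *lp)
{
	if (!trylock_cas(lp))
		my_spin_lock_slow(lp);
}

/* Out-of-line slow path: retry before yielding, because the diagnose
 * costs more than briefly spinning for a shortly-held lock. */
static void my_spin_lock_slow(void *lp)
{
	for (;;) {
		for (int count = spin_retry; count > 0; count--)
			if (trylock_cas(lp))
				return;
		yield_cpu();
	}
}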

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>