History log of /arch/s390/kernel/asm-offsets.c
Revision Date Author Comments
b5f87f15e20092c060f465b283b07a76af7f2e5f 01-Oct-2014 Martin Schwidefsky <schwidefsky@de.ibm.com> s390/idle: consolidate idle functions and definitions

Move the C functions and definitions related to the idle state handling
to arch/s390/include/asm/idle.h and arch/s390/kernel/idle.c. The function
s390_get_idle_time is renamed to arch_cpu_idle_time and vtime_stop_cpu to
enabled_wait.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
b7eacb59cd7fb5e98852186e485c0c865f862645 29-Aug-2014 Martin Schwidefsky <schwidefsky@de.ibm.com> s390/vdso: add vdso support for coarse clocks

Add CLOCK_REALTIME_COARSE and CLOCK_MONOTONIC_COARSE optimization to
the 64-bit and 31-bit vdso.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
63aef00b55d37e9fad837a8b38a2c261f0d32041 27-May-2014 Martin Schwidefsky <schwidefsky@de.ibm.com> s390/lowcore: replace lowcore irb array with a per-cpu variable

Remove the 96-byte irb array from the lowcore and create a per-cpu
variable instead. That way we will pick up any change in the definition
of the struct irb automatically.

Acked-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
d3a73acbc26a4a81a01a35fd162973e53d0386f5 15-Apr-2014 Martin Schwidefsky <schwidefsky@de.ibm.com> s390: split TIF bits into CIF, PIF and TIF bits

The oi and ni instructions used in entry[64].S to set and clear bits
in the thread-flags are not guaranteed to be atomic in regard to other
CPUs. Split the TIF bits into CPU, pt_regs and thread-info specific
bits. Updates on the TIF bits are done with atomic instructions,
updates on CPU and pt_regs bits are done with non-atomic instructions.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
da7cf2570c5fa29a02a3e8026bfff89620706d2f 26-Feb-2014 Jens Freimann <jfrei@linux.vnet.ibm.com> s390: add fields to lowcore definition

This patch adds fields which are currently missing but needed for the correct
injection of interrupts.

This is based on a patch by David Hildenbrand.

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
21ee7ffd176a238cf185c142bd4c20d0152eda4f 26-Feb-2014 Jens Freimann <jfrei@linux.vnet.ibm.com> s390: rename and split lowcore field per_perc_atmid

per_perc_atmid is currently a two-byte field that combines two
fields, the PER code and the PER Addressing-and-Translation-Mode
Identification (ATMID).

Let's make them accessible independently and also rename per_cause to
per_code.

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
3d53b46ce8b1b873cf8501bac251b8c0cf489d4f 10-Feb-2014 Jens Freimann <jfrei@linux.vnet.ibm.com> s390: fix name of lowcore field at offset 0xa3

According to the Principles of Operation, at offset 0xA3
in the lowcore we have the "Architectural-Mode identification",
not an "access identification".

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
457f2180951cdcbfb4657ddcc83b486e93497f56 21-Mar-2014 Heiko Carstens <heiko.carstens@de.ibm.com> s390/uaccess: rework uaccess code - fix locking issues

The current uaccess code uses a page table walk in some circumstances,
e.g. for the in-atomic futex operations or if running on old
hardware which doesn't support the mvcos instruction.

However it turned out that the page table walk code does not correctly
lock page tables when accessing page table entries.
In other words: a different cpu may invalidate a page table entry while
the current cpu inspects the pte. This may lead to random data corruption.

Adding correct locking, however, isn't trivial for all uaccess operations.
copy_in_user() is especially problematic since it requires holding at
least two locks, but must be protected against an ABBA deadlock when a
different cpu also performs a copy_in_user() operation.

So the solution is a different approach where we change address spaces:

User space runs in primary address mode, or access register mode within
vdso code, like it currently already does.

The kernel usually also runs in home space mode; however, when accessing
user space the kernel switches to primary or secondary address mode if
the mvcos instruction is not available or if a compare-and-swap (futex)
instruction on a user space address is performed.
KVM however is special, since that requires the kernel to run in home
address space while implicitly accessing user space with the sie
instruction.

So we end up with:

User space:
- runs in primary or access register mode
- cr1 contains the user asce
- cr7 contains the user asce
- cr13 contains the kernel asce

Kernel space:
- runs in home space mode
- cr1 contains the user or kernel asce
-> the kernel asce is loaded when a uaccess requires primary or
secondary address mode
- cr7 contains the user or kernel asce, (changed with set_fs())
- cr13 contains the kernel asce

In case of uaccess the kernel changes to:
- primary space mode in case of a uaccess (copy_to_user) and uses
e.g. the mvcp instruction to access user space. However the kernel
will stay in home space mode if the mvcos instruction is available
- secondary space mode in case of futex atomic operations, so that the
instructions come from primary address space and data from secondary
space

In case of kvm the kernel runs in home space mode, but cr1 gets switched
to contain the gmap asce before the sie instruction gets executed. When
the sie instruction is finished cr1 will be switched back to contain the
user asce.

A context switch between two processes will always load the kernel asce
for the next process in cr1. So the first exit to user space is a bit
more expensive (one extra load control register instruction) than before,
but it keeps the code rather simple.

In sum this means there is no need to perform any error prone page table
walks anymore when accessing user space.

The patch seems to be rather large, but it mainly removes the
page table walk code and restores the previously deleted "standard"
uaccess code, with a couple of changes.

The uaccess without mvcos mode can be enforced with the "uaccess_primary"
kernel parameter.

Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
b5e64b3de7fdd1db0a12871f7114fc1da899df8e 02-Dec-2013 Martin Schwidefsky <schwidefsky@de.ibm.com> s390/vdso: ectg gettime support for CLOCK_THREAD_CPUTIME_ID

The code to use the ECTG instruction to calculate the cputime for the
current thread is currently used only for the per-thread CPU-clock
with the clockid -2 (PID=0, VIRT=1). Use the same code for the clockid
CLOCK_THREAD_CPUTIME_ID to speed up the more common clockid as well.

Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
79c74ecbebf76732f91b82a62ce7fc8a88326962 22-Nov-2013 Martin Schwidefsky <schwidefsky@de.ibm.com> s390/time,vdso: convert to the new update_vsyscall interface

Switch to the improved update_vsyscall interface that provides
sub-nanosecond precision for gettimeofday and clock_gettime.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
48f6b00c6e3190b786c44731b25ac124c81c2247 17-Jun-2013 Martin Schwidefsky <schwidefsky@de.ibm.com> s390/irq: store interrupt information in pt_regs

Copy the interrupt parameters from the lowcore to the pt_regs structure
in entry[64].S and reduce the arguments of the low level interrupt handler
to the pt_regs pointer only. In addition move the test-pending-interrupt
loop from do_IRQ to entry[64].S to make sure that interrupt information
is always delivered via pt_regs.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
49b99e1e0dedbd6cc93b2d2776b60fb7151ff3d7 17-May-2013 Christian Borntraeger <borntraeger@de.ibm.com> s390/kvm: Provide a way to prevent reentering SIE

Let's provide functions to prevent KVM from reentering SIE and
to kick cpus out of SIE. We cannot use the common kvm_vcpu_kick code,
since we need to kick out guests in places that hold architecture
specific locks (e.g. pgste lock) which might be necessary on the
other cpus - so no waiting possible.

So let's provide a bit in a private field of the sie control block
that acts as a gatekeeper once we have claimed that we are in SIE.
Please note that we do not reuse prog0c, since we want to access
that bit without atomic ops.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
95d38fd0bcf1996082f5f8762e6f1c849755e0c6 17-May-2013 Christian Borntraeger <borntraeger@de.ibm.com> s390/kvm: Mark if a cpu is in SIE

Let's track in a private bit whether the sie control block is active.
We want to track this as closely as possible, so we also have to
instrument the interrupt and program check handlers. Let's use the
existing HANDLE_SIE_INTERCEPT macro.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
616498813b11ffefe1ed36b9f2e4fd2cdbd22f15 24-Apr-2013 Martin Schwidefsky <schwidefsky@de.ibm.com> s390: system call path micro optimization

Add a pointer to the system call table to the thread_info structure.
The TIF_31BIT bit is set or cleared by SET_PERSONALITY exactly once
for the lifetime of a process. With the pointer to the correct system
call table in thread_info, the system call path in entry64.S can
drop the check for TIF_31BIT, which saves a couple of instructions.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
d35339a42dd1f53b0bb86cf75418a9b7cf5f0f30 31-Jul-2012 Martin Schwidefsky <schwidefsky@de.ibm.com> s390: add support for transactional memory

Allow user-space processes to use transactional execution (TX).
If the TX facility is available user space programs can use
transactions for fine-grained serialization based on the data
objects that are referenced during a transaction. This is
useful for lockless data structures and speculative compiler
optimizations.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
27f6b416626a240e1b46f646d2e0c5266f4eac95 20-Jul-2012 Martin Schwidefsky <schwidefsky@de.ibm.com> s390/vtimer: rework virtual timer interface

The current virtual timer interface is inherently per-cpu and hard to
use. The sole user of the interface is appldata which uses it to execute
a function after a specific amount of cputime has been used over all cpus.

Rework the virtual timer interface to hook into the cputime accounting.
This makes the interface independent of the CPU timer interrupts, and
makes the virtual timers global as opposed to per-cpu.
Overall the code is greatly simplified. The downside is that the accuracy
is not as good as the original implementation, but it is still good enough
for appldata.

Reviewed-by: Jan Glauber <jang@linux.vnet.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
fbe765680d1fe9d08187ea4dad5041a7955a2c3a 05-Jun-2012 Heiko Carstens <heiko.carstens@de.ibm.com> s390/smp: make absolute lowcore / cpu restart parameter accesses more robust

Setting the cpu restart parameters is done in three different fashions:
- directly setting the four parameters individually
- copying the four parameters with memcpy (using 4 * sizeof(long))
- copying the four parameters using a private structure

In addition code in entry*.S relies on a certain order of the restart
members of struct _lowcore.

Make all of this more robust to future changes by adding a
mem_absolute_assign(dest, val) define, which assigns val to dest
using absolute addressing mode. Also the load multiple instructions
in entry*.S have been split into separate load instructions so the
order of the struct _lowcore members doesn't matter anymore.

In addition move the prototypes of memcpy_real/absolute from uaccess.h
to processor.h. These memcpy* variants are not related to uaccess at all.
string.h doesn't seem to be a good fit either, so let's use processor.h.

Also replace the eight byte array in struct _lowcore which represents a
misaligned u64 with a u64. The compiler will always create code that
handles the misaligned u64 correctly.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
a0616cdebcfd575dcd4c46102d1b52fbb827fc29 28-Mar-2012 David Howells <dhowells@redhat.com> Disintegrate asm/system.h for S390

Disintegrate asm/system.h for S390.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: linux-s390@vger.kernel.org
4c1051e37a0e2a941115c6fb7ba08c318f25a0f9 11-Mar-2012 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] rework idle code

Whenever the cpu loads an enabled wait PSW it will appear as idle to the
underlying host system. The code in default_idle calls vtime_stop_cpu
which does the necessary voodoo to get the cpu time accounting right.
The udelay code just loads an enabled wait PSW. To correct this, rework
the vtime_stop_cpu/vtime_start_cpu logic and move the difficult parts
to entry[64].S, vtime_stop_cpu can now be called from anywhere and
vtime_start_cpu is gone. The correction of the cpu time during wakeup
from an enabled wait PSW is done with a critical section in entry[64].S.
As vtime_start_cpu is gone, s390_idle_check can be removed as well.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
8b646bd759086f6090fe27acf414c0b5faa737f4 11-Mar-2012 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] rework smp code

Define struct pcpu and merge some of the NR_CPUS arrays into it, including
__cpu_logical_map, current_set and smp_cpu_state. Split smp related
functions to those operating on physical cpus and the functions operating
on a logical cpu number. Make the functions for physical cpus use a
pointer to a struct pcpu. This hides the knowledge about cpu addresses in
smp.c, entry[64].S and swsusp_asm64.S, which allows the sigp.h header to be removed.

The PSW restart mechanism is used to start secondary cpus, calling a
function on an online cpu, calling a function on the ipl cpu, and for
the nmi signal. Replace the different assembler functions with a
single function restart_int_handler. The new entry point calls a function
whose pointer is stored in the lowcore of the target cpu and it can wait
for the source cpu to stop. This covers all existing use cases.

Overall the code is now simpler and there are ~380 lines less code.

Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
7e180bd8020d213bb0de15c3606968f8a9262439 11-Mar-2012 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] rename lowcore field

The 16 bit value at the lowcore location with offset 0x84 is the
cpu address that is associated with an external interrupt. Rename
the field from cpu_addr to ext_cpu_addr to make that clear.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
aa33c8cbbae2eb98489a3a363099b362146a8f4c 27-Dec-2011 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] cleanup trap handling

Move the program interruption code and the translation exception identifier
to the pt_regs structure as 'int_code' and 'int_parm_long' and make the
first level interrupt handler in entry[64].S store the two values. That
makes it possible to drop 'prot_addr' and 'trap_no' from the thread_struct
and to reduce the number of arguments to a lot of functions. Finally
un-inline do_trap. Overall this saves 5812 bytes in the .text section of
the 64 bit kernel.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
c5328901aa1db134325607d65527742d8be07f7d 27-Dec-2011 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] entry[64].S improvements

Another round of cleanup for entry[64].S, in particular the program check
handler looks more reasonable now. The code size for the 31 bit kernel
has been reduced by 616 bytes and by 528 bytes for the 64 bit version.
Even better, the code is a bit faster as well.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
ddd6f9537dee9b713b87ecdc9ac920cd1935fdef 27-Dec-2011 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] kvm: move cmf host id constant out of lowcore

There is no reason for the cpu-measurement-facility host id constant to
reside in the lowcore where space is precious. Use an entry in the literal
pool in HANDLE_SIE_INTERCEPT and a stack slot in sie64a.
While we are at it, replace the id -1 with 0 to indicate host execution.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
20b40a794baf3b4b0320c0a77ce944d5d1a01f25 30-Oct-2011 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] signal race with restarting system calls

For an ERESTARTNOHAND/ERESTARTSYS/ERESTARTNOINTR restarting system call
do_signal will prepare the restart of the system call with a rewind of
the PSW before calling get_signal_to_deliver (where the debugger might
take control). For an ERESTART_RESTARTBLOCK restarting system call
do_signal will set -EINTR as return code.
There are two issues with this approach:
1) strace never sees ERESTARTNOHAND, ERESTARTSYS, ERESTARTNOINTR or
ERESTART_RESTARTBLOCK as the rewinding already took place or the
return code has been changed to -EINTR
2) if get_signal_to_deliver does not return with a signal to deliver
the restart via the repeat of the svc instruction is left in place.
This opens a race if another signal is made pending before the
system call instruction can be reexecuted. The original system call
will be restarted even if the second signal would have ended the
system call with -EINTR.

These two issues can be solved by dropping the early rewind of the
system call before get_signal_to_deliver has been called and by using
the TIF_RESTART_SVC magic to do the restart if no signal has to be
delivered. The only situation where the system call restart via the
repeat of the svc instruction is appropriate is when a SA_RESTART
signal is delivered to user space.

Unfortunately this breaks inferior calls by the debugger again. The
system call number and the length of the system call instruction are
lost over the inferior call and user space will see ERESTARTNOHAND/
ERESTARTSYS/ERESTARTNOINTR/ERESTART_RESTARTBLOCK. To correct this a
new ptrace interface is added to save/restore the system call number
and system call instruction length.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
0edc8faa769095e98a5a5adc28db982f99f0d663 30-Oct-2011 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] lowcore cleanup

Remove the save_area_64 field from the 0xe00 - 0xf00 area in the lowcore.
Use a free slot in the save_area array instead.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
480e5926ce3bb61ec229be2dab08bdce8abb8d2e 20-Sep-2011 Christian Borntraeger <borntraeger@de.ibm.com> [S390] kvm: fix address mode switching

598841ca9919d008b520114d8a4378c4ce4e40a1 ([S390] use gmap address
spaces for kvm guest images) changed kvm to use a separate address
space for kvm guests. This address space was switched in __vcpu_run.
In some cases (preemption, page fault) there is the possibility that
this address space switch is lost.
The typical symptom was a huge amount of validity intercepts or
random guest addressing exceptions.
Fix this by doing the switch in sie_loop and sie_exit and saving the
address space in the gmap structure itself. Also use the preempt
notifier.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
7a0e42f168337d4af8fe230c1043cb04786c04c2 03-Aug-2011 Heiko Carstens <heiko.carstens@de.ibm.com> [S390] asm offsets: fix coding style

For readability reasons we ignore the 80-character line limit
in asm offsets. Just one line per define, nothing else.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
7dd6b3343fdc190712d1620ee8848d25c4c77c33 03-Aug-2011 Michael Holzheu <holzheu@linux.vnet.ibm.com> [S390] Add PSW restart shutdown trigger

With this patch a new S390 shutdown trigger "restart" is added. If under
z/VM "systerm restart" is entered or under the HMC the "PSW restart" button
is pressed, the PSW located at 0 (31 bit) or 0x1a0 (64 bit) bit is loaded.
Now we execute do_restart() that processes the restart action that is
defined under /sys/firmware/shutdown_actions/on_restart. Currently the
following actions are possible: reipl (default), stop, vmcmd, dump, and
dump_reipl.

Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
e5992f2e6c3829cd43dbc4438ee13dcd6506f7f3 24-Jul-2011 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] kvm guest address space mapping

Add code that allows KVM to control the virtual memory layout that
is seen by a guest. The guest address space uses a second page table
that shares the last level pte-tables with the process page table.
If a page is unmapped from the process page table it is automatically
unmapped from the guest page table as well.

The guest address space mapping starts out empty; KVM can map any
individual 1MB segments from the process virtual memory to any 1MB
aligned location in the guest virtual memory. If a target segment in
the process virtual memory does not exist or is unmapped while a
guest mapping exists, the desired target address is stored as an
invalid segment table entry in the guest page table.
The population of the guest page table is fault driven.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
f2db2e6cb3f5f766cbb3788af44705685ff2445a 23-May-2011 Heiko Carstens <heiko.carstens@de.ibm.com> [S390] pfault: cpu hotplug vs missing completion interrupts

On cpu hot remove a PFAULT CANCEL command is sent to the hypervisor
which in turn will cancel all outstanding pfault requests that have
been issued on that cpu (the same happens with a SIGP cpu reset).

The result is that we end up with uninterruptible processes where
the interrupt that would wake up these processes never arrives.

In order to solve this, all processes which wait for a pfault
completion interrupt get woken up after a cpu hot remove. The worst
case that could happen is that they fault again and in turn need to
wait again.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
043d07084b5347a26eab0a07aa13a4a929ad9e71 23-May-2011 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] Remove data execution protection

The noexec support on s390 does not rely on a bit in the page table
entry but utilizes the secondary space mode to distinguish between
memory accesses for instructions vs. data. The noexec code relies
on the assumption that the cpu will always use the secondary space
page table for data accesses while it is running in the secondary
space mode. Up to the z9-109 class machines this has been the case.
Unfortunately this is not true anymore with z10 and later machines.
The load-relative-long instructions lrl, lgrl and lgfrl access the
memory operand using the same addressing-space mode that has been
used to fetch the instruction.
This breaks the noexec mode for all user space binaries compiled
with march=z10 or later. The only option is to remove the current
noexec support.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
5e9a26928f550157563cfc06ce12c4ae121a02ec 05-Jan-2011 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] ptrace cleanup

Overhaul program event recording and the code dealing with the ptrace
user space interface.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
b3423982bd2ecb7160856ffd6618dbb929c786cc 29-Oct-2010 Heiko Carstens <heiko.carstens@de.ibm.com> [S390] vdso: get rid of redefinition warnings

The CLOCK_* defines in asm-offsets.c are only used for the vdso code;
however, in the meantime they cause other trouble.
Just rename them to get rid of this permanently:

In file included from /home2/heicarst/linux-2.6/arch/s390/include/asm/asm-offsets.h:1:0,
from arch/s390/mm/fault.c:33:
include/generated/asm-offsets.h:53:0: warning: "CLOCK_REALTIME" redefined
include/linux/time.h:286:0: note: this is the location of the previous definition
include/generated/asm-offsets.h:54:0: warning: "CLOCK_MONOTONIC" redefined
include/linux/time.h:287:0: note: this is the location of the previous definition

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
f6649a7e5a9ee99e9623878f4a5579cc2f6cdd51 25-Oct-2010 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] cleanup lowcore access from external interrupts

Read external interrupts parameters from the lowcore in the first
level interrupt handler in entry[64].S.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
1e54622e0403891b10f2105663e0f9dd595a1f17 25-Oct-2010 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] cleanup lowcore access from program checks

Read all required fields for program checks from the lowcore in the
first level interrupt handler in entry[64].S. If the context that
caused the fault was enabled for interrupts we can now re-enable the
irqs in entry[64].S.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
215b3096371907e5d866bb219be7ef3d5ce6c083 26-May-2010 Heiko Carstens <heiko.carstens@de.ibm.com> [S390] spp: fix compilation for CONFIG_32BIT

Fix build breakage for CONFIG_32BIT caused by cd3b70f5
"[S390] virtualization aware cpu measurement"

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
86f2552bbd0e17b19bb5e9881042533eaea553c7 17-May-2010 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] add breaking event address for user space

Copy the last breaking event address from the lowcore to a new
field in the thread_struct on each system entry. Add a new
ptrace request PTRACE_GET_LAST_BREAK and a new utrace regset
REGSET_LAST_BREAK to query the last breaking event.

This is useful for debugging wild branches in user space code.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
cd3b70f5d4d82f85d1e1d6e822f38ae098cf7c72 17-May-2010 Carsten Otte <cotte@de.ibm.com> [S390] virtualization aware cpu measurement

Use the SPP instruction to set a tag on entry to / exit of the virtual
machine context. This allows the cpu measurement facility to distinguish
the samples from the host and the different guests.

Signed-off-by: Carsten Otte <cotte@de.ibm.com>
6377981faf1a4425b0531e577736ef03df97c8f6 17-May-2010 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] idle time accounting vs. machine checks

A machine check can interrupt the i/o and external interrupt handler
anytime. If the machine check occurs while the interrupt handler is
waking up from idle, vtime_start_cpu can get executed a second time
and the int_clock / async_enter_timer values in the lowcore get
clobbered. This can confuse the cpu time accounting.
To fix this problem two changes are needed. First the machine check
handler has to use its own copies of int_clock and async_enter_timer,
named mcck_clock and mcck_enter_timer. Second the nested execution
of vtime_start_cpu has to be prevented. This is done in s390_idle_check
by checking the wait bit in the program status word.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
94038a99119c171aea27608f81c7ba359de98c4e 17-May-2010 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] More cleanup for struct _lowcore

Remove cpu_id from lowcore and replace addr_t with __u64.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
157a1a27d5921fc94db8c14e0d01363d13de99b5 22-Apr-2010 Hendrik Brueckner <brueckner@linux.vnet.ibm.com> [S390] vdso: use ntp adjusted clock multiplier

Commit "timekeeping: Fix clock_gettime vsyscall time warp" (0696b711e)
introduced the new parameter "mult" to update_vsyscall(). This parameter
contains the internal NTP adjusted clock multiplier.

The s390x vdso did not use this adjusted multiplier. Instead, it used
the constant clock multiplier for gettimeofday() and clock_gettime()
variants. This may result in observable time warps as explained in
commit 0696b711e.

Make the NTP adjusted clock multiplier available to the s390x vdso
implementation and use it for time calculations.

Cc: <stable@kernel.org>
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
cbb870c8221147ae337612e04b2bb0211f31a74b 26-Feb-2010 Heiko Carstens <heiko.carstens@de.ibm.com> [S390] Cleanup struct _lowcore usage and defines.

Use asm offsets to make sure the offset defines to struct _lowcore and
its layout don't get out of sync.
Also add a BUILD_BUG_ON() which checks that the size of the structure
is sane.
And while at it, change those sites which use odd casts to access
the current lowcore. These should use S390_lowcore instead.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
987bcdacb18a3adc2a48d85c9b005069c2f4dd7b 26-Feb-2010 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] use inline assembly constraints available with gcc 3.3.3

Drop support to compile the kernel with gcc versions older than 3.3.3.
This allows us to use the "Q" inline assembly constraint on some more
inline assemblies without duplicating a lot of complex code (e.g. __xchg
and __cmpxchg). The distinction for older gcc versions can be removed
which saves a few lines and simplifies the code.

Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
1aaf179d043856d80bbb354f9feaf706b9cfbcd3 22-Sep-2009 Michael Holzheu <michael.holzheu@linux.vnet.ibm.com> [S390] hibernate: Do real CPU swap at resume time

Currently, when the physical resume CPU is not equal to the physical suspend
CPU, we swap the CPUs logically, by modifying the logical/physical CPU mapping.
This has two major drawbacks: first, the change is visible from user space (e.g.
CPU sysfs files) and, second, it is hard to ensure that nowhere in the kernel
the physical CPU ID is stored before suspend.
To fix this, we now really swap the physical CPUs if the resume CPU is not
the physical suspend CPU. We restart the suspend CPU and stop the resume CPU
using SIGP restart and SIGP stop. If the suspend CPU is no longer available,
we write a message and load a disabled wait PSW.

Signed-off-by: Michael Holzheu <michael.holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
5b409ed17bb32c8316b1f456466c70529454573a 14-Apr-2009 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] cpu hotplug and accounting values

Reset the cpu timer to the maximum value and correctly initialize the
cpu accounting values in the lowcore when the cpu is started.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
c742b31c03f37c5c499178f09f57381aa6c70131 31-Dec-2008 Martin Schwidefsky <schwidefsky@de.ibm.com> [PATCH] fast vdso implementation for CLOCK_THREAD_CPUTIME_ID

The extract cpu time (ectg) instruction allows the user
process to get the current thread cputime without calling into the
kernel. The code that uses the instruction needs to switch to the
access registers mode to get access to the per-cpu info page that
contains the two base values that are needed to calculate the current
cputime from the CPU timer with the ectg instruction.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
b020632e40c3ed5e8c0c066d022672907e8401cf 25-Dec-2008 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] introduce vdso on s390

Add a vdso to speed up gettimeofday and clock_getres/clock_gettime for
CLOCK_REALTIME/CLOCK_MONOTONIC.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
59da21398e680e8100625d689c8bebee6a139e93 27-Nov-2008 Martin Schwidefsky <schwidefsky@de.ibm.com> [S390] fix system call parameter functions.

syscall_get_nr() currently returns a valid result only if the call
chain of the traced process includes do_syscall_trace_enter(). But
collect_syscall() can be called for any sleeping task, so the result of
syscall_get_nr() is in general completely bogus.

To make syscall_get_nr() work for any sleeping task the traps field
in pt_regs is replaced with svcnr - the system call number the process
is executing. If svcnr == 0 the process is not on a system call path.

The syscall_get_arguments and syscall_set_arguments use regs->gprs[2]
for the first system call parameter. This is incorrect since gprs[2]
may have been overwritten with the system call number if the call
chain includes do_syscall_trace_enter. Use regs->orig_gprs2 instead.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
4ca4d7bf7a650817c441073cb8d1c2c8dfbb9959 29-Apr-2008 Christoph Lameter <clameter@sgi.com> s390: use kbuild.h instead of defining macros in asm-offsets.c

New version that does not preserve the marker. Arch maintainers indicate
that the marker functionality is not needed anymore.

Note you may simplify the s390 asm-offsets.c code further if you use the
OFFSET() macro instead of the DEFINE. See kbuild.h (a minimal sketch follows this entry).

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
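For illustration, the simplification suggested in the note of the entry above can be sketched as follows. This is a minimal, hypothetical asm-offsets.c fragment, not the actual s390 file; it only assumes the DEFINE(), OFFSET() and BLANK() helpers provided by include/linux/kbuild.h, and the symbol and member names are made up for the example.

	/* hypothetical asm-offsets.c fragment, assuming <linux/kbuild.h> */
	#include <linux/kbuild.h>	/* DEFINE(), OFFSET(), BLANK() */
	#include <linux/sched.h>	/* struct task_struct, used here only as an example */

	int main(void)
	{
		/* long form: DEFINE() with an explicit offsetof() */
		DEFINE(__EXAMPLE_TASK_STACK, offsetof(struct task_struct, stack));
		BLANK();
		/* equivalent short form: OFFSET() expands to DEFINE(sym, offsetof(struct str, mem)) */
		OFFSET(__EXAMPLE_TASK_THREAD, task_struct, thread);
		return 0;
	}

Both forms emit the same kind of marker into the generated assembly, which the kbuild machinery turns into #define lines in include/generated/asm-offsets.h.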
7a88d7a8f467e4ab1d3393ed5bce3d68cdf9be2e 29-Apr-2008 Christoph Lameter <clameter@sgi.com> s390: use kbuild.h instead of defining macros in asm-offsets.c

s390 has a strange marker in DEFINE. Undefine the DEFINE from kbuild.h and
define it the way s390 wants it to preserve things as they were.

It may be good if the arch maintainer could go over this and check whether this
workaround is really necessary.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
f7e4217b007d1f73e7e3cf10ba4fea4a608c603f 09-May-2007 Roman Zippel <zippel@linux-m68k.org> rename thread_info to stack

This finally renames the thread_info field in the task structure to stack, so that
the assumptions about this field are gone and archs have more freedom about
placing the thread_info structure.

Nonbroken archs which have a proper thread pointer can access both the
current thread and the task structure via a single pointer.

It'll allow for a few more cleanups of the fork code, from which e.g. ia64
could benefit.

Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
[akpm@linux-foundation.org: build fix]
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ian Molton <spyro@f2s.com>
Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: Greg Ungerer <gerg@uclinux.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
Cc: Richard Curnow <rc@rc0.org.uk>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Cc: Miles Bader <uclinux-v850@lsi.nec.co.jp>
Cc: Andi Kleen <ak@muc.de>
Cc: Chris Zankel <chris@zankel.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
6ab3d5624e172c553004ecc862bfeac16d9d68b7 30-Jun-2006 Jörn Engel <joern@wohnheim.fh-wedel.de> Remove obsolete #include <linux/config.h>

Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
1da177e4c3f41524e886b7f1b8a0c1fc7321cac2 17-Apr-2005 Linus Torvalds <torvalds@ppc970.osdl.org> Linux-2.6.12-rc2

Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!