History log of /arch/ia64/kernel/ivt.S
Revision Date Author Comments
c140d87995b68b428f70635c2e4071e4e8b3256e 28-Mar-2012 David Howells <dhowells@redhat.com> Disintegrate asm/system.h for IA64

Disintegrate asm/system.h for IA64.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Tony Luck <tony.luck@intel.com>
cc: linux-ia64@vger.kernel.org
2b55f3672c77e76b62efd0dba6bf29addac071fd 20-Feb-2010 Denys Vlasenko <vda.linux@googlemail.com> Rename .text.ivt to .text..ivt.

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>
32974ad4907cdde6c9de612cd1b2ee0568fb9409 08-Feb-2010 Tony Luck <tony.luck@intel.com> [IA64] Remove COMPAT_IA32 support

This has been broken since May 2008, when Al Viro killed altroot
support. Since nobody has complained, it would appear that there are
no users of this code (a plausible theory, since the main OSVs that
support ia64 prefer to use the IA32-EL software emulation).

Signed-off-by: Tony Luck <tony.luck@intel.com>
94752a794ddfdef65289a16627faefa7e2e62d58 04-Mar-2009 Isaku Yamahata <yamahata@valinux.co.jp> ia64/pv_ops: paravirtualize mov = ar.itc.

paravirtualize mov reg = ar.itc.

Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Tony Luck <tony.luck@intel.com>
749da7912e1270abbbaef04112a7a78febf3f0f4 17-Oct-2008 Isaku Yamahata <yamahata@valinux.co.jp> ia64/pv_ops: fix paravirtualization of ivt.S with CONFIG_SMP=n

When CONFIG_SMP=n, three instructions in ivt.S were missed during
paravirtualization. Paravirtualize them.

Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Tony Luck <tony.luck@intel.com>
9b3cbf725fb98733976fd02e2e557f0ae3028df0 03-Aug-2008 Isaku Yamahata <yamahata@valinux.co.jp> [IA64] pv_ops: fix ivt.S paravirtualization

Recent kernels are not booting on some HP systems (though they do
boot on others). James and Willy reported the problem, and James did
the bisection to find the commit that caused it:
498c5170472ff0c03a29d22dbd33225a0be038f4
([IA64] pvops: paravirtualize ivt.S).

Two instructions were wrongly paravirtualized: the _FROM_ macro had
been used where _TO_ was intended.

Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: "Wilcox, Matthew R" <matthew.r.wilcox@intel.com>
Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Tony Luck <tony.luck@intel.com>
4d58bbcc89e267d52b4df572acbf209a60a8a497 28-May-2008 Isaku Yamahata <yamahata@valinux.co.jp> [IA64] pv_ops: move some functions in ivt.S to avoid lack of space.

Move interrupt, page_fault, non_syscall, dispatch_unaligned_handler and
dispatch_to_fault_handler to avoid a lack of instruction space.
Change set 4dcc29e1574d88f4465ba865ed82800032f76418 grew
SAVE_MIN_WITH_COVER and SAVE_MIN_WITH_COVER_R19, which in turn bloated
the functions that use those macros.
In the native case, only dispatch_illegal_op_fault had to be moved;
in the paravirtualized case, all functions that use the macros need to
be moved to avoid running out of space.

Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Tony Luck <tony.luck@intel.com>
498c5170472ff0c03a29d22dbd33225a0be038f4 19-May-2008 Isaku Yamahata <yamahata@valinux.co.jp> [IA64] pvops: paravirtualize ivt.S

Paravirtualize ivt.S, which implements the fault handlers in
hand-written assembly code. The handlers include sensitive or
performance-critical privileged instructions, so they need
paravirtualization.

Cc: Keith Owens <kaos@ocs.com.au>
Cc: tgingold@free.fr
Cc: Akio Takebe <takebe_akio@jp.fujitsu.com>
Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com>
Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Tony Luck <tony.luck@intel.com>
4dcc29e1574d88f4465ba865ed82800032f76418 27-May-2008 Tony Luck <tony.luck@intel.com> [IA64] Workaround for RSE issue

Problem: An application violating the architectural rules regarding
operation dependencies and having specific Register Stack Engine (RSE)
state at the time of the violation, may result in an illegal operation
fault and invalid RSE state. Such faults may initiate a cascade of
repeated illegal operation faults within OS interruption handlers.
The specific behavior is OS dependent.

Implication: An application causing an illegal operation fault with
specific RSE state may result in a series of illegal operation faults
and an eventual OS stack overflow condition.

Workaround: OS interruption handlers that switch to kernel backing
store implement a check for invalid RSE state to avoid the series
of illegal operation faults.

The core of the workaround is the RSE_WORKAROUND code sequence
inserted into each invocation of the SAVE_MIN_WITH_COVER and
SAVE_MIN_WITH_COVER_R19 macros. This sequence includes hard-coded
constants that depend on the number of stacked physical registers
being 96. The rest of this patch consists of code to disable this
workaround should this not be the case (with the presumption that
if a future Itanium processor increases the number of registers, it
would also remove the need for this patch).

Move the start of the RBS up to a mod32 boundary to avoid some
corner cases.

The dispatch_illegal_op_fault code outgrew the spot it was
squatting in when built with this patch and CONFIG_VIRT_CPU_ACCOUNTING=y.
Move it out to the end of the ivt.

Signed-off-by: Tony Luck <tony.luck@intel.com>
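
The disable logic amounts to a check along these lines (a minimal
sketch, assuming a hypothetical patch_out_rse_workaround() helper;
ia64_pal_rse_info() is the kernel's existing PAL wrapper):

    /*
     * Minimal C sketch of the disable logic. ia64_pal_rse_info() is
     * the kernel's PAL wrapper around PAL_RSE_INFO;
     * patch_out_rse_workaround() is a hypothetical name for the
     * text-patching step that removes the RSE_WORKAROUND sequence.
     */
    #include <asm/pal.h>

    #define EXPECTED_PHYS_STACKED 96

    static void maybe_disable_rse_workaround(void)
    {
        u64 num_phys_stacked;

        if (ia64_pal_rse_info(&num_phys_stacked, NULL) != 0)
            return;    /* PAL query failed: leave the workaround in place */

        /* The hard-coded constants assume 96 stacked physical
         * registers; on anything else, patch the sequence out. */
        if (num_phys_stacked != EXPECTED_PHYS_STACKED)
            patch_out_rse_workaround();
    }
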
b64f34cdfe5bef9dfed1304c513220b0f2862eca 29-Jan-2008 Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com> [IA64] VIRT_CPU_ACCOUNTING (accurate cpu time accounting)

This patch implements VIRT_CPU_ACCOUNTING for ia64, which enables us
to use more accurate cpu time accounting.

VIRT_CPU_ACCOUNTING is a kernel config option that the s390 and
powerpc architectures already have. Turning it on changes the
mechanism of cpu time accounting from a tick-sampling based one to a
state-transition based one.

State-transition based accounting works by checking the time (the
cycle counter in the processor) at every state-transition point, such
as entry to and exit from the kernel, interrupts, softirqs, etc.
The difference between two such points is the actual time consumed in
that state. This value is without doubt more accurate than what
tick-sampling based accounting produces.

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
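
The state-transition scheme itself reduces to a few lines; here is a
minimal standalone C sketch with illustrative names, where
read_cycle_counter() stands in for the ia64 ar.itc register (the real
ia64 code lives in the assembly entry paths):

    #include <stdint.h>

    enum cpu_state { STATE_USER, STATE_KERNEL };

    /* Stand-in for reading the processor cycle counter (ar.itc). */
    extern uint64_t read_cycle_counter(void);

    static uint64_t last_stamp;        /* cycle count at the last transition */
    static enum cpu_state cur_state = STATE_USER;
    static uint64_t user_cycles, kernel_cycles;

    /* Called at every state-transition point: kernel entry/exit,
     * interrupt, softirq, and so on. The cycles elapsed since the
     * last transition are charged to the state being left. */
    void account_transition(enum cpu_state next)
    {
        uint64_t now = read_cycle_counter();

        if (cur_state == STATE_USER)
            user_cycles += now - last_stamp;
        else
            kernel_cycles += now - last_stamp;

        last_stamp = now;
        cur_state = next;
    }
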
00b65985fb2fc542b855b03fcda0d0f2bab4f442 13-Oct-2006 Chen, Kenneth W <kenneth.w.chen@intel.com> [IA64] relax per-cpu TLB requirement to DTC

Instead of pinning the per-cpu TLB entry into a DTR, use a DTC. This
frees up one TLB entry for applications, or even for the kernel if the
access pattern to the per-cpu data area has high temporal locality.

Since per-cpu data is mapped at the top of region 7, we just need to
add a special case in alt_dtlb_miss. The physical address of the
per-cpu data is already conveniently stored in IA64_KR(PER_CPU_DATA).
Latency for alt_dtlb_miss is not affected, since all the extra work
can be hidden: measurements show the handler takes 23 cycles both
before and after the patch.

The performance effect is massive for applications that put lots of
TLB pressure on the CPU. Workloads like database online transaction
processing, or applications using terabytes of memory, benefit the
most. Measurement with an industry-standard database benchmark showed
upwards of a 1.6% gain, while smaller workloads like cpu and java also
show a small improvement.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
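
Conceptually, the special case looks like the sketch below, written in
C for clarity (the real handler is hand-scheduled assembly;
PERCPU_ADDR, PERCPU_PAGE_SIZE and ia64_get_kr() follow the kernel's
naming, while fallback_phys() is a hypothetical stand-in for the
normal identity-mapped path):

    /* Sketch of the alt_dtlb_miss special case for per-cpu data. */
    static unsigned long alt_dtlb_phys(unsigned long fault_addr)
    {
        /* Per-cpu data occupies one page at the very top of region 7,
         * so a range check on the faulting address suffices. */
        if (fault_addr >= PERCPU_ADDR &&
            fault_addr <  PERCPU_ADDR + PERCPU_PAGE_SIZE)
            return ia64_get_kr(IA64_KR_PER_CPU_DATA) +
                   (fault_addr - PERCPU_ADDR);

        /* Everything else keeps the usual identity mapping. */
        return fallback_phys(fault_addr);
    }
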
6ab3d5624e172c553004ecc862bfeac16d9d68b7 30-Jun-2006 Jörn Engel <joern@wohnheim.fh-wedel.de> Remove obsolete #include <linux/config.h>

Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
d2a28ad9fa7bf16761d070d8a3338375e1574b32 24-Mar-2006 Russ Anderson <rja@sgi.com> [IA64] MCA recovery: kernel context recovery table

Memory errors encountered by user applications may surface when the
CPU is running in kernel context. The current code will not attempt
recovery if the MCA surfaces in kernel context (privilege mode 0).
This patch adds a check for cases where the user initiated the load
that surfaces in kernel interrupt code.

An example is a user process launching a load from memory where the
data in memory has bad ECC. Before the bad data gets to the CPU
register, an interrupt comes in. The code jumps to the IVT interrupt
entry point and begins execution in kernel context. The process of
saving the user registers (SAVE_REST) causes the bad data to be loaded
into a CPU register, triggering the MCA. The MCA surfaces in kernel
context, even though the load was initiated from user context.

As suggested by David and Tony, this patch uses an exception-table
like approach, putting the tagged recovery addresses in a searchable
table. One difference from the exception table is that MCAs do not
surface in precise places (such as with a TLB miss), so instead of
tagging specific instructions, address ranges are registered. A
single macro is used to do the tagging, with the input parameter being
the label of the starting address and the macro's own location being
the ending address. This limits clutter in the code.

This patch only tags one spot, the interrupt ivt entry. Testing
showed that spot to be a "heavy hitter", with MCAs surfacing while
saving user registers. Other spots can be added as needed by adding a
single macro.

Signed-off-by: Russ Anderson (rja@sgi.com)
Signed-off-by: Tony Luck <tony.luck@intel.com>
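
The table has roughly the following shape; this is an illustrative
sketch, not the kernel's actual symbol names:

    /* One entry per tagged region of interruption-handler code. */
    struct mca_range {
        unsigned long start;    /* label passed to the tagging macro */
        unsigned long end;      /* address where the macro itself sits */
    };

    extern const struct mca_range mca_recovery_table[];
    extern const unsigned long mca_recovery_table_entries;

    /* Return nonzero if an MCA at ip landed inside a tagged range
     * and is therefore a candidate for recovery. */
    static int ip_in_recovery_range(unsigned long ip)
    {
        unsigned long i;

        for (i = 0; i < mca_recovery_table_entries; i++)
            if (ip >= mca_recovery_table[i].start &&
                ip <  mca_recovery_table[i].end)
                return 1;
        return 0;
    }
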
d8117ce5a679ff1f48df247da30fb62c16d562c5 08-Mar-2006 Christoph Lameter <clameter@engr.sgi.com> [IA64] Fix race in the accessed/dirty bit handlers

A pte may be zapped by the swapper, an exiting process, unmapping, or
page migration while the accessed- or dirty-bit handlers are about to
run. In that case the accessed or dirty bit is set on a zeroed pte,
which leads the VM to conclude that this is a swap pte. This may lead
to

- Messages from the vm like

  swap_free: Bad swap file entry 4000000000000000

- Processes being aborted

  swap_dup: Bad swap file entry 4000000000000000
  VM: killing process ....

Page migration is particularly prone to creating this race, since it
needs to remove and restore page table entries.

The fix here is to check the present bit and simply not update the pte
if the page is not present anymore. If the page is not present, the
fault handler should run next and will take care of the problem by
bringing the page back and then marking the page dirty or moving it
onto the active list.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
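
In C terms, the fix amounts to the conceptual sketch below (the real
handlers are hand-written assembly in ivt.S; pte_present(),
pte_mkdirty() and set_pte() are the usual Linux pte helpers):

    static void mark_pte_dirty_checked(pte_t *ptep)
    {
        pte_t pte = *ptep;

        /* The pte may have been zapped (swap-out, exit, unmap, or
         * page migration) since the fault was raised; setting a bit
         * on it now would forge a bogus swap entry. */
        if (!pte_present(pte))
            return;    /* let the regular fault path redo the work */

        set_pte(ptep, pte_mkdirty(pte));  /* pte_mkyoung() for accessed */
    }
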
5d1a88af826b03edaac4d2bd2f25af56a54f26e6 16-Feb-2006 Zhang, Yanmin <yanmin_zhang@linux.intel.com> [IA64] Delete a redundant instruction in unaligned_access

unaligned_access fetches cr.ipsr and then calls
dispatch_unaligned_handler, but dispatch_unaligned_handler fetches
cr.ipsr again, so delete the first fetch.

Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
e8aabc47168d24eabc08418db4e034a4c625721c 17-Nov-2005 Chen, Kenneth W <kenneth.w.chen@intel.com> [IA64] polish comments for tlb fault handler in ivt.S

Polish the comments, specifically in the vhpt_miss and
nested_dtlb_miss handlers. I think it's better to explicitly refer to
each page table level by its name instead of numbering them, i.e., use
pgd, pud, pmd, and pte instead of referring to them as L1, L2, L3,
etc. Along the way, remove some magic numbers in the comments, like:
"PTA + (((IFA(61,63) << 7) | IFA(33,39))*8)". No code change at
all, pure comment update. Feel free to shoot anything you have,
darts or a tomahawk cruise missile. I will duck behind a bunker ;-)

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Acked-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
fedb25fae72bc2c3709448a43be067439643da87 17-Nov-2005 Chen, Kenneth W <kenneth.w.chen@intel.com> [IA64] 4 level page table bug fix in vhpt_miss

From source code inspection, I think there is a bug in the vhpt_miss
handler with 4-level page tables. In the code path that rechecks the
page table entry against the previously read value after TLB
insertion, the *pte value in register r18 is overwritten with the
value newly read from the pud pointer, rendering the check of the new
*pte against the previous *pte completely wrong. The bug is non-fatal
and the penalty is only to purge the entry and retry, but for
functional correctness it should be fixed. The fix is to use a
different register so the new *pud doesn't clobber *pte. (BTW, the
comment on the cmp statement is wrong as well; I will address that in
the next patch.)

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
837cd0bdf54dd954cd6aa43d250f75ab5db79617 11-Nov-2005 Robin Holt <holt@sgi.com> [IA64] 4-level page tables

This patch introduces 4-level page tables to ia64. I have run
some benchmarks and found nothing interesting. Performance has
consistently fallen within the noise range.

It also introduces a config option (with the default set to 3
levels). The config option prevents combining 4-level page
tables with a 64k base page size.

Signed-off-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
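
The resulting walk has the familiar four-step shape, shown here as the
generic C of that era (pud_offset() still took a pgd pointer);
vhpt_miss implements the same walk in hand-scheduled assembly:

    static pte_t *lookup_pte(struct mm_struct *mm, unsigned long addr)
    {
        pgd_t *pgd = pgd_offset(mm, addr);
        pud_t *pud;
        pmd_t *pmd;

        if (pgd_none(*pgd))
            return NULL;
        pud = pud_offset(pgd, addr);   /* the level added by this patch */
        if (pud_none(*pud))
            return NULL;
        pmd = pmd_offset(pud, addr);
        if (pmd_none(*pmd))
            return NULL;
        return pte_offset_kernel(pmd, addr);
    }
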
05f335ea04881ecb912b572c83e281a016149169 11-Sep-2005 Keith Owens <kaos@sgi.com> [IA64] MCA/INIT: remove the physical mode path from minstate.h

Remove the physical mode path from minstate.h.

Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
39e01cb874cbf694bd0b0c44f54c4f270e2aa556 09-Sep-2005 Sam Ravnborg <sam@mars.(none)> kbuild: ia64 use generic asm-offsets.h support

Delete obsolete stuff from arch Makefile
Rename file to asm-offsets.h
The trick used in the arch Makefile to circumvent the circular
dependency is kept.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
458f935527372499b714bf4f8e646a68bb0f52e3 04-May-2005 David Mosberger-Tang <davidm@hpl.hp.com> [IA64] Speed up lfetch.fault [NULL]

This patch greatly speeds up the handling of lfetch.fault instructions
which result in NaT consumption. Due to the NaT-page mapped at address
0, this is guaranteed to happen when lfetch.fault'ing a NULL pointer.
With this patch in place, we can even define prefetch()/prefetchw() as
lfetch.fault without significant performance degradation. More
importantly, it allows compilers to be more aggressive with using
lfetch.fault on pointers that might be NULL.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
0393eed5c3220c9c3823a09a2d02329b8ff08b45 21-Jun-2005 Ken Chen <kenneth.w.chen@intel.com> [IA64] fix nested_dtlb_miss handler for hugetlb address

The nested_dtlb_miss handler currently does not handle faults from
hugetlb addresses correctly. It walks the page table assuming
PAGE_SIZE. Thus, when taking a fault triggered by a hugetlb address,
it does not calculate the pgd/pmd/pte address correctly, resulting in
an incorrect invocation of ia64_do_page_fault(). There, the kernel
signals SIGBUS and the application dies (even though the faulting
address is perfectly legal and we have a valid pte for the
corresponding user hugetlb address as well). This patch fixes the
described kernel bug. Since nested_dtlb_miss is a rare event and a
slow path anyway, I'm making the change without #ifdef
CONFIG_HUGETLB_PAGE for code readability. Tony, please apply.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
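
The essence of the fix, as a hedged C sketch (in_hugetlb_region() is a
hypothetical stand-in for the region check the assembly performs on
the faulting address):

    /* Each index in the walk derives from the page-size shift, so a
     * hugetlb address must be scaled by HPAGE_SHIFT, not PAGE_SHIFT. */
    static unsigned long pte_index_for(unsigned long addr)
    {
        unsigned long shift = in_hugetlb_region(addr) ? HPAGE_SHIFT
                                                      : PAGE_SHIFT;

        /* Using PAGE_SHIFT here for a hugetlb address walks to the
         * wrong pte, which is exactly the bug described above. */
        return (addr >> shift) & (PTRS_PER_PTE - 1);
    }
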
f8fa5448fc9b4a7806b1297a0b57808f12fe4d43 28-Apr-2005 David Mosberger-Tang <davidm@hpl.hp.com> [IA64] Reschedule break_fault() for better performance.

This patch reorganizes break_fault() to optimistically assume that a
system-call is being performed from user-space (which is almost always
the case). If it turns out that (a) we're not being called due to a
system call or (b) we're being called from within the kernel, we fixup
the no-longer-valid assumptions in non_syscall() and .break_fixup(),
respectively.

With this approach, there are 3 major phases:

- Phase 1: Read various control & application registers, in
  particular the current task pointer from AR.K6.
- Phase 2: Do all memory loads (load system-call entry, load
  current_thread_info()->flags, prefetch the kernel register-backing
  store) and switch to the kernel register-stack.
- Phase 3: Call ia64_syscall_setup() and invoke the syscall handler.

Good for 26-30 cycles of improvement on break-based syscall-path.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
060561ff79b01eea58e6d72abfb8e7580ff21f2a 28-Apr-2005 David Mosberger-Tang <davidm@hpl.hp.com> [IA64] In syscall-entry, use st8 instead of stf8 to clear pt_regs.r8

Using stf8 seemed like a clever idea at the time, but stf8 forces
the cache-line to be invalidated in the L1D (if it happens to be
there already). This patch eliminates a guaranteed L1D cache-miss
and, by itself, is good for a 1-2 cycle improvement for heavy-weight
syscalls.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
1da177e4c3f41524e886b7f1b8a0c1fc7321cac2 17-Apr-2005 Linus Torvalds <torvalds@ppc970.osdl.org> Linux-2.6.12-rc2

Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!