History log of /arch/x86/include/asm/timer.h
Revision Date Author Comments
20d1c86a57762f0a33a78988e3fc8818316badd4 29-Nov-2013 Peter Zijlstra <peterz@infradead.org> sched/clock, x86: Rewrite cyc2ns() to avoid the need to disable IRQs

Use a ring-buffer-like multi-version object structure that always
provides a coherent object; this lets us avoid disabling IRQs while
reading sched_clock() and fixes a problem where an NMI arriving
while the cyc2ns data is being changed could see inconsistent state.
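
The general shape, sketched below (a simplified two-copy latch with
illustrative names; the actual patch keeps a per-CPU ring of
cyc2ns_data objects, and mul_u64_u32_shr() is the helper introduced
by the entry further down):

struct cyc2ns_data {
	u32 mul;			/* ns per cycle, fixed point */
	u32 shift;
	u64 offset;
};

struct cyc2ns_latch {
	unsigned int seq;		/* bumped twice per update */
	struct cyc2ns_data copy[2];
};

/* Reader: never blocks and never disables IRQs; it merely retries if
 * an update raced with it, which also makes it NMI-safe. */
static u64 cyc2ns_read(struct cyc2ns_latch *l, u64 cyc)
{
	struct cyc2ns_data snap;
	unsigned int seq;

	do {
		seq = READ_ONCE(l->seq);
		smp_rmb();
		snap = l->copy[seq & 1];	/* currently stable copy */
		smp_rmb();
	} while (seq != READ_ONCE(l->seq));

	return mul_u64_u32_shr(cyc, snap.mul, snap.shift) + snap.offset;
}

/* Writer: steer readers onto one copy, rewrite the other, flip, then
 * bring the second copy up to date. No reader ever sees a torn object. */
static void cyc2ns_write(struct cyc2ns_latch *l, struct cyc2ns_data val)
{
	l->seq++;			/* readers move to copy[1] (old data) */
	smp_wmb();
	l->copy[0] = val;
	smp_wmb();
	l->seq++;			/* readers move to copy[0] (new data) */
	smp_wmb();
	l->copy[1] = val;
}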

                    MAINLINE  PRE     POST

sched_clock_stable: 1         1       1
(cold) sched_clock: 329841    331312  257223
(cold) local_clock: 301773    310296  309889
(warm) sched_clock: 38375     38247   25280
(warm) local_clock: 100371    102713  85268
(warm) rdtsc:       27340     27289   24247
sched_clock_stable: 0         0       0
(cold) sched_clock: 382634    372706  301224
(cold) local_clock: 396890    399275  399870
(warm) sched_clock: 38194     38124   25630
(warm) local_clock: 143452    148698  129629
(warm) rdtsc:       27345     27365   24307

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/n/tip-s567in1e5ekq2nlyhn8f987r@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
57c67da274f3fab38e08d2c9edf08b89e1d9c71d 29-Nov-2013 Peter Zijlstra <peterz@infradead.org> sched/clock, x86: Move some cyc2ns() code around

There are no __cycles_2_ns() users outside of arch/x86/kernel/tsc.c,
so move it there.

There are no cycles_2_ns() users.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-01lslnavfgo3kmbo4532zlcj@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
5dd12c2152743747ca9f50ef80281e54cc416dc0 29-Nov-2013 Peter Zijlstra <peterz@infradead.org> sched/clock, x86: Use mul_u64_u32_shr() for native_sched_clock()

Use mul_u64_u32_shr() so that x86_64 can use a single 64x64->128 mul.
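
The helper boils down to this on targets with 128-bit arithmetic (a
simplified sketch of the include/linux/math64.h definition):

static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift)
{
	/* one 64x64->128 multiply, then shift the wide product down */
	return (u64)(((unsigned __int128)a * mul) >> shift);
}

The cycles-to-ns conversion then collapses into a single call, along
the lines of mul_u64_u32_shr(cyc, data->cyc2ns_mul, data->cyc2ns_shift),
which is where the lone mul instruction in the "After" listing below
comes from.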

Before:

0000000000000560 <native_sched_clock>:
 560:   44 8b 1d 00 00 00 00    mov    0x0(%rip),%r11d        # 567 <native_sched_clock+0x7>
 567:   55                      push   %rbp
 568:   48 89 e5                mov    %rsp,%rbp
 56b:   45 85 db                test   %r11d,%r11d
 56e:   75 4f                   jne    5bf <native_sched_clock+0x5f>
 570:   0f 31                   rdtsc
 572:   89 c0                   mov    %eax,%eax
 574:   48 c1 e2 20             shl    $0x20,%rdx
 578:   48 c7 c1 00 00 00 00    mov    $0x0,%rcx
 57f:   48 09 c2                or     %rax,%rdx
 582:   48 c7 c7 00 00 00 00    mov    $0x0,%rdi
 589:   65 8b 04 25 00 00 00    mov    %gs:0x0,%eax
 590:   00
 591:   48 98                   cltq
 593:   48 8b 34 c5 00 00 00    mov    0x0(,%rax,8),%rsi
 59a:   00
 59b:   48 89 d0                mov    %rdx,%rax
 59e:   81 e2 ff 03 00 00       and    $0x3ff,%edx
 5a4:   48 c1 e8 0a             shr    $0xa,%rax
 5a8:   48 0f af 14 0e          imul   (%rsi,%rcx,1),%rdx
 5ad:   48 0f af 04 0e          imul   (%rsi,%rcx,1),%rax
 5b2:   5d                      pop    %rbp
 5b3:   48 03 04 3e             add    (%rsi,%rdi,1),%rax
 5b7:   48 c1 ea 0a             shr    $0xa,%rdx
 5bb:   48 01 d0                add    %rdx,%rax
 5be:   c3                      retq

After:

0000000000000550 <native_sched_clock>:
 550:   8b 3d 00 00 00 00       mov    0x0(%rip),%edi        # 556 <native_sched_clock+0x6>
 556:   55                      push   %rbp
 557:   48 89 e5                mov    %rsp,%rbp
 55a:   48 83 e4 f0             and    $0xfffffffffffffff0,%rsp
 55e:   85 ff                   test   %edi,%edi
 560:   75 2c                   jne    58e <native_sched_clock+0x3e>
 562:   0f 31                   rdtsc
 564:   89 c0                   mov    %eax,%eax
 566:   48 c1 e2 20             shl    $0x20,%rdx
 56a:   48 09 c2                or     %rax,%rdx
 56d:   65 48 8b 04 25 00 00    mov    %gs:0x0,%rax
 574:   00 00
 576:   89 c0                   mov    %eax,%eax
 578:   48 f7 e2                mul    %rdx
 57b:   65 48 8b 0c 25 00 00    mov    %gs:0x0,%rcx
 582:   00 00
 584:   c9                      leaveq
 585:   48 0f ac d0 0a          shrd   $0xa,%rdx,%rax
 58a:   48 01 c8                add    %rcx,%rax
 58d:   c3                      retq

                    MAINLINE  POST

sched_clock_stable: 1         1
(cold) sched_clock: 329841    331312
(cold) local_clock: 301773    310296
(warm) sched_clock: 38375     38247
(warm) local_clock: 100371    102713
(warm) rdtsc:       27340     27289
sched_clock_stable: 0         0
(cold) sched_clock: 382634    372706
(cold) local_clock: 396890    399275
(warm) sched_clock: 38194     38124
(warm) local_clock: 143452    148698
(warm) rdtsc:       27345     27365

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/n/tip-piu203ses5y1g36bnyw2n16x@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
663b55b9b39fa9c848cca273ca4e12bf29b32c71 07-Jan-2014 Paul Gortmaker <paul.gortmaker@windriver.com> x86: Delete non-required instances of include <linux/init.h>

None of these files actually use any __init type directives and
hence don't need to include <linux/init.h>. Most are just left over
from the __devinit and __cpuinit removal, or are due to code getting
copied from one driver to the next.

[ hpa: undid incorrect removal from arch/x86/kernel/head_32.S ]

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Link: http://lkml.kernel.org/r/1389054026-12947-1-git-send-email-paul.gortmaker@windriver.com
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
9993bc635d01a6ee7f6b833b4ee65ce7c06350b1 10-Mar-2012 Salman Qazi <sqazi@google.com> sched/x86: Fix overflow in cyc2ns_offset

When a machine boots up, the TSC generally gets reset. However,
when kexec is used to boot into a kernel, the TSC value is carried
over from the previous kernel. The computation of cyc2ns_offset in
set_cyc2ns_scale() is prone to overflow if the machine has been up
more than 208 days prior to the kexec. The overflow happens when we
multiply by *scale, even though there is enough room to store the
final answer.

We fix this issue by decomposing tsc_now into the quotient and
remainder of division by CYC2NS_SCALE_FACTOR and then performing
the multiplication separately on the two components.

Refactor code to share the calculation with the previous
fix in __cycles_2_ns().
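
Where the 208-day figure comes from (illustrative numbers, assuming a
~3 GHz TSC; CYC2NS_SCALE_FACTOR is 10):

  cyc   = 3e9 cycles/s * 208 days * 86400 s/day  ~= 5.4e16 ~= 2^55.6
  scale = (1000000 << CYC2NS_SCALE_FACTOR) / cpu_khz
        ~= (1e6 << 10) / 3e6                     ~= 341    ~= 2^8.4

so cyc * scale ~= 2^64: the 64-bit intermediate product wraps, even
though (cyc * scale) >> CYC2NS_SCALE_FACTOR would still fit. The
quotient/remainder split that avoids this is sketched under the
4cecf6d4 entry below.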

Signed-off-by: Salman Qazi <sqazi@google.com>
Acked-by: John Stultz <john.stultz@linaro.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Turner <pjt@google.com>
Cc: john stultz <johnstul@us.ibm.com>
Link: http://lkml.kernel.org/r/20120310004027.19291.88460.stgit@dungbeetle.mtv.corp.google.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
4cecf6d401a01d054afc1e5f605bcbfe553cb9b9 15-Nov-2011 Salman Qazi <sqazi@google.com> sched, x86: Avoid unnecessary overflow in sched_clock

(Added the missing signed-off-by line)

After hundreds of days of uptime, the __cycles_2_ns() calculation in
sched_clock() overflows: cyc * per_cpu(cyc2ns, cpu) exceeds 64 bits,
causing the final value to become zero. We can solve this without
losing any precision.

We can decompose the TSC value into the quotient and remainder of
division by the scale factor, and then use these to convert the TSC
into nanoseconds.
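
A minimal sketch of the trick (illustrative; names follow the
description above rather than the patch itself):

#define CYC2NS_SCALE_FACTOR 10	/* fixed-point shift, i.e. divide by 2^10 */

static inline u64 cycles_2_ns_sketch(u64 cyc, u64 scale)
{
	/*
	 * cyc * scale can exceed 64 bits, but splitting cyc first keeps
	 * every partial product in range without losing precision:
	 *   ((quot << SF) + rem) * scale >> SF
	 *   == quot * scale + ((rem * scale) >> SF)
	 */
	u64 quot = cyc >> CYC2NS_SCALE_FACTOR;
	u64 rem  = cyc & ((1ULL << CYC2NS_SCALE_FACTOR) - 1);

	return quot * scale + ((rem * scale) >> CYC2NS_SCALE_FACTOR);
}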

Signed-off-by: Salman Qazi <sqazi@google.com>
Acked-by: John Stultz <johnstul@us.ibm.com>
Reviewed-by: Paul Turner <pjt@google.com>
Cc: stable@kernel.org
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20111115221121.7262.88871.stgit@dungbeetle.mtv.corp.google.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
072b198a4ad48bd722ec6d203d65422a4698eae7 12-Nov-2010 Don Zickus <dzickus@redhat.com> x86, nmi_watchdog: Remove all stub function calls from old nmi_watchdog

Now that the bulk of the old nmi_watchdog is gone, remove all
the stub variables and hooks associated with it.

This touches lots of files mainly because of how the io_apic
nmi_watchdog was implemented. Now that the io_apic nmi_watchdog
is forever gone, remove all its fingers.

Most of this code was not being exercised by virtue of
nmi_watchdog != NMI_IO_APIC, so there shouldn't be anything too
risky here.

Signed-off-by: Don Zickus <dzickus@redhat.com>
Cc: fweisbec@gmail.com
Cc: gorcunov@openvz.org
LKML-Reference: <1289578944-28564-3-git-send-email-dzickus@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2d826404f0bdcac2a4dd7e3c446b70d6a3b63b78 20-Aug-2009 Thomas Gleixner <tglx@linutronix.de> x86: Move tsc_calibration to x86_init_ops

TSC calibration is modified by the vmware hypervisor and paravirt by
separate means. Moorestown wants to add its own calibration routine as
well. So make calibrate_tsc a proper x86_init_ops function and
override it by paravirt or by the early setup of the vmware
hypervisor.
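
The override pattern, roughly (a sketch with illustrative function
names; the real struct lives in arch/x86/include/asm/x86_init.h and
carries more hooks):

struct x86_platform_ops {
	unsigned long (*calibrate_tsc)(void);	/* TSC frequency in kHz */
	/* ... */
};

/* default wiring, established at build time */
struct x86_platform_ops x86_platform = {
	.calibrate_tsc = native_calibrate_tsc,
};

/* a hypervisor or platform replaces it during early setup, e.g.: */
static void __init vmware_platform_setup(void)
{
	x86_platform.calibrate_tsc = vmware_get_tsc_khz;
}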

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
64fcbac1f38882d8ae82c44a1c2a676cfa5e79e1 20-Aug-2009 Thomas Gleixner <tglx@linutronix.de> x86: Simplify timer_ack magic in time_32.c

Let the compiler optimize the timer_ack magic away in the 32bit timer
interrupt and put the same code into time_64.c. It's optimized out for
CONFIG_X86_IO_APIC on 32bit and for 64bit because timer_ack is const 0
in both cases.
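
The mechanism is plain constant folding; schematically (a sketch with
a hypothetical stand-in for the real config test):

#ifdef TIMER_ACK_CAN_BE_SET		/* hypothetical; not the real symbol */
extern int timer_ack;
#else
# define timer_ack (0)
#endif

static void timer_ack_quirk(void)
{
	if (timer_ack) {
		/* ack the 8259A by hand; when timer_ack is the constant 0,
		 * the compiler proves this branch dead and emits nothing */
	}
}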

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
845b3944bbdf9e9247849bf037f27ff3a3f26d87 19-Aug-2009 Thomas Gleixner <tglx@linutronix.de> x86: Add timer_init to x86_init_ops

The timer init code is convoluted with several quirks and the paravirt
timer chooser. Figuring out which code path is actually taken is not
for the faint-hearted.

Move the numaq TSC quirk to the tsc_pre_init x86_init_ops function and
replace the paravirt time chooser and the remaining x86 quirk with a
simple x86_init_ops function.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
84599f8a59e77699f18f06948cea171a349a3f0f 16-Jun-2009 Peter Zijlstra <peterz@infradead.org> sched, x86: Fix cpufreq + sched_clock() TSC scaling

For frequency-dependent TSCs we only scale the cycles; we do not
account for the discrepancy in absolute value.

Our current formula is: time = cycles * mult

(where mult is a function of the cpu-speed on variable tsc machines)

Suppose our current cycle count is 10 and we have a multiplier of 5;
our time value would then be 50.

Now cpufreq comes along and changes the multiplier to, say, 3 or 7,
which would result in our time being 30 or 70 respectively.

That means we can observe random jumps in the time value due to
frequency changes, in both the forward and backward direction.

So what this patch does is change the formula to:

time = cycles * frequency + offset

And we calculate offset so that time_before == time_after, thereby
ridding us of these jumps in time.
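
Continuing the worked example from above (illustrative arithmetic, not
kernel code):

u64 cycles      = 10;
u64 time_before = cycles * 5;			/* old mult: 10 * 5 = 50 */
u64 offset      = time_before - cycles * 3;	/* mult drops to 3: 50 - 30 = 20 */

/* from here on, time = cycles * 3 + offset; right at the switch,
 * 10 * 3 + 20 = 50 == time_before, so the jump disappears. */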

[ Impact: fix/reduce sched_clock() jumps across frequency changing events ]

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Chucked-on-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
25c1a411e8a0a709abe3449866125dc290711ea8 30-Mar-2009 Stephen Rothwell <sfr@canb.auug.org.au> x86: fix mismerge in arch/x86/include/asm/timer.h

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
199785eac892a1fa1b71cc22bec58e8b156d9311 21-Feb-2009 Matthias-Christian Ott <ott@mirix.org> [CPUFREQ] p4-clockmod reports wrong frequency.

http://bugzilla.kernel.org/show_bug.cgi?id=10968

[ Updated for current tree, and fixed compile failure
when p4-clockmod was built modular -- davej]

From: Matthias-Christian Ott <ott@mirix.org>
Signed-off-by: Dominik Brodowski <linux@brodo.de>
Signed-off-by: Dave Jones <davej@redhat.com>
8e6dafd6c741cd4679b4de3c5d9698851e4fa59c 23-Feb-2009 Ingo Molnar <mingo@elte.hu> x86: refactor x86_quirks support

Impact: cleanup

Make x86_quirks support more transparent. The high-level
methods are now named:

extern void x86_quirk_pre_intr_init(void);
extern void x86_quirk_intr_init(void);

extern void x86_quirk_trap_init(void);

extern void x86_quirk_pre_time_init(void);
extern void x86_quirk_time_init(void);

This makes it clear that if some platform extension has to
do something here that it is considered ... weird, and is
discouraged.

Also remove arch_hooks.h, moving its contents into setup.h (and
other header files where appropriate).

Signed-off-by: Ingo Molnar <mingo@elte.hu>
1965aae3c98397aad957412413c07e97b1bd4e64 23-Oct-2008 H. Peter Anvin <hpa@zytor.com> x86: Fix ASM_X86__ header guards

Change header guards named "ASM_X86__*" to "_ASM_X86_*" since:

a. the double underscore is ugly and pointless.
b. no leading underscore violates namespace constraints.
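
For this header the rename comes out as:

-#ifndef ASM_X86__TIMER_H
-#define ASM_X86__TIMER_H
+#ifndef _ASM_X86_TIMER_H
+#define _ASM_X86_TIMER_H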

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
bb8985586b7a906e116db835c64773b7a7d51663 18-Aug-2008 Al Viro <viro@zeniv.linux.org.uk> x86, um: ... and asm-x86 move

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>