Jonathan Lundell wrote:
Well, not actually a time warp, though it feels like one.
I'm doing some real-time bit-twiddling in a driver, using the TSC to
measure out delays on the order of hundreds of nanoseconds. Because I
want an upper limit on the delay, I disable interrupts around it.
The logic is something like:
    local_irq_save
    out(set a bit)
    t0 = TSC
    wait while (t = (TSC - t0)) < delay_time
    out(clear the bit)
    local_irq_restore
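(For reference, roughly what that loop looks like as real code. This is only a
sketch: it assumes get_cycles() is used to read the TSC, that the delay has
already been converted from nanoseconds to a cycle count, and that
DEMO_PORT/DEMO_BIT stand in for whatever port and bit the driver actually
twiddles.)

/*
 * Sketch only, not the actual driver code.  Assumes the delay has been
 * converted to TSC cycles beforehand; DEMO_PORT/DEMO_BIT are placeholders.
 */
#include <linux/types.h>
#include <asm/io.h>	/* inb(), outb() */
#include <asm/system.h>	/* local_irq_save()/local_irq_restore() */
#include <asm/timex.h>	/* get_cycles(), cycles_t */

#define DEMO_PORT	0x378	/* placeholder I/O port */
#define DEMO_BIT	0x01	/* placeholder bit */

static void pulse_bit(cycles_t delay_cycles)
{
	unsigned long flags;
	cycles_t t0, t;
	u8 val;

	local_irq_save(flags);			/* bound the pulse: no interrupts */

	val = inb(DEMO_PORT);
	outb(val | DEMO_BIT, DEMO_PORT);	/* set the bit */

	t0 = get_cycles();
	do {
		t = get_cycles() - t0;		/* busy-wait on the TSC */
	} while (t < delay_cycles);

	outb(val & ~DEMO_BIT, DEMO_PORT);	/* clear the bit */

	local_irq_restore(flags);
}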
From time to time, when I exit the delay, t is *much* bigger than
delay_time. If delay_time is, say, 300ns, t is usually no more than
325ns. But every so often, t can be 2000 ns, or 10000 ns, or even much higher.
The value of t seems to depend on the CPU involved. The worst case is
with an Intel 915GV chipset, where t approaches 500 microseconds (!).
Probably not the same thing, but on 2.4 I was noticing
large TSC jumps, the magnitude of which depended on CPU speed.
They were always around 1.26ms on my 3.4GHz dual HT Xeon system.
That's (2^32)/(3.4*10^9), which suggested a 32-bit overflow
somewhere and pointed me at:
http://lxr.linux.no/source/arch/i386/kernel/time.c?v=2.4.28#L96
This implied the TSCs were drifting relative to each other
(even between logical CPUs on 1 package).
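To make the overflow angle concrete, here is a tiny user-space sketch
(illustrative numbers only, not from the kernel source) of what goes wrong
once only the low 32 bits are kept: the delta is effectively computed modulo
2^32, so a reader whose TSC lags the CPU that recorded last_tsc_low sees
nearly a full wrap instead of a small negative value.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Illustrative values only. */
	uint32_t last_tsc_low   = 5000000u;	/* low TSC word saved at the last tick */
	uint32_t reader_tsc_low = 4999000u;	/* this CPU's TSC lags by 1000 cycles */

	/* The offset code effectively does an unsigned 32-bit subtraction: */
	uint32_t delta = reader_tsc_low - last_tsc_low;

	printf("apparent delta = %u cycles (almost 2^32)\n", delta);
	printf("at 3.4 GHz that is %.3f seconds' worth of cycles\n",
	       delta / 3.4e9);
	return 0;
}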
I worked around the problem by setting the IRQ affinity
for my ethernet IRQs (the source of the do_gettimeofday() calls)
to a particular logical CPU rather than a physical CPU, and I also
tied the timer interrupt to CPU0.
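In case it's useful to anyone else: the affinity is set by writing a hex CPU
mask to /proc/irq/<N>/smp_affinity. A minimal user-space sketch, with a
made-up IRQ number and CPU:

#include <stdio.h>

/* Pin one IRQ to one logical CPU by writing a hex mask to
 * /proc/irq/<irq>/smp_affinity.  IRQ 19 and CPU 1 are placeholders. */
static int pin_irq_to_cpu(int irq, int cpu)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return -1;
	}
	/* The mask is a hex bitmap of logical CPUs; bit n == CPU n. */
	fprintf(f, "%x\n", 1u << cpu);
	return fclose(f);
}

int main(void)
{
	return pin_irq_to_cpu(19, 1) ? 1 : 0;
}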
I guess I could also maintain a last_tsc_low for each CPU?
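Something like this, very roughly (names made up, and it glosses over the
hard part of deciding where each CPU refreshes its own entry):

/* Rough sketch of the per-CPU idea only; not a working 2.4 patch. */
#include <linux/threads.h>	/* NR_CPUS */
#include <linux/smp.h>		/* smp_processor_id() */
#include <asm/msr.h>		/* rdtscl() */

static unsigned long last_tsc_low_percpu[NR_CPUS];

/* Each CPU would call this at a point where its time offset is known
 * (e.g. its own local timer tick), instead of sharing one last_tsc_low. */
static void record_local_tsc(void)
{
	unsigned long low;

	rdtscl(low);
	last_tsc_low_percpu[smp_processor_id()] = low;
}

/* Cycles elapsed on *this* CPU since its own last recorded tick;
 * the caller must not migrate between the rdtscl() and the lookup. */
static unsigned long local_tsc_offset(void)
{
	unsigned long low;

	rdtscl(low);
	return low - last_tsc_low_percpu[smp_processor_id()];
}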
Pádraig.