Hi Andi,
I recently bought an AMD64 X2 box and have been playing with
the timekeeping code. I ran into the problem of unsynchronized
TSCs and decided to put some tracing into timer_interrupt.
I got a couple of interesting results.
Consider the following block of code. I assume that Jan Beulich's
new inline-assembler version still implements this algorithm:
vxtime.last_tsc = tsc - vxtime.quot * delay / vxtime.tsc_quot;
if ((((tsc - vxtime.last_tsc) * vxtime.tsc_quot) >> 32) < offset)
	vxtime.last_tsc = tsc - (((long) offset << 32) / vxtime.tsc_quot) - 1;
The first line is correct. It sets the last_tsc value to a reasonable
estimate of when the PIT timer fired.
If we ignore the scaling, the next two lines are roughly:
if (delay < offset)
	last_tsc = tsc - offset - 1;
Now assume that the offset value is just slightly larger than the
delay (again, assume both values have been converted to a common
unit). The last_tsc value will be set to a value that results in
a slightly larger offset on the next tick. This repeats until
the offset accumulates a value large enough to trigger the
lost-tick check. In my case, even after the offset overflowed,
the remainder was still greater than the delay and the process
continued. I'm curious what this code was trying to achieve.
I also noticed on a 2.6.13-vintage kernel that the PM timer was
detected and vxtime.quot was set appropriately for the PM timer,
but the kernel decided to use PIT/TSC timekeeping anyway.
I have not checked whether this still happens with more
recent kernels.
Jim Houston - Concurrent Computer Corp.