On Thu, 2006-07-13 at 15:05 -0700, john stultz wrote:
> Also note: Assuming no NTP changes during a period, the maximum error in
> nanoseconds accumulated over a period is the number of clocksource cycles
> in that period shifted down by the clocksource shift value. (This is
> because the smallest adjustment is 1 mult unit, and mult/2^shift is the
> length of one counter cycle, i.e. 1/freq.)
>
> So for a counter w/ a 4GHz frequency and a shift value of 22 (similar to
> a TSC): over 1 second, if there was no high-precision adjustment, you
> could get a *maximum* error of ~1us (4GHz is roughly 2^32 cycles/sec,
> and 2^32/2^22 = 1024ns), which is a 1ppm drift if left uncorrected.
>
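[For concreteness, a minimal standalone sketch of the bound above. The
program and its names are made up for illustration; this is not the
kernel's timekeeping code, just the cycles >> shift arithmetic:]

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Hypothetical TSC-like clocksource: ~4GHz counter, shift of 22 */
	uint64_t freq = 4000000000ULL;	/* cycles elapsed in one second */
	unsigned int shift = 22;

	/*
	 * ns = (cycles * mult) >> shift, so if mult is off by its
	 * smallest unit (1), the error accumulated over an interval is
	 * at most cycles >> shift nanoseconds.
	 */
	uint64_t max_err_ns = freq >> shift;

	printf("max error over 1s: %lluns (~1us, ~1ppm)\n",
	       (unsigned long long)max_err_ns);
	return 0;
}

[Running it prints "max error over 1s: 953ns", i.e. just under the
~1us/1ppm figure quoted above.]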
> Now, if we have high-precision adjustments, the question is "how fast
> should we fix that 1us error?". If the code assumes we're going to have
> a tick every 1ms, it can make a stronger adjustment (of 10 mult units)
> which it will remove after the next tick. This, however, could cause
> problems if we lost ticks or that interrupt was delayed, as we would
> overshoot.
>
> Thus, if we assume that ticks will show up, worst case, about once every
> second or two, we can make an adjustment of a single mult unit and
> assume that we'll correct it over the next second. This is what Roman
> was saying would be "slow adjustments", but I don't see it as
> unacceptable.
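[A sketch of that "slow adjustment" idea, under the same caveat: the
helper below is hypothetical, not the real timekeeping code. It nudges
mult by at most one unit per tick, so a ~1us/1ppm error drains over
roughly a second of ticks, and a lost or delayed tick can never make us
overshoot by more than one unit's worth:]

#include <stdint.h>

struct clocksource_sketch {
	uint32_t mult;		/* cycles-to-ns multiplier */
};

/* Called once per tick with the currently accumulated error. */
static void adjust_slow(struct clocksource_sketch *cs, int64_t error_ns)
{
	if (error_ns > 0)
		cs->mult += 1;	/* clock running slow: speed up minimally */
	else if (error_ns < 0)
		cs->mult -= 1;	/* clock running fast: slow down minimally */
}

[The "fast" alternative in the previous paragraph would instead bump
mult by ~10 units and back the bump out on the next tick: quicker to
converge, but it overshoots if that next tick is late.]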
There is really no need to worry about sub-microsecond accuracy when
you look at the latencies in today's machines. We need a robust design,
not ivory-tower experiments with precision that is not useful for
anything.
tglx