Hi Linus,
On Wednesday 01 August 2007 19:17, Linus Torvalds wrote:
> And the "approximates" thing would be about the fact that we don't
> actually care about "absolute" microseconds as much as something
> that is in the "roughly a microsecond" area. So if we say "it doesn't
> have to be microseconds, but it should be within a factor of two of a
> ms", we could avoid all the expensive divisions (even if they turn
> into multiplications with reciprocals), and just let people *shift*
> the CPU counter instead.
On that theme, expressing the subsecond part of high precision time in
decimal instead of left-aligned binary always was an insane idea.
Applications end up with silly numbers of multiplies and divides
(as likely as not incorrect) where they would often need just a simple
shift, as you say, if the tv struct had been defined sanely from the
start. As a bonus, whenever precision gets bumped up, the new bits
appear in formerly zero locations on the right, meaning little if any
code needs to change. What we have in the incumbent libc timeofday
scheme is the moral equivalent of BCD.
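Something like this is all it takes with a left-aligned format (a
userspace sketch with made-up names, just to show the shape of it):

/* Sketch, not kernel code: 64-bit time as 32.32 seconds.fraction,
 * with the subsecond part left-aligned binary. */
#include <stdint.h>

typedef uint64_t fixtime_t;	/* high 32 bits seconds, low 32 bits fraction */

/* "Within a factor of two of a ms": units of 2^-10 second, one shift,
 * no divide, no reciprocal multiply. */
static inline uint64_t fix_to_1024ths(fixtime_t t)
{
	return t >> 22;
}

/* If the clock later gains resolution, the new bits land in the
 * formerly zero low-order fraction bits; a consumer that keeps only
 * the top bits, like the shift above, never notices.  The decimal
 * equivalent needs a divide (or a reciprocal multiply) every time:
 *	ms = tv.tv_usec / 1000;
 */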
Of course libc is unlikely ever to repent, but we can at least put off
converting into the awkward decimal format until the last possible
instant. In other words, I do not see why xtime is expressed as a tv
instead of simple 32.32 fixed point. Perhaps somebody can enlighten
me?
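Deferring the decimal conversion to the boundary costs one multiply
and a shift, no division at all; assuming a 32.32 value, roughly:

#include <stdint.h>
#include <sys/time.h>

/* Sketch: convert 32.32 fixed point to the libc decimal format only
 * at the last possible instant, e.g. at the gettimeofday boundary. */
static inline void fix_to_timeval(uint64_t t, struct timeval *tv)
{
	tv->tv_sec  = t >> 32;
	/* fraction * 10^6 / 2^32, as one 64-bit multiply and a shift */
	tv->tv_usec = ((t & 0xffffffffULL) * 1000000ULL) >> 32;
}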
Regards,
Daniel