On Dec 2, 2005, at 09:43:45, Roman Zippel wrote:
> Hi,
> On Thu, 1 Dec 2005, David Lang wrote:
>> In addition, once you remove the bulk of these uses from the
>> picture (by making them use a new timer type that's optimized for
>> their usage pattern, the 'unlikely to expire' case), the remainder
>> of the timer users easily fall into the category where the timer is
>> expected to expire, so that code can accept a performance hit for
>> removing events prior to them going off that would not be
>> acceptable in a general-case version.
[snip: timer wheel is efficient for lots of adds and removes, timer tree is not]
> This means timers which run for less than 2^14 jiffies are better
> off using the timer wheel, unless they require the higher
> resolution of the new timer system.
PRECISELY!!! The point is to provide a new and more flexible API,
either as two different sets of timer manipulation functions (one for
the timer wheel and one for the timer tree) or as a single set with
multiple backends, and migrate old timers to the new system,
reclassifying them based on each timer's needs. Some
timers/timeouts/whatevers don't care much about delivery accuracy and
could be placed into a much smaller and more efficient timer wheel
with only quarter-second or half-second accuracy, and maybe even
coalesced to one-or-two-second boundaries without harming
functionality, because they are almost certainly going to be removed
before they run. This would be a _big_ help in tickless systems,
where we can schedule a bunch of networking timers to all expire
simultaneously, keeping caches hot and allowing longer sleep times.
Likewise, we would port some to the new API such that they just use
the same old ordinary timer wheel. Other timers that want highres
guarantees would use the highres part of the kernel API (either flags
in a structure or a separate set of functions) and would be added to
a slow but very accurate timer tree.
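As a rough sketch of the coalescing idea (plain userspace C, not
kernel code; HZ and the jiffies type are just stand-ins here): a
low-accuracy timeout's expiry gets rounded up to the next whole-second
boundary, so a batch of such timeouts fires in one wakeup.

/*
 * Sketch only: illustrates coalescing coarse timeouts onto second
 * boundaries.  HZ and jiffies_t mirror kernel conventions but are
 * assumptions for this example, not real kernel interfaces.
 */
#include <stdio.h>

#define HZ 1000                 /* assumed tick rate for the example */

typedef unsigned long jiffies_t;

/* Round an expiry up to the next whole-second boundary. */
static jiffies_t coalesce_to_second(jiffies_t expires)
{
        return ((expires + HZ - 1) / HZ) * HZ;
}

int main(void)
{
        /* Three "network timers" requested at slightly different times... */
        jiffies_t raw[] = { 4123, 4377, 4850 };

        for (int i = 0; i < 3; i++)
                printf("raw %lu -> coalesced %lu\n",
                       raw[i], coalesce_to_second(raw[i]));

        /* ...all three now expire at jiffy 5000: one wakeup, hot caches,
         * and a tickless system gets to sleep longer in between. */
        return 0;
}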
The fact remains that we have two reasonably useful internal timer
structures: one optimized for lots of timers being added and removed
frequently, but with poor accuracy; the other poor at handling a
million timers and at adding and removing them, but with excellent
accuracy. We should come up with a set of recommendations for when
to use each interface. The _best_ way to explain that to most kernel
developers who don't really understand the guts of it is:
1) If you need high resolution and you add the timer and let it
expire normally, use the ktimer/whatever API.
2) If you just want to time out an operation or fail when something
doesn't happen, or you have a timer that doesn't care about accuracy,
use the ktimeout/whatever2 API (see the sketch below).
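To make the rule of thumb concrete, here is a purely hypothetical
sketch: none of these ktimer_*/ktimeout_* names or signatures exist
anywhere, they are stand-ins for whatever the two APIs end up being
called, and the stubs are only there so the sketch compiles as plain
C. Only the choice between the two is the point.

#include <stddef.h>

#define HZ 1000

struct ktimer   { void (*fn)(void *); void *data; };  /* high-res, tree-backed */
struct ktimeout { void (*fn)(void *); void *data; };  /* coarse, wheel-backed  */

/* Stub API -- assumptions for illustration, not real kernel interfaces. */
static void ktimer_start_ns(struct ktimer *t, unsigned long ns)     { (void)t; (void)ns; }
static void ktimeout_add(struct ktimeout *t, unsigned long jiffies) { (void)t; (void)jiffies; }
static void ktimeout_cancel(struct ktimeout *t)                     { (void)t; }

static struct ktimer   sample_timer;  /* rule 1: needs accuracy, will expire     */
static struct ktimeout req_timeout;   /* rule 2: guards an op, usually cancelled */

static void sample_expired(void *d)    { (void)d; /* do the periodic work */ }
static void request_timed_out(void *d) { (void)d; /* fail the request     */ }

int main(void)
{
        /* 1) Accurate and expected to fire: use the high-res (tree) API. */
        sample_timer.fn = sample_expired;
        ktimer_start_ns(&sample_timer, 250 * 1000UL);   /* 250 us */

        /* 2) Timeout on an operation, almost always cancelled first:
         *    use the cheap, coarse (wheel) API. */
        req_timeout.fn = request_timed_out;
        ktimeout_add(&req_timeout, 5 * HZ);             /* ~5 seconds */
        /* ... operation completes ... */
        ktimeout_cancel(&req_timeout);                  /* common fast path */
        return 0;
}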
> So can we please stop this likely/unlikely expiry nonsense? It's
> great if you want to tell aunt Tillie about kernel hacking, but
> it's terrible advice to kernel programmers. When it comes to
> choosing a timer implementation, the delivery is completely and
> utterly unimportant.
The fact is, we have a _lot_ of timers, a _lot_ of kernel hackers,
and we need some easy way to tell people which of two subsystems to
use. The fact that the likely/unlikely stuff is easy to tell aunt
Tillie is precisely what makes it useful to tell kernel hackers with
a half-million other things on their minds. Hopefully it will be
easy enough to understand that when they get around to using timers
for something or another, they'll pick the right API for their task.
Cheers,
Kyle Moffett