Alan,
On Tue, 2007-07-10 at 14:59 -0400, Alan Stern wrote:
> Thomas:
>
> Here's a question for you or anyone else who can help.
>
> I've got a low-precision kernel timer, with a delay measured in seconds
> (and rounded off to a second boundary). Under some circumstances the
> timer might be cancelled and restarted many times in quick succession
> (a few thousand times perhaps). Alternatively the timer could simply
> be allowed to expire and then restarted, with the callback routine
> doing a rather small amount of work.
>
> Which is the most efficient? Or to put it another way, how many times
> can I cancel and restart a low-precision timer before it uses up as
> much CPU time as allowing the timer to expire once?
Hard to tell.
> Is there a reasonable way to answer this? I can't think of any good
> tests. Or is the difference in overhead so small as to be meaningless?
The insertion/deletion needs to take timer->base->lock, but this is
cheap as long as the insert and the cancel happen on the same CPU.
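For illustration, a minimal sketch of that cancel/restart pattern with
the 2.6-era timer API (the names my_timer/my_timeout and the 5 second
delay are made up for the example, not taken from your code):

	#include <linux/timer.h>
	#include <linux/jiffies.h>

	static struct timer_list my_timer;

	/* small amount of work on expiry */
	static void my_timeout(unsigned long data)
	{
	}

	static void my_init(void)
	{
		setup_timer(&my_timer, my_timeout, 0);
	}

	/*
	 * Cancel-and-restart in one step: if the timer is pending,
	 * mod_timer() deletes and re-inserts it under
	 * timer->base->lock, so doing this a few thousand times in
	 * quick succession is still O(1) per call.
	 */
	static void my_restart(void)
	{
		mod_timer(&my_timer, round_jiffies(jiffies + 5 * HZ));
	}

round_jiffies() gives you the rounding to a second boundary you
mentioned.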
The other overhead that might be "visible" is when the timer needs to
be re-cascaded in the wheel. See the table below:
	level  slots  HZ=100   HZ=250   HZ=1000
	 [1]    256     10 ms     4 ms     1 ms
	 [2]     64   2560 ms  1024 ms   256 ms
	 [3]     64    164 s     66 s     16 s
	 [4]     64    175 m     70 m     17 m
	 [5]     64    186 h     75 h     19 h
In the 250 HZ case a timer shorter than 1024 ms is never re-cascaded.
A timer shorter than 66 s is re-cascaded at most once.
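If you want to see where the table comes from: with the classic wheel
geometry (256 slots in the bottom level, 64 in each of the four upper
ones), level 1 has a granularity of 1 jiffy and level n a granularity
of 256 * 64^(n-2) jiffies for n >= 2. A quick userspace sketch (mine,
just for illustration) that reproduces the numbers:

	#include <stdio.h>

	int main(void)
	{
		const int hz[] = { 100, 250, 1000 };
		unsigned long long jiffies = 1;	/* level 1: 1 jiffy */

		for (int level = 1; level <= 5; level++) {
			printf("[%d]", level);
			for (int i = 0; i < 3; i++)
				printf("  %12.2f s",
				       (double)jiffies / hz[i]);
			printf("\n");
			/* 256 slots below level 2, 64 below the rest */
			jiffies *= (level == 1) ? 256 : 64;
		}
		return 0;
	}

It prints everything in seconds; 163.84 s is the 164 s in row [3],
10485.76 s the 175 m in row [4], and so on.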
So I guess your frequent cancel/restart scheme is just fine. It does
not re-trigger any kind of hardware event, and the insertion/deletion
is O(1).
The network code relies on this cheap mechanism on highly loaded
server machines.
Hope that helps,
tglx