Ingo Molnar wrote:
> * Jeff Garzik <[email protected]> wrote:
>> Tasklets fill a niche not filled by either workqueues (slower,
>> requiring context switches, and possibly much latency if all wq's
>> processes are active) [...]
> ... workqueues are also possibly much more scalable (percpu workqueues
> are easy without changing anything in your code but the call where you
> create the workqueue).
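
For illustration, a minimal sketch of the switch Ingo describes, assuming the 2.6.20-era workqueue API ("mydrv" and the function names below are hypothetical):

#include <linux/init.h>
#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *mydrv_wq;	/* hypothetical driver */

static void mydrv_work_fn(struct work_struct *work)
{
	/* deferred processing runs here, in process context */
}
static DECLARE_WORK(mydrv_work, mydrv_work_fn);

static int __init mydrv_init(void)
{
	/* one worker thread total ... */
	mydrv_wq = create_singlethread_workqueue("mydrv");
	/* ... vs. one worker thread per CPU -- only this call changes:
	 * mydrv_wq = create_workqueue("mydrv");
	 */
	if (!mydrv_wq)
		return -ENOMEM;
	queue_work(mydrv_wq, &mydrv_work);	/* callers stay the same */
	return 0;
}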
All that scalability is just overhead, and overkill, for what
tasklets/softirqs are used for.
> the context-switch argument i'll believe if i see numbers. You'll
> probably need in excess of tens of thousands of irqs/sec to even be able
> to measure its overhead. (workqueues are driven by nice kernel threads
> so there's no TLB overhead, etc.)
As Alexey said... I would have thought YOU needed to provide numbers,
rather than just handwaving as justification for tasklet removal.
> the only remaining argument is latency: but workqueues are already
> pretty high-prio (with a default priority of nice -5) - and you can
> increase it even further. You can make it SCHED_FIFO prio 98 if latency
> is so important.
You skipped the very relevant latency killer: N threads in wq, and you
submit the (N+1)th task.
I just cannot see how that is acceptable replacement for a network
driver that uses tasklets. Who wants to wait that long for packet RX or TX?
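
To make the latency concern concrete, a hypothetical sketch (made-up names, 2.6.20-era API): on a given CPU each workqueue has one worker thread, so work queued behind a slow item waits for it to finish:

#include <linux/delay.h>
#include <linux/workqueue.h>

static void slow_work_fn(struct work_struct *work)
{
	msleep(100);		/* stands in for any long-running deferred job */
}
static void rx_work_fn(struct work_struct *work)
{
	/* packet RX processing would start here -- up to ~100 ms later */
}
static DECLARE_WORK(slow_work, slow_work_fn);
static DECLARE_WORK(rx_work, rx_work_fn);

static void demo(struct workqueue_struct *wq)
{
	queue_work(wq, &slow_work);
	queue_work(wq, &rx_work);	/* runs only after slow_work_fn returns */
}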
> Tasklets on the other hand are _unconditionally_
> high-priority. So this argument is more of an arms-race argument: "i
> want _my_ processing to be done immediately!". The fact that workqueues
> can be preempted and that their priorities can be adjusted flexibly is
> an optional _bonus_, not a disadvantage. If low-prio workqueues hurt
> your workflow, make them high-prio.
How about letting us stick with a solution that is WORKING now?
Of course tasklets are unconditionally high priority. So are hardirqs.
So are softirqs. This is not a problem, this is an expected and
assumed-upon feature of the system.
>> And moving code -back- into hardirq is just the wrong thing to do,
>> usually.
>
> agreed - except if the in-tasklet processing is really thin and there's
> already a softirq layer in the workflow. (which was the case for the
> example that was cited.) In such a case moving either to the hardirq or
> to the softirq looks like the right thing - instead of the tasklet
> intermediary.
Wrong, for all the examples I care about -- drivers. Network drivers in
particular. Just look at the comment in include/linux/interrupt.h if it
wasn't clear:
/* PLEASE, avoid to allocate new softirqs, if you need not _really_ high
frequency threaded job scheduling. For almost all the purposes
tasklets are more than enough. F.e. all serial device BHs et
al. should be converted to tasklets, not to softirqs.
*/
There is a good reason for this advice, as hinted at by the code
immediately following the comment:
enum
{
	HI_SOFTIRQ=0,
	TIMER_SOFTIRQ,
	NET_TX_SOFTIRQ,
	NET_RX_SOFTIRQ,
	BLOCK_SOFTIRQ,
	TASKLET_SOFTIRQ,
	SCHED_SOFTIRQ,
#ifdef CONFIG_HIGH_RES_TIMERS
	HRTIMER_SOFTIRQ,
#endif
};
softirqs cannot really be used by drivers, because they are not modular.
They are a scarce resource in any case.
Guess what? All this is why we have tasklets.
tasklet != workqueue
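
For reference, a minimal sketch of the driver-side pattern being defended here, assuming the 2.6-era tasklet and IRQ-handler interfaces (all names are hypothetical): the hardirq handler does the bare minimum and defers the rest to a tasklet, which runs in softirq context soon afterwards on the same CPU, with no thread wakeup or context switch.

#include <linux/interrupt.h>

static void mydrv_rx_tasklet(unsigned long data)
{
	/* drain the RX ring and hand packets up the stack */
}
static DECLARE_TASKLET(mydrv_tasklet, mydrv_rx_tasklet, 0);

static irqreturn_t mydrv_interrupt(int irq, void *dev_id)
{
	/* ack the hardware, then defer the heavy lifting */
	tasklet_schedule(&mydrv_tasklet);
	return IRQ_HANDLED;
}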
Jeff