David Miller wrote:
> From: Jeff Garzik <[email protected]>
> Date: Mon, 08 Oct 2007 10:22:28 -0400
>
>> In terms of overall parallelization, both for TX as well as RX, my gut
>> feeling is that we want to move towards an MSI-X, multi-core friendly
>> model where packets are LIKELY to be sent and received by the same set
>> of [cpus | cores | packages | nodes] that the [userland] processes
>> dealing with the data run on.
>
> The problem is that the packet schedulers want global guarantees
> on packet ordering, not flow centric ones.
>
> That is the issue Jamal is concerned about.
Oh, absolutely.
I think, fundamentally, any amount of cross-flow resource management
done in software is an obstacle to concurrency.
That's not a value judgement, just a statement of fact.
"traffic cops" are intentional bottlenecks we add to the process, to
enable features like priority flows, filtering, or even simple socket
fairness guarantees. Each of those bottlenecks serves a valid purpose,
but at the end of the day, it's still a bottleneck.
So, improving concurrency may require turning off useful features that
nonetheless hurt concurrency.
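
To make that concrete, here is a deliberately simplified, self-contained
sketch of the serialization point. All names here are hypothetical, and a
pthread mutex stands in for the device queue lock; this is an illustration
of the idea, not the actual kernel code path:

/*
 * Hypothetical, simplified sketch; not the real dev_queue_xmit() path.
 * The point: one device-wide lock and one software scheduler mean every
 * transmitting CPU funnels through the same spot, no matter how many
 * hardware TX rings sit underneath.  That is the "traffic cop".
 */
#include <pthread.h>

struct sketch_skb;			/* stand-in for struct sk_buff */

struct sketch_qdisc {
	int (*enqueue)(struct sketch_qdisc *q, struct sketch_skb *skb);
	struct sketch_skb *(*dequeue)(struct sketch_qdisc *q);
	/* ... priority bands, filters, fairness state, ... */
};

struct sketch_netdev {
	pthread_mutex_t queue_lock;	/* one lock for the whole device */
	struct sketch_qdisc *qdisc;	/* one scheduler for the whole device */
	int (*hard_start_xmit)(struct sketch_netdev *dev,
			       struct sketch_skb *skb);
};

/* Every CPU sending through this device serializes here. */
static int sketch_dev_xmit(struct sketch_netdev *dev, struct sketch_skb *skb)
{
	struct sketch_skb *next;
	int ret;

	pthread_mutex_lock(&dev->queue_lock);
	ret = dev->qdisc->enqueue(dev->qdisc, skb);

	/* Dequeue in globally scheduled order: this is what buys
	 * priorities and fairness, and also what kills concurrency. */
	while ((next = dev->qdisc->dequeue(dev->qdisc)) != NULL)
		dev->hard_start_xmit(dev, next);

	pthread_mutex_unlock(&dev->queue_lock);
	return ret;
}

Anything that has to look at all flows at once (priorities, fairness,
shaping) fundamentally lives inside that lock.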
> The more I think about it, the more inevitable it seems that we really
> might need multiple qdiscs, one for each TX queue, to pull this full
> parallelization off.
>
> But the semantics of that don't smell so nice either. If the user
> attaches a new qdisc to "ethN", does it go to all the TX queues, or
> what?
>
> All of the traffic shaping technology deals with the device as a unary
> object. It doesn't fit multi-queue at all.
Well, the easy solutions to networking concurrency are:
* use virtualization to carve up the machine into chunks
* use multiple net devices
Since new NIC hardware is actively trying to be friendly to
multi-channel/virt scenarios, either of these is reasonably
straightforward given the current state of the Linux net stack. Using
multiple net devices is especially attractive because it works very well
with the existing packet scheduling.
Both unfortunately impose a burden on the developer and admin, who must
force their apps to distribute flows across multiple [VMs | net devs].
The third alternative is to use a single net device with SMP-friendly
packet scheduling. Here you run into the problems you described
("device as a unary object", etc.) with the current infrastructure.
With multiple TX rings, consider that we are pushing the packet
scheduling from software to hardware... which implies
* hardware-specific packet scheduling
* some TC/shaping features not available, because the hardware doesn't
  support them
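
To sketch where that leads, here is a hypothetical, illustration-only view
of the "one scheduler per TX ring" direction (not existing kernel code; all
names are made up). Each flow keeps its own ordering on its own ring, but
any global, cross-ring priority or shaping guarantee now has to come from
the hardware's own arbitration between rings, or not at all:

/*
 * Hypothetical per-ring transmit sketch, for illustration only.
 */
#include <stdint.h>

#define SKETCH_NR_TX_RINGS	8

struct sketch_skb;			/* stand-in for struct sk_buff */

struct sketch_tx_ring {
	/* per-ring lock and per-ring queue: CPUs sending different flows
	 * no longer contend with each other in software */
	int (*xmit)(struct sketch_tx_ring *ring, struct sketch_skb *skb);
};

/* Hypothetical flow hash: same flow, same ring, per-flow ordering only. */
static unsigned int sketch_pick_ring(uint32_t flow_hash)
{
	return flow_hash % SKETCH_NR_TX_RINGS;
}

static int sketch_mq_xmit(struct sketch_tx_ring rings[SKETCH_NR_TX_RINGS],
			  struct sketch_skb *skb, uint32_t flow_hash)
{
	struct sketch_tx_ring *ring = &rings[sketch_pick_ring(flow_hash)];

	/* No device-wide software enqueue/dequeue step here: whatever
	 * priority or shaping policy exists across rings is now the NIC's
	 * job, which is exactly where "hardware-specific" and "feature
	 * not supported" come from. */
	return ring->xmit(ring, skb);
}

The tradeoff is visible in the sketch: ordering and fairness become
per-ring properties, so anything like a device-wide priority band has to
be re-expressed in whatever the NIC's scheduler can actually do.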
Jeff