On 8/3/06, Evgeniy Polyakov <[email protected]> wrote:
> On Fri, Aug 04, 2006 at 03:59:37PM +1000, Herbert Xu ([email protected]) wrote:
> > Interesting. Could you guys post figures on alloc_page speed vs. kmalloc?
> They probably measured kmalloc cache access, which only falls back to
> alloc_pages when the cache is refilled, so it will be faster for some
> short period of time, but in general (especially for such large
> allocations) it is essentially the same.
I think you're right about that. In particular, I think Jesse was
looking at the impact that changing the driver's buffer allocation
method would have on 1500-byte MTU users. With a running network
driver you should see lots of fixed-size allocations hitting the slab
cache, only occasionally falling through to alloc_pages. If you
replace that with a call to alloc_pages for every packet that gets
received, it's a performance hit.
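
To make that concrete, here's a rough sketch of the two refill
strategies being compared. It's illustrative only -- the helper names
are mine, not from any posted patch -- but it shows where the slab
cache sits between the driver and the page allocator:

	/*
	 * Illustrative only: (a) is the usual fixed-size, slab-backed skb
	 * allocation, which only hits the page allocator when the kmalloc
	 * cache needs refilling; (b) goes to alloc_page() for every buffer.
	 */
	#include <linux/skbuff.h>
	#include <linux/gfp.h>

	/* (a) slab-backed receive buffer for a 1500-byte MTU */
	static struct sk_buff *rx_refill_slab(unsigned int rx_buf_len)
	{
		struct sk_buff *skb = dev_alloc_skb(rx_buf_len + NET_IP_ALIGN);

		if (skb)
			skb_reserve(skb, NET_IP_ALIGN);	/* align the IP header */
		return skb;
	}

	/* (b) page-backed receive buffer: one page per packet */
	static struct page *rx_refill_page(void)
	{
		return alloc_page(GFP_ATOMIC);
	}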
So how many skb allocation schemes do you code into a single driver?
Kmalloc everything, page-alloc everything, or a combination of kmalloc
and page buffers for hardware that does header split? That's three
versions of the driver's receive processing and skb allocation that
need to be maintained.
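
Something like the following is what the receive path ends up looking
like. All the names here (the enum, struct and helpers) are invented
for illustration, not taken from e1000 or any other driver; the point
is just that every case is a separate refill/completion path to
maintain:

	#include <linux/errno.h>

	/* Hypothetical example only -- made-up names illustrating the
	 * three allocation schemes, not code from an actual driver. */
	enum rx_alloc_mode {
		RX_ALLOC_KMALLOC,	/* whole buffer kmalloc'd (slab cache)      */
		RX_ALLOC_PAGES,		/* whole buffer from alloc_page()           */
		RX_ALLOC_SPLIT,		/* kmalloc'd header + pages for the payload */
	};

	struct my_adapter {
		enum rx_alloc_mode rx_mode;
	};

	/* Each of these would be a separate, fully maintained allocation path. */
	static int rx_alloc_kmalloc(struct my_adapter *a)      { return 0; }
	static int rx_alloc_pages(struct my_adapter *a)        { return 0; }
	static int rx_alloc_header_split(struct my_adapter *a) { return 0; }

	static int rx_alloc_buffer(struct my_adapter *a)
	{
		switch (a->rx_mode) {
		case RX_ALLOC_KMALLOC:
			return rx_alloc_kmalloc(a);
		case RX_ALLOC_PAGES:
			return rx_alloc_pages(a);
		case RX_ALLOC_SPLIT:
			return rx_alloc_header_split(a);
		}
		return -EINVAL;
	}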
> > Also, getting memory slower is better than not getting them at all :)
Yep.
- Chris