Re: Avoiding external fragmentation with a placement policy Version 12

On Fri, 2005-06-03 at 06:57 -0700, Martin J. Bligh wrote:
> 
> >>> Actually, even with TSO enabled, you'll get large order
> >>> allocations, but for receive packets, and these allocations
> >>> happen in software interrupt context.
> >> 
> >> Sounds like we still need to cope then ... ?
> > 
> > Sure. Although we should try to not use higher order allocs if
> > possible of course. Even with a fallback mode, you will still be
> > putting more pressure on higher order areas and thus degrading
> > the service for *other* allocators, so such schemes should
> > obviously be justified by performance improvements.
> 
> My point is that outside of a benchmark situation (where we just
> rebooted the machine to run a test) you will NEVER get an order 4
> block free anyway, so it's pointless.
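
Just to make that fallback idea concrete, here's a rough sketch, purely
for illustration and not anything in the tree, of what such a scheme
might look like on the receive side: try the higher order first, and
drop back to order-0 pages when the buddy allocator can't provide it.
alloc_pages(), GFP_ATOMIC and __GFP_NOWARN are the real interfaces; the
helper name and the choice of order 4 are made up.

	#include <linux/gfp.h>
	#include <linux/mm.h>

	/*
	 * Illustrative only: try an order-4 block (64kB with 4kB pages)
	 * for the receive buffer, and fall back to a single page if the
	 * buddy allocator can't satisfy it.  GFP_ATOMIC because the
	 * receive path runs in softirq context.
	 */
	static struct page *rx_alloc_pages(unsigned int *order_out)
	{
		struct page *page;
		unsigned int order = 4;

		page = alloc_pages(GFP_ATOMIC | __GFP_NOWARN, order);
		if (!page) {
			order = 0;
			page = alloc_pages(GFP_ATOMIC, order);
		}
		if (page)
			*order_out = order;
		return page;
	}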

I ran a little test overnight on a 16GB i386 system.

	./nc -l -p 9999 & cat /dev/zero | ./nc localhost 9999

It pushed around 200MB of traffic through lo.  Is that (relatively low)
transmission rate due to having to kick off kswapd any time it wants to
send a packet?

partial /proc/meminfo and /proc/buddyinfo before (buddyinfo columns are
counts of free blocks of order 0 through 10):
MemTotal:     16375212 kB
MemFree:        214248 kB
HighTotal:    14548952 kB
HighFree:       198272 kB
LowTotal:      1826260 kB
LowFree:         15976 kB
Cached:       14415800 kB

Node 0, zone      DMA    217     35      2      1      1      1      1      0      1      1      1
Node 0, zone   Normal   7236   3020   3885    104      7      0      0      0      0      0      1
Node 0, zone  HighMem     18    503      0      0      1      0      0      1      0      0      0

partial /proc/meminfo and /proc/buddyinfo after:
MemTotal:     16375212 kB
MemFree:      13471604 kB
HighTotal:    14548952 kB
HighFree:     13450624 kB
LowTotal:      1826260 kB
LowFree:         20980 kB
Cached:         972988 kB

Node 0, zone      DMA      1      0      1      1      1      1      1      0      1      1      1
Node 0, zone   Normal   1488     52     10     66      7      0      0      0      0      0      1
Node 0, zone  HighMem   1322   3541   3165  20611  20651  14062   8054   5400   2643    664    169

There was surely plenty of other stuff going on, but it looks like the
page cache sitting in ZONE_HIGHMEM got eaten (Cached fell from about
14GB to under 1GB), leaving that zone with plenty of large contiguous
areas free.  This probably shows the collateral damage when kswapd goes
randomly shooting down pages.  Are those loopback allocations
GFP_KERNEL?
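
In case it's useful, here's a quick userspace sketch that turns those
buddyinfo rows into a per-zone free total plus the largest order that
still has a free block.  It's purely illustrative and only assumes 4kB
pages and the 11-column (order 0 through 10) /proc/buddyinfo layout
shown above, as on this i386 box.

	#include <stdio.h>

	#define MAX_ORDER 11	/* columns: orders 0 through 10 */
	#define PAGE_KB    4	/* 4kB pages on this box */

	int main(void)
	{
		char node[32], zone[32];
		FILE *f = fopen("/proc/buddyinfo", "r");

		if (!f) {
			perror("/proc/buddyinfo");
			return 1;
		}

		/* each row: "Node N, zone NAME  c0 c1 ... c10" */
		while (fscanf(f, " Node %31[^,], zone %31s", node, zone) == 2) {
			unsigned long count, free_kb = 0;
			int order, largest = -1;

			for (order = 0; order < MAX_ORDER; order++) {
				if (fscanf(f, "%lu", &count) != 1)
					break;
				free_kb += count * (PAGE_KB << order);
				if (count)
					largest = order;
			}
			printf("Node %s zone %-8s %9lu kB free, largest free order %d\n",
			       node, zone, free_kb, largest);
		}
		fclose(f);
		return 0;
	}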

-- Dave

