>> > Not every time there is a request for higher-order pages - that
>> > would surely defeat the purpose of pcps. My suggestion is to drain
>> > only when the global pool is unable to service the request. In the
>> > pathological case where higher-order and zero-order requests
>> > alternate, you could see thrashing, with pages moving to the pcp
>> > only to move straight back to the global list.
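(If I'm reading the suggestion right, roughly this - a sketch only; the
helper names are made up, not the real mm/page_alloc.c interfaces:)

	/* Fall back to draining the local pcp lists only when the
	 * buddy (global) free lists cannot satisfy the request. */
	static struct page *alloc_with_pcp_fallback(struct zone *zone,
						    unsigned int order)
	{
		struct page *page;

		page = rmqueue_from_buddy(zone, order);	/* hypothetical */
		if (page || order == 0)
			return page;

		/* The global pool is short on contiguity: give the pcp
		 * pages back so order-0 buddies can coalesce, retry once. */
		drain_local_pcp(zone);			/* hypothetical */
		return rmqueue_from_buddy(zone, order);
	}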
>>
>> OK, that seems fair enough. But there are multiple "harder and harder"
>> attempts within __alloc_pages to do that ... which one are you going
>> for? Just before we OOM / fail the allocation? That would be hard to
>> argue with, though I'm unsure what the locking is for dumping out
>> other CPUs' queues - are you going to IPI them all and ask them to do
>> it? That would seem to race against them refilling (as you mention).
>>
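(For the record, the pcp lists are only ever touched with interrupts off
on the owning CPU, so a remote drain pretty much has to be a cross-call.
Sketch below; the drain helper names are invented, on_each_cpu() is real:)

	/* Runs on every CPU in IPI / function-call context. */
	static void drain_pcp_handler(void *info)
	{
		struct zone *zone = info;
		unsigned long flags;

		local_irq_save(flags);
		free_local_pcp_to_buddy(zone);	/* hypothetical helper */
		local_irq_restore(flags);
	}

	static void drain_all_pcp(struct zone *zone)
	{
		/* IPIs every CPU and waits, but nothing stops a CPU from
		 * refilling its lists the moment the handler returns -
		 * hence the refill race mentioned above. */
		on_each_cpu(drain_pcp_handler, zone, 1);
	}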
>
> I'm thinking of initiating this drain operation after the swapper
> daemon has been woken up. Hopefully that will allow any other available
> pages to be put back on the freelist and reduce the possible thrashing
> of pages between the free memory pool and the pcps.
OK, but waking up kswapd doesn't indicate a low-memory condition.
It's standard procedure ... we have to wake it up whenever we dip
below the high watermarks. Perhaps just before dropping into direct
reclaim would be more appropriate?
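(Schematically, something like this - try_alloc(), drain_local_pcp()
and direct_reclaim_and_retry() are stand-ins, only wakeup_kswapd() is
the real function:)

	static struct page *slow_path(struct zone *zone, unsigned int order)
	{
		struct page *page;

		page = try_alloc(zone, order);
		if (page)
			return page;

		/* Routine: kswapd is woken whenever we dip below the
		 * high watermark, so this alone is not "low memory". */
		wakeup_kswapd(zone, order);
		page = try_alloc(zone, order);
		if (page)
			return page;

		/* Proposed hook: a cheap, local drain before we pay
		 * for direct reclaim. */
		drain_local_pcp(zone);
		page = try_alloc(zone, order);
		if (page)
			return page;

		/* Direct reclaim and, eventually, OOM follow. */
		return direct_reclaim_and_retry(zone, order);
	}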
> As a first step, I will be draining the local CPU's pcp. An IPI or
> lazy purging of the pcps could be used as a very last resort to drain
> the other CPUs' pcps in scenarios where nothing else has managed to
> free up pages. Under such extreme low-memory conditions I'm not sure
> we should worry about thrashing any more than about free pages lying
> around unused.
Sounds fair.
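(i.e. the escalation would look roughly like this, reusing the
hypothetical helpers from the sketches above:)

	page = try_alloc(zone, order);
	if (!page) {
		drain_local_pcp(zone);		/* step 1: cheap, no IPIs */
		page = try_alloc(zone, order);
	}
	if (!page) {
		drain_all_pcp(zone);		/* step 2: last resort,   */
		page = try_alloc(zone, order);	/* IPI every CPU          */
	}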
>> >> Could you elaborate on what the benefits were from this change in the
>> >> first place? Some page colouring thing on ia64? It seems to have way more
>> >> downside than upside to me.
>> >
>> > The original change was to try to allocate a higher-order page to
>> > service a batch-sized bulk request, in the hope that better physical
>> > contiguity would spread the data more evenly across big caches.
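(A sketch of what that change amounts to - split_into_order0() and
rmqueue_from_buddy() are invented here, get_order() and the pcp
batch/count/list fields are as in mm/page_alloc.c:)

	/* Refill the pcp list from one higher-order block instead of
	 * 'batch' separate order-0 allocations, so the cached pages
	 * are physically contiguous. */
	static int refill_pcp_contig(struct zone *zone,
				     struct per_cpu_pages *pcp)
	{
		unsigned int order = get_order(pcp->batch * PAGE_SIZE);
		struct page *page = rmqueue_from_buddy(zone, order);

		if (!page)
			return -ENOMEM;	/* fall back to order-0 refill */

		/* Chopping one large block into 1 << order single pages
		 * is exactly what eats big free blocks - hence the
		 * fragmentation concern. */
		split_into_order0(page, order, &pcp->list);
		pcp->count += 1 << order;
		return 0;
	}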
>>
>> OK ... but it has an impact on fragmentation. How much benefit are you
>> getting?
>
> The benefit is reduced run-to-run variation in the performance (and
> expected throughput) of certain workloads on the same kernel.
Mmmm. How much are you talking about in terms of throughput, and on
what platforms? All previous attempts to measure page colouring seemed
to indicate it did nothing at all - maybe some specific types of h/w
are more susceptible?
M.