Balbir Singh wrote:
[snip]
>> And what about a hard limit - how would you fail a page fault in
>> case of a limit hit? SIGKILL/SEGV is not an option - in this case we
>> should run synchronous reclamation. This is done in the beancounter
>> v6 patches we sent recently.
>>
>
> I thought about running synchronous reclamation, but then did not follow
> that approach; I was not sure if calling the reclaim routines from the
> page fault context is a good thing to do. It's worth trying out, since
Each page fault can already trigger reclamation, because the required
page is allocated with the __GFP_IO | __GFP_FS bits set. Synchronous
reclamation in the page fault path is perfectly normal.
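
A minimal sketch of what that could look like, assuming a hypothetical
struct mem_container and a container_reclaim_pages() helper (neither is
the actual beancounter or RSS-controller API). The only point is that the
charge path may block and reclaim synchronously when the caller's gfp_mask
allows __GFP_IO | __GFP_FS work, just as __alloc_pages() falls into direct
reclaim for the global case:

#include <linux/gfp.h>
#include <linux/errno.h>

/* hypothetical per-container accounting, for illustration only */
struct mem_container {
        unsigned long usage;    /* pages currently charged */
        unsigned long limit;    /* hard limit, in pages */
};

/* assumed helper: synchronously reclaim up to @nr pages charged to @cnt */
extern unsigned long container_reclaim_pages(struct mem_container *cnt,
                                             unsigned long nr, gfp_t gfp_mask);

static int container_charge(struct mem_container *cnt, gfp_t gfp_mask)
{
        int retries = 3;

        while (cnt->usage + 1 > cnt->limit) {
                /* only block in reclaim if the fault may do I/O and FS work */
                if (!(gfp_mask & (__GFP_IO | __GFP_FS)) || retries-- == 0)
                        return -ENOMEM;
                container_reclaim_pages(cnt, cnt->usage + 1 - cnt->limit,
                                        gfp_mask);
        }
        cnt->usage++;
        return 0;
}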
[snip]
>> Please correct me if I'm wrong, but does this reclamation work like
>> "run over all the zones' lists searching for page whose controller
>> is sc->container" ?
>>
>
> Yeah, that's correct. The code can also reclaim memory from all over-the-limit
OK. What if I have a container with a 100-page limit on a 4GB
(~a million pages) machine and this group starts reclaiming
its pages? If this group uses its pages heavily, they will be
near the head of the LRU list, and the reclamation code would
have to scan through all (a million) pages before it finds the
proper ones. This is not optimal!
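
To make the concern concrete, here is a rough sketch (not the posted
patch) of a scan that walks the single global LRU from the tail and
filters pages by owner. struct scan_control with the ->container field
from the patch under discussion is assumed to be in scope, and
page_container() and try_to_reclaim_page() are made-up names:

/* illustrative only: why filtering the global LRU is O(total pages) */
static unsigned long
scan_global_lru_for_container(struct list_head *lru, struct scan_control *sc,
                              unsigned long nr_to_reclaim)
{
        struct page *page, *next;
        unsigned long scanned = 0, reclaimed = 0;

        list_for_each_entry_safe_reverse(page, next, lru, lru) {
                scanned++;                      /* grows with the whole LRU... */
                if (page_container(page) != sc->container)
                        continue;               /* ...even for foreign pages */
                if (try_to_reclaim_page(page))  /* assumed helper */
                        reclaimed++;
                if (reclaimed >= nr_to_reclaim)
                        break;
        }
        return reclaimed;
}

A small, hot container ends up paying for scanning everyone else's pages
before it ever sees its own.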
> containers (by passing SC_OVERLIMIT_ALL). The idea behind using such a scheme
> is to ensure that the global LRU list is not broken.
isolate_lru_pages() helps with this. As far as I remember, it
was introduced to reduce LRU lock contention and keep the LRU
lists' integrity.
In the beancounters patches it is used to shrink a BC's pages.
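
For reference, a sketch of that isolate-then-putback pattern, modelled
loosely on the isolate_lru_pages() idea from mm/vmscan.c (which is static
there) and not copied from the beancounter patches; bc_page_owned() and
bc_reclaim_page() are assumed helpers:

#include <linux/mm.h>
#include <linux/list.h>
#include <linux/spinlock.h>

static unsigned long bc_shrink_zone(struct zone *zone, struct list_head *src,
                                    void *bc, unsigned long nr_to_scan)
{
        LIST_HEAD(isolated);
        LIST_HEAD(putback);
        struct page *page, *next;
        unsigned long reclaimed = 0;

        /* pull the BC's pages off the global LRU in one batch, under the lock */
        spin_lock_irq(&zone->lru_lock);
        list_for_each_entry_safe_reverse(page, next, src, lru) {
                if (!nr_to_scan--)
                        break;
                if (!bc_page_owned(page, bc))   /* assumed owner check */
                        continue;
                list_move(&page->lru, &isolated);
        }
        spin_unlock_irq(&zone->lru_lock);

        /* work on the isolated pages without holding the LRU lock */
        list_for_each_entry_safe(page, next, &isolated, lru) {
                list_del(&page->lru);
                if (bc_reclaim_page(page)) {    /* assumed: frees page on success */
                        reclaimed++;
                        continue;
                }
                list_add(&page->lru, &putback);
        }

        /* survivors go back onto the global LRU, so its integrity is kept */
        spin_lock_irq(&zone->lru_lock);
        list_splice(&putback, src);
        spin_unlock_irq(&zone->lru_lock);

        return reclaimed;
}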