On Fri, 17 Nov 2006 14:25:33 +0100 (CET)
> For a customer the main reason to use guarantee is to be sure that
> some pages of a job remain in memory when the system is low on free
> memory. This should be true even for a job in group/container A with
That doesn't actually appear to be a very useful definition.
There are two reasons for wanting memory guarantees:
#1 To be sure a user can't toast the entire box, only their own
compartment (e.g. web hosting)
#2 To ensure all apps continue to make progress
The simple approach doesn't seem to work for either. There is a threshold
above which #1 and #2 are the same thing; below it, trying to keep a few
pages in memory will thrash rather than make progress, and will harm overall
behaviour, thus failing to solve either #1 or #2. At that point you have to
decide whether what you have is a misconfiguration, or whether the system
should be prepared to do temporary cycling overcommits so containers take
it in turn to make progress when overcommitted.
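
To make "temporary cycling overcommits" concrete, here is a minimal
user-space sketch (hypothetical names and structures, not any actual
containers patch) of one possible reading: when the sum of the guarantees
exceeds physical memory, each epoch grants as many guarantees as fit,
starting from a rotating offset, and the remaining containers become
reclaimable until their turn comes round.

/*
 * Illustrative only: containers take turns having their guarantee
 * honoured when the guarantees overcommit physical memory.
 */
#include <stdio.h>

#define NCONTAINERS 4

struct container {
	const char *name;
	unsigned long guarantee_mb;	/* memory promised to stay resident */
};

static struct container containers[NCONTAINERS] = {
	{ "A", 512 }, { "B", 512 }, { "C", 512 }, { "D", 512 },
};

int main(void)
{
	unsigned long total_mb = 1024;	/* physical memory: overcommitted 2:1 */
	int start = 0;

	/* Each epoch, walk the list from a rotating offset and grant
	 * guarantees until memory runs out; the rest wait their turn. */
	for (int epoch = 0; epoch < 4; epoch++) {
		unsigned long left = total_mb;

		printf("epoch %d:", epoch);
		for (int i = 0; i < NCONTAINERS; i++) {
			struct container *c = &containers[(start + i) % NCONTAINERS];

			if (c->guarantee_mb <= left) {
				left -= c->guarantee_mb;
				printf(" %s=resident", c->name);
			} else {
				printf(" %s=reclaimable", c->name);
			}
		}
		printf("\n");
		start = (start + 1) % NCONTAINERS;	/* rotate who goes first */
	}
	return 0;
}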
> If the limit is a "hard limit" then we have implemented reservation and
> this is too strict.
That's fundamentally a judgement based on your particular workload and
constraints. If I am web hosting then I don't generally care if one end
user's compartment blows up under excess load; I care that the other 200
customers using the box don't suffer and all phone me to complain.
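
To make the hard-limit-versus-guarantee distinction concrete, here is an
illustrative sketch (hypothetical names, not the actual patches under
discussion) of the two different enforcement points: a hard limit fails
the charge for the offending compartment once it exceeds its cap, so only
that customer's workload blows up, while a guarantee never blocks a charge
and only tells global reclaim whose pages it may take.

#include <stdbool.h>
#include <stdio.h>

struct compartment {
	unsigned long usage;		/* pages currently charged */
	unsigned long hard_limit;	/* reservation-style cap, 0 = none */
	unsigned long guarantee;	/* pages reclaim must leave resident */
};

/* Hard limit: enforced when the compartment tries to grow. */
static bool charge_page(struct compartment *c)
{
	if (c->hard_limit && c->usage + 1 > c->hard_limit)
		return false;	/* allocation fails for this customer only */
	c->usage++;
	return true;
}

/* Guarantee: enforced when global reclaim looks for a victim. */
static bool may_reclaim_from(const struct compartment *c)
{
	return c->usage > c->guarantee;	/* take only pages above the guarantee */
}

int main(void)
{
	struct compartment customer = { .usage = 99, .hard_limit = 100, .guarantee = 50 };

	printf("charge ok: %d\n", charge_page(&customer));	/* reaches the cap, succeeds */
	printf("charge ok: %d\n", charge_page(&customer));	/* over the cap, fails */
	printf("reclaimable: %d\n", may_reclaim_from(&customer)); /* above its guarantee */
	return 0;
}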
Alan