On Saturday 27 May 2006 11:28, Peter Williams wrote:
> Con Kolivas wrote:
> > On Friday 26 May 2006 14:20, Peter Williams wrote:
> >> Although the rlimit mechanism already has a CPU usage limit (RLIMIT_CPU)
> >> it is a total usage limit and therefore (to my mind) not very useful.
> >> These patches provide an alternative whereby the (recent) average CPU
> >> usage rate of a task can be limited to a (per task) specified proportion
> >> of a single CPU's capacity. The limits are specified in parts per
> >> thousand and come in two varieties -- hard and soft.
> >
> > Why 1000?
>
> Probably a holdover from a version where the units were a proportion of
> the whole machine. Percentage doesn't work very well if there is more
> than one CPU in that case (especially if there are more than 100 CPUs :-)).
> But it's also useful to have the extra range if you're trying to cap
> processes (or users) from outside the scheduler using these primitives.
>
> > I doubt that degree of accuracy is possible in cpu accounting, or even
> > required. To me it would seem to make more sense for it to just be a
> > percentage.
>
> It's not meant to imply accuracy :-). The main issue is avoiding
> overflow when doing the multiplications during the comparisons.
Well, you could always expose a smaller, more meaningful value than what is
stored internally. However, you've already implied that userspace requires
more granularity in the proportioning than a percentage can give.
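A sketch of what "expose a smaller value" could look like, with made-up names and an assumed internal resolution: keep the cap at fine granularity inside the scheduler and only scale it to parts per thousand at the user-visible interface:

```c
#include <stdint.h>

/* Hypothetical: caps kept internally in parts per million, exposed to
 * userspace in parts per thousand, rounding to nearest on the way out. */
#define CAP_INTERNAL_SCALE 1000000u

static unsigned int cap_to_permille(uint32_t internal)
{
	return (unsigned int)(((uint64_t)internal * 1000u
			       + CAP_INTERNAL_SCALE / 2)
			      / CAP_INTERNAL_SCALE);
}
```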
--
-ck
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/