On Tue, Apr 17, 2007 at 03:32:56PM -0700, William Lee Irwin III wrote:
> On Tue, Apr 17, 2007 at 11:24:22AM +0200, Ingo Molnar wrote:
> >> yeah. If you could come up with a sane definition that also translates
> >> into low overhead on the algorithm side that would be great!
>
> On Tue, Apr 17, 2007 at 05:08:09PM -0500, Matt Mackall wrote:
> > How's this:
> > If you're running two identical CPU hog tasks A and B differing only by nice
> > level (Anice, Bnice), the ratio cputime(A)/cputime(B) should be a
> > constant f(Anice - Bnice).
> > Other definitions make things hard to analyze and are probably not
> > well-bounded when confronted with > 2 tasks.
> > I -think- this implies keeping a separate scaled CPU usage counter,
> > where the scaling factor is a trivial exponential function of nice
> > level where f(0) == 1. Then you schedule based on this scaled usage
> > counter rather than unscaled.
> > I also suspect we want to keep the exponential base small so that the
> > maximal difference is 10x-100x.
>
> I'm already working with this as my assumed nice semantics (actually
> something with a specific exponential base, suggested in other emails)
> until others start saying they want something different and agree.
Good. This has a couple of nice mathematical properties, including
the "bounded unfairness" I mentioned earlier. What base are you
looking at?
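
To make the scaled-counter semantics concrete, here is a minimal
userspace sketch in plain C (not kernel code; the base of 1.10 and all
names are assumptions for illustration). Each unit timeslice is charged
to a task at weight(nice) = BASE^nice, and the scheduler always runs
the task with the smallest scaled usage, so cputime(A)/cputime(B)
converges to BASE^(Bnice - Anice), a constant function of the nice
difference alone. An assumed base of 1.10 keeps the maximal ratio over
the full -20..+19 nice range near 1.10^39 ~= 41x, inside the suggested
10x-100x window.

/*
 * Sketch of nice-scaled usage counters.  BASE is an assumed value;
 * f(0) == 1 holds since BASE^0 == 1.  Build with: cc sketch.c -lm
 */
#include <math.h>
#include <stdio.h>

#define NTASKS 2
#define BASE   1.10             /* assumed exponential base */

struct task {
	int    nice;
	double scaled_usage;    /* nice-weighted CPU time, used to pick */
	double cputime;         /* raw CPU time, to check the ratio */
};

static double weight(int nice)
{
	return pow(BASE, nice); /* higher nice => charged more per tick */
}

int main(void)
{
	struct task t[NTASKS] = { { 0, 0.0, 0.0 }, { 5, 0.0, 0.0 } };
	long i;
	int j;

	/*
	 * Run 1e6 unit timeslices, always picking the task whose
	 * scaled usage counter is lowest, and charging it one tick
	 * times its nice weight.
	 */
	for (i = 0; i < 1000000; i++) {
		struct task *next = &t[0];
		for (j = 1; j < NTASKS; j++)
			if (t[j].scaled_usage < next->scaled_usage)
				next = &t[j];
		next->cputime += 1.0;
		next->scaled_usage += weight(next->nice);
	}

	/* The observed ratio should approach BASE^(Bnice - Anice). */
	printf("cputime(A)/cputime(B) = %.3f, expected %.3f\n",
	       t[0].cputime / t[1].cputime,
	       pow(BASE, t[1].nice - t[0].nice));
	return 0;
}

For nice levels 0 and 5 this prints a ratio of about 1.611, i.e.
f(Anice - Bnice) = BASE^(Bnice - Anice) for the assumed base, and the
unfairness stays bounded because the per-pick increments to the scaled
counters are bounded.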
--
Mathematics is the supreme nostalgia of our time.