William Lee Irwin III wrote:
> Con Kolivas wrote:
> >> Looks good, thanks. Ingo's been hard at work since then and has v8 out
> >> by now. SD has not changed so you wouldn't need to do the whole lot of
> >> tests on SD again unless you don't trust some of the results.
>
> On Thu, May 03, 2007 at 02:11:39AM +0300, Al Boldi wrote:
> > Well, I tried cfs-v8 and it still shows some nice-level regressions
> > wrt mainline/sd. SD's nice levels look rather solid, implying fairness.
>
> That's odd. The ->load_weight changes should've improved that quite
> a bit. There may be something slightly off in how lag is computed,
> or maybe the O(n) lag issue Ying Tang spotted is biting you.
Is it not biting you too?
> Also, I should say that the nice number affairs don't imply fairness
> per se. The way that works is that when tasks have "weights" (like
> nice levels in UNIX) the definition of fairness changes so that each
> task gets shares of CPU bandwidth proportional to its weight instead
> of one share for one task.
Ok, but you can easily expose scheduler unfairness by using nice levels as
relative magnifiers, provided the nice levels are implemented correctly.
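
To make the proportional-share definition above concrete, here's a toy
user-space calculation; the 4:2:1 weights are invented for the example,
since the real nice-to-weight mapping is scheduler-specific:

/*
 * Toy illustration, not kernel code: under weighted fairness, task i's
 * expected share of the CPU is weight[i] / sum(weight[]).  The weights
 * below are made up for the example.
 */
#include <stdio.h>

int main(void)
{
	double weight[] = { 4.0, 2.0, 1.0 };	/* e.g. nice -5, 0, +5 */
	int i, n = sizeof(weight) / sizeof(weight[0]);
	double sum = 0.0;

	for (i = 0; i < n; i++)
		sum += weight[i];
	for (i = 0; i < n; i++)
		printf("task %d: expected %.1f%% of the CPU\n",
		       i, 100.0 * weight[i] / sum);
	return 0;
}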
> It takes a bit closer inspection than feel tests to see if weighted
> fairness is properly implemented. One thing to try is running a number
> of identical CPU hogs at the same time at different nice levels for a
> fixed period of time (e.g. 1 or 2 minutes) so they're in competition
> with each other and seeing what percent of the CPU each gets. From
> there you can figure out how many shares each is getting for its nice
> level. Trying different mixtures of nice levels and different numbers
> of tasks should give consistent results for the shares of CPU bandwidth
> the CPU hogs get for being at a particular nice level. A scheduler gets
> "bonus points" (i.e. is considered better at prioritizing) for the user
> being able to specify how the weightings come out. The finer-grained
> the control, the more bonus points.
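A minimal harness along those lines might look like the sketch below;
the task count, nice levels, and duration are arbitrary example values,
and error checking is omitted:

/*
 * Sketch of the test described above: fork identical CPU hogs at
 * different nice levels, let them compete for a fixed time, kill
 * them, and report each hog's measured share of CPU time.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <signal.h>
#include <sys/wait.h>
#include <sys/resource.h>

#define NTASKS	3
#define SECONDS	120

int main(void)
{
	int nice_level[NTASKS] = { 0, 5, 10 };	/* example mix */
	pid_t pid[NTASKS];
	double cpu[NTASKS], total = 0.0;
	int i;

	for (i = 0; i < NTASKS; i++) {
		pid[i] = fork();
		if (pid[i] == 0) {
			nice(nice_level[i]);
			for (;;)
				;		/* pure CPU hog */
		}
	}
	sleep(SECONDS);
	for (i = 0; i < NTASKS; i++)
		kill(pid[i], SIGKILL);
	for (i = 0; i < NTASKS; i++) {
		struct rusage ru;

		wait4(pid[i], NULL, 0, &ru);
		/* user time only; the hog burns no system time to speak of */
		cpu[i] = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
		total += cpu[i];
	}
	for (i = 0; i < NTASKS; i++)
		printf("nice %3d: %7.2fs CPU, w = %.3f\n",
		       nice_level[i], cpu[i], cpu[i] / total);
	return 0;
}

Run it on an otherwise idle machine (or pin everything to one CPU with
taskset) so the hogs genuinely compete.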
>
> Maybe Con might want to take a stab at having users be able to specify
> the weights for each nice level individually.
>
> CFS actually has a second set of weights for tasks, namely the
> timeslice for a given task. At the moment, they're all equal. It should
> be the case that the shorter the timeslice a given task has, the less
> latency it gets. So there is a fair amount of room for it to maneuver
> with respect to feel tests. It really needs to be done numerically to
> get results we can be sure mean something.
>
> The way this goes is task t_i gets a percent of the CPU p_i when the
> tasks t_1, t_2, ..., t_n are all competing, and task t_i has nice level
> n_i. The share corresponding to nice level n_i is then
>
>            p_i
>    w_i = -------
>          sum p_j
>
> One thing to check for is that if two tasks have the same nice level,
> their weights come out about equal. So for t_i and t_j, if n_i = n_j,
> then you check that, at least approximately, w_i = w_j, or even
> p_i = p_j, since we're not starting and stopping tasks in the midst
> of the test. Also, you can't simplify sum p_j to 1, since the test
> tasks may not be the only things running.
>
> The other thing to do is to try different numbers of tasks with
> different mixes of nice levels. The weighting that a given nice
> level n_i gets should come out consistent even in a different mix
> of tasks and nice levels.
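To put numbers on it (made up for illustration): say hogs at nice 0, 5,
and 10 measure p = 54.9%, 27.4%, and 13.7%. They sum to 96%, not 100%,
because other things were running, so w = .549/.96 = 0.57, .274/.96 =
0.29, and .137/.96 = 0.14, i.e. weights in roughly a 4:2:1 ratio. One
caveat: since the denominator sum p_j grows with the number of
competitors, the absolute w_i for a nice level will shift between
mixes; what should reproduce across mixes is the ratio w_i/w_j for any
fixed pair of nice levels (the 4:2:1 above).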
>
> If this sounds too far out, there's nothing to worry about. You can
> just run the different numbers of tasks with different mixes of nice
> levels and post the %cpu numbers. Or if that's still a bit far out
> for you, a test that does all this is eventually going to get written.
chew.c does exactly that; just make sure sched_granularity_ns >= 5,000,000
(i.e. a granularity of at least 5ms).
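
For anyone who missed the original posting, the idea is roughly the
following. This is a reconstruction of the approach, not the actual
chew.c source, and the 10ms threshold is an arbitrary example:

/*
 * Sketch of a chew-style latency probe: spin reading the clock and
 * report any gap where this task apparently did not run for longer
 * than a threshold.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

int main(void)
{
	struct timeval tv;
	long long prev, now;
	const long long threshold = 10000;	/* 10ms, arbitrary */

	gettimeofday(&tv, NULL);
	prev = tv.tv_sec * 1000000LL + tv.tv_usec;
	for (;;) {
		gettimeofday(&tv, NULL);
		now = tv.tv_sec * 1000000LL + tv.tv_usec;
		if (now - prev > threshold)
			printf("pid %d: starved for %lld usecs\n",
			       (int)getpid(), now - prev);
		prev = now;
	}
	return 0;
}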
Thanks!
--
Al