Re: [PATCH 2/6] sched: make sched_slice() group scheduling savvy

On Thu, 2007-11-01 at 17:01 +0530, Srivatsa Vaddagiri wrote:
> On Wed, Oct 31, 2007 at 10:10:32PM +0100, Peter Zijlstra wrote:
> > Currently the ideal slice length does not take group scheduling into account.
> > Change it so that it properly takes all the runnable tasks on this CPU into
> > account and calculates the weight according to the grouping hierarchy.
> > 
> > Also fixes a bug in vslice which missed a factor of NICE_0_LOAD.
> > 
> > Signed-off-by: Peter Zijlstra <[email protected]>
> > CC: Srivatsa Vaddagiri <[email protected]>
> > ---
> >  kernel/sched_fair.c |   42 +++++++++++++++++++++++++++++++-----------
> >  1 file changed, 31 insertions(+), 11 deletions(-)
> > 
> > Index: linux-2.6/kernel/sched_fair.c
> > ===================================================================
> > --- linux-2.6.orig/kernel/sched_fair.c
> > +++ linux-2.6/kernel/sched_fair.c
> > @@ -331,10 +331,15 @@ static u64 __sched_period(unsigned long 
> >   */
> >  static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
> >  {
> > -	u64 slice = __sched_period(cfs_rq->nr_running);
> > +	unsigned long nr_running = rq_of(cfs_rq)->nr_running;
> > +	u64 slice = __sched_period(nr_running);
> > 
> > -	slice *= se->load.weight;
> > -	do_div(slice, cfs_rq->load.weight);
> > +	for_each_sched_entity(se) {
> > +		cfs_rq = cfs_rq_of(se);
> > +
> > +		slice *= se->load.weight;
> > +		do_div(slice, cfs_rq->load.weight);
> > +	}
> > 
> >  	return slice;
> 
> 
> Let's say we have two groups A and B on CPU0, of equal weight (1024).
> 
> Further,
> 
> A has 1 task (A0)
> B has 1000 tasks (B0 .. B999) 
> 
> Agreed it's an extreme case, but it illustrates the problem I have in mind
> with this patch.
> 
> All tasks have the same weight (1024).
> 
> Before this patch
> =================
> 
> 	sched_slice(grp A) = 20ms * 1/2 = 10ms
> 	sched_slice(A0) = 20ms
> 
> 	sched_slice(grp B) = 20ms * 1/2 = 10ms
> 	sched_slice(B0) = (20ms * 1000/20) * 1 / 1000 = 1ms
> 	sched_slice(B1) = ... = sched_slice(B999) = 1 ms
> 
> Fairness between groups and tasks would be obtained as below:
> 
>     A0       B0-B9     A0    B10-B19     A0     B20-B29
>  |--------|--------|--------|--------|--------|--------|-----//--| 
>  0       10ms	   20ms	   30ms     40ms     50ms     60ms
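
For reference, the pre-patch numbers above can be reproduced with a small
userspace sketch of the old code. This is only an approximation: the 20ms
sysctl_sched_latency, sysctl_sched_nr_latency = 20 and the NICE_0 weight of
1024 are the assumed defaults of that era, not values taken from the patch
itself.

#include <stdio.h>

#define LATENCY_MS	20ULL	/* assumed sysctl_sched_latency */
#define NR_LATENCY	20ULL	/* assumed sysctl_sched_nr_latency */

/* __sched_period(): one latency period, stretched linearly once
 * nr_running no longer fits in it. */
static unsigned long long period_ms(unsigned long long nr_running)
{
	unsigned long long period = LATENCY_MS;

	if (nr_running > NR_LATENCY)
		period = period * nr_running / NR_LATENCY;
	return period;
}

/* pre-patch sched_slice(): period from this cfs_rq's own nr_running,
 * scaled by the entity's share of this cfs_rq's weight - one level only. */
static double old_slice_ms(unsigned long long nr, double weight, double total)
{
	return (double)period_ms(nr) * weight / total;
}

int main(void)
{
	printf("grp A: %4.1f ms\n", old_slice_ms(2, 1024, 2048));	/* 10.0 */
	printf("A0:    %4.1f ms\n", old_slice_ms(1, 1024, 1024));	/* 20.0 */
	printf("B0:    %4.1f ms\n", old_slice_ms(1000, 1024, 1024000));	/* 1.0 */
	return 0;
}
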
> 
> After this patch
> ================
> 
> 	sched_slice(grp A) = (20ms * 1001/20) * 1/2 ~= 500ms
> 	sched_slice(A0) = 500ms

Hmm, right, that is indeed not intended.

> 	sched_slice(grp B) = 500ms
> 	sched_slice(B0) = 0.5ms 

This 0.5ms is indeed correct, whereas the previous 1ms was not.

> Fairness between groups and tasks would be obtained as below:
> 
> 	    A0		          B0 - B999  	            A0
>  |-----------------------|-----------------------|-----------------------|
>  0		        500ms			1000ms 			1500ms
> 
> Did I get it right? If so, I don't like the fact that group A is allowed to run
> for a long time (500ms) before giving group B a chance.

Hmm, quite bad indeed.
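
The post-patch numbers fall out of the same kind of sketch once the period
is computed from the whole rq's nr_running (1001 tasks here) and one weight
ratio is applied per hierarchy level, mirroring the for_each_sched_entity()
loop. Same assumed defaults as before (20ms latency, nr_latency = 20, all
weights 1024):

#include <stdio.h>

#define LATENCY_MS	20ULL	/* assumed sysctl_sched_latency */
#define NR_LATENCY	20ULL	/* assumed sysctl_sched_nr_latency */

static unsigned long long period_ms(unsigned long long nr_running)
{
	unsigned long long period = LATENCY_MS;

	if (nr_running > NR_LATENCY)
		period = period * nr_running / NR_LATENCY;
	return period;
}

/* post-patch sched_slice(): period from rq->nr_running, then one
 * se->load.weight / cfs_rq->load.weight ratio per hierarchy level. */
static double new_slice_ms(unsigned long long rq_nr,
			   const double *ratios, int levels)
{
	double slice = (double)period_ms(rq_nr);
	int i;

	for (i = 0; i < levels; i++)
		slice *= ratios[i];
	return slice;
}

int main(void)
{
	double grp_a[] = { 1024.0 / 2048 };			/* A among A,B on root */
	double b0[]    = { 1024.0 / 1024000, 1024.0 / 2048 };	/* B0 in B, B on root */

	printf("grp A: %.2f ms\n", new_slice_ms(1001, grp_a, 1));	/* ~500.50 */
	printf("B0:    %.2f ms\n", new_slice_ms(1001, b0, 2));		/* ~0.50 */
	return 0;
}

So the hierarchical weight scaling gets the per-task share right (0.5ms);
it is inflating the period to cover all 1001 runnable tasks that blows the
group-level slices up to ~500ms.
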

> May I know what real problem is being addressed by this change to
> sched_slice()?

sched_slice() is about latency; its intended purpose is to ensure each
task runs exactly once during sched_period() - which is
sysctl_sched_latency when nr_running <= sysctl_sched_nr_latency, and
otherwise scales linearly with nr_running.
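
With the defaults of that era - sysctl_sched_latency = 20ms and
sysctl_sched_nr_latency = 20, as assumed in the examples above - that
works out to, for example:

	sched_period(2)    = 20ms
	sched_period(21)   = 20ms * 21/20   = 21ms
	sched_period(1001) = 20ms * 1001/20 = 1001ms

The 1001-task case is where the ~500ms slice for group A above comes
from, once the period is halved by A's weight ratio.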

