Re: [PATCH 1/2] sched: Minor cleanups

On Mon, Nov 19, 2007 at 02:08:03PM +0100, Ingo Molnar wrote:
> >  #define for_each_leaf_cfs_rq(rq, cfs_rq) \
> > -	list_for_each_entry(cfs_rq, &rq->leaf_cfs_rq_list, leaf_cfs_rq_list)
> > +	list_for_each_entry_rcu(cfs_rq, &rq->leaf_cfs_rq_list, leaf_cfs_rq_list)
> >  
> >  /* Do the two (enqueued) entities belong to the same group ? */
> >  static inline int
> > @@ -1126,7 +1126,10 @@ static void print_cfs_stats(struct seq_f
> >  #ifdef CONFIG_FAIR_GROUP_SCHED
> >  	print_cfs_rq(m, cpu, &cpu_rq(cpu)->cfs);
> >  #endif
> > +
> > +	rcu_read_lock();
> >  	for_each_leaf_cfs_rq(cpu_rq(cpu), cfs_rq)
> >  		print_cfs_rq(m, cpu, cfs_rq);
> > +	rcu_read_unlock();
> 
> hm, why is this a cleanup?

Sorry for the misleading subject. The patch was meant to include the above
bug fix, which relates to how we walk the task group list.

Thinking about it now, I realize that print_cfs_rq() can potentially
sleep and hence cannot be surrounded by rcu_read_lock()/unlock().
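
(Roughly the shape of the problem, just for illustration; some_mutex is a
made-up stand-in for whatever might sleep deep inside print_cfs_rq():)

	rcu_read_lock();		/* classic RCU: disables preemption */
	mutex_lock(&some_mutex);	/* may sleep -> scheduling while atomic */
	/* ... */
	mutex_unlock(&some_mutex);
	rcu_read_unlock();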

And as Dipankar just pointed out to me, sched_create/destroy_group aren't
serialized at all currently, so we need a mutex to protect them. The
same mutex can then be used when walking the list in print_cfs_stats() ..
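
Something along these lines is what I have in mind (just a sketch; the
mutex name and exact placement are assumptions, not the final patch):

static DEFINE_MUTEX(task_group_mutex);

/*
 * sched_create_group()/sched_destroy_group() would take task_group_mutex
 * around their list manipulation, and print_cfs_stats() can take it as
 * well, since (unlike rcu_read_lock()) holding a mutex is fine even if
 * print_cfs_rq() ends up sleeping:
 */
static void print_cfs_stats(struct seq_file *m, int cpu)
{
	struct cfs_rq *cfs_rq;

#ifdef CONFIG_FAIR_GROUP_SCHED
	print_cfs_rq(m, cpu, &cpu_rq(cpu)->cfs);
#endif
	mutex_lock(&task_group_mutex);
	for_each_leaf_cfs_rq(cpu_rq(cpu), cfs_rq)
		print_cfs_rq(m, cpu, cfs_rq);
	mutex_unlock(&task_group_mutex);
}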

Will send updated patches soon ..


-- 
Regards,
vatsa