On Tue, Apr 19, 2005 at 09:44:06AM +1000, Nick Piggin wrote:
> Very good, I was wondering when someone would try to implement this ;)
Thank you for the feedback!
> >-static void __devinit arch_init_sched_domains(void)
> >+static void attach_domains(cpumask_t cpu_map)
> > {
>
> This shouldn't be needed. There should probably just be one place that
> attaches all domains. It is a bit difficult to explain what I mean when
> you have 2 such places below.
>
Can you explain a bit more? I'm not sure I understand what you mean.
> Interface isn't bad. It would seem to be able to handle everything, but
> I think it can be made a bit simpler.
>
> fn_name(cpumask_t span1, cpumask_t span2)
>
> Yeah? The change_map is implicitly the union of the 2 spans. Also I don't
> really like the name. It doesn't rebuild so much as split (or join). I
> can't think of anything good off the top of my head.
Yeah, agreed. It kind of lived on from earlier versions I had.
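Something along these lines, perhaps? (partition_sched_domains is just a
placeholder name for whatever we end up calling it)

	static void partition_sched_domains(cpumask_t span1, cpumask_t span2)
	{
		cpumask_t change_map;

		/* the cpus whose domains get rebuilt are implicitly
		 * the union of the two new partitions */
		cpus_or(change_map, span1, span2);

		/* lock out the balancer for change_map, detach those
		 * cpus, then build a partition per non-empty span */
		...
	}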
>
> >+ unsigned long flags;
> >+ int i;
> >+
> >+ local_irq_save(flags);
> >+
> >+ for_each_cpu_mask(i, change_map)
> >+ spin_lock(&cpu_rq(i)->lock);
> >+
>
> Locking is wrong. And it has changed again in the latest -mm kernel.
> Please diff against that.
>
I haven't looked at the RCU sched domain changes yet, but I put this in
to address some problems I noticed during stress testing.
Basically, with the current hotplug code it is possible to have a scenario
like this:
    rebuild domains                  load balance
          |                               |
          |                    take existing sd pointer
          |                               |
    attach to dummy domain                |
          |                    loop through sched groups
    change sched group info               |
          |                               |
          |                    access invalid pointer and panic
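For reference, the reader side looks roughly like this (paraphrasing the
2.6-era balancer, not quoting it exactly): for_each_domain() just chases
cpu_rq(cpu)->sd up the parent chain, so a rebuild that rewrites or frees
the groups underneath it leaves the walker holding a stale pointer.

	/* simplified view of the load balancer side of the race */
	for_each_domain(this_cpu, sd) {
		struct sched_group *group = sd->groups;
		do {
			/* if the rebuild modified or freed these groups
			 * concurrently, this dereference can panic */
			...
			group = group->next;
		} while (group != sd->groups);
	}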
> >+ if (!cpus_empty(span1))
> >+ build_sched_domains(span1);
> >+ if (!cpus_empty(span2))
> >+ build_sched_domains(span2);
> >+
>
> You also can't do this - you have to 'offline' the domains first before
> building new ones. See the CPU hotplug code that handles this.
>
If by "offline" you mean attach to a dummy domain, see the race above.
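i.e. something like this in the rebuild path (a rough sketch only; the
dummy_domain name is illustrative, standing in for whatever static
domain the hotplug code parks cpus on):

	local_irq_save(flags);
	for_each_cpu_mask(i, change_map)
		spin_lock(&cpu_rq(i)->lock);

	/* park every affected cpu on the dummy domain first, so the
	 * balancer never walks a half-torn-down hierarchy */
	for_each_cpu_mask(i, change_map)
		cpu_rq(i)->sd = &dummy_domain;

	for_each_cpu_mask(i, change_map)
		spin_unlock(&cpu_rq(i)->lock);
	local_irq_restore(flags);

	/* safe to rebuild now; cpus come off the dummy domain
	 * once the new groups are fully constructed */
	if (!cpus_empty(span1))
		build_sched_domains(span1);
	if (!cpus_empty(span2))
		build_sched_domains(span2);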
> This makes a hotplug event destroy your nicely set up isolated domains,
> doesn't it?
>
> This looks like the most difficult problem to overcome. It needs some
> external information to redo the cpuset splits at cpu hotplug time.
> Probably a hotplug handler in the cpusets code might be the best way
> to do that.
Yes, I am aware of this. What I have in mind is for the scheduler's
hotplug code to call into the cpusets code. When cpusets is not compiled
in, this call would simply return (say) 1 and the sched code would
continue to do what it does right now; otherwise the cpusets code would
find the leaf cpuset that contains the hotplugged cpu and rebuild the
domains accordingly. However, the question still remains as to how
cpusets should handle the hotplugged cpu.
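Roughly this shape (names are illustrative only, nothing like this
exists yet):

	#ifdef CONFIG_CPUSETS
	/* find the leaf cpuset containing 'cpu' and rebuild the
	 * sched domains for its partition; returns 0 on success */
	extern int cpuset_handle_cpuhp(int cpu);
	#else
	static inline int cpuset_handle_cpuhp(int cpu)
	{
		/* no cpusets: tell the scheduler to fall back to
		 * its current hotplug behaviour */
		return 1;
	}
	#endif

The scheduler's hotplug callback would then rebuild the flat default
domains only when this returns nonzero.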
-Dinakar