On Thu, Aug 23, 2007 at 07:52:11PM +0530, Gautham R Shenoy wrote:
> On Thu, Aug 23, 2007 at 06:15:01AM -0700, Paul E. McKenney wrote:
> > On Thu, Aug 23, 2007 at 03:44:44PM +0530, Gautham R Shenoy wrote:
> > > On Thu, Aug 23, 2007 at 01:54:56AM -0700, Paul E. McKenney wrote:
> > > > On Thu, Aug 23, 2007 at 09:56:39AM +0530, Gautham R Shenoy wrote:
> > > > >
> > > > > I feel we should still be able to use for_each_online_cpu(cpu) instead
> > > > > of for_each_possible_cpu. Again, there's a good chance that I might
> > > > > be mistaken!
> > > > >
> > > > > How about the following ?
> > > > >
> > > > > preempt_disable();	/* We don't want CPUs going down here. */
> > > > > for_each_online_cpu(cpu) {
> > > > > 	rbdp = per_cpu(rcu_boost_dat, cpu);
> > > > > 	for (i = 0; i < RCU_BOOST_ELEMENTS; i++) {
> > > > > 		sum.rbs_blocked += rbdp[i].rbs_blocked;
> > > > > 		sum.rbs_boost_attempt += rbdp[i].rbs_boost_attempt;
> > > > > 		sum.rbs_boost += rbdp[i].rbs_boost;
> > > > > 		sum.rbs_unlock += rbdp[i].rbs_unlock;
> > > > > 		sum.rbs_unboosted += rbdp[i].rbs_unboosted;
> > > > > 	}
> > > > > }
> > > > > preempt_enable();
> > > > >
> > > > >
> > > > > static int rcu_boost_cpu_callback(struct notifier_block *nb,
> > > > > 				  unsigned long action, void *hcpu)
> > > > > {
> > > > > 	int this_cpu, cpu;
> > > > > 	struct rcu_boost_dat *rbdp, *this_rbdp;
> > > > >
> > > > > 	switch (action) {
> > > > > 	case CPU_DEAD:
> > > > > 		cpu = (long)hcpu;
> > > > > 		this_cpu = get_cpu();	/* disables preemption */
> > > > > 		rbdp = per_cpu(rcu_boost_dat, cpu);
> > > > > 		this_rbdp = per_cpu(rcu_boost_dat, this_cpu);
> > > > > 		/*
> > > > > 		 * Transfer all of rbdp's statistics to
> > > > > 		 * this_rbdp here.
> > > > > 		 */
> > > > > 		put_cpu();
> > > > > 		break;
> > > > > 	}
> > > > >
> > > > > 	return NOTIFY_OK;
> > > > > }
> > > > >
> > > > >
> > > > > Won't this work in this case?
> > > >
> > > > Hello, Gautham,
> > > >
> > > > We could do something similar.  If there were a global rcu_boost_data
> > > > variable that held the sums of the fields of the rcu_boost_data
> > > > structures for all offline CPUs, and if we used a new lock to protect
> > > > that global rcu_boost_data variable (both when reading and when
> > > > CPU hotplugging), then we could indeed scan only the online CPUs'
> > > > rcu_boost_data elements.
> > > >
> > > > We would also have to maintain a cpumask_t for this purpose, and
> > > > we would need to add a CPU's contribution when it went offline and
> > > > subtract it when that CPU came back online.
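> > > >
> > > > Something like the following untested sketch is what I have in mind
> > > > (the rcu_boost_offline_* names here are invented purely for
> > > > illustration):
> > > >
> > > > /* Aggregate of the counters belonging to CPUs that have gone offline. */
> > > > static struct rcu_boost_dat rcu_boost_offline_sum;
> > > > static cpumask_t rcu_boost_offline_map = CPU_MASK_NONE;
> > > > static DEFINE_SPINLOCK(rcu_boost_offline_lock);
> > > >
> > > > /*
> > > >  * Fold cpu's counters into (sign == 1) or back out of (sign == -1)
> > > >  * the aggregate.  Caller must hold rcu_boost_offline_lock.
> > > >  */
> > > > static void rcu_boost_adjust_offline(int cpu, int sign)
> > > > {
> > > > 	struct rcu_boost_dat *rbdp = per_cpu(rcu_boost_dat, cpu);
> > > > 	struct rcu_boost_dat *s = &rcu_boost_offline_sum;
> > > > 	int i;
> > > >
> > > > 	for (i = 0; i < RCU_BOOST_ELEMENTS; i++) {
> > > > 		s->rbs_blocked += sign * rbdp[i].rbs_blocked;
> > > > 		s->rbs_boost_attempt += sign * rbdp[i].rbs_boost_attempt;
> > > > 		s->rbs_boost += sign * rbdp[i].rbs_boost;
> > > > 		s->rbs_unlock += sign * rbdp[i].rbs_unlock;
> > > > 		s->rbs_unboosted += sign * rbdp[i].rbs_unboosted;
> > > > 	}
> > > > }
> > > >
> > > > /* Add the outgoing CPU's counts to the aggregate and mark it folded. */
> > > > static void rcu_boost_cpu_dead(int cpu)
> > > > {
> > > > 	spin_lock(&rcu_boost_offline_lock);
> > > > 	rcu_boost_adjust_offline(cpu, 1);
> > > > 	cpu_set(cpu, rcu_boost_offline_map);
> > > > 	spin_unlock(&rcu_boost_offline_lock);
> > > > }
> > > >
> > > > /* Remove the returning CPU's counts so they are not counted twice. */
> > > > static void rcu_boost_cpu_online(int cpu)
> > > > {
> > > > 	spin_lock(&rcu_boost_offline_lock);
> > > > 	rcu_boost_adjust_offline(cpu, -1);
> > > > 	cpu_clear(cpu, rcu_boost_offline_map);
> > > > 	spin_unlock(&rcu_boost_offline_lock);
> > > > }
> > > >
> > > > with rcu_boost_cpu_dead() and rcu_boost_cpu_online() invoked from the
> > > > CPU_DEAD and CPU_ONLINE cases of the CPU-hotplug callback.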
> > >
> > > The additional cpumask_t beats me though! Doesn't the cpu_online_map
> > > suffice here?
> > > The addition and subtraction of a hotplugged cpu's
> > > contribution to and from the global rcu_boost_data could be done while
> > > handling the CPU_ONLINE and CPU_DEAD events (or CPU_UP_PREPARE
> > > and CPU_DOWN_PREPARE, whichever suits better) in the cpu-hotplug
> > > callback.
> > >
> > > Am I missing something ?
> >
> > Don't we need to synchronize the manipulation of the hotplugged CPU's
> > contribution with the manipulation of cpu_online_map?  Otherwise, if
> > the statistics are requested just before (or just after, depending on
> > the ordering of the hotplug operations) the callback updates the
> > global contribution, the invocation will get the wrong statistics.
>
> Oh, yes we need to synchronize that :-)
>
> Can't we use lock_cpu_hotplug/unlock_cpu_hotplug (or its variant, when
> one becomes available) around any access to cpu_online_map?  With that,
> it's guaranteed that no cpu-hotplug operation will be permitted while
> you're iterating over the cpu_online_map, and hence you have a
> consistent view of the global rcu_boost_data.
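>
> That is, something like this (untested) in the stats path, in place of
> the preempt_disable()/preempt_enable() pair above:
>
> 	lock_cpu_hotplug();	/* cpu_online_map cannot change under us */
> 	sum = rcu_boost_offline_sum;	/* aggregate of already-offlined CPUs */
> 	for_each_online_cpu(cpu) {
> 		rbdp = per_cpu(rcu_boost_dat, cpu);
> 		for (i = 0; i < RCU_BOOST_ELEMENTS; i++) {
> 			sum.rbs_blocked += rbdp[i].rbs_blocked;
> 			sum.rbs_boost_attempt += rbdp[i].rbs_boost_attempt;
> 			sum.rbs_boost += rbdp[i].rbs_boost;
> 			sum.rbs_unlock += rbdp[i].rbs_unlock;
> 			sum.rbs_unboosted += rbdp[i].rbs_unboosted;
> 		}
> 	}
> 	unlock_cpu_hotplug();
>
> (This assumes that the hotplug callback which updates the aggregate is
> also excluded by the hotplug lock.)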
>
> Even if we use another cpumask_t, whenever a cpu goes down or comes up,
> that will be reflected in this map, no? So what's the additional
> advantage of using it?

The additional map allows the code to use something other than
lock_cpu_hotplug/unlock_cpu_hotplug, and is also robust against any
changes to the hotplug synchronization mechanism.  It might well be
better just to use the current hotplug synchronization mechanism,
but I was feeling paranoid.  ;-)
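
Roughly (untested, reusing the rcu_boost_offline_* names from my sketch
earlier in this thread), the stats path would then be:

	spin_lock(&rcu_boost_offline_lock);
	sum = rcu_boost_offline_sum;	/* CPUs already folded into the map */
	for_each_possible_cpu(cpu) {
		if (cpu_isset(cpu, rcu_boost_offline_map))
			continue;	/* already counted in the aggregate */
		rbdp = per_cpu(rcu_boost_dat, cpu);
		for (i = 0; i < RCU_BOOST_ELEMENTS; i++) {
			sum.rbs_blocked += rbdp[i].rbs_blocked;
			sum.rbs_boost_attempt += rbdp[i].rbs_boost_attempt;
			sum.rbs_boost += rbdp[i].rbs_boost;
			sum.rbs_unlock += rbdp[i].rbs_unlock;
			sum.rbs_unboosted += rbdp[i].rbs_unboosted;
		}
	}
	spin_unlock(&rcu_boost_offline_lock);

Because the map and the aggregate are updated together under the same
lock in the hotplug callback, the reader never double-counts a CPU and
never misses one that has been folded into the aggregate, no matter
what cpu_online_map happens to be doing at the time, and it never has
to touch the hotplug locking itself.
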
Thanx, Paul
> > > > The lock should not be a problem even on very large systems because
> > > > of the low frequency of statistics printing -- and of hotplug operations,
> > > > for that matter.
> > >
> > > The lock is not a problem, so long as somebody else doesn't call
> > > the function taking the lock from their cpu-hotplug callback path :-)
> > > Though I don't see it happening here.
> >
> > In any case, there are some ways to decrease its utilization should it
> > become a problem.
>
> Cool!
>
> >
> > Thanx, Paul
> >
>
> Thanks and Regards
> gautham.
> --
> Gautham R Shenoy
> Linux Technology Center
> IBM India.
> "Freedom comes with a price tag of responsibility, which is still a bargain,
> because Freedom is priceless!"