Re: [patch] cpufreq: mark cpufreq_tsc() as core_initcall_sync

On Fri, Nov 17, 2006 at 02:27:15PM -0500, Alan Stern wrote:
> On Fri, 17 Nov 2006, Paul E. McKenney wrote:
> 
> > > It works for me, but the overhead is still large. Before, it would take
> > > 8-12 jiffies for a synchronize_srcu() to complete without there actually
> > > being any reader locks active; now it takes 2-3 jiffies. So it's
> > > definitely faster, and as suspected, dropping two of the three
> > > synchronize_sched() calls cut the overhead to a third.
> > 
> > Good to hear, thank you for trying it out!
> > 
> > > It's still too heavy for me; by far most of the calls I make to
> > > synchronize_srcu() don't have any reader locks pending. I'm still a
> > > big advocate of the fastpath srcu_readers_active() check. I can
> > > understand the reluctance to make it the default, but for my case it's
> > > "safe enough", so if we could either export srcu_readers_active() or
> > > export a synchronize_srcu_fast() (or something like that), then SRCU
> > > would be a good fit for the barrier vs plug rework.
> > 
> > OK, will export the interface.  Do your queues have associated locking?
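
A caller-side sketch of the kind of fastpath being discussed, assuming an
exported srcu_readers_active() (illustrative only; the real interface may
end up looking different):

#include <linux/srcu.h>

static void synchronize_srcu_fast(struct srcu_struct *sp)
{
	/*
	 * Racy in general: a reader can slip in right after the check.
	 * Acceptable only for callers, such as the one above, that
	 * consider skipping the grace period "safe enough" when no
	 * readers are currently active.
	 */
	if (!srcu_readers_active(sp))
		return;
	synchronize_srcu(sp);
}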
> > 
> > > > Attached is a patch that compiles, but probably goes down in flames
> > > > otherwise.
> > > 
> > > Works here :-)
> > 
> > I have at least a couple of bugs that would show up under low-memory
> > situations; I will fix them and post an update.
> 
> Perhaps a better approach to the initialization problem would be to assume 
> that either:
> 
>     1.  The srcu_struct will be initialized before it is used, or
> 
>     2.  When it is used before initialization, the system is running
> 	only one thread.

Are these assumptions valid?  If so, they would indeed simplify things
a bit.
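
For what it's worth, assumption 1 amounts to little more than the following
sort of arrangement (names are illustrative; init_srcu_struct() is the
existing initializer):

#include <linux/init.h>
#include <linux/srcu.h>

/* Hypothetical early user of SRCU; illustration only. */
static struct srcu_struct example_srcu;

static int __init example_srcu_init(void)
{
	/* Initialize before any possible readers or updaters can run. */
	return init_srcu_struct(&example_srcu);
}
core_initcall(example_srcu_init);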

> In other words, statically allocated SRCU structures that get used during
> system startup must be initialized before the system starts multitasking.
> That seems like a reasonable requirement.
> 
> This eliminates worries about readers holding mutexes.  It doesn't 
> solve the issues surrounding your hardluckref, but maybe it makes them 
> easier to think about.

For the moment, I cheaped out and used a mutex_trylock.  If this can block,
I will need to add a separate spinlock to guard per_cpu_ref allocation.
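
Roughly what that alternative might look like, with hypothetical names
(my_srcu, my_srcu_alloc_ref) standing in for the real structure and the
real patch:

#include <linux/percpu.h>
#include <linux/spinlock.h>

/* Hypothetical stand-in for srcu_struct's lazily allocated per-CPU state. */
struct my_srcu {
	int *per_cpu_ref;
};

static DEFINE_SPINLOCK(my_srcu_ref_lock);

static void my_srcu_alloc_ref(struct my_srcu *sp)
{
	int *ref;

	if (sp->per_cpu_ref)
		return;
	/* Allocate outside the lock: alloc_percpu() may sleep. */
	ref = alloc_percpu(int);
	if (!ref)
		return;		/* caller falls back to hardluckref */
	spin_lock(&my_srcu_ref_lock);
	if (!sp->per_cpu_ref) {
		sp->per_cpu_ref = ref;
		ref = NULL;
	}
	spin_unlock(&my_srcu_ref_lock);
	if (ref)
		free_percpu(ref);	/* lost the race; drop the duplicate */
}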

Hmmm...  How to test this?  Time for the wrapper around alloc_percpu()
that randomly fails, I guess.  ;-)
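
Something along these lines would do, purely as a debugging hack (names
made up; the unguarded counter is fine for test purposes):

#include <linux/percpu.h>

/* Crude fault injection: make roughly one allocation in four fail. */
static int test_alloc_count;

#define test_alloc_percpu(type) \
	((++test_alloc_count % 4) ? alloc_percpu(type) : NULL)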

						Thanx, Paul
