* Andi Kleen <[email protected]> wrote:
> > what i meant is a pretty common-sense thing: the more independent the
> > locks are, the more shortlived locking is, the less latencies there are.
>
> At least on SMP the most finegrained locking is not always the best;
> you can end up with bouncing cache lines all the time, with two CPUs
> synchronizing to each other all the time, which is just slow.
yeah, and i wasn't arguing for the most finegrained locking: cacheline
bouncing hurts worst-case latencies just as much as it hurts scalability
(in fact more, being a worst-case).
> it is sometimes better to batch things with less locks. And every lock
> has a cost even when not taken, and they add up pretty quickly.
(the best is obviously to have no locking at all, unless there's true
resource sharing.)
> > The reverse is true too: most of the latency-breakers move code out from
> > under locks - which obviously improves scalability too. So if you are
> > working on scalability you'll indirectly improve latencies - and if you
> > are working on reducing latencies, you often improve scalability.
>
> But I agree that often less latency is good even for scalability.
>
>
> > > > but it's certainly not for free. Just like there's no zero-cost
> > > > virtualization, or there's no zero-cost nanokernel approach either,
> > > > there's no zero-cost single-kernel-image deterministic system either.
> > > >
> > > > and the argument about binary kernels - that's a choice up to vendors
> > >
> > > It is not only binary distribution kernels. I always use my own self
> > > compiled kernels, but I certainly would not want a special kernel just
> > > to do something normal that requires good latency (like sound use).
> >
> > for good sound you'll at least need PREEMPT_VOLUNTARY. You'll need
> > CONFIG_PREEMPT for certain workloads or pro-audio use.
>
> AFAIK the kernel has quite regressed recently, but that was not true
> (for reasonable sound) at least for some earlier 2.6 kernels and some
> of the low latency patchkit 2.4 kernels.
>
> So it is certainly possible to do it without preemption.
PREEMPT_VOLUNTARY does exactly that, without full kernel preemption: it is
quite similar to most of the lowlatency patchkits, just simpler.
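(for reference, a rough sketch of what PREEMPT_VOLUNTARY does, from memory
rather than the exact kernel source: the existing might_sleep() debugging
annotations double as explicit rescheduling points)

#ifdef CONFIG_PREEMPT_VOLUNTARY
# define might_resched()	cond_resched()	/* yield if a resched is pending */
#else
# define might_resched()	do { } while (0)
#endif

/*
 * might_sleep() already marks codepaths that are allowed to sleep, so
 * under PREEMPT_VOLUNTARY each of them becomes a voluntary preemption
 * point as well:
 */
#define might_sleep() \
	do { __might_sleep(__FILE__, __LINE__); might_resched(); } while (0)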
> > the impact of PREEMPT on the codebase has a positive effect as well: it
> > forces us to document SMP data structure dependencies better. Under
> > PREEMPT_NONE it would have been way too easy to get into the kind of
> > undocumented interdependent data structure business that we so well know
> > from the big kernel lock days. get_cpu()/put_cpu() precisely marks the
> > critical section where we use a given per-CPU data structure.
>
> Nah, there is still quite some code left that is unmarked, but ignores
> this case for various reason (e.g. in low level exception handling
> which is preempt off anyways). However you are right it might have
> helped a bit for generic code. But it is still quite ugly...
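(to illustrate the kind of marking i mean, a minimal made-up example;
foo_count and foo_account() are hypothetical)

#include <linux/percpu.h>
#include <linux/smp.h>

/* hypothetical per-CPU statistics counter */
static DEFINE_PER_CPU(unsigned long, foo_count);

static void foo_account(void)
{
	int cpu = get_cpu();	/* disables preemption: per-CPU section begins */

	per_cpu(foo_count, cpu)++;

	put_cpu();		/* re-enables preemption: per-CPU section ends */
}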
there's a slow trend related to RCU: rcu_read_lock() is silent about
what kind of implicit lock dependencies there are. So when we convert a
spinlock-using piece of code to RCU we lose that information, making it
harder to convert it to another type of locking later on. (But this is
not a complaint against RCU, just a demonstration that we do lose
information.)
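a made-up example to illustrate the difference (foo_list, foo_lock and the
foo_sum_*() helpers are all hypothetical):

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/rcupdate.h>

/* hypothetical shared list */
static LIST_HEAD(foo_list);
static DEFINE_SPINLOCK(foo_lock);

struct foo {
	struct list_head list;
	int val;
};

/* spinlock version: the lock name documents what is being protected */
static int foo_sum_locked(void)
{
	struct foo *p;
	int sum = 0;

	spin_lock(&foo_lock);
	list_for_each_entry(p, &foo_list, list)
		sum += p->val;
	spin_unlock(&foo_lock);

	return sum;
}

/*
 * RCU version: the read-side section is anonymous - nothing here says
 * which data structure, or which update-side lock, it pairs with.
 */
static int foo_sum_rcu(void)
{
	struct foo *p;
	int sum = 0;

	rcu_read_lock();
	list_for_each_entry_rcu(p, &foo_list, list)
		sum += p->val;
	rcu_read_unlock();

	return sum;
}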
Ingo