Con Kolivas wrote on Friday, June 02, 2006 6:17 AM
> On Friday 02 June 2006 20:30, Con Kolivas wrote:
> > On Friday 02 June 2006 18:56, Nick Piggin wrote:
> > > And why do we lock all siblings in the other case, for that matter? (not
> > > that it makes much difference except on niagara today).
> >
> > If we spinlock (and don't trylock as you're proposing) we'd have to do a
> > double rq lock for each sibling. I guess half the time double_rq_lock will
> > only be locking one runqueue... with 32 runqueues we either try to lock all
> > 32 at once or lock 1.5 runqueues (on average) 32 times... ugh, both are ugly.
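For what it's worth, here is a rough userspace sketch of the two patterns
being weighed, with pthread mutexes standing in for runqueue spinlocks.
This is not the actual sched.c code; NR_SIBLINGS, rq_lock[] and the helper
names are invented for the illustration.

/*
 * Toy model of the two locking patterns: take every sibling lock at once,
 * or do a double_rq_lock()-style pairwise lock once per sibling.
 */
#include <pthread.h>
#include <stdio.h>

#define NR_SIBLINGS 32

static pthread_mutex_t rq_lock[NR_SIBLINGS];

/* Option A: take all sibling locks, always in index order so that
 * concurrent callers cannot deadlock against each other. */
static void lock_all_siblings(void)
{
        for (int i = 0; i < NR_SIBLINGS; i++)
                pthread_mutex_lock(&rq_lock[i]);
}

static void unlock_all_siblings(void)
{
        for (int i = NR_SIBLINGS - 1; i >= 0; i--)
                pthread_mutex_unlock(&rq_lock[i]);
}

/* Option B: a double_rq_lock()-style helper: lock the two runqueues in a
 * fixed order, or only one of them when both arguments are the same
 * runqueue (the "1.5 runqueues on average" case). */
static void double_lock(int a, int b)
{
        if (a == b) {
                pthread_mutex_lock(&rq_lock[a]);
        } else if (a < b) {
                pthread_mutex_lock(&rq_lock[a]);
                pthread_mutex_lock(&rq_lock[b]);
        } else {
                pthread_mutex_lock(&rq_lock[b]);
                pthread_mutex_lock(&rq_lock[a]);
        }
}

static void double_unlock(int a, int b)
{
        pthread_mutex_unlock(&rq_lock[a]);
        if (a != b)
                pthread_mutex_unlock(&rq_lock[b]);
}

int main(void)
{
        int this_cpu = 0;

        for (int i = 0; i < NR_SIBLINGS; i++)
                pthread_mutex_init(&rq_lock[i], NULL);

        /* Option A: one pass holding all 32 sibling locks at once. */
        lock_all_siblings();
        /* ... inspect every sibling runqueue here ... */
        unlock_all_siblings();

        /* Option B: 32 separate double-lock/unlock cycles, one per sibling. */
        for (int sib = 0; sib < NR_SIBLINGS; sib++) {
                double_lock(this_cpu, sib);
                /* ... inspect one sibling runqueue here ... */
                double_unlock(this_cpu, sib);
        }

        printf("both patterns completed\n");
        return 0;
}

Neither pattern is pretty: option A holds 32 locks at once, option B pays
the lock/unlock cost 32 times over.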
>
> Thinking some more on this, it is also clear that the concept of per_cpu_gain
> for SMT is basically wrong once we get beyond straightforward 2-thread
> hyperthreading. If we have more than 2 thread units per physical core, the
> per-cpu gain per logical core will decrease the more threads are running on
> it. While it's always been obvious that the gain per logical core is entirely
> dependent on the type of workload and won't be a simple 25% increase in cpu
> power, it is clear that even if we assume an "overall" increase in cpu for
> each logical core added, there will be some non-linear function relating the
> power increase to the number of thread units used. :-|
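To put a number on that non-linearity, here is a toy model (my own
illustration, not scheduler code; the flat 25% and the halving of the
marginal gain are assumptions picked only to make the point) comparing a
single flat per-thread gain constant with a diminishing-returns curve as
more thread units on one physical core become busy:

/*
 * Toy model: effective core throughput (as a percentage of one thread
 * running alone) vs. number of busy thread units.  The flat model adds a
 * constant 25% per extra thread; the diminishing model halves the
 * marginal gain with each extra thread.  All numbers are illustrative.
 */
#include <stdio.h>

#define MAX_THREAD_UNITS 8

int main(void)
{
        double flat = 100.0, diminishing = 100.0, marginal_gain = 25.0;

        printf("threads  flat%%   diminishing%%\n");
        printf("   1     %6.1f   %6.1f\n", flat, diminishing);

        for (int k = 2; k <= MAX_THREAD_UNITS; k++) {
                flat += 25.0;               /* constant per-thread gain */
                diminishing += marginal_gain;
                marginal_gain /= 2.0;       /* each extra thread helps less */
                printf("   %d     %6.1f   %6.1f\n", k, flat, diminishing);
        }
        return 0;
}

A single per_cpu_gain constant only matches a curve like this at one
particular number of busy thread units; everywhere else it over- or
under-estimates the gain.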
In the context of having more than 2 sibling CPUs in a domain, doesn't the
current code also suffer from a thundering herd problem? When a high-priority
task goes to sleep, it wakes up *all* of the sleeping siblings, and they then
all fight for the CPU, even though potentially only one of them will win.
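As a crude userspace analogy (pthreads, not the scheduler; all names and
counts below are invented for the sketch), waking every sleeping sibling
when only one of them can actually get the work looks like this:

/*
 * Thundering-herd analogy: NR_SIBLINGS threads sleep on one condition
 * variable, a single work item arrives, and pthread_cond_broadcast()
 * wakes all of them even though only one can win; the losers just count
 * a wasted wakeup and go back to sleep.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NR_SIBLINGS 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wakeup = PTHREAD_COND_INITIALIZER;
static int work_items;
static int wasted_wakeups;
static int done;

static void *sibling(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!done) {
                pthread_cond_wait(&wakeup, &lock);
                if (work_items > 0)
                        work_items--;       /* this sibling "wins" */
                else if (!done)
                        wasted_wakeups++;   /* woken only to find nothing */
        }
        pthread_mutex_unlock(&lock);
        return NULL;
}

int main(void)
{
        pthread_t t[NR_SIBLINGS];

        for (int i = 0; i < NR_SIBLINGS; i++)
                pthread_create(&t[i], NULL, sibling, NULL);
        sleep(1);                           /* let everyone block on the wait */

        pthread_mutex_lock(&lock);
        work_items = 1;                     /* one runnable task appears... */
        pthread_cond_broadcast(&wakeup);    /* ...but every sibling is woken */
        pthread_mutex_unlock(&lock);
        sleep(1);

        pthread_mutex_lock(&lock);
        done = 1;
        pthread_cond_broadcast(&wakeup);
        pthread_mutex_unlock(&lock);

        for (int i = 0; i < NR_SIBLINGS; i++)
                pthread_join(t[i], NULL);

        printf("wasted wakeups: %d\n", wasted_wakeups);
        return 0;
}

With pthread_cond_signal() in place of the first broadcast, only one waiter
would be woken, which is roughly the behaviour you'd want from the scheduler
here.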