* Steven Rostedt <[email protected]> wrote:
> > The numbers make me suspect that Ingo's mutexes are unfair too, but I've
> > not looked at the code yet.
>
> Yes, Ingo's code does exhibit this unfairness. Also interesting is
> that Ingo's original code for his rt_mutexes was fair, and that killed
> performance for high priority processes. I introduced a "lock
> stealing" algorithm that would check whether the process trying to
> grab the lock again had a higher priority than the one about to get
> it, and if so, it would "steal" the lock from it unfairly, as you said.
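
(for reference, the stealing check described above boils down to
something of this shape -- an illustrative sketch only, with made-up
struct and helper names, not the actual rt.c code; as in the kernel,
a lower ->prio value means higher priority:)

	struct sketch_task {
		int prio;			/* lower value = higher priority */
	};

	struct sketch_rt_mutex {
		struct sketch_task *pending_owner;	/* waiter already woken to take the lock */
	};

	/* return 1 if 'task' may steal the lock from the pending owner */
	static int sketch_try_to_steal(struct sketch_rt_mutex *lock,
				       struct sketch_task *task)
	{
		struct sketch_task *pend = lock->pending_owner;

		if (!pend)
			return 0;	/* nobody pending: acquire normally */

		if (task->prio < pend->prio) {
			/* strictly higher priority: take the lock instead */
			lock->pending_owner = task;
			return 1;
		}
		return 0;
	}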
yes, it's unfair - but stock semaphores are unfair too, so what i've
measured is still a fair comparison of the two implementations.
i've eliminated lock stealing from this patch-queue, and i've moved the
point of acquire to after the schedule(). (lock stealing is only
relevant for PI, where we always need to associate an owner with the
lock, hence we pass ownership at the point of release.)
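
(roughly, the slowpath then has the following shape -- an illustrative
sketch only, with made-up sketch_* names, not the actual code; it
assumes the usual <linux/sched.h> definitions:)

	/*
	 * the releasing task does not hand the lock to a particular
	 * waiter, it just wakes one up; the woken task attempts the
	 * acquire only after schedule() returns, so it competes with
	 * every other task for the lock.
	 */
	static void sketch_mutex_lock_slowpath(struct sketch_mutex *lock)
	{
		sketch_add_waiter(lock, current);

		for (;;) {
			set_current_state(TASK_UNINTERRUPTIBLE);
			if (sketch_try_acquire(lock))
				break;
			schedule();	/* point of acquire is after this returns */
		}
		__set_current_state(TASK_RUNNING);

		sketch_remove_waiter(lock, current);
	}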
> Now, you are forgetting about PREEMPT. Yes, on multiple CPUs (which
> is what Ingo is testing), waiting for the other CPU to schedule in
> and run is probably not as bad as it is with PREEMPTION. (Ingo, did
> you have preemption on in these tests?) [...]
no, CONFIG_PREEMPT was disabled in every test result i posted. (but i
get similar results even with it enabled.)
Ingo