On Tue, 9 Oct 2007, mike kravetz wrote:
>
> I did something like this a while ago for another scheduling project.
> A couple 'possible' optimizations to think about are:
> 1) Only scan the remote runqueues once and keep a local copy of the
> remote priorities for subsequent 'scans'. Accessing the remote
> runqueues (CPU-specific cache lines) can be expensive.
You mean to keep the copy for the next two tries?
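Something like this is what I picture for 1) -- only a rough, untested
sketch to make sure we mean the same thing (the function name and the
on-stack NR_CPUS array are just for illustration, not from any real
patch):

static int find_lowest_prio_cpu(struct task_struct *p)
{
	/*
	 * Illustration only -- a real version would not put an
	 * NR_CPUS-sized array on the kernel stack.
	 */
	int prio_snap[NR_CPUS];
	int cpu, best_cpu = -1, best_prio = p->prio;

	/* one pass over the remote runqueues (the expensive part) */
	for_each_online_cpu(cpu)
		prio_snap[cpu] = cpu_rq(cpu)->curr->prio;

	/*
	 * Later "scans" only look at the local copy.  Lower number
	 * means higher priority, so we want the CPU whose current
	 * task has the largest prio value that p can still preempt.
	 */
	for_each_online_cpu(cpu) {
		if (prio_snap[cpu] > best_prio) {
			best_prio = prio_snap[cpu];
			best_cpu = cpu;
		}
	}
	return best_cpu;
}

That way the later tries only re-read the local snapshot instead of
pulling the remote cache lines again.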
> 2) When verifying priorities, just perform spin_trylock() on the remote
> runqueue. If you can immediately get it great. If not, it implies
> someone else is messing with the runqueue and there is a good chance
> the data you pre-fetched (curr->Priority) is invalid. In this case
> it might be faster to just 'move on' to the next candidate runqueue/CPU.
> i.e. The next highest priority that the new task can preempt.
I was a bit scared of grabbing the lock anyway, because that's another
remote cache line to touch, and a write this time. So only grabbing the
lock when needed would save us from dirtying the runqueue lock cache line
for each CPU.
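For 2) I guess it would look roughly like this (again untested, and the
helper name is made up):

static int try_push_to_cpu(struct task_struct *p, int cpu, int snap_prio)
{
	struct rq *rq = cpu_rq(cpu);
	int pushed = 0;

	/*
	 * Contended: somebody else is messing with this runqueue, so
	 * the prefetched priority is probably stale -- move on.
	 */
	if (!spin_trylock(&rq->lock))
		return 0;

	/* re-check the prefetched priority now that we hold the lock */
	if (rq->curr->prio == snap_prio && p->prio < rq->curr->prio) {
		/* ... actually migrate p over and resched the CPU ... */
		pushed = 1;
	}
	spin_unlock(&rq->lock);
	return pushed;
}

If the trylock fails we just fall through to the next candidate CPU, so
at least we never spin on a remote rq->lock that is already being
hammered.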
>
> Of course, these 'optimizations' would change the algorithm. Trying to
> make any decision based on data that is changing is always a crap shoot. :)
Yes indeed. The aim for now is to solve the latencies that you've been
seeing. But really, there are still (small) holes that can cause a
latency if a schedule happens "just right". Hopefully the final result of
this work will close them too.
-- Steve