Re: [REPORT] cfs-v4 vs sd-0.44

On Thu, Apr 26, 2007 at 10:57:48AM -0700, Li, Tong N wrote:
> On Wed, 2007-04-25 at 22:13 +0200, Willy Tarreau wrote:
> > On Wed, Apr 25, 2007 at 04:58:40AM -0700, William Lee Irwin III wrote:
> > 
> > > Adjustments to the lag computation for arrivals and departures
> > > during execution are among the missing pieces. Some algorithmic devices
> > > are also needed to account for the varying growth rates of the lags of
> > > tasks waiting to run, which arise from differing priorities/weights.
> > 
> > That was the principle of my proposal: sort tasks by expected completion
> > time and use +/- credit to compensate for slices that were too large or
> > too short.
> > 
> > Willy
> 
> Yeah, it's a good algorithm. It's a variant of earliest deadline first
> (EDF). There are also similar ones in the literature, such as earliest
> eligible virtual deadline first (EEVDF) and biased virtual finishing
> time (BVFT). Based on wli's explanation, I think Ingo's approach would
> also fall into this category. With careful design, all such algorithms
> that order tasks based on some notion of time can achieve good fairness.
> There are subtle differences, though. Some algorithms of this type can
> achieve a constant lag bound, while others have only a constant positive
> lag bound but an O(N) negative lag bound,

Anyway, we're working in discrete time, not continuous time, so lag is
unavoidable; at best it can be bounded and compensated for. The first
time I thought about this algorithm, I was looking at a line drawn with
the Bresenham algorithm. That algorithm is all about compensating the
error against a perfect expectation: too high, too far, too high, too
far... I thought the line could represent a task's progress as a
function of time, and the pixels the periods the task spends on the CPU.
On short intervals you lose; on large ones you're very close to the
ideal case.
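To make the analogy concrete, here is a toy sketch in C. All names
(task_acct, tick) are invented for illustration, this is not scheduler
code: the ideal "line" is the CPU time a task should have received as
a function of time, and each tick we either grant a slice ("plot a
pixel") or not, carrying the error forward so that over long intervals
the granted time tracks the ideal share.

#include <stdio.h>

struct task_acct {
	long weight;	/* this task's share out of total_weight */
	long error;	/* accumulated deviation from the ideal line */
};

/* One tick: the ideal line rises by weight/total_weight of a slice;
 * grant a slice only once a whole slice's worth has accumulated. */
static int tick(struct task_acct *t, long total_weight)
{
	t->error += t->weight;
	if (t->error >= total_weight) {
		t->error -= total_weight;	/* "plot a pixel" */
		return 1;
	}
	return 0;	/* not yet owed a full slice */
}

int main(void)
{
	struct task_acct a = { .weight = 2 }, b = { .weight = 1 };
	long total = a.weight + b.weight;
	int slices_a = 0, slices_b = 0;

	for (int i = 0; i < 9; i++) {
		slices_a += tick(&a, total);
		slices_b += tick(&b, total);
	}
	/* 9 ticks at a 2:1 weight ratio: prints "a=6 b=3". */
	printf("a=%d b=%d\n", slices_a, slices_b);
	return 0;
}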

> meaning some tasks could receive
> much more CPU time than they would under ideal fairness when the number
> of tasks is high.

It's not a problem that a task receives much more CPU than it should,
provided that (a rough sketch in C follows the list):

  a) it may only do so for a short and bounded time, typically less than
     the maximum acceptable latency for other tasks;

  b) the excess CPU it received is accounted for, so that it is
     deducted from the next passes.
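
Here is the sketch of (b), with invented names (sched_credit,
allowed_ns, account), not taken from any real scheduler: remember the
excess as a debt and deduct it from the slice the task is allowed on
subsequent passes, which is what keeps the excess bounded per (a).

struct sched_credit {
	long slice_ns;	/* fair slice granted per pass */
	long debt_ns;	/* excess consumed earlier, still to repay */
};

/* How long this task may run on the current pass. */
static long allowed_ns(const struct sched_credit *c)
{
	long allowed = c->slice_ns - c->debt_ns;

	return allowed > 0 ? allowed : 0;	/* deeply indebted: sit this pass out */
}

/* Charge what was actually consumed; overshoot becomes new debt. */
static void account(struct sched_credit *c, long ran_ns)
{
	c->debt_ns += ran_ns - c->slice_ns;
	if (c->debt_ns < 0)
		c->debt_ns = 0;	/* ran less than its slice: debt fully repaid */
}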


> On the other hand, the log(N) complexity of this type of algorithm has
> been a concern in the research community. This motivated O(1)

I was in favor of O(1) algorithms for a long time, but seeing how dumb
the things you can do in O(1) are compared to O(log N), I definitely
changed my mind. Also, in O(log N) designs, many common operations are
still O(1). E.g. you insert into a time-ordered tree in O(log N), but
you read its leftmost entry in O(1), and you keep the ability to change
its contents in O(log N) when you need to.
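
For instance, here is a toy sketch of that pattern (invented types,
not the kernel's rbtree API, and rebalancing is omitted): tasks are
keyed by expected completion time, and a cached pointer to the
leftmost node makes electing the next task a single pointer read.

#include <stddef.h>

struct rq_node {
	unsigned long long deadline;	/* key: expected completion time */
	struct rq_node *left, *right;	/* tree links (balancing omitted) */
};

struct runqueue {
	struct rq_node *root;
	struct rq_node *leftmost;	/* cached minimum, kept up to date */
};

/* O(log N) on a balanced tree: walk down and link the new node in,
 * updating the cached minimum when we never branched right. */
static void rq_insert(struct runqueue *rq, struct rq_node *n)
{
	struct rq_node **link = &rq->root;
	int is_leftmost = 1;

	n->left = n->right = NULL;
	while (*link) {
		if (n->deadline < (*link)->deadline) {
			link = &(*link)->left;
		} else {
			link = &(*link)->right;
			is_leftmost = 0;
		}
	}
	*link = n;
	if (is_leftmost)
		rq->leftmost = n;	/* new earliest deadline */
}

/* O(1): the next task to run is just the cached leftmost node. */
static struct rq_node *rq_pick_next(const struct runqueue *rq)
{
	return rq->leftmost;
}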

Last but not least, spending a hundred cycles a few thousand times a
second is nothing compared to electing the wrong task for a full
time-slice.

Willy
