Peter Williams wrote:
> Willy Tarreau wrote:
>> On Fri, Apr 20, 2007 at 10:10:45AM +1000, Peter Williams wrote:
>>> Ingo Molnar wrote:
>>>>  - bugfix: use constant offset factor for nice levels instead of
>>>>    sched_granularity_ns. Thus nice levels work even if someone sets
>>>>    sched_granularity_ns to 0. NOTE: nice support is still naive, i'll
>>>>    address the many nice level related suggestions in -v4.
>>> I have a suggestion I'd like to make that addresses both nice and
>>> fairness at the same time.  As I understand it, the basic principle
>>> behind this scheduler is to work out a time by which a task should
>>> make it onto the CPU and then place it into an ordered list (based on
>>> this value) of tasks waiting for the CPU.  I think that this is a
>>> great idea and my suggestion is with regard to a method for working
>>> out this time that takes into account both fairness and nice.
>>>
>>> First, suppose we have the following metrics available in addition to
>>> what's already provided:
>>>
>>> rq->avg_weighted_load /* a running average of the weighted load on
>>> the CPU */
>>>
>>> p->avg_cpu_per_cycle /* the average time in nsecs that p spends on
>>> the CPU each scheduling cycle */
>>>
>>> where a scheduling cycle for a task starts when it is placed on the
>>> queue after waking or being preempted and ends when it is taken off
>>> the CPU either voluntarily or after being preempted.  So
>>> p->avg_cpu_per_cycle is just the average amount of time p spends on
>>> the CPU each time it gets on to the CPU.  (Sorry for the long
>>> explanation here, but I just wanted to make sure there was no chance
>>> that "scheduling cycle" would be construed as some mechanism being
>>> imposed on the scheduler.)
>>> We can then define:
>>>
>>> 	effective_weighted_load = max(rq->raw_weighted_load,
>>> 				      rq->avg_weighted_load)
>>>
>>> If p is just waking (i.e. it's not on the queue and its load_weight
>>> is not included in rq->raw_weighted_load) and we need to queue it, we
>>> say that the maximum time (in all fairness) that p should have to
>>> wait to get onto the CPU is:
>>>
>>> 	expected_wait = p->avg_cpu_per_cycle * effective_weighted_load /
>>> 			p->load_weight
>>>
>>> Calculating p->avg_cpu_per_cycle costs one add, one multiply and one
>>> shift right per scheduling cycle of the task.  An additional cost is
>>> that you need a shift right to get the nanosecond value from the
>>> value stored in the task struct (i.e. the above code is simplified to
>>> give the general idea).  The average would be based on the number of
>>> cycles rather than on time, and (happily) this simplifies the
>>> calculations.
>>>
>>> If the expected time to get onto the CPU (i.e. expected_wait plus the
>>> current time) for p is earlier than the equivalent time for the
>>> currently running task then preemption of that task would be
>>> justified.
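
To make the mechanics concrete, here's a rough, untested userspace
sketch of the above.  All the struct, field and helper names (and the
AVG_SHIFT constant) are invented for illustration; only the arithmetic
follows the description:

/* Untested sketch; all names here are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

#define AVG_SHIFT 3	/* average decays over 2^3 == 8 scheduling cycles */

struct mock_task {
	uint64_t avg_cpu_per_cycle;	/* stored scaled by 2^AVG_SHIFT */
	uint64_t load_weight;		/* from the task's nice level */
};

struct mock_rq {
	uint64_t raw_weighted_load;	/* sum of queued tasks' load_weight */
	uint64_t avg_weighted_load;	/* running average of the above */
};

/* Per scheduling cycle: one shift right, one multiply and one add. */
static void update_avg_cpu_per_cycle(struct mock_task *p, uint64_t delta_ns)
{
	p->avg_cpu_per_cycle = (p->avg_cpu_per_cycle >> AVG_SHIFT) *
			       ((1 << AVG_SHIFT) - 1) + delta_ns;
}

/* The extra shift right that recovers the nanosecond value. */
static uint64_t avg_cpu_per_cycle_ns(const struct mock_task *p)
{
	return p->avg_cpu_per_cycle >> AVG_SHIFT;
}

static uint64_t effective_weighted_load(const struct mock_rq *rq)
{
	return rq->raw_weighted_load > rq->avg_weighted_load ?
	       rq->raw_weighted_load : rq->avg_weighted_load;
}

/* The maximum time, in all fairness, that p should wait for the CPU. */
static uint64_t expected_wait_ns(const struct mock_task *p,
				 const struct mock_rq *rq)
{
	return avg_cpu_per_cycle_ns(p) * effective_weighted_load(rq) /
	       p->load_weight;
}

/*
 * Preemption test for a waking task: would it be due on the CPU
 * before the currently running task's expected "on CPU" time?
 */
static int should_preempt(const struct mock_task *waking,
			  const struct mock_rq *rq, uint64_t now_ns,
			  uint64_t curr_expected_on_cpu_ns)
{
	return now_ns + expected_wait_ns(waking, rq) < curr_expected_on_cpu_ns;
}

int main(void)
{
	struct mock_task p = { .avg_cpu_per_cycle = 0, .load_weight = 1024 };
	struct mock_rq rq = { .raw_weighted_load = 3072,
			      .avg_weighted_load = 2048 };

	update_avg_cpu_per_cycle(&p, 2000000);	/* ran 2ms this cycle */
	printf("expected wait: %llu ns\n",
	       (unsigned long long)expected_wait_ns(&p, &rq));
	return should_preempt(&p, &rq, 0, 1000000) ? 0 : 1;
}

(The scaled storage is one way to get the stated per-cycle cost of one
shift, one multiply and one add, plus one extra shift to read the
nanosecond value back.)
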
>> I 100% agree on this method because I came to nearly the same
>> conclusion on paper about 1 year ago.  What I'd like to add is that
>> the expected wake up time is not the most precise criterion for
>> fairness.
> It's not the expected wake up time being computed.  It's the expected
> time that the task is selected to run after being put on the queue
> (either as a result of waking or being pre-empted).
>
> I think that your comments below are mainly invalid because of this
> misunderstanding.
>> The expected completion time is better.  When you have one task t1
>> which is expected to run for T1 nanosecs and another task t2 which is
>> expected to run for T2, what is important to the user, for fairness,
>> is when each task completes its work.  If t1 should wake up at time W1
>> and t2 at W2, then the list should be ordered by comparing W1+T1 and
>> W2+T2.
> This is a point where the misunderstanding results in a wrong
> conclusion.
>
> However, the notion of expected completion time is useful.  I'd
> calculate the expected completion time for the task as it goes on to
> the CPU, and if, while it is running, a new task is queued whose
> expected "on CPU" time is before the current task's expected end time,
> you can think about preempting when the new task's "on CPU" time
> arrives -- how hard/easy this would be to implement is moot.
>> What I like with this method is that it remains fair with nice tasks,
>> because in order to renice a task tN, you just have to change TN, and
>> if it has to run shorter, it can be executed before CPU hogs and stay
>> there for a very short time.
>>
>> Also, I found that if we want to respect interactivity, we must
>> maintain a credit for each task.
> This is one point where the misunderstanding results in an invalid
> conclusion.  There is no need for credits, as the use of the average
> time on CPU each cycle makes the necessary corrections that credits
> would be used to address.
>> It is a bounded amount of CPU time left to be used.  When the task t3
>> has the right to use T3 nsecs, and wakes up at W3, if it does not
>> spend T3 nsecs on the CPU, but only N3 < T3, then we have a credit C3
>> computed like this:
>>
>> 	C3 = MIN(MAX_CREDIT, C3 + T3 - N3)
>>
>> And if a CPU hog uses more than its assigned time slice due to
>> scheduler resolution, then C3 can become negative (and bounded too):
>>
>> 	C3 = MAX(MIN_CREDIT, C3 + T3 - N3)
>>
>> Now how is the credit used?  Simple: the credit is a part of a
>> timeslice, so it's systematically added to the computed timeslice when
>> ordering the tasks.  So we indeed order tasks t1 and t2 by W1+T1+C1
>> and W2+T2+C2.
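
For reference, this bookkeeping amounts to something like the following
untested sketch (the names and the bounds are invented):

/* Untested sketch of the proposed credit bookkeeping; names invented. */
#include <stdint.h>
#include <stdio.h>

#define MAX_CREDIT   20000000LL	/* illustrative bounds only, in nsecs */
#define MIN_CREDIT  -20000000LL

struct mock_task {
	int64_t credit;	/* C: bounded time the task is owed (+) or owes (-) */
};

/*
 * After a task entitled to T nsecs actually used N nsecs on the CPU,
 * fold the difference into its credit, clamped at both bounds.
 */
static void update_credit(struct mock_task *t, int64_t T, int64_t N)
{
	int64_t c = t->credit + T - N;

	if (c > MAX_CREDIT)
		c = MAX_CREDIT;
	else if (c < MIN_CREDIT)
		c = MIN_CREDIT;
	t->credit = c;
}

/* The queue would then be ordered by W + T + C rather than W + T. */
static int64_t queue_key(int64_t W, int64_t T, const struct mock_task *t)
{
	return W + T + t->credit;
}

int main(void)
{
	struct mock_task t = { .credit = 0 };

	update_credit(&t, 5000000, 2000000);	/* used 2ms of a 5ms slice */
	printf("credit: %lld, key: %lld\n", (long long)t.credit,
	       (long long)queue_key(0, 5000000, &t));
	return 0;
}
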
> I think this is overcomplicating matters.
>
> However, some compensating credit might be in order for tasks that get
> pre-empted, e.g. base their next expected "on CPU" time on the
> difference between their average on CPU time and the amount they got
> to use before they were pre-empted.
Oops.  I made a mistake here: it should be just the amount they got
this time if that's less than their average, and otherwise the average.
That way, if they're booted off early they get back on sooner, and if
they get booted off late in the game they wait their normal amount.
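
In code, the corrected compensation would be something like this
untested sketch (names invented):

/* Untested sketch of the corrected compensation rule; names invented. */
#include <stdint.h>
#include <stdio.h>

/*
 * When a task is pre-empted, base its next expected wait on the
 * smaller of what it actually got this time and its average, so that
 * a task booted off early gets back on sooner and a task booted off
 * late waits its normal amount.
 */
static uint64_t next_expected_wait_ns(uint64_t got_this_time_ns,
				      uint64_t avg_cpu_per_cycle_ns,
				      uint64_t effective_weighted_load,
				      uint64_t load_weight)
{
	uint64_t basis = got_this_time_ns < avg_cpu_per_cycle_ns ?
			 got_this_time_ns : avg_cpu_per_cycle_ns;

	return basis * effective_weighted_load / load_weight;
}

int main(void)
{
	/* booted off after 1ms against a 2ms average, 3x weighted load */
	printf("%llu ns\n", (unsigned long long)
	       next_expected_wait_ns(1000000, 2000000, 3072, 1024));
	return 0;
}
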
Peter
--
Peter Williams [email protected]
"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce