* Peter Williams <[email protected]> wrote:
> The problem:
>
> The current scheduler implementation maps 40 nice values and 10 bonus
> values into only 40 priority slots on the run queues. This results in
> the dynamic priorities of tasks at either end of the nice scale being
> squished up. E.g. all tasks with nice in the range -20 to -16 and the
> maximum of 10 bonus points will end up with a dynamic priority of
> MAX_RT_PRIO and all tasks with nice in the range 15 to 19 and no bonus
> points will end up with a dynamic priority of MAX_PRIO - 1.
>
> Although the fact that niceness is primarily implemented by time slice
> size means that this will have little or no adverse effect on the long
> term allocation of CPU resources due to niceness, it could adversely
> affect latency as it will interfere with preemption.
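[For reference, below is a small standalone model of the squashing described
above. The constants mirror the 2.6 O(1) scheduler (MAX_RT_PRIO, MAX_PRIO,
MAX_BONUS), but nice_to_static_prio() and model_effective_prio() are
illustrative helpers written for this note, not the kernel functions
themselves.]

/*
 * Simplified, standalone model of the nice/bonus -> dynamic priority
 * mapping being discussed. Constants match the 2.6 O(1) scheduler;
 * the helpers are illustrative, not the kernel code.
 */
#include <stdio.h>

#define MAX_RT_PRIO	100
#define MAX_PRIO	140
#define MAX_BONUS	10

/* nice -20..19 maps to static priority 100..139 */
static int nice_to_static_prio(int nice)
{
	return MAX_RT_PRIO + 20 + nice;
}

/*
 * Roughly what effective_prio() does for non-RT tasks: subtract the
 * interactivity bonus (re-centred around zero) from the static
 * priority, then clamp into the normal priority range.
 */
static int model_effective_prio(int nice, int bonus /* 0..MAX_BONUS */)
{
	int prio = nice_to_static_prio(nice) - (bonus - MAX_BONUS / 2);

	if (prio < MAX_RT_PRIO)
		prio = MAX_RT_PRIO;
	if (prio > MAX_PRIO - 1)
		prio = MAX_PRIO - 1;
	return prio;
}

int main(void)
{
	int nice;

	/* nice -20..-16 with the full 10-point bonus all clamp to 100 ... */
	for (nice = -20; nice <= -16; nice++)
		printf("nice %3d, bonus 10 -> prio %d\n",
		       nice, model_effective_prio(nice, 10));

	/* ... and nice 15..19 with no bonus all clamp to 139. */
	for (nice = 15; nice <= 19; nice++)
		printf("nice %3d, bonus  0 -> prio %d\n",
		       nice, model_effective_prio(nice, 0));
	return 0;
}

[Running it shows nice -20..-16 with the maximum bonus all collapsing to
priority 100 (MAX_RT_PRIO), and nice 15..19 with no bonus all collapsing
to 139 (MAX_PRIO - 1), which is the squashing described in the quote.]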
this property of the priority distribution was intentional on my part: i
wanted an easy way to test the 'no priority boosting downwards'
(nice +19) and 'no priority boosting upwards' (nice -20) conditions. But
i like your patch, because it simplifies effective_prio() a bit, and
with SCHED_BATCH we'll have the 'no boosting' property anyway. Could you
redo the patch against the current scheduler queue in -mm, so that we
can try it out in -mm?
Btw., another user-visible effect is that task_prio() will return the
new range, which will be visible in e.g. 'top'. I don't think it will be
confusing though.
Ingo
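
[As a rough illustration of the task_prio() point above: the value reported
to userspace is the dynamic priority offset by MAX_RT_PRIO, so a wider
dynamic-priority range directly widens the numbers top displays. A minimal
sketch, not the exact kernel source:]

#define MAX_RT_PRIO	100

/*
 * Sketch of what task_prio() computes: the priority reported to
 * userspace (and shown by tools like top) is the dynamic priority
 * relative to MAX_RT_PRIO, i.e. 0..39 for non-RT tasks with the
 * current mapping. If effective_prio() can produce values outside
 * 100..139, the range visible in top widens accordingly.
 */
static inline int model_task_prio(int dynamic_prio)
{
	return dynamic_prio - MAX_RT_PRIO;
}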