I spied a few more issues from http://lkml.org/lkml/2007/11/20/590.
Patch is below; a small user-space sketch of the sentinel problem follows it.
Regards,
-Greg
-----------------
Include cpu 0 in the search: with 0 doubling as the "not found" value for
lowest_cpu, cpu 0 could never be reported as the lowest, so use -1 as the
sentinel instead. Also eliminate the redundant cpu_set(), since the bit
should already be set in the mask.
Signed-off-by: Gregory Haskins <[email protected]>
---
kernel/sched_rt.c | 7 +++----
1 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index 28feeff..fbf4fb1 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -297,7 +297,7 @@ static DEFINE_PER_CPU(cpumask_t, local_cpu_mask);
static int find_lowest_cpus(struct task_struct *task, cpumask_t *lowest_mask)
{
int lowest_prio = -1;
- int lowest_cpu = 0;
+ int lowest_cpu = -1;
int count = 0;
int cpu;
@@ -319,7 +319,7 @@ static int find_lowest_cpus(struct task_struct *task, cpumask_t *lowest_mask)
* and the count==1 will cause the algorithm
* to use the first bit found.
*/
- if (lowest_cpu) {
+ if (lowest_cpu != -1) {
cpus_clear(*lowest_mask);
cpu_set(rq->cpu, *lowest_mask);
}
@@ -335,7 +335,6 @@ static int find_lowest_cpus(struct task_struct *task, cpumask_t *lowest_mask)
lowest_cpu = cpu;
count = 0;
}
- cpu_set(rq->cpu, *lowest_mask);
count++;
} else
cpu_clear(cpu, *lowest_mask);
@@ -346,7 +345,7 @@ static int find_lowest_cpus(struct task_struct *task, cpumask_t *lowest_mask)
* runqueues that were of higher prio than
* the lowest_prio.
*/
- if (lowest_cpu) {
+ if (lowest_cpu != -1) {
/*
* Perhaps we could add another cpumask op to
* zero out bits. Like cpu_zero_bits(cpumask, nrbits);
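For anyone skimming, here is a tiny user-space sketch (my own illustration,
not kernel code; find_first_eligible() is a made-up stand-in for the scan in
find_lowest_cpus()) of why a 0 sentinel silently drops cpu 0 and why the
patch switches to -1:

#include <stdio.h>

/* Return the first "eligible" cpu, or -1 if none qualifies. */
static int find_first_eligible(const int *eligible, int ncpus)
{
	int found = -1;	/* -1 means "nothing found yet"; 0 is a valid cpu */
	int cpu;

	for (cpu = 0; cpu < ncpus; cpu++) {
		if (eligible[cpu]) {
			found = cpu;
			break;
		}
	}
	return found;
}

int main(void)
{
	int eligible[4] = { 1, 0, 0, 0 };	/* only cpu 0 qualifies */
	int cpu = find_first_eligible(eligible, 4);

	/*
	 * With the old "if (lowest_cpu)" style of test, a result of 0 would
	 * have looked like "nothing found" and cpu 0 would be skipped;
	 * testing against -1 keeps cpu 0 in the search.
	 */
	if (cpu != -1)
		printf("first eligible cpu: %d\n", cpu);
	else
		printf("no eligible cpu\n");

	return 0;
}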