I minimized my patch and against to 2.6.12-rc4 this time, see below.
The new schedstat fields are for the test propose only, so I removed
them completedly from patch. Theoritically, requeue_task() is always
cheaper than dequeue_task() followed by enqueue_task(). So, if 99% of
priority recalculation trigger requeue_task(), it will save.
In addition, my load is to build the kernel, which took around 30
minutes with around 30% CPU usage on 2x2 processors (duel processors
with HT enable).
Here is the statistics:
CPU0: priority_changed (669 times), priority_unchanged(335,138 times)
CPU1: priority_changed (784 times), priority_unchanged(342,419 times)
CPU2: priority_changed (782 times), priority_unchanged(283,494 times)
CPU3: priority_changed (872 times), priority_unchanged(365,865 times)
Thanks,
-chen
/*=====Patch=====*/
--- linux-2.6.12-rc4.orig/kernel/sched.c 2005-05-19 14:57:55.000000000 -0700
+++ linux-2.6.12-rc4/kernel/sched.c 2005-05-19 23:47:22.000000000 -0700
@@ -2613,7 +2613,7 @@
struct list_head *queue;
unsigned long long now;
unsigned long run_time;
- int cpu, idx;
+ int cpu, idx, prio;
/*
* Test if we are atomic. Since do_exit() needs to call into
@@ -2735,9 +2735,17 @@
delta = delta * (ON_RUNQUEUE_WEIGHT * 128 / 100) / 128;
array = next->array;
- dequeue_task(next, array);
+ prio = next->prio;
+
recalc_task_prio(next, next->timestamp + delta);
- enqueue_task(next, array);
+
+ if (unlikely(prio != next->prio))
+ {
+ dequeue_task(next, array);
+ enqueue_task(next, array);
+ }
+ else
+ requeue_task(next, array);
}
next->activated = 0;
switch_tasks:
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
[Index of Archives]
[Kernel Newbies]
[Netfilter]
[Bugtraq]
[Photo]
[Stuff]
[Gimp]
[Yosemite News]
[MIPS Linux]
[ARM Linux]
[Linux Security]
[Linux RAID]
[Video 4 Linux]
[Linux for the blind]
[Linux Resources]