From: john stultz <[email protected]>
After discussing with Thomas over IRC, it seems the issue is that the sched
tick fires on every cpu at the same time, causing extra lock contention.
This smaller change adds an extra offset per cpu so the ticks don't
line up. This patch also drops the idle latency from 40us down to under
20us.
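
To illustrate the offset math, here is a minimal userspace sketch (not
the kernel code itself; HZ=1000 and NR_CPUS=8 are assumed purely for the
example). Each cpu's tick is shifted by (tick_period / 2) / NR_CPUS *
cpu_id, spreading the ticks across the first half of the period instead
of letting them all fire in lockstep:

#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_SEC	1000000000ULL
#define HZ		1000	/* assumed tick rate for this example */
#define NR_CPUS		8	/* assumed cpu count for this example */

int main(void)
{
	uint64_t tick_period = NSEC_PER_SEC / HZ;	/* 1000000 ns */
	uint64_t step = (tick_period >> 1) / NR_CPUS;	/* 62500 ns */
	int cpu;

	/* same arithmetic as the patch: offset = step * smp_processor_id() */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu%d tick offset: %llu ns\n", cpu,
		       (unsigned long long)(step * cpu));
	return 0;
}

With these example numbers the per-cpu ticks land 62.5us apart, all
within the first half of the 1ms period, so no two cpus take the tick at
the same instant.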
Signed-off-by: john stultz <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
---
kernel/time/tick-sched.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
Index: linux-2.6.22-rc4-mm/kernel/time/tick-sched.c
===================================================================
--- linux-2.6.22-rc4-mm.orig/kernel/time/tick-sched.c 2007-06-23 14:38:56.000000000 +0200
+++ linux-2.6.22-rc4-mm/kernel/time/tick-sched.c 2007-06-23 14:38:58.000000000 +0200
@@ -573,6 +573,7 @@ void tick_setup_sched_timer(void)
 {
 	struct tick_sched *ts = &__get_cpu_var(tick_cpu_sched);
 	ktime_t now = ktime_get();
+	u64 offset;
 
 	/*
 	 * Emulate tick processing via per-CPU hrtimers:
@@ -581,8 +582,12 @@ void tick_setup_sched_timer(void)
 	ts->sched_timer.function = tick_sched_timer;
 	ts->sched_timer.cb_mode = HRTIMER_CB_IRQSAFE_NO_SOFTIRQ;
 
-	/* Get the next period */
+	/* Get the next period (per cpu) */
 	ts->sched_timer.expires = tick_init_jiffy_update();
+	offset = ktime_to_ns(tick_period) >> 1;
+	do_div(offset, NR_CPUS);
+	offset *= smp_processor_id();
+	ts->sched_timer.expires = ktime_add_ns(ts->sched_timer.expires, offset);
 
 	for (;;) {
 		hrtimer_forward(&ts->sched_timer, now, tick_period);
--