Hi,
On Sun, Jun 10, 2007 at 09:44:05AM -0000, Thomas Gleixner wrote:
> From: john stultz <[email protected]>
>
> After discussing w/ Thomas over IRC, it seems the issue is the sched
> tick fires on every cpu at the same time, causing extra lock contention.
Hmm, the cpu-specific offset calculation isn't too expensive, hopefully?
(div/mul in patch, maybe this could be done differently)
And is it guaranteed that the do_div() compiles down to nothing
on non-SMP? Would be good to verify this.
And what about the calculation order? Should the multiply come before the
division to minimize rounding error?
(for a timer tick it probably doesn't matter, however)
And of course OTOH doing it the other way might lead to overflows...
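To make the trade-off concrete, here is a minimal userspace sketch of the two
orderings. The values (half a 1 ms tick period, a 64-CPU box) are purely
illustrative, not taken from the patch; the point is only that divide-first
loses up to NR_CPUS-1 remainder nanoseconds per step, while multiply-first is
exact to the final division but enlarges the intermediate value:

```c
#include <stdint.h>

#define NR_CPUS 64  /* illustrative value, not from the patch */

/* Divide first (the order the patch uses): the remainder of the
 * division is dropped once, then scaled up by the cpu id. */
static uint64_t stagger_div_first(uint64_t half_period_ns, unsigned int cpu)
{
	return (half_period_ns / NR_CPUS) * cpu;
}

/* Multiply first: rounding happens only once, at the end, but the
 * intermediate product half_period_ns * cpu is larger and could
 * overflow for big periods or huge cpu counts. */
static uint64_t stagger_mul_first(uint64_t half_period_ns, unsigned int cpu)
{
	return (half_period_ns * cpu) / NR_CPUS;
}
```

With half_period_ns = 500000 (HZ=1000) and cpu = 63, divide-first yields
492156 ns while multiply-first yields 492187 ns, a 31 ns difference, which
for a timer tick is indeed in the noise.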
> This smaller change, adds an extra offset per cpu so the ticks don't
> line up. This patch also drops the idle latency from 40us down to under
> 20us.
Very nice, thanks!
> +	/* Get the next period (per cpu) */
> ts->sched_timer.expires = tick_init_jiffy_update();
> + offset = ktime_to_ns(tick_period) >> 1;
> + do_div(offset, NR_CPUS);
> + offset *= smp_processor_id();
Andreas Mohr
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/