On Fri, 2006-11-03 at 18:47 +0100, Andi Kleen wrote:
> > So unless there is some other array that is sized by NR_IRQs
> > in the context switch path which could account for this in
> > other ways. It looks like you just got unlucky.
>
>
> TLB/cache profiling data might be useful?
> My bet would be more on cache effects.
>
The TLB miss, cache miss, and page walk profiles did not change
when I measured them.
I suspect the overhead of pinning the parent and child
processes to specific CPUs accounts for the change in time
observed. Lmbench includes this overhead in the fork time it
reports, and I had chosen the lmbench option that places the
parent and child processes on specific CPUs. When I instead
pick the lmbench option that lets the scheduler place the
parent and child, the fork time stays unchanged. I wonder
whether the increase in time is in sched_setaffinity; a sketch
of the pinning pattern is below.
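
For reference, a minimal sketch (not lmbench's actual code) of
the pinning pattern I mean, using sched_setaffinity(2). The
pin_to_cpu() helper and the CPU numbers 0 and 1 are mine, chosen
for illustration; the point is that the affinity calls sit on the
timed fork path, so their cost ends up in the reported fork
latency.

/*
 * Sketch: parent pins itself to one CPU, forks, and the child
 * pins itself to another, so sched_setaffinity() runs inside
 * the timed fork+exit path.  CPU numbers are arbitrary.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static void pin_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(0, sizeof(set), &set) < 0) {
		perror("sched_setaffinity");
		exit(1);
	}
}

int main(void)
{
	pin_to_cpu(0);		/* parent on CPU 0 */

	pid_t pid = fork();
	if (pid < 0) {
		perror("fork");
		exit(1);
	}
	if (pid == 0) {
		pin_to_cpu(1);	/* child on CPU 1 */
		_exit(0);
	}
	waitpid(pid, NULL, 0);
	return 0;
}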
Tim