On 10/11/05, Ingo Molnar <[email protected]> wrote:
>
> * Mark Knecht <[email protected]> wrote:
>
> > > ( softirq-timer/0-3 |#0): new 3997 us maximum-latency critical section.
> >
> > So the root cause of this 4 ms delay is the 250 Hz timer. If I change
> > the system to use the 1 kHz timer then the time in this section drops,
> > as expected, to 1 ms.
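(Aside: those figures are simply the timer tick period, 1/HZ. A throwaway
user-space sketch of the arithmetic, not taken from the kernel itself:

    #include <stdio.h>

    /* the reported latency is just the period of the periodic tick */
    int main(void)
    {
            printf("HZ=250  -> tick every %d us\n", 1000000 / 250);   /* 4000 us ~ 4 ms */
            printf("HZ=1000 -> tick every %d us\n", 1000000 / 1000);  /* 1000 us ~ 1 ms */
            return 0;
    }
)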
>
> this was a bug in the critical-section-latency measurement code of x64.
> The timer irq is the one that leaves userspace running for the longest
> time, between two kernel calls.
>
> I have fixed these bugs in -rc4-rt1, could you try it? It should report
> much lower latencies, regardless of PM settings.
>
> Ingo
>
Ingo,
This test now reports much more interesting data:
( dmesg-8010 |#0): new 13 us maximum-latency critical section.
=> started at timestamp 117628604: <do_IRQ+0x29/0x50>
=> ended at timestamp 117628618: <do_IRQ+0x39/0x50>
Call Trace: <IRQ> <ffffffff8014c42c>{check_critical_timing+492}
<ffffffff8014c64b>{sub_preempt_count_ti+75} <ffffffff80110159>{do_IRQ+57}
<ffffffff8010e16c>{ret_from_intr+0} <EOI>
<ffffffff80249a75>{copy_page+5}
<ffffffff801671f1>{do_no_page+737}
<ffffffff801674ee>{__handle_mm_fault+414}
<ffffffff8014c64b>{sub_preempt_count_ti+75}
<ffffffff803eac99>{do_page_fault+1049}
<ffffffff8014c64b>{sub_preempt_count_ti+75}
<ffffffff80168eee>{vma_link+286}
<ffffffff8010e4c1>{error_exit+0} <ffffffff8024a866>{__clear_user+22}
<ffffffff801a46fb>{padzero+27} <ffffffff801a4b42>{load_elf_interp+850}
<ffffffff801a5b1d>{load_elf_binary+3341}
<ffffffff8014c64b>{sub_preempt_count_ti+75}
<ffffffff8014c64b>{sub_preempt_count_ti+75}
<ffffffff801a4e10>{load_elf_binary+0}
<ffffffff80182ee0>{search_binary_handler+272}
<ffffffff801831f5>{do_execve+405}
<ffffffff8010dbc6>{system_call+126} <ffffffff8010c634>{sys_execve+68}
<ffffffff8010dfea>{stub_execve+106}
=> dump-end timestamp 117628777
( nautilus-7955 |#0): new 14 us maximum-latency critical section.
=> started at timestamp 127874927: <do_IRQ+0x29/0x50>
=> ended at timestamp 127874941: <do_IRQ+0x39/0x50>
Call Trace: <IRQ> <ffffffff8014c42c>{check_critical_timing+492}
<ffffffff8014c64b>{sub_preempt_count_ti+75} <ffffffff80110159>{do_IRQ+57}
<ffffffff8010e16c>{ret_from_intr+0} <EOI>
<ffffffff8010dbc6>{system_call+126}
=> dump-end timestamp 127874996
( nautilus-7955 |#0): new 14 us maximum-latency critical section.
=> started at timestamp 128647499: <do_IRQ+0x29/0x50>
=> ended at timestamp 128647514: <do_IRQ+0x39/0x50>
Call Trace: <IRQ> <ffffffff8014c42c>{check_critical_timing+492}
<ffffffff8014c64b>{sub_preempt_count_ti+75} <ffffffff80110159>{do_IRQ+57}
<ffffffff8010e16c>{ret_from_intr+0} <EOI>
=> dump-end timestamp 128647567
lightning ~ #
NOTE: In my first kernel build I turned on latency measurement for
both max-preempt and IRQ-off. The kernel segfaulted immediately after
the boot messages reminding me that these were on. I rebuilt the
kernel with only the max-preempt measurement enabled and that worked. I
suspect some problem with the IRQ-off section code, since that's the
only change I made and it did not segfault in -rc3-rt13.
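For reference, the difference between the two builds was roughly the
following .config fragment (the option names here are my best guess at
the -rt tracing knobs and may not match -rc4-rt1's Kconfig exactly):

    # first build - segfaulted right after the boot messages:
    CONFIG_CRITICAL_PREEMPT_TIMING=y
    CONFIG_CRITICAL_IRQSOFF_TIMING=y

    # rebuilt kernel - works:
    CONFIG_CRITICAL_PREEMPT_TIMING=y
    # CONFIG_CRITICAL_IRQSOFF_TIMING is not set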
Thanks,
Mark