Re: PREEMPT_RT vs I-PIPE: the numbers, part 2

* Karim Yaghmour <[email protected]> wrote:

> If I wanted to show "my" project in such a good light, would I have 
> gone back and redone tests, and then published them even if those 
> numbers now showed that the "other" project was as good as "mine"? 
> Would I have even listened to any of your suggestions and gone back 
> and had the tests changed to fit your requirements? Would I still be 
> saying that we're going to further fix the tests based on your 
> feedback?

If anything I wrote offended you, I'd like to apologize for it. I feel 
pretty strongly about the stuff I do, but I always regret 99.9% of my 
flames the next morning :) Also, I only realized when reading your 
reply that you took my "vendor sponsored benchmarking" remark literally 
(and that's my bad too). I never thought of you as a 'vendor', or as 
having any commercial interest in this benchmarking - it was just a 
stupid analogy on my part. I should have written "supporter driven 
benchmarking" or some such - that would still have been a pretty nice 
flame ;)

Also, please consider the other side of the coin. You posted numbers 
that initially put PREEMPT_RT in a pretty bad light. Those numbers are 
still being linked from your website, without any indication that they 
are incorrect. Even in your paragraph above you are not talking about 
flawed numbers, you are talking about 'changing the tests to fit my 
requirements'. Heck, I have no 'requirements' other than to see fair 
numbers. And if adeos/ipipe happens to beat PREEMPT_RT in a fair irq 
latency test, you won't hear a complaint from me. (You might see a 
patch quite soon though ;)

And I know what irq latencies to expect from PREEMPT_RT. It takes me 5 
minutes to run a 10-million-sample irq test using LPPTEST; the 
histogram takes only 200 bytes on the screen, and the numbers I'm 
getting differ from yours - but obviously I cannot run it on your 
hardware. The rtc_wakeup and built-in latency-tracer numbers differ 
too. They could all be wrong though, so I'm curious what your 
independent method will yield.
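
For reference, here is the kind of histogram loop I mean. This is not 
LPPTEST itself (that one measures over the parallel port) - just a 
minimal user-space sketch that measures timer wakeup latency and 
buckets the samples; the sample count, period and bucket size are 
arbitrary values picked for the example:

/*
 * Illustrative sketch only - not LPPTEST. Measures how late we wake up
 * from a periodic 1ms clock_nanosleep() and prints a latency histogram.
 */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define SAMPLES  100000        /* scale up to 10 million for a real run */
#define NBUCKETS 200           /* one bucket per microsecond, 0..199us  */

int main(void)
{
	static uint64_t hist[NBUCKETS + 1];     /* last bucket = overflow */
	struct timespec next, now;
	int i;

	clock_gettime(CLOCK_MONOTONIC, &next);
	for (i = 0; i < SAMPLES; i++) {
		/* arm the next wakeup 1ms after the previous deadline */
		next.tv_nsec += 1000000;
		if (next.tv_nsec >= 1000000000) {
			next.tv_nsec -= 1000000000;
			next.tv_sec++;
		}
		clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
		clock_gettime(CLOCK_MONOTONIC, &now);

		/* latency = how far past the deadline we actually woke up */
		int64_t lat_us = ((now.tv_sec - next.tv_sec) * 1000000000LL +
				  (now.tv_nsec - next.tv_nsec)) / 1000;
		if (lat_us < 0)
			lat_us = 0;
		hist[lat_us < NBUCKETS ? lat_us : NBUCKETS]++;
	}

	for (i = 0; i <= NBUCKETS; i++)
		if (hist[i])
			printf("%3dus: %llu\n", i, (unsigned long long)hist[i]);
	return 0;
}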

Your lmbench results look accurate and fair; the slowdown during 
irq-load is a known and expected effect of IRQ threading. If you 
flood-ping a box and generate context switches instead of plain 
interrupts, there will be noticeable overhead. I checked some of the 
lmbench numbers today on my testbox, and while there's overhead, it's 
significantly less than the 90% degradation you were seeing. That's 
why I suggested that you retest using the current base - but of course 
you don't 'have to'. A number of bugs that affected various components 
of lmbench were fixed in the past few dozen iterations of the patches :)
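
To make the "context switches instead of plain interrupts" point 
concrete, here is a minimal sketch of a split interrupt handler. It is 
written against the request_threaded_irq() API purely for illustration 
(under PREEMPT_RT handlers are threaded transparently, without driver 
changes), and the irq number and device name are made up:

/*
 * Illustrative sketch of IRQ threading: a tiny hardirq handler that
 * only wakes a handler thread, which then does the real work in
 * schedulable process context. That per-interrupt wakeup/context
 * switch is where the flood-ping overhead comes from.
 */
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/module.h>

#define DEMO_IRQ 19		/* hypothetical interrupt line */

/* hardirq context: acknowledge the device, then kick the thread */
static irqreturn_t demo_quick_check(int irq, void *dev_id)
{
	return IRQ_WAKE_THREAD;
}

/* runs in a kernel thread, fully preemptible under PREEMPT_RT */
static irqreturn_t demo_thread_fn(int irq, void *dev_id)
{
	/* ... process the device event here ... */
	return IRQ_HANDLED;
}

static int __init demo_init(void)
{
	return request_threaded_irq(DEMO_IRQ, demo_quick_check,
				    demo_thread_fn, 0, "irq-thread-demo",
				    NULL);
}

static void __exit demo_exit(void)
{
	free_irq(DEMO_IRQ, NULL);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");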

	Ingo
