Re: Ten percent test

* William Lee Irwin III <[email protected]> wrote:
>> I strongly suggest assembling a battery of cleanly and properly 
>> written, configurable testcases, and scripting a series of regression 
>> tests as opposed to just randomly running kernel compiles and relying 
>> on Braille.

On Mon, Apr 09, 2007 at 08:03:56PM +0200, Ingo Molnar wrote:
> there's interbench, written by Con (with the purpose of improving 
> RSDL/SD), which does exactly that, but vanilla and SD performs quite the 
> same in those tests.
> it's quite hard to test interactivity, because it's both subjective and 
> because even for objective workloads, things depend so much on exact 
> circumstances. So the best way is to wait for actual complaints, and/or 
> actual testcases that trigger badness, and victims^H^H^H^H^H testers.
> (also note that often it needs _that precise_ workload to trigger some 
> badness. For example make -j depends on the kind of X shell terminal 
> that is used - gterm behaves differently from xterm, etc.)

Interactivity will probably have to stay squishy. DoS cases like
fiftyp.c, tenp.c, etc. are more of what I had in mind. There are also
a number of instances where CPU bandwidth distributions are gauged by
eyeballing top(1) in noninteractive tests; that is where the scriptable
testcases should come into play.
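A minimal sketch of what such a scriptable testcase could look like
(Python; the name measure_cpu_shares is hypothetical, and it assumes a
Unix system where os.fork() and os.wait4() are available — per-child
CPU time comes from the rusage that wait4() returns, not from top(1)):

```python
import os
import signal
import time

def measure_cpu_shares(ntasks=4, seconds=1, nice_levels=None):
    """Fork ntasks CPU-bound children (optionally at given nice levels),
    let them run for `seconds`, then report each child's share of the
    total CPU time the group consumed, as percentages."""
    pids = []
    for i in range(ntasks):
        pid = os.fork()
        if pid == 0:
            if nice_levels:
                os.nice(nice_levels[i])
            while True:        # pure CPU burner
                pass
        pids.append(pid)
    time.sleep(seconds)
    used = []
    for pid in pids:
        os.kill(pid, signal.SIGKILL)
        _, _, ru = os.wait4(pid, 0)   # rusage of the reaped child
        used.append(ru.ru_utime + ru.ru_stime)
    total = sum(used) or 1e-9
    return [100.0 * u / total for u in used]

if __name__ == "__main__":
    for i, share in enumerate(measure_cpu_shares()):
        print("task %d: %5.1f%% of consumed CPU" % (i, share))
```

Because it prints machine-parsable shares rather than relying on a
human watching top(1), a driver script can run it across nice levels,
task counts, and kernel versions and diff the resulting distributions.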

There are other, relatively obvious testcases for basic functionality
missing, too. For instance, where is the testcase to prove that nice
levels have the intended effect upon CPU bandwidth distribution between
sets of CPU-bound tasks? Or one that gauges the CPU bandwidth
distribution between a task that sleeps some (command-line configurable)
percentage of the time and some (command-line configurable) number of
competing CPU-bound tasks? Or one that gauges the CPU bandwidth
distribution between sets of cooperating processes competing with
ordinary CPU-bound processes? Can it be proven that any of this is
staying constant across interactivity or other changes? Is any of it
being changed as an unintended side-effect? Are the CPU bandwidth
distributions among such sets of competing tasks even consciously decided?
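The second testcase above — a partially-sleeping task competing with
CPU-bound tasks — might be sketched as follows (Python; run_mix and
partial_sleeper are hypothetical names, and busy_fraction stands in for
the command-line-configurable sleep percentage):

```python
import os
import signal
import time

def partial_sleeper(busy_fraction, period=0.01):
    """Loop forever, busy for busy_fraction of each period and sleeping
    for the rest -- a crude stand-in for an interactive task."""
    while True:
        end = time.monotonic() + period * busy_fraction
        while time.monotonic() < end:
            pass
        time.sleep(period * (1.0 - busy_fraction))

def run_mix(busy_fraction=0.3, nhogs=2, seconds=2):
    """Fork one partial sleeper plus nhogs CPU hogs, run the mix for
    `seconds`, and return (sleeper_cpu, [hog_cpu, ...]) in CPU-seconds
    taken from each child's rusage."""
    pids = []
    pid = os.fork()
    if pid == 0:
        partial_sleeper(busy_fraction)
    pids.append(pid)
    for _ in range(nhogs):
        pid = os.fork()
        if pid == 0:
            while True:        # pure CPU burner
                pass
        pids.append(pid)
    time.sleep(seconds)
    used = []
    for pid in pids:
        os.kill(pid, signal.SIGKILL)
        _, _, ru = os.wait4(pid, 0)
        used.append(ru.ru_utime + ru.ru_stime)
    return used[0], used[1:]
```

Comparing the sleeper's CPU consumption against busy_fraction * seconds
would show whether a scheduler change starves or favors partially-idle
tasks relative to the CPU-bound competition.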

There should be readily-available answers to these questions, but there
are none.


-- wli
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
