Re: [ANNOUNCE] Interbench v0.20 - Interactivity benchmark

On Tue, 12 Jul 2005, Con Kolivas wrote:


On Tue, 12 Jul 2005 21:57, David Lang wrote:
this looks very interesting; however, one thing that looks odd to me here is the thought of comparing results across significantly different hardware.

for some of the loads you really are going to be independent of the speed of the hardware (burn, compile, etc. will use whatever you have); however, for others (X, audio, video) saying that they take a specific percentage of the cpu doesn't seem right.

if I have a 400MHz cpu, each of these will take a much larger percentage of the cpu to get the job done than if I have a 4GHz cpu, for example.

for audio and video this would seem to be a fairly simple scaling factor (or just doing a fixed amount of work rather than a fixed percentage of the CPU's worth of work); however, for X it is probably much more complicated. is the X load really uniformly random in how much work it does, or is it weighted towards small amounts of work with occasional large bursts? I would guess that, at least beyond a certain point, the likelihood of that much work being needed would be lower.

Actually, I don't disagree. What I mean by hardware changes is more along the lines of changing the hard disk type in the same setup; that's what I mean by being careful with the benchmarking. Taking the results from an Athlon XP and comparing them to an Altix is silly, for example.


however, if you defined these loads in terms of the amount of work done (a number of loops) rather than a percentage of the CPU, I think you would be pretty close (especially if there were a way to tweak these per test if needed). this sort of testing would be especially useful for low-end/embedded applications.
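to make the distinction concrete, here is a minimal sketch of a fixed-work load; the loop count and names are made up, not interbench's actual internals. a 400MHz box simply takes longer than a 4GHz one, but the work done is identical, so the results stay comparable:

/*
 * hypothetical sketch: a load expressed as a fixed amount of work
 * (loop iterations) instead of a percentage of the cpu.  the loop
 * count and names are illustrative, tweakable per test.
 */
#include <stdio.h>

#define WORK_LOOPS 100000000UL  /* fixed work, tweakable per test */

volatile unsigned long sink;    /* keep the compiler from dropping the loop */

static void do_fixed_work(unsigned long loops)
{
        unsigned long i, x = 0;

        for (i = 0; i < loops; i++)
                x += i;
        sink = x;
}

int main(void)
{
        do_fixed_work(WORK_LOOPS);
        printf("completed %lu loops of fixed work\n", WORK_LOOPS);
        return 0;
}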

for example, a series 1 DirecTV TiVo manages to write two program streams to disk while reading and viewing a third, all on a 75MHz PPC with 64MB of RAM (on a tweaked 2.1 kernel). you are pretty close to being able to simulate this load.

which brings up another possible config option/test case: changing the read/write tests to try to do X MB/sec rather than the maximum possible speed (probably defaulting to max if nothing is specified).
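a rough sketch of what such a rate-limited write test might look like; the target rate, chunk size, and file name are just placeholders, and a real test would default to full speed when no target is given:

/*
 * hypothetical sketch of a rate-limited write test: aim for a target
 * MB/sec instead of whatever the disk can do, sleeping between chunks
 * to stay on schedule.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <fcntl.h>

#define CHUNK (1024 * 1024)             /* write in 1MB chunks */

int main(int argc, char **argv)
{
        double target_mbs = argc > 1 ? atof(argv[1]) : 5.0;
        double secs_per_chunk = 1.0 / target_mbs;   /* 1MB per chunk */
        static char buf[CHUNK];
        struct timespec start, now;
        long chunks = 0;
        int fd;

        fd = open("ratetest.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        memset(buf, 0xaa, sizeof(buf));
        clock_gettime(CLOCK_MONOTONIC, &start);

        while (chunks < 60) {           /* ~60MB total for the demo */
                double elapsed, scheduled;

                if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
                        perror("write");
                        break;
                }
                chunks++;

                clock_gettime(CLOCK_MONOTONIC, &now);
                elapsed = (now.tv_sec - start.tv_sec) +
                          (now.tv_nsec - start.tv_nsec) / 1e9;
                scheduled = chunks * secs_per_chunk;
                if (scheduled > elapsed)        /* ahead of schedule: wait */
                        usleep((unsigned int)((scheduled - elapsed) * 1e6));
        }
        close(fd);
        printf("wrote %ldMB aiming for %.1f MB/sec\n", chunks, target_mbs);
        return 0;
}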

thanks again for working to define a good test case

David Lang


--
There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies.
 -- C.A.R. Hoare