On Thu, 14 Jul 2005 10:46, Con Kolivas wrote:
> On Thu, 14 Jul 2005 10:31, David Lang wrote:
> > On Thu, 14 Jul 2005, Con Kolivas wrote:
> > > On Thu, 14 Jul 2005 03:54, Bill Davidsen wrote:
> > >> Con Kolivas wrote:
> > >>> On Tue, 12 Jul 2005 21:57, David Lang wrote:
> > >>>> for audio and video this would seem to be a fairly simple scaling
> > >>>> factor (or just doing a fixed amount of work rather than a fixed
> > >>>> percentage of the CPU worth of work), however for X it is probably
> > >>>> much more complicated (is the X load really linearly random in how
> > >>>> much work it does, or is it weighted towards small amounts with
> > >>>> occasional large amounts hitting? I would guess that at least beyond
> > >>>> a certain point the likelihood of that much work being needed would
> > >>>> be lower)
> > >>>
> > >>> Actually I don't disagree. What I mean by hardware changes is more
> > >>> along the lines of changing the hard disk type in the same setup.
> > >>> That's what I mean by being careful with the benchmarking. Taking the
> > >>> results from an Athlon XP and comparing them to an Altix is silly,
> > >>> for example.
> > >>
> > >> I'm going to cautiously disagree. If the CPU needed was scaled so it
> > >> represented a fixed number of cycles (operations, work units), then
> > >> the effect of a faster CPU would be shown. And the total power of all
> > >> attached CPUs should be taken into account; using HT or SMP does have
> > >> an effect on feel.
> > >
> > > That is rather hard to do because each architecture's interpretation of
> > > a fixed number of cycles is different, and this doesn't represent their
> > > speed in the real world. The calculation when interbench is first run
> > > to see how many "loops per ms" took quite a bit of effort, to find just
> > > how many loops each different cpu would do per ms and then find a way
> > > to make that not change through compiler-optimised code. The "loops per
> > > ms" parameter did not end up being proportional to CPU MHz except on
> > > the same cpu type.
> >
> > right, but the amount of cpu required to do a specific task will also
> > vary significantly between CPU families for the same task as well. as
> > long as the loops don't get optimized away by the compiler I think you
> > can set up some loops to do the same work on each CPU, even if they take
> > significantly different amounts of time (as an off-the-wall, obviously
> > untested example you could make your 'loop' be a calculation of Pi, and
> > for the 'audio' test you compute the first 100 digits, for the video
> > test you compute the first 1000 digits, and for the X test you compute a
> > random number of digits between 10 and 10000)
>
> Once again I don't disagree, and the current system of loops_per_ms does
> exactly that and can simply be used as a fixed number of loops already. My
> point was there'd be argument about what sort of "loop" or load should be
> used, as each cpu type would do different "loops" faster and they won't
> necessarily represent video, audio or X in the real world. Currently the
> loop in interbench is simply:
>
> 	for (i = 0; i < loops; i++)
> 		asm volatile("" : : : "memory");
>
> and if no one argues I can use that for the fixed workload.

What I mean is that if you take the loops_per_ms from one machine and plug it
in using the -l option on another, you can do it now without any modification
to the interbench code.

Cheers,
Con
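A minimal sketch of the fixed-workload idea discussed above: it reuses the
empty asm loop quoted in the mail and treats loops_per_ms as a unit of work
that can either be calibrated locally or supplied on the command line, in the
spirit of interbench's -l option. This is an illustration only, not the
actual interbench code; the calibration routine and command-line handling
here are assumptions, and the asm statement assumes GCC.

/*
 * Hypothetical sketch, not interbench source. Shows how a loops_per_ms
 * value could be calibrated on one machine or carried over to another
 * and then replayed as a fixed amount of work.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

/* Empty asm with a memory clobber keeps the compiler from optimising
 * the loop away, as in the loop quoted above. Assumes GCC. */
static void burn_loops(unsigned long loops)
{
	unsigned long i;

	for (i = 0; i < loops; i++)
		asm volatile("" : : : "memory");
}

/* Rough calibration: how many loops fit into one millisecond here. */
static unsigned long calibrate_loops_per_ms(void)
{
	unsigned long loops = 1000, elapsed_us = 0;
	struct timeval start, end;

	while (elapsed_us < 1000) {
		loops *= 2;
		gettimeofday(&start, NULL);
		burn_loops(loops);
		gettimeofday(&end, NULL);
		elapsed_us = (end.tv_sec - start.tv_sec) * 1000000 +
			     (end.tv_usec - start.tv_usec);
	}
	/* Use a double intermediate to avoid overflow on 32-bit longs. */
	return (unsigned long)((double)loops * 1000.0 / elapsed_us);
}

int main(int argc, char **argv)
{
	/* Optionally take loops_per_ms from the command line, mimicking
	 * the idea of plugging one machine's value into another. */
	unsigned long loops_per_ms = argc > 1 ?
		strtoul(argv[1], NULL, 10) : calibrate_loops_per_ms();

	printf("loops_per_ms = %lu\n", loops_per_ms);

	/* A "fixed workload": always burn the same number of loops,
	 * e.g. 5 ms worth on the reference machine, regardless of how
	 * long that takes on the machine actually running it. */
	burn_loops(loops_per_ms * 5);
	return 0;
}

Run with no argument to calibrate locally, or pass a loops_per_ms value
measured on another machine to replay the same fixed workload here.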