Re: Sched - graphic smoothness under load - cfs-v13 sd-0.48

Bill Davidsen wrote:
Miguel Figueiredo wrote:
Bill Davidsen wrote:
I was unable to reproduce the numbers Miguel generated, comments below. The -ck2 patch seems to run nicely, although the memory repopulation from swap would be most useful on systems which have a lot of memory pressure.

I spent a few hours running the -ck2 patch, and I didn't see any numbers like yours. What I did see is posted with my previous results at http://www.tmr.com/~davidsen/sched_smooth_04.html. While there were still some minor pauses in glxgears with my test, performance was very similar to the sd-0.48 results. And I did try watching video under high load, without problems. Only when I run a lot of other screen-changing processes can I see pauses in the display.
Your subjective impressions would be helpful, and you may find that the package at www.tmr.com/~public/source is slightly easier to use and gives more stable results. The documentation suggests a way to take samples (the way I did it), but if you feel more or longer samples would help, that is tunable.

I added Con to the cc list; he may have comments or suggestions (against the current versions, please). Or he may feel that video combined with other heavy screen updating is unrealistic or not his chosen load. I'm told the load is similar to games which use threads and do lots of independent action, if that's a useful reference.

I'll include the -ck2 patch in my testing on other hardware.


Hi Bill,

the numbers I posted before are repeatable on that machine.

The numbers you posted in <[email protected]> are not the same... From my inbox I grabbed some very non-matching values:
=====
Here's the funny part...

Let's call:

a) "random number of processes run while glxgears is running" (the gl_fairloops file)

b) "generated frames while running a burst of processes", aka "massive and unknown amount of operations in one process" (the gl_gears file)

kernel    2.6.21-cfs-v13    2.6.21-ck2
a)        194464            254669
b)        54159             124
=====
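For readers unfamiliar with those two counters, here is a rough, purely illustrative Python sketch of how they might be collected. It is not the actual glitch1 script; glxgears' usual "NNN frames in 5.0 seconds" output, the stdbuf utility, and the SAMPLE_SECONDS/N_LOAD_PROCS values are all assumptions made for the sake of the example.

#!/usr/bin/env python3
# Illustrative sketch only: NOT the glitch1 script, just an approximation of
# the two counters described above:
#   a) iterations completed by competing CPU-bound processes while glxgears
#      runs (the gl_fairloops idea)
#   b) frames reported by glxgears over the same window (the gl_gears idea)
# Assumes glxgears prints its usual "NNN frames in 5.0 seconds" lines and
# that stdbuf is available to force line-buffered output over the pipe.
import re
import subprocess
import time
from multiprocessing import Process, Value

SAMPLE_SECONDS = 30   # assumed sampling window
N_LOAD_PROCS = 4      # assumed number of competing busy loops

def busy_loop(counter, deadline):
    """Spin until the deadline, counting iterations (the 'fair loops')."""
    local = 0
    while time.time() < deadline:
        local += 1
    with counter.get_lock():
        counter.value += local

def main():
    deadline = time.time() + SAMPLE_SECONDS

    # b) start glxgears and capture its periodic frame reports
    gears = subprocess.Popen(["stdbuf", "-oL", "glxgears"],
                             stdout=subprocess.PIPE, text=True)

    # a) start the competing load and wait for the sampling window to end
    loops = Value("l", 0)
    procs = [Process(target=busy_loop, args=(loops, deadline))
             for _ in range(N_LOAD_PROCS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

    gears.terminate()
    out = gears.stdout.read()
    gears.wait()
    frames = sum(int(m.group(1)) for m in re.finditer(r"(\d+) frames", out))

    print(f"fair loops: {loops.value}")
    print(f"frames:     {frames}")

if __name__ == "__main__":
    main()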

The numbers in your glitch1.html file show a close correlation between cfs and -ck2, well within what I would expect. The stddev for the loops is larger for -ck2, but not out of line with what I see, and nothing like the numbers you originally sent me (which may have been testing something else, or from an old version before I made improvements, or ???). In any case, thanks for testing.
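For reference, a minimal sketch of how per-sample loop counts might be summarized when comparing two kernels; the sample values below are invented placeholders, not data from either of our runs.

# Minimal sketch: summarizing per-sample loop counts per kernel.
# The numbers below are invented placeholders, not measured data.
from statistics import mean, stdev

samples = {
    "2.6.21-cfs-v13": [19300, 19550, 19410, 19620, 19480],
    "2.6.21-ck2":     [18900, 20100, 19200, 20400, 18700],
}

for kernel, loops in samples.items():
    # a larger stddev means more run-to-run variation in how much
    # competing work got done alongside glxgears
    print(f"{kernel}: mean={mean(loops):.0f} stddev={stdev(loops):.0f}")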


I ran glitch1 again on my laptop (T2500 Core Duo, also Nvidia); please check: http://www.debianpt.org/~elmig/pool/kernel/20070523/


Thanks, those data seem as expected.


These numbers are from a different machine (this one has 2 cores).
The other machine runs Debian unstable and this one runs Debian stable.


--

Com os melhores cumprimentos/Best regards,

Miguel Figueiredo
http://www.DebianPT.org
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
