Re: [patch] CFS scheduler, -v19

* Willy Tarreau <[email protected]> wrote:

> > The biggest user-visible change in -v19 is reworked sleeper 
> > fairness: it's similar in behavior to -v18 but works more 
> > consistently across nice levels. Fork-happy workloads (like kernel 
> > builds) should behave better as well. There are also a handful of 
> > speedups: unsigned math, 32-bit speedups, O(1) task pickup, 
> > debloating and other micro-optimizations.
> 
> Interestingly, I also noticed the possibility of O(1) task pickup when 
> playing with v18, but did not detect any noticeable improvement with 
> it. Of course, it depends on the workload and I probably didn't 
> perform the most relevant tests.

yeah - it's a small tweak. CFS is O(31) in sleep/wakeup so it's now all 
a big O(1) family again :)
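(For reference, the usual way to get O(1) pickup out of an rbtree-backed runqueue is to cache the leftmost node and maintain that cache on enqueue/dequeue, so picking the next task never walks the tree. A minimal sketch against the kernel rbtree API follows; the structure and field names are illustrative, not the actual -v19 code.)

#include <linux/rbtree.h>

/* illustrative only -- not the actual -v19 data structures */
struct demo_cfs_rq {
	struct rb_root	tasks_timeline;
	struct rb_node	*rb_leftmost;	/* cached leftmost (earliest) node */
};

/* O(1): the next task to pick is always the cached leftmost node */
static struct rb_node *demo_pick_next(struct demo_cfs_rq *cfs_rq)
{
	return cfs_rq->rb_leftmost;
}

/* the cache is paid for with one rb_next() step at dequeue time */
static void demo_dequeue(struct demo_cfs_rq *cfs_rq, struct rb_node *node)
{
	if (cfs_rq->rb_leftmost == node)
		cfs_rq->rb_leftmost = rb_next(node);
	rb_erase(node, &cfs_rq->tasks_timeline);
}

The pick itself never touches the tree; the rebalancing cost stays on the enqueue/dequeue path, which matches the later observation in this mail that rb_next() dominates.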

> V19 works very well here on 2.6.20.14. I could start 32k busy loops at 
> nice +19 (I exhausted the 32k pids limit), and could still perform 
> normal operations. I noticed that 'vmstat' scans all the pid entries 
> under /proc, which takes ages to collect data before displaying a 
> line. Obviously, the system sometimes shows some short latencies, but 
> not much more than what you get from an SSH session through a remote DSL 
> connection.

great! I did not try to push it this far yet.
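
(For anyone who wants to reproduce a scaled-down version of this test, a busy-loop forker along the lines below is enough; the child count and the use of nice(2) are assumptions, since the original test program is not shown in this thread.)

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	long i, n = argc > 1 ? atol(argv[1]) : 1000;	/* e.g. 32000 */

	for (i = 0; i < n; i++) {
		pid_t pid = fork();

		if (pid < 0) {
			perror("fork");	/* e.g. hitting the 32k PID limit */
			break;
		}
		if (pid == 0) {
			nice(19);	/* drop to nice +19 before spinning */
			for (;;)
				;	/* pure CPU burner */
		}
	}
	pause();	/* parent just sits here; kill the process group to clean up */
	return 0;
}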

> Here's a vmstat 1 output:
> 
>  r  b  w   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id
> 32437  0  0      0 809724    488   6196    0    0     1     0  135     0 24 72 4
> 32436  0  0      0 811336    488   6196    0    0     0     0  717     0 78 22 0

crazy :-)

> Amusingly, I started mpg123 during this test and it skipped quite a 
> bit. After setting all tasks to SCHED_IDLE, it did not skip anymore. 
> All this seems to behave as one would expect.

yeah. It behaves better than i expected in fact - 32K tasks is pushing 
things quite a bit. (we've got a 32K PID limit for example)
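
(Setting a running task to SCHED_IDLE boils down to one sched_setscheduler() call; a minimal sketch, assuming the CFS patch uses policy number 5 the way later mainline kernels do -- the target PID is whatever task should be demoted.)

#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

#ifndef SCHED_IDLE
#define SCHED_IDLE 5	/* assumption: same policy number as later mainline kernels */
#endif

int main(int argc, char **argv)
{
	struct sched_param param = { .sched_priority = 0 };
	pid_t pid = argc > 1 ? atoi(argv[1]) : 0;	/* 0 == the calling task */

	if (sched_setscheduler(pid, SCHED_IDLE, &param) < 0) {
		perror("sched_setscheduler");
		return 1;
	}
	return 0;
}

Doing this for "all tasks", as in the test above, just means looping the same call over the PIDs of the busy loops.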

> I also started 30k processes distributed in 130 groups of 234 chained 
> by pipes in which one byte is passed. I get an average of 8000 in the 
> run queue. The context switch rate is very low and sometimes even zero 
> in this test; maybe some of them are starving, I really do not know:
> 
>  r  b  w   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id
> 7752  0  1      0 656892    244   4196    0    0     0     0  725     0 16 84  0

hm, could you profile this? We could have some bottleneck somewhere 
(likely not in the scheduler) with that many tasks being runnable. [ 
With CFS you can actually run a profiler under this workload ;-) ]
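
(In case somebody wants to recreate a similar load, one "group" of the kind described above could look roughly like this; the exact topology and the way the byte is injected are guesses, since the original program is not part of this thread.)

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

#define GROUP_SIZE 234	/* from the description above; run ~130 instances */

int main(void)
{
	int head[2], link[2];
	int prev_read, i;
	char c = 'x';

	if (pipe(head) < 0) {
		perror("pipe");
		return 1;
	}
	prev_read = head[0];

	for (i = 0; i < GROUP_SIZE; i++) {
		if (pipe(link) < 0) {
			perror("pipe");
			return 1;
		}
		switch (fork()) {
		case -1:
			perror("fork");
			return 1;
		case 0:				/* chain member i */
			close(head[1]);
			close(link[0]);
			while (read(prev_read, &c, 1) == 1) {
				if (write(link[1], &c, 1) != 1)
					break;
			}
			_exit(0);
		default:			/* parent: pass the read end onward */
			close(prev_read);
			close(link[1]);
			prev_read = link[0];
		}
	}

	/* keep one byte travelling from the head to the tail of the chain */
	for (;;) {
		if (write(head[1], &c, 1) != 1)
			break;
		if (read(prev_read, &c, 1) != 1)
			break;
	}
	return 0;
}

Whether the context switch rate stays high or collapses towards zero then depends on whether the byte keeps circulating or gets stuck somewhere in the chain, which may be what the numbers above are showing.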

> In my tree, I have replaced the rbtree with the ebtree we talked 
> about, but it did not bring any performance boost because, even though 
> insert() and delete() are faster, the scheduler is already quite good 
> at avoiding them as much as possible, mostly relying on rb_next() 
> which has the same cost in both trees. All in all, the only variations 
> I noticed were caused by cacheline alignment when I tried to reorder 
> fields in the eb_node. So I will stop my experiments here since I 
> don't see any more room for improvement.

well, just a little bit of improvement would be nice to have too :)
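
(For completeness, the access pattern being referred to is essentially the in-order walk below: rb_first() plus repeated rb_next(), which costs about the same no matter which of the two trees backs the timeline. The entity type and its key field are made up for the sketch.)

#include <linux/rbtree.h>
#include <linux/types.h>

struct demo_entity {
	struct rb_node	run_node;
	u64		key;		/* e.g. the task's timeline key */
};

static void demo_walk(struct rb_root *timeline)
{
	struct rb_node *node;

	for (node = rb_first(timeline); node; node = rb_next(node)) {
		struct demo_entity *se =
			rb_entry(node, struct demo_entity, run_node);
		/* look at se->key; each rb_next() step is amortized O(1) */
		(void)se;
	}
}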

	Ingo
