On Tue 18/04/2006 at 23:01, Mingming Cao wrote:
> On Tue, 2006-04-18 at 09:30 +0200, Arjan van de Ven wrote:
> > On Tue, 2006-04-18 at 09:14 +0200, Laurent Vivier wrote:
> > > On Mon 17/04/2006 at 23:32, Ravikiran G Thirumalai wrote:
> > > > On Mon, Apr 17, 2006 at 11:09:36PM +0200, Arjan van de Ven wrote:
> > > > > On Mon, 2006-04-17 at 14:07 -0700, Ravikiran G Thirumalai wrote:
> > > > > >
> > > > > > I ran the same tests on a 16-core EM64T box very similar to the
> > > > > > one you ran dbench on :). Dbench results on ext3 vary quite a
> > > > > > bit; I couldn't get to a statistically significant conclusion.
> > > > >
> > > > > dbench is not a good performance benchmark. At all. Don't use it
> > > > > for that ;)
> > > >
> > > > Agreed. (I did not mean to use it in the first place :). I was just
> > > > trying to verify the benchmark results posted earlier.)
> > > >
> > > > Thanks,
> > > > Kiran
> > >
> > > What is a good performance benchmark for deciding whether we should
> > > use atomic_t instead of percpu_counter?
> >
> > You probably want something like postal/postmark instead (although
> > that's not ideal either); at least it's reproducible.
>
> postmark is a single-threaded benchmark.
>
> The ext3 filesystem free blocks counter is mostly updated in the block
> allocation and free code. So a test with many, many threads doing block
> allocation/deallocation simultaneously will stress the free blocks
> counter accounting better than a single-threaded fs benchmark. After
> all, the main reason we chose a percpu counter for the free blocks
> counter in the first place, I believe, was to support parallel block
> allocation.
>
> I would suggest running tiobench with many threads (>256), or even
> better, running tiobench with many dd tests in the background.

You can find attached my results with tiobench (256 threads, still on the
x440 with 8 hyperthreaded CPUs = 16 logical). But as the results vary a
lot from run to run, I don't think we can really conclude anything... in
fact, I think atomic_t vs percpu_counter has no measurable impact on the
results.

Regards,
Laurent
--
Laurent Vivier
Bull, Architect of an Open World (TM)
http://www.bullopensource.org/ext4
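Since the atomic_t vs. percpu_counter question keeps coming up, here is a
minimal user-space sketch of the trade-off being benchmarked. This is my
own illustration, not the kernel code: THREADS, ITERS, BATCH and the
worker names are invented, and BATCH merely plays the role of the
kernel's FBC_BATCH. Every thread increments one logical counter; the
batched flavour only touches the shared cache line every BATCH updates.

/* counter_bench.c -- illustrative only, not the kernel API.
 * Build: gcc -O2 -pthread counter_bench.c
 * Run:   ./a.out          (single shared atomic, like atomic_long_t)
 *        ./a.out batched  (per-thread batching, like percpu_counter) */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define THREADS 16          /* matches the 16 logical CPUs above */
#define ITERS   10000000L
#define BATCH   32          /* stand-in for the kernel's FBC_BATCH */

static atomic_long shared = 0;   /* the globally visible count */

/* atomic flavour: every single update bounces the shared cache line */
static void *worker_atomic(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERS; i++)
        atomic_fetch_add(&shared, 1);
    return NULL;
}

/* batched flavour: accumulate locally, spill to the shared count only
 * every BATCH updates; the global value is approximate while threads
 * run, but exact after the final flush */
static void *worker_batched(void *arg)
{
    long delta = 0;
    (void)arg;
    for (long i = 0; i < ITERS; i++) {
        if (++delta >= BATCH) {
            atomic_fetch_add(&shared, delta);
            delta = 0;
        }
    }
    atomic_fetch_add(&shared, delta);
    return NULL;
}

int main(int argc, char **argv)
{
    void *(*fn)(void *) =
        (argc > 1 && argv[1][0] == 'b') ? worker_batched : worker_atomic;
    pthread_t t[THREADS];

    for (int i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, fn, NULL);
    for (int i = 0; i < THREADS; i++)
        pthread_join(t[i], NULL);

    printf("total = %ld\n", atomic_load(&shared));
    return 0;
}

At this micro level the batched variant typically wins by a wide margin,
which is the rationale Mingming cites for percpu_counter; whether that
difference survives dilution by a full filesystem workload is exactly
what the tiobench data below probes.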
tiobench.pl --size 16 --numruns 10 --threads 256

Unit information
================
File size = megabytes
Blk Size  = bytes
Rate      = megabytes per second
CPU%      = percentage of CPU used during the test
Latency   = milliseconds
Lat%      = percent of requests that took longer than X seconds
CPU Eff   = Rate divided by CPU% - throughput per cpu load

Sequential Reads
Identifier      Size  Blk   Thr   Rate    CPU%    AvgLat   MaxLat    Lat%>2s  Lat%>10s  CPUEff
atomic_long_t   16    4096  256   394.38  1540.%  2.822    5960.36   0.00000  0.00000   26
atomic_long_t   16    4096  256   384.70  1547.%  2.087    6187.27   0.00000  0.00000   25
atomic_long_t   16    4096  256   395.69  1548.%  2.001    5988.64   0.00000  0.00000   26
atomic_long_t   16    4096  256   396.10  1544.%  1.987    5836.74   0.00000  0.00000   26
atomic_long_t   16    4096  256   377.66  1551.%  2.435    6038.05   0.00000  0.00000   24
atomic_long_t   16    4096  256   378.35  1550.%  2.145    6369.26   0.00000  0.00000   24
atomic_long_t   16    4096  256   375.38  1545.%  2.236    6318.84   0.00000  0.00000   24
atomic_long_t   16    4096  256   399.97  1546.%  2.021    5911.10   0.00000  0.00000   26
atomic_long_t   16    4096  256   386.59  1542.%  2.045    5968.13   0.00000  0.00000   25
atomic_long_t   16    4096  256   401.45  1547.%  2.000    5918.17   0.00000  0.00000   26
percpu_counter  16    4096  256   396.22  1539.%  1.965    5984.66   0.00000  0.00000   26
percpu_counter  16    4096  256   403.50  1547.%  2.373    5503.98   0.00000  0.00000   26
percpu_counter  16    4096  256   388.54  1547.%  2.047    6100.05   0.00000  0.00000   25
percpu_counter  16    4096  256   397.43  1540.%  2.096    6010.45   0.00000  0.00000   26
percpu_counter  16    4096  256   398.81  1543.%  2.134    5677.78   0.00000  0.00000   26
percpu_counter  16    4096  256   399.85  1548.%  1.980    5805.21   0.00000  0.00000   26
percpu_counter  16    4096  256   394.70  1551.%  2.021    5960.13   0.00000  0.00000   25
percpu_counter  16    4096  256   396.64  1543.%  2.132    5901.40   0.00000  0.00000   26
percpu_counter  16    4096  256   390.70  1550.%  1.906    5972.05   0.00000  0.00000   25
percpu_counter  16    4096  256   401.98  1547.%  2.049    5839.44   0.00000  0.00000   26

Random Reads
Identifier      Size  Blk   Thr   Rate    CPU%    AvgLat   MaxLat    Lat%>2s  Lat%>10s  CPUEff
atomic_long_t   16    4096  256   100.79  1230.%  0.351    3.82      0.00000  0.00000   8
atomic_long_t   16    4096  256   112.61  1342.%  0.367    13.42     0.00000  0.00000   8
atomic_long_t   16    4096  256   111.16  1354.%  0.503    450.22    0.00000  0.00000   8
atomic_long_t   16    4096  256   112.79  1333.%  0.366    4.73      0.00000  0.00000   8
atomic_long_t   16    4096  256   114.61  1322.%  0.375    12.74     0.00000  0.00000   9
atomic_long_t   16    4096  256   111.18  1350.%  0.368    19.89     0.00000  0.00000   8
atomic_long_t   16    4096  256   112.14  1344.%  0.395    143.76    0.00000  0.00000   8
atomic_long_t   16    4096  256   112.41  1346.%  0.374    31.40     0.00000  0.00000   8
atomic_long_t   16    4096  256   112.89  1353.%  0.372    12.64     0.00000  0.00000   8
atomic_long_t   16    4096  256   112.40  1354.%  0.365    10.88     0.00000  0.00000   8
percpu_counter  16    4096  256   112.68  1341.%  0.439    263.93    0.00000  0.00000   8
percpu_counter  16    4096  256   109.77  1356.%  0.372    35.58     0.00000  0.00000   8
percpu_counter  16    4096  256   114.18  1339.%  0.371    29.47     0.00000  0.00000   9
percpu_counter  16    4096  256   112.61  1339.%  0.430    241.78    0.00000  0.00000   8
percpu_counter  16    4096  256   111.09  1343.%  0.372    46.84     0.00000  0.00000   8
percpu_counter  16    4096  256   111.98  1355.%  0.358    16.41     0.00000  0.00000   8
percpu_counter  16    4096  256   112.55  1348.%  0.393    121.30    0.00000  0.00000   8
percpu_counter  16    4096  256   114.84  1324.%  0.398    112.03    0.00000  0.00000   9
percpu_counter  16    4096  256   111.45  1353.%  0.368    22.09     0.00000  0.00000   8
percpu_counter  16    4096  256   112.21  1352.%  0.405    150.62    0.00000  0.00000   8

Sequential Writes
Identifier      Size  Blk   Thr   Rate    CPU%    AvgLat   MaxLat    Lat%>2s  Lat%>10s  CPUEff
atomic_long_t   16    4096  256   28.72   350.0%  103.526  3087.83   0.00000  0.00000   8
atomic_long_t   16    4096  256   28.47   350.1%  107.563  4127.54   0.00000  0.00000   8
atomic_long_t   16    4096  256   28.10   346.6%  108.709  2767.21   0.00000  0.00000   8
atomic_long_t   16    4096  256   28.24   345.1%  106.619  3025.58   0.00000  0.00000   8
atomic_long_t   16    4096  256   28.35   358.4%  110.779  3844.84   0.00000  0.00000   8
atomic_long_t   16    4096  256   28.14   349.9%  109.956  3580.34   0.00000  0.00000   8
atomic_long_t   16    4096  256   28.23   349.6%  110.011  2770.21   0.00000  0.00000   8
atomic_long_t   16    4096  256   28.46   355.4%  108.701  2694.87   0.00000  0.00000   8
atomic_long_t   16    4096  256   28.14   346.3%  108.114  3461.73   0.00000  0.00000   8
atomic_long_t   16    4096  256   28.49   348.8%  109.625  3160.93   0.00000  0.00000   8
percpu_counter  16    4096  256   27.92   344.3%  109.420  3497.14   0.00000  0.00000   8
percpu_counter  16    4096  256   28.23   343.2%  110.146  3279.80   0.00000  0.00000   8
percpu_counter  16    4096  256   28.35   345.6%  110.027  3527.68   0.00000  0.00000   8
percpu_counter  16    4096  256   28.17   348.9%  107.143  3735.24   0.00000  0.00000   8
percpu_counter  16    4096  256   28.07   333.8%  107.581  2442.35   0.00000  0.00000   8
percpu_counter  16    4096  256   28.36   343.2%  106.625  2740.56   0.00000  0.00000   8
percpu_counter  16    4096  256   28.33   343.6%  107.201  3029.49   0.00000  0.00000   8
percpu_counter  16    4096  256   28.28   339.9%  107.849  3100.73   0.00000  0.00000   8
percpu_counter  16    4096  256   28.27   344.8%  108.008  3753.06   0.00000  0.00000   8
percpu_counter  16    4096  256   28.54   354.1%  108.254  2831.67   0.00000  0.00000   8

Random Writes
Identifier      Size  Blk   Thr   Rate    CPU%    AvgLat   MaxLat    Lat%>2s  Lat%>10s  CPUEff
atomic_long_t   16    4096  256   2.47    64.55%  5.690    1295.08   0.00000  0.00000   4
atomic_long_t   16    4096  256   2.45    63.52%  6.021    1386.22   0.00000  0.00000   4
atomic_long_t   16    4096  256   2.47    63.87%  5.621    912.91    0.00000  0.00000   4
atomic_long_t   16    4096  256   2.45    64.34%  6.361    1444.26   0.00000  0.00000   4
atomic_long_t   16    4096  256   2.47    64.04%  5.793    1307.02   0.00000  0.00000   4
atomic_long_t   16    4096  256   2.47    64.14%  5.979    1690.00   0.00000  0.00000   4
atomic_long_t   16    4096  256   2.49    64.63%  5.993    1820.59   0.00000  0.00000   4
atomic_long_t   16    4096  256   2.48    64.98%  6.400    1829.93   0.00000  0.00000   4
atomic_long_t   16    4096  256   2.44    64.05%  5.763    1631.14   0.00000  0.00000   4
atomic_long_t   16    4096  256   2.53    66.19%  5.728    919.34    0.00000  0.00000   4
percpu_counter  16    4096  256   2.43    63.69%  5.927    851.10    0.00000  0.00000   4
percpu_counter  16    4096  256   2.42    62.73%  5.997    1371.13   0.00000  0.00000   4
percpu_counter  16    4096  256   2.46    64.17%  6.223    1808.24   0.00000  0.00000   4
percpu_counter  16    4096  256   2.46    64.04%  5.897    1410.98   0.00000  0.00000   4
percpu_counter  16    4096  256   2.47    63.98%  5.829    1197.30   0.00000  0.00000   4
percpu_counter  16    4096  256   2.48    64.15%  5.660    1079.85   0.00000  0.00000   4
percpu_counter  16    4096  256   2.47    63.81%  6.041    1401.02   0.00000  0.00000   4
percpu_counter  16    4096  256   2.48    64.88%  6.078    1218.58   0.00000  0.00000   4
percpu_counter  16    4096  256   2.49    64.90%  6.148    1468.47   0.00000  0.00000   4
percpu_counter  16    4096  256   2.47    65.64%  5.724    1148.28   0.00000  0.00000   4
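To make concrete what "mostly updated in the block allocation and free
code" means, here is a schematic of the two counter flavours as they
would sit on a free-blocks count. This is a sketch under assumptions,
not the actual ext3 patch: blocks_freed()/blocks_available() are invented
names, the real code lives in fs/ext3/balloc.c and fs/ext3/super.c, and
header paths and init details vary by kernel version; percpu_counter_mod()
and percpu_counter_read_positive() are the 2.6-era helpers.

/* Sketch only -- invented wrappers around the two counter flavours. */
#include <linux/percpu_counter.h>
#include <asm/atomic.h>

/* percpu_counter flavour: updates stay CPU-local until a batch
 * threshold is reached, so parallel allocators rarely contend */
static struct percpu_counter s_freeblocks_counter;

static void blocks_freed(long count)        /* invented name */
{
	percpu_counter_mod(&s_freeblocks_counter, count);
}

static long blocks_available(void)          /* invented name */
{
	/* approximate while CPUs hold unflushed deltas, never negative */
	return percpu_counter_read_positive(&s_freeblocks_counter);
}

/* atomic_long_t flavour: always exact, but every allocation and free
 * on every CPU hits the same cache line */
static atomic_long_t s_freeblocks_atomic;

static void blocks_freed_atomic(long count) /* invented name */
{
	atomic_long_add(count, &s_freeblocks_atomic);
}

static long blocks_available_atomic(void)   /* invented name */
{
	return atomic_long_read(&s_freeblocks_atomic);
}

If the counter really is touched about once per allocated or freed
block, the sequential-write numbers above (~28 MB/s at 4096-byte blocks,
i.e. roughly 7,000 counter updates per second across 16 logical CPUs)
give each CPU only a few hundred updates per second, which would explain
why neither flavour stands out in these runs.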
References:
- Re: [Ext2-devel] Re: [RFC][PATCH 0/2] Extend ext3 filesystem limit from 8TB to 16TB
  From: Laurent Vivier <[email protected]>
- Re: [Ext2-devel] Re: [RFC][PATCH 0/2] Extend ext3 filesystem limit from 8TB to 16TB
  From: Andrew Morton <[email protected]>
- Re: [Ext2-devel] Re: [RFC][PATCH 0/2] Extend ext3 filesystem limit from 8TB to 16TB
  From: Laurent Vivier <[email protected]>
- Re: [Ext2-devel] Re: [RFC][PATCH 0/2] Extend ext3 filesystem limit from 8TB to 16TB
  From: Ravikiran G Thirumalai <[email protected]>
- Re: [Ext2-devel] Re: [RFC][PATCH 0/2] Extend ext3 filesystem limit from 8TB to 16TB
  From: Arjan van de Ven <[email protected]>
- Re: [Ext2-devel] Re: [RFC][PATCH 0/2] Extend ext3 filesystem limit from 8TB to 16TB
  From: Ravikiran G Thirumalai <[email protected]>
- Re: [Ext2-devel] Re: [RFC][PATCH 0/2] Extend ext3 filesystem limit from 8TB to 16TB
  From: Laurent Vivier <[email protected]>
- Re: [Ext2-devel] Re: [RFC][PATCH 0/2] Extend ext3 filesystem limit from 8TB to 16TB
  From: Arjan van de Ven <[email protected]>
- Re: [Ext2-devel] Re: [RFC][PATCH 0/2] Extend ext3 filesystem limit from 8TB to 16TB
  From: Mingming Cao <[email protected]>
- Re: [Ext2-devel] Re: [RFC][PATCH 0/2] Extend ext3 filesystem limit from 8TB to 16TB