Re: io performance...

Phillip Susi wrote:
Did you direct the program to use O_DIRECT?

I'm just using the s/w (iorate/bonnie++) with default options - I'm no expert. I could try that, though.
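
If I do try it, I guess it would look something like this minimal sketch (my own, not from iorate/bonnie++; the device path and sizes are placeholders, and O_DIRECT requires an aligned buffer):

/* Minimal O_DIRECT read sketch.  O_DIRECT needs the buffer,
 * offset, and transfer size aligned; 512 bytes is a safe
 * minimum alignment on most block devices. */
#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const size_t bufsz = 128 * 1024;  /* placeholder: cover the stripe width */
    void *buf;
    int fd;
    ssize_t n;

    /* posix_memalign gives the alignment O_DIRECT demands */
    if (posix_memalign(&buf, 512, bufsz)) {
        perror("posix_memalign");
        return 1;
    }

    fd = open("/dev/sda", O_RDONLY | O_DIRECT);  /* placeholder device */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    n = read(fd, buf, bufsz);  /* bypasses the page cache entirely */
    if (n < 0)
        perror("read");
    else
        printf("read %zd bytes\n", n);

    close(fd);
    free(buf);
    return 0;
}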

If not, then I believe the problem you are seeing is that the generic block layer is not performing large enough readahead to keep all the disks in the array reading at once, because the stripe width is rather large. What stripe factor did you format the array with?

I left the stripe size at the default, which I believe is 64 KB; the same as your fakeraid below.

I did play with 'blockdev --setra' too.

I noticed it was 256 with a single disk, and, with s/w RAID, it increased by 256 for each extra disk in the array, i.e. for the RAID 0 array with 4 drives it was 1024.

With h/w RAID, however, it did not increase when I added disks. Should I use 'blockdev --setra' to match - e.g. 64 KB x 5 = 320 KB, which is 640 sectors - since we're now running RAID5 on 5 drives?
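
For what it's worth, 'blockdev --setra' counts in 512-byte sectors (so 256 means 128 KB of readahead). Under the hood it is just the BLKRAGET/BLKRASET ioctls; here is a minimal sketch of the same thing (device path and value are placeholders, and setting readahead needs root):

/* Query and set a block device's readahead, in 512-byte
 * sectors, the same way blockdev --getra/--setra does. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>   /* BLKRAGET, BLKRASET */

int main(void)
{
    long ra;
    int fd = open("/dev/sda", O_RDONLY);  /* placeholder device */

    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (ioctl(fd, BLKRAGET, &ra) == 0)
        printf("readahead: %ld sectors (%ld KB)\n", ra, ra / 2);

    /* 64 KB stripe x 5 drives = 320 KB = 640 sectors;
     * BLKRASET takes the value directly and needs CAP_SYS_ADMIN */
    if (ioctl(fd, BLKRASET, 640UL) != 0)
        perror("BLKRASET");

    close(fd);
    return 0;
}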

I have a sata fakeraid at home of two drives using a stripe factor of 64 KB. If I don't issue O_DIRECT IO requests of at least 128 KB (the stripe width), then throughput drops significantly. If I issue multiple async requests of smaller size that total at least 128 KB, throughput also remains high. If you only issue a single 32 KB request at a time, then two requests must go to one drive and be completed before the other drive gets any requests, so it remains idle a lot of the time.

I think that makes sense (which is a change in this RAID performance business :( ).
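
To check my understanding, here is a rough sketch of the second case - mine, not Phillip's actual test; the device path is a placeholder, and glibc's POSIX AIO (link with -lrt) services the reads from a thread pool:

/* Keep both drives of a 2-disk, 64 KB-stripe array busy by
 * issuing two concurrent 64 KB O_DIRECT reads: together they
 * cover one full 128 KB stripe width. */
#define _GNU_SOURCE
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK (64 * 1024)   /* one stripe chunk */

int main(void)
{
    struct aiocb cb[2];
    const struct aiocb *list[2];
    void *buf[2];
    int fd, i;

    fd = open("/dev/md0", O_RDONLY | O_DIRECT);  /* placeholder device */
    if (fd < 0) { perror("open"); return 1; }

    for (i = 0; i < 2; i++) {
        if (posix_memalign(&buf[i], 512, CHUNK)) return 1;
        memset(&cb[i], 0, sizeof cb[i]);
        cb[i].aio_fildes = fd;
        cb[i].aio_buf    = buf[i];
        cb[i].aio_nbytes = CHUNK;
        cb[i].aio_offset = (off_t)i * CHUNK;  /* chunk i lands on drive i */
        if (aio_read(&cb[i])) { perror("aio_read"); return 1; }
        list[i] = &cb[i];
    }

    /* both 64 KB requests are now in flight at once, so
     * neither drive sits idle while the other works */
    for (i = 0; i < 2; i++) {
        while (aio_error(&cb[i]) == EINPROGRESS)
            aio_suspend(list, 2, NULL);
        printf("request %d returned %zd bytes\n", i, aio_return(&cb[i]));
    }

    close(fd);
    return 0;
}

If I instead issued one 32 KB request at a time, only one drive would be working at any moment, which matches the slowdown Phillip describes.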

Thanks.

Max.

Max Waterman wrote:
Hi,

I've been referred to this list from the linux-raid list.

I've been playing with a RAID system, trying to obtain the best bandwidth
from it.

I've noticed that I consistently get better (read) numbers from kernel 2.6.8
than from later kernels.

For example, I get 135MB/s on 2.6.8, but I typically get ~90MB/s on later
kernels.

I'm using this:

<http://www.sharcnet.ca/~hahn/iorate.c>

to measure the I/O rate. I'm using the Debian distribution. The h/w is a MegaRAID 320-2. The array I'm measuring is a RAID 0 of 4 Fujitsu MAX3073NC 15K rpm drives.

The later kernels I've been using are:

2.6.12-1-686-smp
2.6.14-2-686-smp
2.6.15-1-686-smp

The kernel that gives us the best results is:

2.6.8-2-386

(note that it's not an SMP kernel)

I'm testing on an otherwise idle system.

Any ideas as to why this might be? Any other advice/help?

Thanks!

Max.