Re: io performance...

Phillip Susi wrote:
Right, the kernel does not know how many disks are in the array, so it can't automatically increase the readahead. I'd say increasing the readahead manually should solve your throughput issues.

Any guesses for a good number?
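
For reference, the knob in question is the block device readahead; a minimal check, with /dev/md0 standing in for whatever the array device actually is:

	# readahead is reported and set in 512-byte sectors
	blockdev --getra /dev/md0
	blockdev --setra 1024 /dev/md0	# takes effect immediately, no remount needed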

We're in RAID10 (2+2) at the moment on 2.6.8-smp. These are the sequential block write/read figures I'm getting from bonnie++:

ra (sectors)	wr (KB/s)	rd (KB/s)
256	68K	46K
512	67K	59K
640	67K	64K
1024	66K	73K
2048	67K	88K
3072	67K	91K
8192	71K	96K
9216	67K	92K
16384	67K	90K
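
(Roughly how the table above was produced; the device name, mount point, and bonnie++ options here are illustrative rather than the exact command line:)

	for ra in 256 512 640 1024 2048 3072 8192 9216 16384; do
		blockdev --setra $ra /dev/md0	# set readahead in 512-byte sectors
		bonnie++ -d /mnt/test -u nobody	# sequential block write/read figures
	done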

I think we might end up going for 8192.
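
(The setting doesn't survive a reboot, so if we do settle on 8192 it would get re-applied from an init script; /dev/md0 is again just a placeholder:)

	blockdev --setra 8192 /dev/md0
	blockdev --getra /dev/md0	# should now report 8192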

We're still wondering why read performance is so low; it seems to be about the same as a single drive. RAID10 should give the same performance as RAID0 over two drives, shouldn't it?

Max.


Max Waterman wrote:

I left the stripe size at the default, which I believe is 64 KB; the same as your fakeraid below.

I did play with 'blockdev --setra' too.

I noticed it was 256 with a single disk, and with s/w raid it increased by 256 for each extra disk in the array; i.e. for the RAID0 array with 4 drives, it was 1024.

With h/w raid, however, it did not increase when I added disks. Should I use 'blockdev --setra 320' (i.e. 64 x 5 = 320, since we're now running RAID5 on 5 drives)?
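
(One thing I should double-check first: 'blockdev --setra' takes a count of 512-byte sectors, not kilobytes, so 320 would only be 160 KB of readahead. If the idea is to copy what the s/w raid code does (256 sectors per member disk), the equivalent would be more like:

	# 5 drives x 256 sectors = 1280 sectors = 640 KB of readahead
	blockdev --setra 1280 /dev/sda	# /dev/sda standing in for the h/w raid volume

though that per-member scaling is only the s/w raid heuristic; whether it's the right model for a hardware controller is an open question.)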



