That makes sense, but how come the numbers for the large file (2TB)
don't seem to match the average seek time that 15k drives are rated
for? Average seek time for those drives is in the 5ms range, and I
assume they aren't just seeking within the first couple of tracks when
they come up with that number (and Bonnie++ confirms this too). Any
thoughts on why it is double for me when I use the drives?
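For reference, here is a minimal sketch of the kind of per-block
random-read timing test being discussed (the actual test program is not
shown in this thread; the 4KB block size, the use of O_DIRECT, and the
program name are assumptions). Note that each timed read includes the
seek plus roughly 2ms of average rotational latency at 15,000 RPM plus
transfer time, not just the drive's quoted average seek:

/* randread.c - hypothetical sketch of a random-read latency test.
 * Build: gcc -O2 randread.c -o randread   (add -lrt on older glibc) */
#define _GNU_SOURCE             /* for O_DIRECT */
#define _FILE_OFFSET_BITS 64    /* 64-bit off_t for files larger than 2GB */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/stat.h>

#define BLOCK  4096             /* bytes read per I/O (assumption) */
#define NREADS 10000            /* number of random reads to time  */

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    /* O_DIRECT bypasses the page cache so every read really hits the disk. */
    int fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
    off_t nblocks = st.st_size / BLOCK;

    void *buf;
    if (posix_memalign(&buf, 4096, BLOCK)) { perror("posix_memalign"); return 1; }

    srand48(time(NULL));
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    for (int i = 0; i < NREADS; i++) {
        /* pick a random block-aligned offset anywhere in the file */
        off_t off = (off_t)(drand48() * (double)nblocks) * BLOCK;
        if (pread(fd, buf, BLOCK, off) != BLOCK) { perror("pread"); return 1; }
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("%d reads, avg %.2f ms per read\n", NREADS, ms / NREADS);
    return 0;
}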
Avi Kivity wrote:
> fitzboy wrote:
>> BUT... here is what I need to understand: the file size has a drastic
>> effect on performance. If I am doing random reads from a 20GB file
>> (the system only has 2GB of RAM, so caching is not a factor), I get
>> performance about where I want it to be: about 5.7-6ms per block. But
>> if that file is 2TB, then the time almost doubles, to 11ms. Why is
>> this? No other factors changed, only the file size.
>
> With the 20GB file, the disk head is seeking over 1% of the tracks. With
> the 2TB file, it is seeking over 100% of the tracks.
>
>> I am assuming that somewhere along the way, the kernel now has to do an
>> additional read from the disk for some metadata for XFS... perhaps the
>> btree for the file doesn't fit in the kernel's memory, so it actually
>> needs to do two seeks: one to find out where to go on disk and then
>> one to get the data. Is that the case?
>
> No, the btree should be completely cached fairly soon into the test.
>
>> If so, how can I remedy this? How can I tell the kernel to keep all of
>> the XFS metadata for the file in memory?
>
> Add more disks. If you're writing, use RAID 1.
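As a side note (not from this thread), one way to sanity-check how much
extent metadata the file actually has is to count its extents with the
FIEMAP ioctl; this is just a sketch, assuming a kernel with FIEMAP
support, and the program name is made up:

/* count_extents.c - hypothetical helper: report how many extents a file
 * has, as a rough proxy for the size of its extent btree. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/fs.h>       /* FS_IOC_FIEMAP */
#include <linux/fiemap.h>   /* struct fiemap, FIEMAP_MAX_OFFSET */

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct fiemap fm;
    memset(&fm, 0, sizeof(fm));
    fm.fm_start = 0;
    fm.fm_length = FIEMAP_MAX_OFFSET;  /* map the whole file */
    fm.fm_extent_count = 0;            /* 0 = just report how many extents exist */

    if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) { perror("FS_IOC_FIEMAP"); return 1; }

    printf("%s: %u extents\n", argv[1], fm.fm_mapped_extents);
    return 0;
}

If the extent count is small, the extent btree is tiny and will stay
cached, which is consistent with the point above that the extra latency
comes from longer seeks rather than metadata reads.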