On Fri, Apr 27, 2007 at 01:08:17AM +1000, Nick Piggin wrote:
> Andy Whitcroft wrote:
> >Nick Piggin wrote:
> >
>
> >>I don't understand what you mean at all. A block has always been a
> >>contiguous area of disk.
> >
> >
> >Let's take Nick's definition of a block as a disk-based unit for the
> >moment. That does not change the key contention here: even with
> >hardware specifically designed to handle 4k pages, that hardware handles
> >larger contiguous areas more efficiently. David Chinner gives us
> >figures showing major overall throughput improvements from (I assume)
> >shorter scatter/gather lists and better tag utilisation. I am loath to
> >say we can just blame the hardware vendors for poor design.
>
> So their controllers get double the throughput when going from 512K
> (128x4K pages) to 2MB (128x16K pages) requests. Do you really think
> it is to do with command processing overhead?

No - it has to do with things like the RAID controller caching behaviour,
the number of disks a single request can keep busy, getting I/Os large
enough to avoid partial stripe writes, etc. Remember that this
controller is often on the other side of an HBA, so large I/Os are
really desirable here....
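
[Editorial sketch, not part of Dave's mail: the scatter/gather entry
count, disk count and chunk size below are made-up example values, used
only to show the arithmetic behind "128x4K = 512K vs 128x16K = 2MB"
requests and why an I/O smaller than a full stripe forces a
read-modify-write of parity.]

/* rough back-of-the-envelope calculation, example numbers only */
#include <stdio.h>

int main(void)
{
	unsigned long sg_entries = 128;			/* hypothetical controller sg limit */
	unsigned long page_4k = 4096, page_16k = 16384;
	unsigned long disks = 8, chunk = 256 * 1024;	/* hypothetical RAID5 geometry */

	/* With one page per sg entry, the largest single request the
	 * controller can be handed is entries * page size. */
	printf("max request, 4K pages:  %lu KiB\n",
	       sg_entries * page_4k / 1024);		/* 128 * 4K  = 512 KiB */
	printf("max request, 16K pages: %lu KiB\n",
	       sg_entries * page_16k / 1024);		/* 128 * 16K = 2 MiB */

	/* A full-stripe write on RAID5 covers (disks - 1) data chunks;
	 * anything smaller means the controller must read old data and
	 * parity back in before it can write. */
	printf("full stripe write:       %lu KiB\n",
	       (disks - 1) * chunk / 1024);		/* 7 * 256K = 1792 KiB */

	return 0;
}

With these example numbers, only the 16K-page request is large enough to
cover a whole stripe in one go, which is the sort of effect Dave is
describing.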
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group