Re: Some NCQ numbers...

On Wed, Jul 04 2007, James Bottomley wrote:
> On Wed, 2007-07-04 at 10:19 +0900, Tejun Heo wrote:
> > Michael Tokarev wrote:
> > > Well.  It looks like the results do not depend on the
> > > elevator.  Originally I tried with deadline, and just
> > > re-ran the test with noop (hence the long delay in
> > > answering) - changing the Linux elevator changes almost
> > > nothing in the results - modulo some random "fluctuations".
> > 
> > I see.  Thanks for testing.
> > 
> > > In any case, NCQ - at least in this drive - just does
> > > not work.  Linux with its I/O elevator may help to
> > > speed things up a bit, but the disk does nothing in
> > > this area.  NCQ doesn't slow things down either - it
> > > just does not work.
> > > 
> > > The same goes for ST3250620NS "enterprise" drives.
> > > 
> > > By the way, Seagate announced the Barracuda ES 2 series
> > > (in the 500..1200GB range if memory serves) - maybe with
> > > those, NCQ will work better?
> > 
> > No one would know without testing.
> > 
> > > Or maybe it's libata which does not implement NCQ
> > > "properly"?  (As I showed before, with almost all
> > > good old SCSI drives TCQ helps a lot - up to a 2x
> > > difference and more - with multiple I/O threads)
> > 
> > Well, what the driver does is minimal.  It just passes all the
> > commands through to the hard drive.  After all, NCQ/TCQ gives the
> > hard drive more responsibility regarding request scheduling.
> 
> Actually, in many ways the results support a theory of SCSI TCQ Jens
> used when designing the block layer.  The original TCQ theory held that
> the drive could make much better head scheduling decisions than the
> Operating System, so you just used TCQ to pass all the outstanding I/O
> unfiltered down to the drive and let it schedule.  However, the I/O
> results always seemed to indicate that the effect of TCQ was negligible
> beyond around 4 outstanding commands, leading to the second theory that
> all TCQ was good for was saturating the transport, and that scheduling
> decisions were, indeed, better left to the OS (hence all our I/O
> schedulers).

Indeed, the above I still find to be true. The only real case where
larger queue depths make a real difference is a purely random read (or
write, with write caching off) workload. And those situations are
largely synthetic, hence benchmarks tend to show NCQ as a lot more
beneficial, since they construct workloads that consist 100% of random
IO. Real life is rarely so black and white.
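
For illustration, here is a minimal sketch of the sort of synthetic
100% random-read load such benchmarks generate: several threads each
doing 4KB O_DIRECT reads at random offsets, so the drive always has a
handful of outstanding commands to reorder. The device path, capacity
and counts below are placeholders, not anything from the tests above.

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NTHREADS	8	/* roughly the depth the drive will see */
#define IOS_PER_THREAD	1000
#define BLKSZ		4096

static const char *dev = "/dev/sdb";			    /* placeholder device */
static const off_t dev_bytes = 250LL * 1000 * 1000 * 1000; /* placeholder size */

static void *reader(void *arg)
{
	unsigned seed = (unsigned) (long) arg;
	void *buf;
	int fd, i;

	if (posix_memalign(&buf, BLKSZ, BLKSZ))
		return NULL;
	fd = open(dev, O_RDONLY | O_DIRECT);	/* bypass the page cache */
	if (fd < 0) {
		perror("open");
		free(buf);
		return NULL;
	}
	for (i = 0; i < IOS_PER_THREAD; i++) {
		off_t blk = rand_r(&seed) % (dev_bytes / BLKSZ);

		if (pread(fd, buf, BLKSZ, blk * BLKSZ) != BLKSZ)
			perror("pread");
	}
	close(fd);
	free(buf);
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	long i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, reader, (void *) i);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}

Build with "gcc -O2 -pthread -D_FILE_OFFSET_BITS=64" and watch the
queue depth with iostat; run the same program with a single thread and
the drive sees at most one outstanding command, which is where NCQ
stops mattering.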

Additionally, there are cases where large drive queue depths hurt a
lot. The drive has no notion of fairness or of process-to-IO mappings,
so AS/CFQ has to artificially limit the queue depth when competing IO
processes are doing semi (or fully) sequential workloads, or throughput
plummets.
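
A toy sketch of that idea (made-up names and limits, nothing like the
real AS/CFQ code): once the drive already holds a few requests and more
than one process is competing, the scheduler simply holds further
requests back instead of handing the decision to the firmware.

#include <stdbool.h>
#include <stdio.h>

#define HW_QUEUE_LIMIT	4	/* made-up cap; real schedulers tune this */

struct proc_queue {
	const char *name;
	int queued;		/* requests still sitting in the elevator */
};

/*
 * Decide whether to push one more request from this queue down to the
 * drive.  With more than one active process, keep the drive queue
 * shallow so the elevator, not the drive firmware, decides whose turn
 * it is next.
 */
static bool may_dispatch(const struct proc_queue *q, int in_flight,
			 int nr_active_queues)
{
	if (nr_active_queues > 1 && in_flight >= HW_QUEUE_LIMIT)
		return false;
	return q->queued > 0;
}

int main(void)
{
	struct proc_queue a = { "seq-reader", 8 };
	struct proc_queue b = { "rand-writer", 8 };
	int in_flight = 0;
	int tick;

	for (tick = 0; tick < 12; tick++) {
		struct proc_queue *q = (tick & 1) ? &b : &a;

		if (may_dispatch(q, in_flight, 2)) {
			q->queued--;
			in_flight++;
			printf("tick %2d: dispatch from %-11s (drive queue %d)\n",
			       tick, q->name, in_flight);
		} else {
			/* pretend the drive completed its oldest request */
			if (in_flight > 0)
				in_flight--;
			printf("tick %2d: hold back %-11s    (drive queue %d)\n",
			       tick, q->name, in_flight);
		}
	}
	return 0;
}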

So while NCQ has some benefits, I typically tend to prefer managing the
IO queue largely in software instead of punting to (often) buggy
firmware.

-- 
Jens Axboe

