Re: How much swap on laptop?

Les wrote:
> One trick I have done on heavy processing tasks is to put swap on a
> totally separate drive.  Thus the seeks were swap-relative and required
> less access time.  In some cases this can provide dramatic increases in
> throughput.  Also having a separate disk for tmp will provide you with
> some benefits depending on the type of software you are using.  But
> again, YMMV.

Hadders wrote:   
> Hmm, I'd imagine though that the limitation of the throughput will be
> dictated by the maximum bus speed you're interfacing with here?
> (sounds wordy) Ummm, if you're using native PCI ATA, then that is
> 133 MB/s?

OK, there are a *number* of misconceptions there.

In most cases, you won't get anything *close* to that speed. The biggest
reason is access times.

Disks are physical things -- the head has to get in exactly the right
place, and then the disk has to rotate to exactly the right place,
before you can *start* transferring data. This means that if you've got
a lot of small accesses, the "access time" for everything to get in the
right place is going to be *much* larger than the time to transfer the
data. You'll be lucky to get 200 transfers a second on standard IDE
drives. If each of those transfers is a single 4 KB page, then you're
going to get at most 800 KB/s.
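
(If you want to play with those numbers, here's a back-of-the-envelope
sketch in Python -- the seek and rotation figures are just assumed
"optimistic" values for illustration, not measurements from any real
drive:)

    # Rough model of random 4 KB transfers on a single disk.
    # seek_ms and rotation_ms are assumptions, not measurements.
    seek_ms = 2.0        # assumed (optimistic) average seek
    rotation_ms = 3.0    # assumed average rotational delay
    page_kb = 4

    access_ms = seek_ms + rotation_ms           # 5 ms before any data moves
    transfers_per_sec = 1000.0 / access_ms      # 200 transfers a second
    print("%.0f transfers/s -> %.0f KB/s"
          % (transfers_per_sec, transfers_per_sec * page_kb))
    # 200 transfers/s -> 800 KB/s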

This *seriously* sucks. It's why hard disks can be quite so slow. It's
why big database systems have so many disks.

With things like command queuing, the disk can re-order outstanding
requests so that the head arrives at each one just before the right
part of the disk rotates underneath it. This cuts down the "waiting for
the disk" part of the access time. The other thing you can do is make
sure that the data you want is physically close together -- hence the
idea of putting it on a separate drive.

This is the sort of situation that Les is talking about.
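
To put rough numbers on that (same made-up figures as the sketch
above): if swap lives on its own drive, the head only ever has to move
across a small band of tracks, so the seek part of the access time
shrinks and the transfers-per-second figure climbs:

    # Compare a shared drive (long seeks) with a dedicated swap drive
    # (short seeks).  All times here are assumptions for illustration.
    rotation_ms = 3.0
    for label, seek_ms in [("shared drive", 9.0),
                           ("dedicated swap drive", 1.5)]:
        access_ms = seek_ms + rotation_ms
        print("%-22s %5.1f ms/access, ~%3.0f transfers/s"
              % (label, access_ms, 1000.0 / access_ms))
    # shared drive           12.0 ms/access, ~ 83 transfers/s
    # dedicated swap drive    4.5 ms/access, ~222 transfers/s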

After that, normally the next big limitation will be how fast the drive
can pull data off the platters once everything's in the right place.
This "transfer rate" tends to fall off as you go from the outside of
the disk to the inside: on a modern SATA disk it could be anywhere from
about 80 MB/s to 30 MB/s.

Obviously, this is *far* less of a practical limitation. But you only
reach those speeds while you're continually reading or writing (and
you'd better hope that the file is laid out so it's physically
contiguous on disk).
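
As a rough comparison (again with assumed figures), moving 100 MB
sequentially versus as random 4 KB pages looks something like this:

    # Sequential vs. random time to move 100 MB off one disk.
    # The 60 MB/s and 800 KB/s figures are assumptions from above.
    data_mb = 100
    sequential_mb_s = 60.0     # assumed streaming rate, mid-platter
    random_kb_s = 800.0        # from the 200 x 4 KB figure earlier

    print("sequential: %6.1f s" % (data_mb / sequential_mb_s))
    print("random:     %6.1f s" % (data_mb * 1024 / random_kb_s))
    # sequential: about 1.7 s; random: about 128 s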

The next limitation will be the cable to the adapter (something like
SATA, IDE or SCSI), and then you have the connection between the adapter
and the rest of the computer. These days, it will be something PCI-like,
but it could be a traditional 133 MB/s PCI connection, a faster PCI,
PCI-X or PCI-E bus, or a connection within a chipset that just looks
PCI-like to software.

Note that PCI has overheads of its own -- some chipsets can only get
about 90 MB/s practical bandwidth to any one device out of a 133 MB/s
bus. And in any case, that 133 MB/s is shared among all devices on the
bus -- gigabit Ethernet can easily saturate it, too.
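
To see how easily it fills up (the 90 MB/s is the same assumption as
above, and the NIC figure is just the wire rate):

    # One gigabit NIC against a classic 32-bit/33 MHz PCI bus.
    pci_theoretical_mb_s = 133.0
    pci_practical_mb_s = 90.0      # assumed, per the chipsets above
    gige_mb_s = 1000.0 / 8         # 1000 Mbit/s ~= 125 MB/s on the wire

    print("gigabit Ethernet wants ~%.0f MB/s of a bus that can really move "
          "~%.0f MB/s" % (gige_mb_s, pci_practical_mb_s))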

> That limit is not so with an actual RAID controller, is it?
> As the traffic stays relative to the controller

Ultimately the data has to get off the RAID controller into main memory
(or in the other direction). There is a potential bottleneck here -- it
can be a very real one with data going to or from four high-speed SATA
drives in a RAID 0 configuration (potentially 300 MB/s) if that SATA
adapter is stuck behind a 133 MB/s PCI bus.
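
Rough arithmetic for that case (the per-drive figure is an assumption):

    # Four striped SATA drives behind a plain 133 MB/s PCI slot.
    drives = 4
    per_drive_mb_s = 75.0          # assumed sustained rate per drive
    pci_mb_s = 133.0               # theoretical bus ceiling

    array_mb_s = drives * per_drive_mb_s    # 300 MB/s the array could supply
    print("array can supply %.0f MB/s; the bus caps it at ~%.0f MB/s"
          % (array_mb_s, min(array_mb_s, pci_mb_s)))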

Hope this helps,

James.

-- 
E-mail:     james@ | I'll be more enthusiastic about encouraging thinking
aprilcottage.co.uk | outside the box when there's evidence of any thinking
                   | going on inside it.
                   |     -- Terry Pratchett

