Re: file system for solid state disks

Jens Axboe wrote:
On Thu, Aug 23 2007, Theodore Tso wrote:
On Thu, Aug 23, 2007 at 07:52:46AM +0200, Jan Engelhardt wrote:
On Aug 23 2007 01:01, Richard Ballantyne wrote:
What file system that is already in the Linux kernel do people recommend
I use for my laptop, which now contains a solid state disk?
If I had to choose, the list of options seems to be:

- logfs
  [unmerged]

- UBI layer with any fs you like
  [just a guess]
The question is whether the solid state disk gives you access to the
raw flash, or whether you have to go through the flash translation
layer because it's trying to look (exclusively) like a PATA or SATA
drive.  There are some SSDs with a form factor and interface that make
them a drop-in replacement for a laptop hard drive, and a number of the
newer laptops offering SSDs seem to use these because (a) the makers
don't have to radically change their design, (b) they can stay
compatible with Windows, and (c) users can purchase the laptop with
either a traditional hard drive or an SSD as an option, since at the
moment SSDs are far more expensive than disks.

So if you can't get access to the raw flash layer, then what you're
probably going to be looking at is a traditional block-oriented
filesystem, such as ext3, although there are clearly some things that
could be done such as disabling the elevator.
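
For what it's worth, with an FTL-backed drive the elevator can be
switched per device at runtime through sysfs (or at boot with
elevator=noop). Below is a minimal sketch in C of the runtime switch,
assuming the SSD shows up as sda; it is just the programmatic
equivalent of "echo noop > /sys/block/sda/queue/scheduler" run as root.

#include <stdio.h>

int main(void)
{
	/* the device name "sda" is an assumption; adjust for the actual SSD */
	FILE *f = fopen("/sys/block/sda/queue/scheduler", "w");

	if (!f) {
		perror("open scheduler attribute");
		return 1;
	}
	/* the kernel matches the written name against its registered elevators */
	fputs("noop\n", f);
	fclose(f);
	return 0;
}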

It's more complicated than that, I'd say. If the job of the elevator were
purely to sort requests based on sector criteria, then I'd agree that
noop would be the best way to go. But the elevator also arbitrates access
to the disk between processes. Even if you don't pay a seek penalty, you
would still rather get your sync reads in without having to wait for
that huge writer that just queued hundreds of megabytes of I/O in front
of you (and will have done so behind your read as well, making you wait
again for the subsequent read).

In most cases the time spent in the elevator is minimal compared to the benefits, even without your next suggestion.
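
To make the arbitration point concrete, here is a toy illustration
(plain userspace C, not kernel code, with made-up request sizes): a
dispatcher that lets a pending sync read jump ahead of queued async
writeback, which is exactly the behaviour a pure noop policy gives up.

#include <stdio.h>

struct req { const char *owner; int sync; long sectors; };

/* Pick the next request to dispatch: pending sync I/O first,
 * async writeback only when no sync request is waiting. */
static int pick_next(const struct req *q, const int *done, int n)
{
	int i, async = -1;

	for (i = 0; i < n; i++) {
		if (done[i])
			continue;
		if (q[i].sync)
			return i;	/* small sync read jumps the queue */
		if (async < 0)
			async = i;	/* first waiting async request */
	}
	return async;			/* -1 when everything is done */
}

int main(void)
{
	struct req q[] = {
		{ "writer (async)", 0, 204800 },	/* ~100MB of queued writeback */
		{ "reader (sync)",  1, 8 },		/* one small read */
		{ "writer (async)", 0, 204800 },
	};
	int done[3] = { 0, 0, 0 };
	int i;

	while ((i = pick_next(q, done, 3)) >= 0) {
		printf("dispatch %-16s %ld sectors\n", q[i].owner, q[i].sectors);
		done[i] = 1;
	}
	return 0;
}

The real schedulers refine the same idea with time slices and
anti-starvation logic; the point here is only that the decision has
nothing to do with seek distance.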

My plan in this area is to add a simple storage profile and attach it to
the queue. Start simple: allow a device driver to inform the block
layer that the device has no seek penalty. Then the I/O scheduler can
make more informed decisions about what to do; e.g. for an SSD, sector
proximity may not have much meaning, so we should not take it into
account.

Eventually the optimal solution may require both bandwidth and seek information. If "solid state disk" means flash on a peripheral bus, it's probably not all that fast in terms of transfer rate. If it means NV memory (battery-backed or core), probably nothing changes either, *if* it's on a peripheral bus; but if it's on a card plugged into the backplane, the transfer rate may be high enough that ordering requests costs more than just waiting for them to complete. This could be extended to nbd and iSCSI devices as well, I think, to optimize performance.
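
Back-of-envelope numbers illustrate the trade-off; the 2 ms hold time
and 128 KiB request size below are assumptions, picked only to show
where the break-even sits.

#include <stdio.h>

int main(void)
{
	const double hold_ms = 2.0;	/* assumed time spent holding a request to merge/sort it */
	const double kib = 128.0;	/* assumed request size in KiB */
	const double rates[] = { 30, 80, 300, 1000 };	/* device transfer rates, MB/s */
	int i;

	for (i = 0; i < 4; i++) {
		double xfer_ms = kib / 1024.0 / rates[i] * 1000.0;

		printf("%6.0f MB/s: transfer %.2f ms, hold %.1f ms -> %s\n",
		       rates[i], xfer_ms, hold_ms,
		       xfer_ms > hold_ms ? "ordering is comparatively cheap"
					 : "holding costs more than transferring");
	}
	return 0;
}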

Your plan in this area seems a good one, but if you agree that transfer rate will be important (if it isn't already), perhaps you can design it so that capability can easily be added.
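
Purely as a sketch of what such a profile might look like (the field
names, helper names, and the 500 MB/s threshold are invented for
illustration; this is not an existing kernel interface), something like
the following would carry both pieces of information:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-queue storage profile: the seek-penalty flag Jens
 * proposes plus the bandwidth hint suggested above. */
struct storage_profile {
	bool         no_seek_penalty;	/* sector proximity is irrelevant */
	unsigned int bandwidth_mbps;	/* 0 = unknown */
};

/* Toy policy decisions an I/O scheduler might derive from the profile. */
static bool should_sort_by_sector(const struct storage_profile *p)
{
	if (p->no_seek_penalty)
		return false;		/* e.g. flash SSD behind an FTL */
	return true;
}

static bool should_batch_and_wait(const struct storage_profile *p)
{
	/* on a very fast backplane device, holding requests to merge
	 * them can cost more than simply issuing them */
	return p->bandwidth_mbps < 500;	/* threshold is an assumption */
}

int main(void)
{
	struct storage_profile ssd   = { true, 80 };	/* flash on a peripheral bus */
	struct storage_profile nvram = { true, 2000 };	/* NV memory on the backplane */

	printf("ssd:   sort=%d batch=%d\n",
	       should_sort_by_sector(&ssd), should_batch_and_wait(&ssd));
	printf("nvram: sort=%d batch=%d\n",
	       should_sort_by_sector(&nvram), should_batch_and_wait(&nvram));
	return 0;
}

A driver would fill this in and attach it to its request queue; the
scheduler could then skip sector sorting when there is no seek penalty,
and shorten or drop its batching window when the bandwidth hint says
waiting costs more than ordering.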

--
Bill Davidsen <[email protected]>
  "We have more to fear from the bungling of the incompetent than from
the machinations of the wicked."  - from Slashdot
