Re: New filesystem for Linux

On Fri, 3 Nov 2006, Jörn Engel wrote:

On Fri, 3 November 2006 15:19:09 +0100, Mikulas Patocka wrote:

Really it can batch any number of modifications into one transaction
(unless fsync or sync is called). A transaction is closed only on
fsync/sync, when 2 minutes pass (this can be adjusted), or when the
disk runs out of space.
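
A minimal sketch of that commit rule, in C; the names (commit_needed,
free_blocks, etc.) are illustrative only, not the actual filesystem code:

/*
 * Sketch only: a transaction is closed on fsync/sync, when the
 * (adjustable) timeout of 2 minutes expires, or when the disk
 * runs out of free space; otherwise modifications keep batching.
 */
#define COMMIT_TIMEOUT_SECS	(2 * 60)	/* adjustable */

struct fs_state {
	unsigned long	last_commit_secs;	/* time of the last commit */
	unsigned long	free_blocks;		/* free space on the device */
	int		sync_requested;		/* set by fsync()/sync() */
};

static int commit_needed(const struct fs_state *fs, unsigned long now_secs)
{
	if (fs->sync_requested)
		return 1;	/* fsync/sync forces a commit */
	if (now_secs - fs->last_commit_secs >= COMMIT_TIMEOUT_SECS)
		return 1;	/* periodic commit */
	if (fs->free_blocks == 0)
		return 1;	/* disk ran out of space */
	return 0;		/* keep batching into the open transaction */
}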

Interesting.  Let me pick an example and see where we're going from
there.  You have four directories, A, B, C and D, none of which is the
parent of another.  Two cross-directory renames happen:
$ mv A/foo B/
$ mv C/bar D/

This will cause four modifications, one to each of the directories.  I
would have assumed that the modifications to A and B receive one
transaction number n, C and D get a different one, n+1 if nothing else
is going on in between.

They most likely receive the same transaction number (this is not like a
journaling transaction --- new transactions are issued only under the
conditions above).

That means one transaction in your terminology contains many
transactions in my terminology, which explains a bit of confusion.

They may be written by the buffer thread in any order (that's some
improvement over journaling).

And when you sync, with one write of memory_cct to disk, you make old
entries permanently invalid and new entries permanently valid.

Ok, I seem to understand it now.  Quite interesting.

If the machine crashes before the sync (and some of the directory sectors
were written and some were not), the new entries will always be considered
invalid and the old entries always valid, because a new crash count will be
used and the crash count table entry at the old crash count index will
never be modified.
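
A minimal sketch of the validity test as I understand it from this thread;
the names (cc, txc, cct), the field widths and the exact comparison are
assumptions, not the real on-disk format:

#include <stdint.h>

#define CC_MAX 65536	/* 16-bit crash count => 65536 table entries */

/*
 * Sketch only. Each directory modification is assumed to be tagged
 * with the crash count (cc) and transaction count (txc) that were
 * current when it was written. The on-disk crash count table (cct)
 * holds, for every crash count, the highest committed transaction
 * count; committing means writing memory_cct over it in one write.
 */
struct entry_tag {
	uint16_t cc;	/* crash count at the time the entry was written */
	uint32_t txc;	/* transaction count at that time */
};

/*
 * A modification takes effect only once its tag has been committed.
 * After a crash a new crash count is used and cct[old cc] is never
 * raised again, so modifications that were written but not committed
 * before the crash stay invalid forever.
 */
static int modification_valid(const struct entry_tag *e,
			      const uint32_t cct[CC_MAX])
{
	return e->txc <= cct[e->cc];
}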

So the only overflow you have to fear is the 16-bit cc overflow.  Once
that hits, your filesystem is end-of-life and cannot be written to
anymore.

Has it ever occurred to you how similar your approach is to the
venerable Sprite LFS?  LFS syncs by writing a "checkpoint", which
contains quite a bit of information.  You sync by just writing a
single number.  But in the end, both designs have a lot of
non-committed data already written to disk which only becomes valid
once the final (checkpoint|transaction count) is written.

And considering that writing several kB of contiguous data to disk is
nearly free, compared to the initial seek, both commit operations
should take about the same time.
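
Rough numbers for a 2006-era disk, just to make that concrete (the figures
are assumptions): a seek plus rotational delay is on the order of 10 ms,
while transferring, say, 8 kB of checkpoint data at 50 MB/s takes roughly
0.16 ms, so either commit is dominated by the seek and the size of the
commit record hardly matters.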

So what, if I may ask, are the advantages of your design over Sprite
LFS?

http://citeseer.ist.psu.edu/rosenblum91design.html

Jörn

It is very different from LFS. LFS is a log-structured filesystem, i.e. the journal spans the whole device. The problem with this design is that it is fast for writes (cool benchmark numbers) and slow in real-world workloads.

LFS places files according to the time they were created, not according to their directory.

If you have a directory with some project, containing files that you edited today, a day ago, a week ago, a month ago, etc., then any current filesystem (even ext2) will try to place the files near each other --- while LFS will scatter the files over the whole partition according to the time they were written. I believe that this is the reason why log-structured filesystems are not in wide use --- this is a case where optimizing for benchmarks kills real-world performance.

Mikulas
