Re: ext3_ordered_writepage() questions

On Fri, Mar 17, 2006 at 10:44:39PM +0000, Jamie Lokier wrote:
>    - Sometimes they do work on A, sometimes they do work on B.
>      Things like editing pictures or spreadsheets or whatever.
> 
>    - They use "rsync" to copy their working directory from A to B, or
>      B to A, when they move between computers.
> 
>    - They're working on A one day, and there's a power cut.

... and this is not a problem, because rsync works by building the
file to be copied under a temporary name such as .filename.NoC10k.
If there is a power cut, there will be a stale dot file on A that
might take up some extra disk space, but it won't cause any loss of
synchronization between B and A.

Hence, in your scenario, nothing would go wrong.  And since rsync
builds up a new file each time, and only deletes the old file and
renames the new file into its place once the transfer has succeeded,
in ordered data mode all of the data blocks will be forced to disk
before the metadata operations for the close and rename are allowed
to be committed.  Hence, it's perfectly safe, even with Badari's
proposed change.
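
For concreteness, here is a minimal sketch of that temp-file-and-rename
pattern.  This is illustrative only, not rsync's actual code: it
assumes the target path has no directory component, and it skips the
short-write handling a real implementation would need.

    #include <stdio.h>      /* snprintf(), rename() */
    #include <stdlib.h>     /* mkstemp() */
    #include <unistd.h>     /* write(), fsync(), close(), unlink() */

    /* Write "len" bytes of "data" to "path" via a temporary
     * dot-file, so that a crash leaves either the old file intact
     * or a stale dot-file, never a half-written "path". */
    static int atomic_replace(const char *path, const char *data,
                              size_t len)
    {
            char tmp[4096];
            int fd;

            /* Build the new contents under a temporary dot-file
             * name in the current directory. */
            snprintf(tmp, sizeof(tmp), ".%s.XXXXXX", path);
            fd = mkstemp(tmp);
            if (fd < 0)
                    return -1;

            if (write(fd, data, len) != (ssize_t) len)
                    goto fail;

            /* Force the data blocks to disk.  In ext3 ordered mode
             * the data would hit disk before the rename's metadata
             * commits anyway; the explicit fsync() gives the same
             * guarantee on other filesystems and journalling modes. */
            if (fsync(fd) < 0)
                    goto fail;
            if (close(fd) < 0) {
                    unlink(tmp);
                    return -1;
            }

            /* Atomically replace the old file with the new one. */
            if (rename(tmp, path) < 0) {
                    unlink(tmp);
                    return -1;
            }
            return 0;
    fail:
            close(fd);
            unlink(tmp);
            return -1;
    }

    int main(void)
    {
            static const char msg[] = "hello, world\n";
            return atomic_replace("output.txt", msg, sizeof(msg) - 1)
                    ? EXIT_FAILURE : EXIT_SUCCESS;
    }

A crash at any point leaves at most a stale dot-file to clean up,
which is exactly the failure mode described above.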

I would also note that even if rsync didn't do the checksum test,
and *even* if it didn't use the dotfile with the uniquifier, rsync
always checks whether the file sizes match, and since the file sizes
would be different, it would have caught the problem that way.
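
(To illustrate the point, a toy version of that size check -- not
rsync's actual code -- is just a pair of stat() calls:)

    #include <sys/stat.h>

    /* Toy quick-check: report that "dst" needs to be re-copied if
     * it is missing or its size differs from "src" -- no checksums
     * are needed to catch a truncated file. */
    static int needs_transfer(const char *src, const char *dst)
    {
            struct stat ss, ds;

            if (stat(src, &ss) < 0 || stat(dst, &ds) < 0)
                    return 1;
            return ss.st_size != ds.st_size;
    }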

						- Ted

P.S.  There is a potential problem with mtimes causing confusing
results with make, but it has nothing to do with ext3 journalling
modes, and everything to do with the fact that most programs,
including the compiler and linker, do not write their output files
using the rsync technique of using a temporary filename, and then
renaming the file to its final location once the compiler or linker
operation is complete.  So it doesn't matter what filesystem you
use: if you are writing out some gargantuan .o file, and the system
crashes before the .o file is completely written, the fact that it
exists and is newer than the source files will lead make(1) to
conclude that the file is up to date, and not try to rebuild it.
This has been true for the three decades or so that make has been
around, yet I don't see people running around in histrionics about
what a "horrible problem" this is.  If people did care, the right
way to fix it would be to make the C compiler use the temp filename
and rename trick, or to change the default .c.o rule in Makefiles to
do the same thing (see the sketch below).  But in reality, it doesn't
seem to bother most developers, so no one has really bothered.
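
(For the curious, such a crash-safe .c.o rule might look something
like this -- a sketch, not a drop-in replacement; the $@.tmp.$$$$
temporary-name scheme is just one illustrative choice:)

    # Compile to a temporary name, then rename into place only if
    # the compiler succeeded.  A crash mid-compile leaves a stale
    # foo.o.tmp.<pid>, never a truncated foo.o newer than foo.c.
    # ($$$$ is make's escape for the shell's $$, i.e. the PID;
    # the recipe line must start with a tab.)
    .c.o:
            $(CC) $(CFLAGS) -c -o $@.tmp.$$$$ $< && \
                    mv $@.tmp.$$$$ $@ || { rm -f $@.tmp.$$$$; exit 1; }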
