On Thu, Jul 06, 2006 at 10:15:42PM -0400, Bill Davidsen wrote:
> Trond Myklebust wrote:
>
> >Nobody gives a rats arse about backups: those are infrequent and
> >can/should use more sophisticated techniques such as checksumming.
> >
> Actually, those of us who do run production servers care vastly about
> backups. And besides being utterly unscalable (checksum 20 TB of files
> four times a day to find what changed???), you would have to remember
> the checksums for all those files.
Not four times a day, but once every month or two it would be a
*very* good idea to do periodic sweeps of files to make sure the hard
drive hasn't corrupted the files out from under you. If you have 20+
TB of data, the probability of silent data corruption starts going up.
That would be justification for storing the checksum in the inode or
in the EA of the file, with the kernel automatically clearing it if
the file was *deliberately* changed. The goal is to detect the disk
silently changing the data for you, free of charge....
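A minimal userspace sketch of such a sweep might look like the following. The
proposed inode/EA storage (with the kernel clearing the checksum on deliberate
writes) would need kernel support; as a stand-in, this sketch keeps the
checksum in a hypothetical sidecar file next to each data file, so a mismatch
on a later pass flags possible silent corruption.

```python
import hashlib
import os
import tempfile

def file_sha256(path, bufsize=1 << 20):
    """Stream the file through SHA-256 so 20 TB trees don't need huge RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def sweep(path):
    """One pass over a file: record the checksum if new, else verify it.

    Returns 'new', 'ok', or 'CORRUPT'.  The sidecar file is a userspace
    stand-in for the proposed inode/EA field.
    """
    sidecar = path + ".sha256"
    current = file_sha256(path)
    if not os.path.exists(sidecar):
        with open(sidecar, "w") as f:
            f.write(current)
        return "new"
    with open(sidecar) as f:
        stored = f.read().strip()
    return "ok" if stored == current else "CORRUPT"

# Demo: record, verify, then simulate the disk flipping a bit under us
# (a "deliberate" write would normally also refresh the stored checksum).
d = tempfile.mkdtemp()
p = os.path.join(d, "data")
with open(p, "wb") as f:
    f.write(b"hello")
print(sweep(p))   # first pass records the checksum -> new
print(sweep(p))   # later pass verifies it         -> ok
with open(p, "wb") as f:
    f.write(b"hellO")   # silent change: sidecar not updated
print(sweep(p))   # -> CORRUPT
```

In a real deployment the sweep would of course walk a whole tree and be rate
limited; the point here is only the record-then-verify cycle.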
- Ted
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/