Jan does have a point about bad blocks. A couple of years
ago I had a relatively new disk start to go bad on random
blocks. I detected it fairly quickly but still had some data
loss. All of the compressed archives that were hit were
near-total losses; most other files were at least partially
recoverable.
It is not a matter of your operating system writing
to bad blocks. It is a matter of what happens when the
blocks on which your data sit go bad underneath you.
This issue has also been discussed by people working
with revision control systems. If you are archiving
data, how do you know your data is still good
until you actually need it? If you do not know it
is bad, you may well get rid of the good copies thinking
you do not need the extras... it does happen.
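To make that concrete: here is a rough sketch in Python of
the kind of periodic check I mean. Record checksums when you
archive, re-verify them on a schedule, and you find out about
rot before you throw away the spare copies. The manifest name
and format here are made up for illustration.

import hashlib

def sha256(path):
    # Hash the file in chunks so large archives do not need
    # to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical manifest: one "<hexdigest>  <path>" per line,
# written at archive time.
with open("MANIFEST.sha256") as m:
    for line in m:
        digest, path = line.split(None, 1)
        path = path.strip()
        print("OK" if sha256(path) == digest else "CORRUPT",
              path)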
I would be quite hesitant to go with on-disk compression
unless damage were limited to only the bad bits or blocks
and did not propagate through the rest of the file.
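It is easy to see why the compressed stuff fares so badly. A
quick demonstration with Python's standard gzip module (the
data is fake, but any gzip file behaves the same way): flip a
single byte in the middle and the whole file is unreadable,
not just the damaged spot.

import gzip

data = b"important archival data\n" * 1000
blob = gzip.compress(data)

corrupted = bytearray(blob)
corrupted[len(corrupted) // 2] ^= 0xFF  # one flipped byte

try:
    gzip.decompress(bytes(corrupted))
except Exception as e:
    # Decompression fails outright (or the CRC check does);
    # everything after the bad byte is effectively lost.
    print("decompression failed:", e)

The same one-byte error in an uncompressed file would cost
you one block and nothing more.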
Perhaps if everyone used hardware RAID and the RAID
automatically detected a difference due to trashed
data on one disk and flagged the admin with a warning...
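Something along those lines is easy to sketch. Here is a toy
scrub in Python using simple RAID-5 style XOR parity; this is
an illustration of the idea, not how any real controller does
it. A recomputed parity that disagrees with the stored parity
means some disk returned trash -- detectable, though single
parity alone cannot tell you which disk without more
information.

from functools import reduce

def xor_parity(blocks):
    # XOR corresponding bytes across all data blocks.
    return bytes(reduce(lambda a, b: a ^ b, col)
                 for col in zip(*blocks))

data = [b"disk0blk", b"disk1blk", b"disk2blk"]  # one stripe
parity = xor_parity(data)        # stored at write time

data[1] = b"disk1BAD"            # silent corruption on disk 1
if xor_parity(data) != parity:   # periodic scrub re-checks
    print("parity mismatch: flag the admin")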
BTW: I'm a CMU alum, so who are you working with, Jan?