Alan Cox wrote:
On Tue, 2006-08-01 at 16:52 +0200, Adrian Ulrich wrote:
WriteCache, Mirroring between 2 Datacenters, snapshotting.. etc..
you don't need your filesystem to be super-robust against bad sectors
and such stuff because:
You do, it turns out. It's becoming more and more of an issue that the
sheer amount of storage means that the undetected error rate from disks,
hosts, memory, cables and everything else is rising.
Yikes. Undetected.
Wait, what? Disks, at least, would be protected by RAID. Are you
telling me RAID won't detect such an error?
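It won't, on a normal read. A minimal sketch (hypothetical Python, nothing to do with real md/RAID code) of why: a RAID-1 read is satisfied from one copy and the mirrors are never compared, so a silently flipped bit comes back as valid data unless a checksum stored at write time is verified on the way out.

```python
import random
import zlib

def mirror_write(data: bytes):
    """Write the same data to two replicas (a toy RAID-1)."""
    return [bytearray(data), bytearray(data)]

def mirror_read(replicas) -> bytes:
    """A normal RAID-1 read: pick ONE copy, no cross-check."""
    return bytes(random.choice(replicas))

good = b"account balance: $1000"
replicas = mirror_write(good)
replicas[0][19] ^= 0x01          # silent bit flip on disk 0: $1000 -> $1100

maybe_bad = mirror_read(replicas)  # may return the corrupted copy unchallenged

# An end-to-end checksum stored at write time catches the flip whenever
# the bad copy is returned; the good copy still verifies cleanly.
stored_crc = zlib.crc32(good)
detected = (zlib.crc32(maybe_bad) != stored_crc) or (maybe_bad == good)
```

Scrubbing can compare mirrors after the fact, but without a checksum it only knows the copies *differ*, not which one is right.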
It just seems wholly alien to me that errors would go undetected, and
we're OK with that, so long as our filesystems are robust enough. If
it's an _undetected_ error, doesn't that cause way more problems
(impossible problems) than FS corruption? Ok, your FS is fine -- but
now your bank database shows $1k less on random accounts -- is that ok?
There has been a great deal of discussion about this at the filesystem
and kernel summits - and data is getting kicked the way of networking:
end-to-end checks, not reliability in the middle.
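The end-to-end idea above can be sketched like this (a hypothetical illustration, not any proposed kernel interface): the producer seals data with a checksum and the consumer verifies it, so corruption anywhere in the middle - disk, cable, controller, host memory - is caught at the endpoint regardless of how reliable the intermediate layers claim to be.

```python
import hashlib

def seal(payload: bytes) -> bytes:
    """Producer side: prepend a SHA-256 digest to the payload."""
    return hashlib.sha256(payload).digest() + payload

def unseal(blob: bytes) -> bytes:
    """Consumer side: verify the digest before trusting the payload."""
    digest, payload = blob[:32], blob[32:]
    if hashlib.sha256(payload).digest() != digest:
        raise IOError("end-to-end checksum mismatch")
    return payload

blob = seal(b"ledger entry 42")

# A bit flip anywhere along the path is detected at the endpoint:
corrupt = bytearray(blob)
corrupt[40] ^= 0x80
try:
    unseal(bytes(corrupt))
    caught = False
except IOError:
    caught = True
```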
Sounds good, but I've never let discussions by people smarter than me
prevent me from asking the stupid questions.
The sort of changes this needs hit the block layer and every fs.
Seems it would need to hit every application also...
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/