On Monday 21 November 2005 18:45, Pavel Machek wrote:
> Hi!
> > Sun is proposing it can predict what storage layout will be efficient for
> > as yet unheard of quantities of data, with unknown access patterns, at
> > least a couple decades from now. It's also proposing that data
> > compression and checksumming are the filesystem's job. Hands up anybody
> > who spots conflicting trends here already? Who thinks the 128 bit
> > requirement came from marketing rather than the engineers?
>
> Actually, if you are storing information in single protons, I'd say
> you _need_ checksumming :-).
You need error correcting codes at the media level. A molecular storage
system like this would probably look a lot more like flash or DRAM than it
would magnetic media. (For one thing, I/O bandwidth and seek times become a
serious bottleneck with high-density, single-point-of-access systems.)
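To make the distinction concrete, here's a toy Hamming(7,4) encoder/decoder
in plain userspace C. It's an illustration, not anything out of a real
driver: the point is that media-level ECC silently repairs a flipped bit,
where a bare checksum can only flag it.

#include <stdio.h>

/* Encode 4 data bits (d1..d4, d1 most significant) into a 7-bit
 * codeword; codeword bit 0 corresponds to Hamming position 1. */
static unsigned encode(unsigned d)
{
    unsigned d1 = (d >> 3) & 1, d2 = (d >> 2) & 1;
    unsigned d3 = (d >> 1) & 1, d4 = d & 1;
    unsigned p1 = d1 ^ d2 ^ d4;      /* covers positions 1,3,5,7 */
    unsigned p2 = d1 ^ d3 ^ d4;      /* covers positions 2,3,6,7 */
    unsigned p4 = d2 ^ d3 ^ d4;      /* covers positions 4,5,6,7 */
    return p1 | (p2 << 1) | (d1 << 2) | (p4 << 3) |
           (d2 << 4) | (d3 << 5) | (d4 << 6);
}

/* Correct at most one flipped bit, then return the 4 data bits. */
static unsigned decode(unsigned c)
{
    unsigned syndrome = 0;
    for (int pos = 1; pos <= 7; pos++)
        if ((c >> (pos - 1)) & 1)
            syndrome ^= pos;
    if (syndrome)                    /* nonzero syndrome names the bad position */
        c ^= 1u << (syndrome - 1);
    return ((c >> 2) & 1) << 3 | ((c >> 4) & 1) << 2 |
           ((c >> 5) & 1) << 1 | ((c >> 6) & 1);
}

int main(void)
{
    unsigned word = 0xA;             /* data bits 1010 */
    unsigned code = encode(word);
    code ^= 1u << 5;                 /* simulate a single-bit media error */
    printf("sent %x, recovered %x from damaged codeword\n", word, decode(code));
    return 0;
}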
> [I actually agree with Sun here, not trusting the disk is a good idea. At
> least you know a kernel panic/oops/etc can't be caused by bit corruption on
> the disk.]
But who said the filesystem was the right level to do this at?
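Checksums can just as easily live below the filesystem. A hypothetical
block-layer shim (verify_sector() and the layout are made up for the sake of
the example; crc32() is zlib's) could reject a silently corrupted sector on
every read without the filesystem knowing anything about it:

#include <stdint.h>
#include <stdio.h>
#include <zlib.h>

#define SECTOR_SIZE 512

/* Return 0 if the sector matches its out-of-band checksum, -1 if not. */
static int verify_sector(const unsigned char data[SECTOR_SIZE],
                         uint32_t stored_crc)
{
    uint32_t crc = (uint32_t)crc32(0L, data, SECTOR_SIZE);
    return crc == stored_crc ? 0 : -1;
}

int main(void)
{
    unsigned char sector[SECTOR_SIZE] = { 0 };
    uint32_t good = (uint32_t)crc32(0L, sector, SECTOR_SIZE);

    sector[100] ^= 0x01;             /* simulate silent corruption on the media */
    if (verify_sector(sector, good))
        fprintf(stderr, "sector checksum mismatch, fail the read\n");
    return 0;
}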
Rob