Pavel Machek wrote:
>
>Maybe the card is pretty close to failing, but... two successive
>disk errors still should not be cause for journal corruption.
>
>[Also, errors could be correlated. Imagine a severe overheat: you'll
>get successive failing writes, but if you let it cool down, you'll
>still have working media... only with a corrupt journal :-)]
> Pavel
Hmm... So how is this handled in other systems? E.g. if you yank a USB
device whilst there is a lot of outstanding data inside the device that
hasn't been acked yet.
The way I see it, filesystems should assume the following after a failed write:
* 0-n sectors were written successfully.
* 0-1 sectors have corrupt data.
* 0-m sectors have old data.
* The lower layer will report back 0-k successfully written sectors,
where k <= n.
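To make that contract concrete, here is a rough, hypothetical sketch
(dev_write_sector and the surrounding types are invented for
illustration, not a real kernel API) of a lower layer that only
acknowledges the prefix of sectors it knows reached the medium:

#include <stddef.h>

struct sector_req {
        unsigned long long start;      /* first sector number */
        size_t count;                  /* n: sectors in the request */
        const unsigned char *buf;      /* count * 512 bytes of data */
};

/* Assumed device hook: writes one 512-byte sector, returns 0 on success. */
extern int dev_write_sector(unsigned long long sector,
                            const unsigned char *data);

/*
 * Returns k, the number of sectors known to be on the medium.
 * On failure the caller must still assume that sectors [k..n) may
 * hold old data and that at most one sector holds a torn write.
 */
static size_t write_sectors(const struct sector_req *req)
{
        size_t k;

        for (k = 0; k < req->count; k++) {
                if (dev_write_sector(req->start + k, req->buf + k * 512))
                        break;  /* stop at the first failure; don't guess */
        }
        return k;       /* k <= n by construction */
}

The point is only that the return value never overstates what was
written; the device having silently written more than it acknowledged
(k < n) is harmless under the assumptions above.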
So perhaps the best course of action is to remove the sector-by-sector
failsafe? It would increase the chance of k < n, but it would not break
the above assumptions.
Rgds
Pierre