On Sat, 4 Nov 2006, Grzegorz Kulewski wrote:
On Sat, 4 Nov 2006, Mikulas Patocka wrote:
> If it overflows, it increases the crash count instead. So really you have
> 2^47 transactions, or 65536 crashes and 2^31 transactions between each
> crash.
it seems to me that you only need to be able to represent a range of the
most recent 65536 crashes... and could have an online process which goes
about "refreshing" old objects to move them forward to the most recent
crash state. as long as you know the minimum on-disk crash count you can
use it as an offset.
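
For illustration only, here is a minimal C sketch of the offset idea proposed
above; the structure and field names are hypothetical and are not taken from
SpadFS:

#include <stdint.h>

/*
 * Hypothetical illustration, not SpadFS code: interpret the 16-bit on-disk
 * crash count relative to the smallest crash count still present on disk,
 * so the field only has to cover a sliding window of 65536 crashes rather
 * than an absolute count.
 */
struct fs_state {
        uint16_t min_crash_count;   /* smallest crash count still on disk */
        uint16_t cur_crash_count;   /* crash count of the running instance */
};

/* Index of an on-disk crash count within the current 65536-entry window. */
static inline uint16_t crash_index(const struct fs_state *fs, uint16_t cc)
{
        /* unsigned 16-bit subtraction wraps, so the window may straddle 0xffff */
        return (uint16_t)(cc - fs->min_crash_count);
}

/*
 * A background "refresh" pass would re-tag sufficiently old objects with
 * cur_crash_count, letting min_crash_count advance and the window slide.
 */
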
After 65536 crashes you have to run spadfsck --reset-crash-counts. Maybe I'll
add that functionality to the kernel driver too, so that it will be formally
correct.
Is there any reason you can not make these fields 64 or even 128 bits in size
to increase these "limits" dramatically?
Yes:
First --- you need a table of 65536 entries. A table of 4G entries would be
too large.
Second --- it would make the structures larger and thus make some operations
(like scanning a directory with find) slower.
Mikulas
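
To make the size argument concrete, here is a rough sketch of what a
per-crash-count validity table might look like; this only illustrates the
general technique, and the names and entry size are assumptions, not SpadFS's
actual on-disk format:

#include <stdint.h>

#define CRASH_COUNTS 65536u     /* one entry per possible 16-bit crash count */

/*
 * Illustrative only: a per-crash-count watermark of the last committed
 * transaction.  At the 4 bytes per entry assumed here the table is
 * 65536 * 4 = 256 KiB; with a 32-bit crash count it would need 2^32
 * entries (16 GiB at the same entry size), which is the size problem
 * described above.
 */
static uint32_t committed_txn[CRASH_COUNTS];

/*
 * An object tagged (crash_count, txn) is treated as valid only if its
 * transaction number does not exceed the watermark recorded for that
 * crash count.
 */
static inline int object_is_valid(uint16_t crash_count, uint32_t txn)
{
        return txn <= committed_txn[crash_count];
}
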