> This is a terrible assumption in general (i.e. if filesize % blocksize
> is close to uniformly distributed). If you remove one byte and the data
> is stored with blocksize B, then you either save zero bytes with
> probability 1-1/B or you save B bytes with probability 1/B. The
> expected number of bytes saved is B*1/B=1. Since expectation is linear,
> if you remove x bytes, the expected number of bytes saved is x (even if
> there is more than one byte removed per file).
You didn't calculate the probability of actually saving a full block
(which is the only thing that matters). I assumed it's small enough to
ignore in practice, since the amount of trailing whitespace is
negligible compared to the total file size.
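The linearity-of-expectation argument quoted above can be checked numerically. This is a small sketch (block size B and removed-byte count x are arbitrary example values, not from the thread): averaging over all residues of filesize mod B, removing x bytes saves exactly x bytes of disk usage in expectation.

```python
B = 4096  # assumed block size (example value)
x = 7     # bytes removed per file, e.g. trailing whitespace (example value)

def disk_usage(n, B):
    # bytes occupied when an n-byte file is stored in B-byte blocks
    return -(-n // B) * B  # ceiling division

# average the savings over one full cycle of filesize % B,
# i.e. filesize % B uniformly distributed
total = sum(disk_usage(n, B) - disk_usage(n - x, B)
            for n in range(x, x + B))
print(total / B)  # expected saving per file equals x
```

Most individual files save nothing (probability 1 - x/B for x < B), but the rare files that cross a block boundary save a whole block, so the average works out to x.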
-Andi