Compressing pages [was: Re: Smaller compressed kernel source tarballs?]

On Tue, 3 October 2006 14:24:01 -0400, Phillip Susi wrote:
> Jan Engelhardt wrote:
> >There are lots of obscure compression formats that achieve somewhat
> >better compression at the cost of MUCH more time (setting aside that
> >they are not exactly open), such as MS CAB and ACE.
> 
> CAB is an archive container format, not a compression algorithm.  Last
> time I worked on some code to handle it, they used the standard DEFLATE
> algorithm, as implemented by gzip (but had the ability to support others
> in the future), and could only compress 32 KiB blocks.  The small block
> size led to poor compression.

Actually, compression in 4 KiB blocks is a _very_ interesting
benchmark.  JFFS2 compresses data in chunks of that size, and other
compressed filesystems likely do the same, though possibly with
something larger like 64 KiB.

And the results are completely different in that benchmark.  Gzip
actually beats bzip2 hands-down on compression ratio, for example:
bzip2's block-sorting transform only pays off on large blocks (up to
900 kB by default), and its per-block overhead is proportionally huge
at 4 KiB.

I used to have a script, but cannot find it anymore.  Basically
something like:

while (read next 4 KiB from input file) {
	compress chunk
	add compressed_size to total
}
print total
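
A minimal sketch of the same thing in Python (not the original script;
zlib stands in for gzip's deflate and the bz2 module for bzip2, and the
level-9 setting is an assumption, not necessarily what the lost script
used):

#!/usr/bin/env python3
# Compress a file in independent 4 KiB chunks and add up the compressed
# sizes, approximating what a compressed filesystem has to work with.
import sys, zlib, bz2

CHUNK = 4096  # 4 KiB, as in the pseudocode above

def chunked_total(path, compress):
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            total += len(compress(chunk))
    return total

if __name__ == "__main__":
    path = sys.argv[1]
    print("deflate (zlib):", chunked_total(path, lambda b: zlib.compress(b, 9)))
    print("bzip2 (bz2)   :", chunked_total(path, lambda b: bz2.compress(b, 9)))

Point it at any reasonably large file and compare the two totals; that
is the per-chunk comparison described above.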

Jörn

-- 
Unless something dramatically changes, by 2015 we'll be largely
wondering what all the fuss surrounding Linux was really about.
-- Rob Enderle
