On Sun, Apr 03, 2005 at 01:45:58PM +0400, Artem B. Bityuckiy wrote:
>
> Here is a quote from RFC 1951 (page 4):
>
> A compressed data set consists of a series of blocks, corresponding
> to successive blocks of input data. The block sizes are arbitrary,
> except that non-compressible blocks are limited to 65,535 bytes.
>
> Thus,
>
> 1. The 64K limit applies only to non-compressible data, in which case zlib
> just copies it as is, adding a 1-byte header and a 1-byte EOB marker.
I think the overhead could be higher.  But even if it is 2 bytes
per block, the total overhead for 1M of incompressible input is
2 * 1048576 / 65536 = 32 bytes.
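(For reference, the stored-block framing in RFC 1951 is a 3-bit block
header, padding to the next byte boundary, and a 2-byte LEN plus a
2-byte NLEN, so roughly 5 bytes per block; with 64K blocks that works
out to about 5 * 16 = 80 bytes per 1M of input, which is still
negligible.)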
Also I'm not certain whether it will actually create 64K blocks.
According to the zlib documentation the block size might be as low as 16K.
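
A quick userspace sketch of that measurement, in case anyone wants to
check the numbers for themselves.  It only uses the ordinary zlib
library API (compressBound/compress2); the 1M input size and the
compression level are just illustrative choices.  Note that the output
also carries zlib's own 2-byte header and 4-byte Adler-32 trailer on
top of the per-block framing, so the reported overhead is a few bytes
more than the DEFLATE framing alone.

/* Measure deflate's overhead on incompressible input.
 * Build with: cc zlib_overhead.c -lz
 */
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

int main(void)
{
	const uLong in_len = 1 << 20;		/* 1M of incompressible data */
	unsigned char *in = malloc(in_len);
	uLongf out_len = compressBound(in_len);	/* worst-case output size */
	unsigned char *out = malloc(out_len);
	uLong i;

	if (!in || !out)
		return 1;

	for (i = 0; i < in_len; i++)		/* pseudo-random, won't compress */
		in[i] = rand() & 0xff;

	if (compress2(out, &out_len, in, in_len, 9) != Z_OK)
		return 1;

	printf("input    %lu bytes\n", in_len);
	printf("output   %lu bytes\n", out_len);
	printf("overhead %ld bytes\n", (long)out_len - (long)in_len);

	free(in);
	free(out);
	return 0;
}
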
> 3. If zlib compressed the data (i.e., applied LZ77 & Huffman coding),
> blocks may have arbitrary length.
Actually there is a limit on that too but that's not relevant to
this discussion.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt