On Sun, Apr 03, 2005 at 01:45:58PM +0400, Artem B. Bityuckiy wrote:
>
> Here is a cite from RFC-1951 (page 4):
>
>    A compressed data set consists of a series of blocks, corresponding
>    to successive blocks of input data.  The block sizes are arbitrary,
>    except that non-compressible blocks are limited to 65,535 bytes.
>
> Thus,
>
> 1. 64K is only applied to non-compressible data, in which case zlib just
> copies it as it is, adding a 1-byte header and a 1-byte EOB marker.
I think the overhead could be higher.  But even if it is 2 bytes per
block, then for 1M of incompressible input the total overhead is
2 * 1048576 / 65536 = 32 bytes.

Also I'm not certain that it will actually create 64K blocks.  It might
be as low as 16K according to the zlib documentation.

> 3. If zlib compressed data (i.e., applied LZ77 & Huffman), blocks may
> have arbitrary length.

Actually there is a limit on that too, but that's not relevant to this
discussion.

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[EMAIL PROTECTED]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
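
(Not part of the original mail: a minimal userspace sketch one could use to
check the overhead figures discussed above.  It pushes 1 MB of pseudo-random,
i.e. effectively incompressible, data through zlib's compress2() at
Z_NO_COMPRESSION so every block is a stored block, then prints the overhead
left after subtracting the 6-byte zlib (RFC 1950) header/trailer.  The 1 MB
input size and the use of userspace compress2() rather than the kernel's
zlib_deflate are illustrative assumptions, not something from the thread.)

/*
 * Sketch: measure how much zlib adds when the input is incompressible.
 * Feeds 1 MB of pseudo-random bytes through compress2() at level 0
 * (stored blocks only) and prints the size difference, i.e. roughly the
 * per-block overhead discussed above.  Per RFC 1951, stored blocks are
 * capped at 65,535 bytes each.
 *
 * Build: cc overhead.c -lz
 */
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

#define IN_SIZE (1024 * 1024)           /* 1 MB of incompressible input */

int main(void)
{
	static unsigned char in[IN_SIZE];
	/* Worst case for stored blocks is a few bytes per 16K-64K block,
	 * so a little slack on top of IN_SIZE is plenty. */
	static unsigned char out[IN_SIZE + 4096];
	uLongf out_len = sizeof(out);
	int i, ret;

	/* Pseudo-random input: effectively incompressible. */
	srand(1);
	for (i = 0; i < IN_SIZE; i++)
		in[i] = rand() & 0xff;

	/* compress2() wraps deflate(); Z_NO_COMPRESSION (level 0) forces
	 * every block to be emitted as a stored block. */
	ret = compress2(out, &out_len, in, IN_SIZE, Z_NO_COMPRESSION);
	if (ret != Z_OK) {
		fprintf(stderr, "compress2 failed: %d\n", ret);
		return 1;
	}

	/* Subtract the 2-byte zlib header and 4-byte Adler-32 trailer
	 * (RFC 1950) to leave only the raw DEFLATE block overhead. */
	printf("input %d bytes, output %lu bytes, block overhead %ld bytes\n",
	       IN_SIZE, (unsigned long)out_len,
	       (long)out_len - IN_SIZE - 6);
	return 0;
}

The exact figure depends on how large the stored blocks zlib actually emits
(anywhere from roughly 16K up to the 64K RFC limit, as noted above), but for
1 MB of input it stays in the tens-to-hundreds of bytes range either way.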