On Tue, Apr 19, 2005 at 04:51:56PM +0400, Artem B. Bityuckiy wrote:
>
> JFFS2 wants the following from pcompress():
> 1. compressible data: compress it; the previously offered algorithm works
> just fine here.
Yes, but the existing JFFS algorithm does a lot more than that. It tries
to pack as much
Herbert Xu wrote:
Actually, for JFFS2 we need to leave the incompressible data
uncompressed. So if the pcompress interface had been only for JFFS2,
I'd just return an error rather than expand the data. Is such behavior
acceptable for common Linux parts like CryptoAPI?
You mean you no longer need
Please keep [EMAIL PROTECTED] in the loop.
On Mon, Apr 18, 2005 at 07:09:29PM +0400, Artem B. Bityuckiy wrote:
>
> Actually, for JFFS2 we need to leave the incompressible data
> uncompressed. So if the pcompress interface had been only for JFFS2,
> I'd just return an error rather than expand the data.
Hello,
Well, with Mark Adler's help I've realized that extending zlib isn't
that simple a task.
Herbert Xu wrote:
What I was suggesting is to invert the calculation that deflateBound
is doing so that it gives a lower bound on the input buffer size
that does not exceed a given output buffer size.
Ac
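For illustration, a user-space sketch of such an inversion, assuming the
deflateBound() formula zlib 1.2.x uses for the default parameters (the exact
constants vary between releases, and the helper names here are made up):

/*
 * bound(n) approximates deflateBound() for default settings:
 *     n + (n >> 12) + (n >> 14) + (n >> 25) + 13
 * max_input_for_output() finds the largest n whose bound still fits
 * into avail_out, by binary search (bound() is monotonic).
 */
#include <stddef.h>

static size_t bound(size_t n)
{
        return n + (n >> 12) + (n >> 14) + (n >> 25) + 13;
}

static size_t max_input_for_output(size_t avail_out)
{
        size_t lo = 0, hi = avail_out;

        while (lo < hi) {
                size_t mid = lo + (hi - lo + 1) / 2;

                if (bound(mid) <= avail_out)
                        lo = mid;
                else
                        hi = mid - 1;
        }
        return lo;
}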
Herbert Xu wrote:
This relies on implementation details within zlib_deflate, which may
or may not be the case.
It should be easy to test though. Just write a user-space program
which does exactly this and feed it something from /dev/urandom.
Well, Herbert, you're right. In the case of non-compressible
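A quick user-space sketch of such a test (plain zlib's compress2() rather than
the kernel's zlib_deflate; the 1 MiB size just mirrors the example discussed
in the thread):

/* Feed zlib incompressible data from /dev/urandom and see whether it
 * fits into a same-sized output buffer minus a small reserve. */
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

#define IN_LEN  (1048576 - 12)
#define OUT_LEN (1048576)

int main(void)
{
        unsigned char *in = malloc(IN_LEN);
        unsigned char *out = malloc(OUT_LEN);
        uLongf out_len = OUT_LEN;
        FILE *f = fopen("/dev/urandom", "rb");
        int ret;

        if (!in || !out || !f || fread(in, 1, IN_LEN, f) != IN_LEN) {
                perror("setup");
                return 1;
        }

        ret = compress2(out, &out_len, in, IN_LEN, Z_DEFAULT_COMPRESSION);
        printf("ret=%d (%s), in=%d out=%lu\n", ret,
               ret == Z_OK ? "Z_OK" : "not Z_OK", IN_LEN,
               (unsigned long)out_len);
        return ret != Z_OK;
}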
Herbert Xu wrote:
Each crypto/deflate user gets their own private zlib instance.
Where is the problem?
Hmm, OK. No problem, that was just an RFC. :-)
--
Best Regards,
Artem B. Bityuckiy,
St.-Petersburg, Russia.
On Sun, Apr 03, 2005 at 04:01:19PM +0400, Artem B. Bityuckiy wrote:
>
> >For instance for JFFS2 it's absolutely incorrect since it breaks
> >compatibility. Incidentally, JFFS should create a new compression
> >type that doesn't include the zlib header so that we don't need the
> >head-skipping sp
On Sun, Apr 03, 2005 at 03:53:40PM +0400, Artem B. Bityuckiy wrote:
> Herbert Xu wrote:
> >Can you please point me to the paragraph in RFC 1950 that says this?
>
> OK, if we do s/correct/compliant/, here it is:
>
> Section 2.3, page 7
Sorry, I thought you were referring to an RFC that defined IP
Herbert Xu wrote:
On Sun, Apr 03, 2005 at 03:41:07PM +0400, Artem B. Bityuckiy wrote:
I also wonder, is it at all correct to use negative windowBits in the
crypto API? I mean, if windowBits is negative, zlib doesn't produce the
It's absolutely correct for IPComp. For other uses it may or may not
Herbert Xu wrote:
Can you please point me to the paragraph in RFC 1950 that says this?
OK, if we do s/correct/compliant/, here it is:
Section 2.3, page 7
---
A compliant compressor must produce streams with correct CMF, FLG and
ADLER32, but need not support preset dictionaries.
On Sun, Apr 03, 2005 at 03:41:07PM +0400, Artem B. Bityuckiy wrote:
>
> I also wonder, is it at all correct to use negative windowBits in the
> crypto API? I mean, if windowBits is negative, zlib doesn't produce the
It's absolutely correct for IPComp. For other uses it may or may not
be correct.
On Sun, Apr 03, 2005 at 02:23:42PM +0400, Artem B. Bityuckiy wrote:
>
> It must not. Look at the algorithm more closely.
This relies on implementation details within zlib_deflate, which may
or may not be the case.
It should be easy to test though. Just write a user-space program
which does exactly this and feed it something from /dev/urandom.
On Sun, Apr 03, 2005 at 12:19:17PM +0100, David Woodhouse wrote:
>
> But now that we're not using Z_SYNC_FLUSH, it doesn't matter if we feed
> the input in smaller chunks. We can calculate a conservative estimate of
> the amount we'll fit, and keep feeding it input till the amount of space
> left in the
Herbert,
I also wonder, is it at all correct to use negative windowBits in the
crypto API? I mean, if windowBits is negative, zlib doesn't produce the
proper zlib stream header, which is incorrect according to RFC-1950. It
also doesn't calculate the adler32 checksum.
For example, if we work over an IP network (RF
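To make the difference concrete, here is a user-space sketch (ordinary zlib
API, error handling trimmed): with negative windowBits zlib emits a raw
RFC-1951 stream, with no CMF/FLG header and no adler32 trailer, which is the
form IPComp carries; positive windowBits gives the RFC-1950 wrapper.

#include <string.h>
#include <zlib.h>

static int compress_buf(const unsigned char *in, unsigned int in_len,
                        unsigned char *out, unsigned int out_len, int raw)
{
        z_stream s;
        int ret;

        memset(&s, 0, sizeof(s));
        /* windowBits = -MAX_WBITS suppresses the RFC-1950 wrapper. */
        ret = deflateInit2(&s, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                           raw ? -MAX_WBITS : MAX_WBITS, 8,
                           Z_DEFAULT_STRATEGY);
        if (ret != Z_OK)
                return -1;

        s.next_in = (unsigned char *)in;
        s.avail_in = in_len;
        s.next_out = out;
        s.avail_out = out_len;

        ret = deflate(&s, Z_FINISH);
        deflateEnd(&s);
        return ret == Z_STREAM_END ? (int)(out_len - s.avail_out) : -1;
}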
On Sun, 2005-04-03 at 20:17 +1000, Herbert Xu wrote:
> You might be right. But I'm not sure yet.
>
> If we use the current code and supply zlib_deflate with 1048576-12
> bytes of (incompressible) input and 1048576 bytes of output buffer,
> wouldn't zlib keep writing incompressible blocks and retu
Herbert Xu wrote:
You might be right. But I'm not sure yet.
If we use the current code and supply zlib_deflate with 1048576-12 bytes
of (incompressible) input and 1048576 bytes of output buffer, wouldn't
zlib keep writing incompressible blocks and return when it can't do that
anymore because the output buffer is full?
Herbert Xu wrote:
On Sun, Apr 03, 2005 at 01:45:58PM +0400, Artem B. Bityuckiy wrote:
> I think the overhead could be higher.
IIUC, no. But I'll check this in practice.
> But even if it is 2 bytes
OK, suppose so.
> per block, then for 1M of incompressible input the total overhead is
> 2 * 1048576 / 65536 = 32 bytes
On Sun, Apr 03, 2005 at 11:06:01AM +0100, David Woodhouse wrote:
>
> We're not interested in the _total_ overhead, in this context. We're
> interested in the number of bytes we have to have available in the
> output buffer in order to let zlib finish its stream.
>
> In the case of a 1MiB input ge
On Sun, 2005-04-03 at 20:00 +1000, Herbert Xu wrote:
> > 1. 64K is only applied to non-compressible data, in which case zlib just
> > copies it as it is, adding a 1-byte header and a 1-byte EOB marker.
>
> I think the overhead could be higher. But even if it is 2 bytes
> per block, then for 1M o
On Sun, Apr 03, 2005 at 01:45:58PM +0400, Artem B. Bityuckiy wrote:
>
> Here is a quote from RFC-1951 (page 4):
>
>    A compressed data set consists of a series of blocks, corresponding
>    to successive blocks of input data. The block sizes are arbitrary,
>    except that non-compressible blocks are limited to 65,535 bytes.
Herbert Xu wrote:
You can't compress 1M-12bytes into 1M using zlib when the block size
is 64K.
Here is a quote from RFC-1951 (page 4):

   A compressed data set consists of a series of blocks, corresponding
   to successive blocks of input data. The block sizes are arbitrary,
   except that non-compressible blocks are limited to 65,535 bytes.
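For a rough feel of the numbers, assuming about 5 bytes of framing per stored
block (the block header padded to a byte boundary plus the 2-byte LEN and
2-byte NLEN fields) plus the zlib header and adler32, one gets something like:

#include <stdio.h>

int main(void)
{
        unsigned long input = 1048576;                  /* 1 MiB */
        unsigned long blocks = (input + 65534) / 65535; /* 64K-1 max payload */
        unsigned long overhead = blocks * 5 + 2 + 4;    /* + header + adler32 */

        printf("%lu stored blocks, ~%lu bytes of overhead\n",
               blocks, overhead);
        return 0;
}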
On Sun, Apr 03, 2005 at 12:59:23PM +0400, Artem B. Bityuckiy wrote:
>
> Err, it looks like we've lost the conversation flow. :-) I was commenting
> on your phrase: "The question is what happens when you compress a 1GiB
> input buffer into a 1GiB output buffer."
>
> Then could you please in a nutshell w
Herbert Xu wrote:
On Sun, Apr 03, 2005 at 12:22:12PM +0400, Artem B. Bityuckiy wrote:
The latter case is possible if the input isn't compressible, and it is up
to the user to detect that and handle this situation properly (i.e., just
not compress it). So, IMO, there are no problems here, at least for
Herbert Xu wrote:
Surely that defeats the purpose of pcompress? I thought the whole point
was to compress as much of the input as possible into the output?
Absolutely correct.
So 1G into 1G doesn't make sense here.
I thought you were worried about the case of totally random input, which
may *grow*
Jörn Engel wrote:
> Absolutely. You can argue that 4KiB is too small and 8|16|32|64|...
> would be much better, yielding in better compression ratio. But
> having to read and uncompress the whole file when appending a few
> bytes is utter madness.
>
Dear Joern,
I meant that JFFS2 always reads by
On Sun, Apr 03, 2005 at 12:22:12PM +0400, Artem B. Bityuckiy wrote:
>
> The latter case is possible if the input isn't compressible, and it is up
> to the user to detect that and handle this situation properly (i.e., just
> not compress it). So, IMO, there are no problems here at least for the
> cr
Sorry :-)
Artem B. Bityuckiy wrote:
In case of crypto_comp_pcompress() if the input isn't compressible,
s/crypto_comp_pcompress()/crypto_comp_compress()/
--
Best Regards,
Artem B. Bityuckiy,
St.-Petersburg, Russia.
Artem B. Bityuckiy wrote:
In the former case the user may provide a second output buffer and
s/former/latter/
--
Best Regards,
Artem B. Bityuckiy,
St.-Petersburg, Russia.
Herbert Xu wrote:
The question is what happens when you compress a 1GiB input buffer into
a 1GiB output buffer.
If the user provides a 1GiB output buffer, then either we successfully
compress all of the 1GiB input or we compress just a part of it.
In the former case the user may provide a second output buffer
On Fri, Apr 01, 2005 at 04:41:44PM +0100, Artem B. Bityuckiy wrote:
>
> Suppose we compress 1 GiB of input, and have a 70K output buffer. We
The question is what happens when you compress a 1GiB input buffer into
a 1GiB output buffer.
> Surely I'll check. I'll even test the new implementation (
> I thought stored blocks (incompressible blocks) were limited to 64K
> in size, no?
Blocks are limited in size to 64K, true. But why does it matter for us?
Suppose we compress 1 GiB of input and have a 70K output buffer. We
reserve 5 bytes at the end and start calling zlib_deflate(stream,
Z_SYNC_FLUSH)
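In rough pseudo-kernel C, the loop being described might look like the
following (a sketch of the idea only, not the actual fs/jffs2 or
crypto/deflate.c code; the real thing differs in error handling and flush
policy):

#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/zlib.h>

#define STREAM_END_SPACE 5      /* 1 byte EOB + 4 bytes adler32 */

/* Compress as much of 'in' as fits into 'out', flushing as we go and
 * keeping STREAM_END_SPACE bytes back for the final Z_FINISH.
 * Assumes 's' was just set up with zlib_deflateInit(). */
static int pcompress_sketch(z_stream *s, unsigned char *in,
                            unsigned int in_len,
                            unsigned char *out, unsigned int out_len)
{
        s->next_in = in;
        s->next_out = out;

        while (s->total_in < in_len &&
               s->total_out + STREAM_END_SPACE < out_len) {
                unsigned int space = out_len - s->total_out - STREAM_END_SPACE;

                s->avail_out = space;
                /* Never feed more than could possibly fit in what's left. */
                s->avail_in = min_t(unsigned int, in_len - s->total_in, space);
                if (zlib_deflate(s, Z_SYNC_FLUSH) != Z_OK)
                        return -EINVAL;
        }

        /* Every chunk has been flushed, so Z_FINISH should only need the
         * reserved bytes for the empty final block and the adler32. */
        s->avail_in = 0;
        s->avail_out += STREAM_END_SPACE;
        if (zlib_deflate(s, Z_FINISH) != Z_STREAM_END)
                return -EINVAL;

        return s->total_in;     /* bytes of input actually consumed */
}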
On Fri, 1 April 2005 16:22:50 +0100, Artem B. Bityuckiy wrote:
>
> Another question: does JFFSx *really* need the pieces of a 4K page to be
> independently decompressible? If it weren't required, we would achieve
> better compression if we saved the zstream state. :-) But it is too
> l
On Fri, Apr 01, 2005 at 03:36:23PM +0100, Artem B. Bityuckiy wrote:
>
> In our code we do zlib_deflate(stream, Z_SYNC_FLUSH), so we always flush
> the output. So the final zlib_deflate(stream, Z_FINISH) requires 1 byte
> for the EOB marker and 4 bytes for adler32 (5 bytes total). That's all. If
David Woodhouse wrote:
> On Fri, 2005-04-01 at 18:57 +0400, Artem B. Bityuckiy wrote:
>
>>Yes, the compression will be better. But the implementation will be more
>>complicated.
>>We can try to use the "bound" functions to predict how many bytes to
>>pass to the deflate's input, but there is no
On Fri, 2005-04-01 at 18:57 +0400, Artem B. Bityuckiy wrote:
> Yes, the compression will be better. But the implementation will be more
> complicated.
> We can try to use the "bound" functions to predict how many bytes to
> pass to the deflate's input, but there is no guarantee they'll fit into
David Woodhouse wrote:
Hm. Could we avoid using Z_SYNC_FLUSH and stick with a larger amount?
That would give us better compression.
Yes, the compression will be better. But the implementation will be more
complicated.
We can try to use the "bound" functions to predict how many bytes to
pass to th
On Fri, 2005-04-01 at 15:36 +0100, Artem B. Bityuckiy wrote:
> In our code we do zlib_deflate(stream, Z_SYNC_FLUSH), so we always flush
> the output. So the final zlib_deflate(stream, Z_FINISH) requires 1 byte
> for the EOB marker and 4 bytes for adler32 (5 bytes total). That's all. If
> we compr
Hi Herbert,
> For the default zlib parameters (which crypto/deflate.c does not use)
> the maximum overhead is 5 bytes every 16KB plus 6 bytes. So for input
> streams less than 16KB the figure of 12 bytes is correct. However,
> in general the overhead needs to grow proportionally to the number of
> 16KB blocks.
Artem B. Bityuckiy <[EMAIL PROTECTED]> wrote:
>
>> Good catch. I'll apply this one.
> Only one small note: I've spotted this but didn't test it. I didn't make
> sure this is OK if we have never used the compression and then remove the
> deflate module (i.e., in this case we call zlib_[in|de]flateEnd()
Hello Herbert,
> The GNU coding style is completely different from Linux.
Ok, NP.
> Please reformat it when you fix up the overhead calculation.
Sure.
> Good catch. I'll apply this one.
Only one small note: I've spotted this but didn't test it. I didn't make
sure this is OK if we have never used
On Mon, Mar 28, 2005 at 05:22:36PM +, Artem B. Bityuckiy wrote:
>
> I made the changes to deflate_decompr() because the old version doesn't
> work properly for me. There are 2 changes.
>
> 1. I've added the following code:
>
> -
Hi Artem:
On Tue, Mar 29, 2005 at 12:55:11PM +0100, Artem B. Bityuckiy wrote:
>
> I'm not sure. David Woodhouse (the author) said that this is probably
> enough in any case, but a lot of time has passed since the code was
> written and he doesn't remember for sure. I have also seen some magic
> numbers
> Are you sure that 12 bytes is enough for all cases? It would seem
> to be safer to use the formula in deflateBound/compressBound from
> later versions (> 1.2) of zlib to calculate the reserve.
>
I'm not sure. David Woodhouse (the author) said that this is probably
enough in any case but a lot of
Hi Artem:
On Mon, Mar 28, 2005 at 05:22:36PM +, Artem B. Bityuckiy wrote:
>
> The first patch is the implementation of the deflate_pcompress()
Thanks for the patch. I'll comment on the second patch later.
Are you sure that 12 bytes is enough for all cases? It would seem
to be safer to use the formula in deflateBound/compressBound from
later versions (> 1.2) of zlib to calculate the reserve.
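For reference, later zlib's deflateBound()/compressBound() computes roughly
the following for the default parameters (the constants differ slightly
between 1.2.x releases), and the reserve could be derived from it instead of
being hard-coded:

static unsigned long deflate_worst_case(unsigned long source_len)
{
        /* Approximation of zlib 1.2.x deflateBound() for defaults. */
        return source_len + (source_len >> 12) + (source_len >> 14) +
               (source_len >> 25) + 13;
}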
On Sat, 2005-03-26 at 15:44 +1100, Herbert Xu wrote:
> I've whipped up something quick and called it crypto_comp_pcompress.
> How does this interface look to you?
Hello Herbert,
I've done some work. Here are 2 patches:
1. pcompress-deflate-1.diff
2. uncompress-1.diff
(should be applied in that order).
On Sat, 2005-03-26 at 15:44 +1100, Herbert Xu wrote:
> I've whipped up something quick and called it crypto_comp_pcompress.
> How does this interface look to you?
Thanks for the patch. At first glance it looks OK. I'll try to use
it and add the deflate method, which in fact is already implemented
Hi Artem:
On Fri, Mar 25, 2005 at 04:08:20PM +, Artem B. Bityuckiy wrote:
>
> I'm working on cleaning up the JFFS3 compression stuff. JFFS3 contains a
> number of compressors which actually shouldn't be there as they are
> platform-independent and generic. So we want to move them to the gener
Hello Herbert and others,
I'm working on cleaning up the JFFS3 compression stuff. JFFS3 contains a
number of compressors which actually shouldn't be there as they are
platform-independent and generic. So we want to move them to the generic
part of the Linux kernel instead of storing them in fs/jff