* Juan Quintela (quint...@redhat.com) wrote:
> "Dr. David Alan Gilbert" <dgilb...@redhat.com> wrote:
> > * Juan Quintela (quint...@redhat.com) wrote:
> >> We are divining by page_size to multiply again in the only use.
> >              ^--- typo
> >> Once there, impreve the comments.
> >                 ^--- typo
> >>
> >> Signed-off-by: Juan Quintela <quint...@redhat.com>
> >
> > OK, with the typo's fixed:
>
> Thanks.
>
> > Reviewed-by: Dr. David Alan Gilbert <dgilb...@redhat.com>
> >
> > but, could you also explain the x 2 (that's no worse than the current
> > code); is this defined somewhere in zlib?  I thought there was a routine
> > that told you the worst case?
>
> Nowhere.
>
> There are pathological cases where it can be worse.  Not clear at all
> how much (ok, for zlib it appears that it is on the order of dozen of
> bytes, because it marks it as uncompressed on the worst possible case),
> For zstd, there is not a clear/fast answer when you google.
> For zlib:
>
> ZEXTERN uLong ZEXPORT compressBound OF((uLong sourceLen));
> /*
>      compressBound() returns an upper bound on the compressed size after
>    compress() or compress2() on sourceLen bytes.  It would be used before
>    a compress() or compress2() call to allocate the destination buffer.
> */
>
> As this buffer is held for the whole migration, it is one for thread,
> this looked safe to me.  Notice that we are compressing 128 pages at a
> time, so for it not to compress anything looks very pathological.
>
> But as one says, better safe than sorry.

Yeh.

Dave

> If anyone that knows more about zlib/zstd give me different values, I
> will change that in an additional patch.
>
> Later, Juan.
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK