>>> > gzip --compress-like=old-foo foo
>
>gzip creates a dictionary (that gets really large) of strings that are
>used and encodes references to them. At the start the dictionary is
>empty, so the first char is pretty much unencoded and inserted into
>the dictionary. The next char is encoded usi
> > No, this won't work with very many compression algorithms. Most
> > algorithms update their dictionaries/probability tables dynamically based
> > on input. There isn't just one static table that could be used for
> > another file, since the table is automatically updated after every (or
> > n
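The dynamic-dictionary point is easy to see in a toy model. Below is a
minimal LZ78-style encoder in Python; it is only a sketch (gzip's
DEFLATE actually uses LZ77 over a 32 KB sliding window plus Huffman
coding), but it shows how the dictionary is grown from the input
stream itself, so there is no standalone table that could simply be
handed to a second file:

    # Toy LZ78-style encoder: the dictionary starts empty and gains one
    # phrase per step, so the model is entirely a function of the input.
    def lz78_encode(data: bytes):
        dictionary = {b"": 0}          # phrase -> index
        phrase = b""
        out = []                       # (index of longest match, next byte)
        for byte in data:
            candidate = phrase + bytes([byte])
            if candidate in dictionary:
                phrase = candidate     # keep extending the current match
            else:
                out.append((dictionary[phrase], byte))
                dictionary[candidate] = len(dictionary)
                phrase = b""
        if phrase:
            out.append((dictionary[phrase], None))
        return out

    print(lz78_encode(b"abababab"))    # later phrases reuse earlier ones

Two inputs that merely overlap end up with two different dictionaries,
which is exactly the obstacle a --compress-like switch would have to
work around.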
> " " == Otto Wyss <[EMAIL PROTECTED]> writes:
>> > gzip --compress-like=old-foo foo
>> >
>> > where foo will be compressed as old-foo was or as equivalent as
>> > possible. Gzip does not need to know anything about foo except how it
>> > was compressed. The switch "--compress-like" could be added to any
>> > compression algorithm (bzip2?)
> " " == Otto Wyss <[EMAIL PROTECTED]> writes:
>> > gzip --compress-like=old-foo foo
>>
>> AFAIK that's NOT possible with gzip. Same with bzip2.
>>
> Why not?
gzip creates a dictionary (that gets really large) of strings that are
used and encodes references to them. At the start the dictionary is
empty, so the first char is pretty much unencoded and inserted into
the dictionary.
> > gzip --compress-like=old-foo foo
>
> AFAIK that's NOT possible with gzip. Same with bzip2.
>
Why not?
> I wish it were that simple.
>
I'm not saying it's simple, I'm saying it's possible. I'm not a
compression specialist, but in theory there is nothing that
prevents this.
> > gzip --compress-like=old-foo foo
> >
> > where foo will be compressed as old-foo was or as equivalent as
> > possible. Gzip does not need to know anything about foo except how it
> > was compressed. The switch "--compress-like" could be added to any
> > compression algorithm (bzip2?)
Goswin Brederlow wrote:
> >> gzip --rsyncable, already implemented, ask Rusty Russell.
>
> > The --rsyncable switch might yield the same result (I haven't
> > checked it so far) but will need some internal knowledge of how to
> > determine the old compression.
>
> As far as I unde
> " " == John O Sullivan <[EMAIL PROTECTED]> writes:
> There were a few discussions on the rsync mailing lists about
> how to handle compressed files, specifically .debs. I'd like to
> see some way of handling it better, but I don't think it'll
> happen at the rsync end. Reasons include higher server CPU load
> to (de)compress every file that is transferred and
There were a few discussions on the rsync mailing lists about how to
handle compressed files, specifically .debs.
I'd like to see some way of handling it better, but I don't think
it'll happen at the rsync end. Reasons include higher server CPU load
to (de)compress every file that is transferred and
> " " == Andrew Lenharth <[EMAIL PROTECTED]> writes:
> What is better and easier is to ensure that the compression is
> deterministic (gzip by default is not, bzip2 seems to be), so
> that rsync can decompress, rsync, compress, and get the exact
> file back on the other side.
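That round trip is straightforward to check. A minimal sketch using
Python's zlib, which is deterministic for a fixed library version and
compression level (a real gzip file also embeds a header timestamp
that would have to be pinned):

    import zlib

    # Decompress, recompress with identical settings, and verify the
    # result is byte-for-byte what was transferred.
    original = zlib.compress(b"example payload " * 1000, 9)
    roundtrip = zlib.compress(zlib.decompress(original), 9)
    assert roundtrip == original   # same zlib version and level assumed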
> " " == Otto Wyss <[EMAIL PROTECTED]> writes:
>>> So why not solve the compression problem at the root? Why not
>>> try to change the compression in a way so it does produce a
>>> compressed result with the same (or similar) difference rate
>>> as the source?
>> Are you going to hack at *every* different kind of file format
>> that you might ever want
> No, I want rsync not even to be mentioned. All I want is something
> similar to
>
> gzip --compress-like=old-foo foo
>
> where foo will be compressed as old-foo was or as equivalent as
> possible. Gzip does not need to know anything about foo except how it
> was compressed. The switch "--compress-like" could be added to any
> compression algorithm (bzip2?)
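No such switch exists in gzip, but the nearest real mechanism is
zlib's preset dictionary (the zdict argument in Python's zlib), which
pre-loads the 32 KB sliding window so the new stream can reference
strings from the old file. A sketch with made-up stand-ins for
old-foo and foo:

    import zlib

    old_foo = b"Package: hello\nVersion: 1.0\n" * 500  # stand-in for old-foo
    foo     = b"Package: hello\nVersion: 1.1\n" * 500  # stand-in for foo

    seed = old_foo[-32768:]        # the window holds at most 32 KB
    comp = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS, zdict=seed)
    blob = comp.compress(foo) + comp.flush()

    # The catch: the decompressor needs the very same dictionary, so
    # the output is no longer a self-contained gzip file.
    decomp = zlib.decompressobj(zlib.MAX_WBITS, zdict=seed)
    assert decomp.decompress(blob) == foo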
On Mon, Jan 08, 2001 at 05:28:56PM +0100, Otto Wyss wrote:
> >> So why not solve the compression problem at the root? Why not try to
> >> change the compression in a way so it does produce a compressed result
> >> with the same (or similar) difference rate as the source?
> >
> >Are you going to hack at *every* different kind of file format that you
> >might ever want
>> So why not solve the compression problem at the root? Why not try to
>> change the compression in a way so it does produce a compressed result
>> with the same (or similar) difference rate as the source?
>
>Are you going to hack at *every* different kind of file format that you
>might ever want
On Mon, Jan 08, 2001 at 11:58:26PM +1100, Peter Eckersley wrote:
> On Mon, Jan 08, 2001 at 08:27:53AM +1100, Sam Couter wrote:
> > Otto Wyss <[EMAIL PROTECTED]> wrote:
> > >
> > > So why not solve the compression problem at the root? Why not try to
> > > change the compression in a way so it does produce a compressed result
> > > with the same (or similar) difference rate as the source?
On Mon, Jan 08, 2001 at 08:27:53AM +1100, Sam Couter wrote:
> Otto Wyss <[EMAIL PROTECTED]> wrote:
> >
> > So why not solve the compression problem at the root? Why not try to
> > change the compression in a way so it does produce a compressed result
> > with the same (or similar) difference rate as the source?
> " " == Jason Gunthorpe <[EMAIL PROTECTED]> writes:
> On 7 Jan 2001, Bdale Garbee wrote:
>> > gzip --rsyncable, already implemented, ask Rusty Russell.
>>
>> I have a copy of Rusty's patch, but have not applied it since I
>> don't like diverging Debian packages from upstream this way.
Previously Bdale Garbee wrote:
> Wichert, have you or Rusty or anyone taken this up with the gzip upstream
> maintainer?
I'm not sure; I'll meet Rusty next week at linux.conf.au and ask
him.
Wichert.
Previously Jason Gunthorpe wrote:
> Has anyone checked out what the size hit is, and how well rsyncing debs
> like this performs in actual use?
Rusty has, the size difference is amazingly minimal.
Wichert.
[EMAIL PROTECTED] (Matt Zimmerman) writes:
> As you know, it's been eons since the last upstream gzip release.
On advice of the current FSF upstream, we moved to 1.3 in November 2000.
I think it is entirely reasonable to talk to upstream about this before
contemplating forking.
Bdale
On Sun, Jan 07, 2001 at 08:16:08PM -0700, Bdale Garbee wrote:
> [EMAIL PROTECTED] (Wichert Akkerman) writes:
>
> > gzip --rsyncable, already implemented, ask Rusty Russell.
>
> I have a copy of Rusty's patch, but have not applied it since I don't like
> diverging Debian packages from upstream this way.
On 7 Jan 2001, Bdale Garbee wrote:
> > gzip --rsyncable, already implemented, ask Rusty Russell.
>
> I have a copy of Rusty's patch, but have not applied it since I don't like
> diverging Debian packages from upstream this way. Wichert, have you or Rusty
> or anyone taken this up with the gzip upstream maintainer?
[EMAIL PROTECTED] (Wichert Akkerman) writes:
> gzip --rsyncable, already implemented, ask Rusty Russell.
I have a copy of Rusty's patch, but have not applied it since I don't like
diverging Debian packages from upstream this way. Wichert, have you or Rusty
or anyone taken this up with the gzip upstream maintainer?
> " " == Otto Wyss <[EMAIL PROTECTED]> writes:
> It's commonly agreed that compression prevents rsync from
> profiting from older versions of packages when synchronizing
> Debian mirrors. All the discussion about fixing rsync to solve
> this, even through a deb-plugin, is IMHO not the right way.
Otto Wyss <[EMAIL PROTECTED]> wrote:
>
> So why not solve the compression problem at the root? Why not try to
> change the compression in a way so it does produce a compressed result
> with the same (or similar) difference rate as the source?
Are you going to hack at *every* different kind of file format that you
might ever want
Previously Otto Wyss wrote:
> So why not solve the compression problem at the root? Why not try to
> change the compression in a way so it does produce a compressed result
> with the same (or similar) difference rate as the source?
gzip --rsyncable, already implemented, ask Rusty Russell.
Wichert.
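Rusty's patch changes gzip itself; a rough approximation of the idea
with Python's zlib is to emit a full flush at regular intervals, which
empties the match window so identical input after an edit compresses
to identical bytes again. (The real patch picks its flush points from
the content itself, so it also survives insertions, not just in-place
edits.)

    import zlib

    def rsyncable_compress(data: bytes, chunk: int = 4096) -> bytes:
        # Z_FULL_FLUSH resets the compressor's window at each boundary,
        # so unchanged chunks downstream of an edit produce unchanged
        # compressed bytes that rsync can match.
        comp = zlib.compressobj(9)
        out = []
        for i in range(0, len(data), chunk):
            out.append(comp.compress(data[i:i + chunk]))
            out.append(comp.flush(zlib.Z_FULL_FLUSH))
        out.append(comp.flush())
        return b"".join(out)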
It's commonly agreed that compression prevents rsync from profiting
from older versions of packages when synchronizing Debian mirrors. All
the discussion about fixing rsync to solve this, even through a
deb-plugin, is IMHO not the right way. Rsync's task is to synchronize
files without knowing what's inside them.
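The effect is easy to measure. A rough sketch, using fixed-offset
blocks as a crude stand-in for rsync's rolling checksum: after a
five-byte in-place edit, nearly every raw block survives, while in the
compressed streams only the blocks before the edit point still match.

    import random, zlib

    random.seed(0)
    words = [b"alpha", b"beta", b"gamma", b"delta"]
    old = b" ".join(random.choice(words) for _ in range(20000))
    new = bytearray(old)
    new[50000:50005] = b"PATCH"    # a five-byte in-place edit
    new = bytes(new)

    def blocks(data, size=700):
        return {data[i:i + size] for i in range(0, len(data), size)}

    raw_shared = len(blocks(old) & blocks(new))
    ca, cb = zlib.compress(old, 9), zlib.compress(new, 9)
    comp_shared = len(blocks(ca) & blocks(cb))
    print(raw_shared, "raw blocks shared;", comp_shared, "compressed")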