On Apr 19, 2016, at 1:34 PM, Jim Meyering wrote:
> Note, however, that you really should not rely on gzip's -l option:
> due to a format-imposed limit, for an input of size 4 GiB or larger
> the size reported by --list (-l) will be reduced modulo 2^32, i.e.,
> "wrong".
A side note that "pigz -lt" will a
On Tue, Apr 19, 2016 at 6:02 PM, Paul Eggert wrote:
> On 04/19/2016 01:34 PM, Jim Meyering wrote:
>>
>> you really should not rely on gzip's -l option
>
> These users are probably dealing with small (< 4 GiB) files for which -l
> should work fine. And three bug reports in one day, ouch. It may be a
> good idea to generate a new gzip version sooner rather than later.
On Tue, Apr 19, 2016 at 8:43 AM, Eric Blake wrote:
> merge 23314 23315
>
> On 04/19/2016 09:11 AM, Giorgio Ciucci wrote:
>> For each compressed file, e.g. foo.gz, the command
>>
>>     a=$(gzip -lq foo.gz)
>>
>> produces on stderr the message
>>
>>     "gzip: write error: Bad file descriptor"
>
> You're the second one to report this today. The culprit is -l, and I
For each compressed file, e.g. foo.gz, the command

    a=$(gzip -lq foo.gz)

produces on stderr the message

    "gzip: write error: Bad file descriptor"
Best regards
Giorgio
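The scenario in the report above can be reproduced and inspected as follows. This is a sketch, not part of the original report: foo.txt, foo.gz, and err.txt are illustrative names, and -k requires gzip 1.6 or later. On gzip versions affected by this bug, err.txt ends up containing the spurious message; on fixed versions it is empty.

```shell
# Create a valid gzip file to list.
printf 'sample data\n' > foo.txt
gzip -kf foo.txt                       # produces foo.gz (-k: gzip >= 1.6)

# Run gzip -l inside a command substitution, as in the report: stdout
# becomes a pipe. Capture stderr separately so it can be inspected.
a=$(gzip -lq foo.gz 2> err.txt)

echo "captured listing: $a"
cat err.txt   # affected versions: "gzip: write error: Bad file descriptor"
```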