On 11/3/21 22:13, Mike Frysinger wrote:
With the rise of commodity multicore computing, tar feels a bit antiquated in
that it still defaults to single-threaded (de)compression. It feels like the
defaults could and should be more intelligent. Has any thought been given to
supporting parallel (de)compression by default?

Not really. Some thought would be required, I assume. For example, parallelizing decompression might hurt performance, as 'tar' is typically I/O bound in that case. It'd be nice if someone could think this through and do some performance measurements.
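For context, parallel compression is already possible today by opting in explicitly with GNU tar's --use-compress-program (-I) option. A minimal sketch, using gzip as a portable stand-in (swap in a multithreaded compressor such as pigz or "zstd -T0" if installed; those tools are not assumed to be present here):

```shell
# Create some sample input.
mkdir -p demo && echo "hello" > demo/file.txt

# Default single-threaded gzip compression:
tar -czf single.tar.gz demo

# Explicit external compressor via --use-compress-program.
# Replace "gzip" with "pigz" or "zstd -T0" to use all cores.
tar -cf parallel.tar.gz --use-compress-program=gzip demo

# The resulting archive decompresses through the normal path:
tar -tzf parallel.tar.gz
```

Any benchmarking of parallel-by-default behavior would presumably compare pipelines like these on compression and decompression separately, since the I/O-bound decompression case is where the gain is in doubt.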
