On 2025-05-11 06:50, Klaus Kusche wrote:
Today, we have lots of RAM. I could easily spend some GB for tar.
So would it be possible to allocate many file-sized buffers
(at least for files up to a given size limit),
fill them in parallel with several read threads or async read calls,
and sequentially write them out to the archive?
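[Editor's illustration: a minimal sketch of the scheme described above, assuming Python's standard tarfile module rather than GNU tar itself. The helper name parallel_tar and the worker count are hypothetical; it reads file contents concurrently into in-memory buffers, then appends them to the archive one after another, since the tar stream itself must be written sequentially.]

```python
import io
import os
import tarfile
from concurrent.futures import ThreadPoolExecutor

def parallel_tar(archive_path, paths, workers=8):
    # Hypothetical helper: read many small files in parallel,
    # then write the archive sequentially.
    def read_file(path):
        # Each worker buffers one whole file in RAM.
        with open(path, "rb") as f:
            return path, f.read()

    # Phase 1: fill file-sized buffers with several read threads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        contents = list(pool.map(read_file, paths))

    # Phase 2: append the buffered files to the tar stream in order.
    with tarfile.open(archive_path, "w") as tar:
        for path, data in contents:
            info = tarfile.TarInfo(name=os.path.basename(path))
            info.size = len(data)
            info.mtime = int(os.path.getmtime(path))
            tar.addfile(info, io.BytesIO(data))
```

[A real implementation would cap total buffered bytes (the "given size limit" above) and overlap phase 1 and phase 2 with a bounded queue instead of reading everything first.]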
On May 11, 2025, at 6:50 AM, Klaus Kusche wrote:
>
> I regularly backup hundreds of thousands of very small files with tar.
> Currently, this results in many very small sequential read requests.
Are these small read requests occurring because the files are small?
Or is tar deliberately making small read requests?
Hi,
would it be possible to read files in parallel when creating
an archive (and perhaps write files in parallel when extracting
an archive)? At least in my case, this would speed up tar
by at least an order of magnitude.
Rationale:
I regularly backup hundreds of thousands of very small files
with tar. Currently, this results in many very small sequential
read requests.