On 2025-05-11 06:50, Klaus Kusche wrote:
Today, we have lots of RAM. I could easily spend some GB for tar.
So would it be possible to allocate many file-sized buffers
(at least for files up to a given size limit),
fill them in parallel with several read threads or async read calls,
and write each buffer to the archive sequentially, as soon as its
file has been read completely?

I don't see why not. It's just a simple matter of programming.

One could use pread to do parallel reads even when tarring a single large file.
