Hello,
I would like to ask about the behaviour of tar when extracting millions of
symlinks. We have a case where extracting an archive containing 10 million
symlinks to 1000 files causes tar to allocate approximately 3 GB of memory
by the end of extraction. Is this expected behaviour? Memory is allocated
throughout the extraction and is not freed until the very end. There are no
major leaks during extraction, but I would rather be certain that this is
OK. An issue with a full description has already been filed on the Red Hat
Bugzilla:

https://bugzilla.redhat.com/show_bug.cgi?id=1759140

Is there a way the memory could also be freed during extraction, once it is
no longer needed?
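For reference, the scenario can be reproduced at a small scale roughly as
follows (the counts here are illustrative assumptions scaled down from the
report's ~10 million symlinks over ~1000 files; the `tree`/`file`/`link`
names are mine, not from the report):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Build a directory of a few target files plus many symlinks to them.
mkdir tree
for i in $(seq 1 10); do : > "tree/file$i"; done
for i in $(seq 1 1000); do ln -s "file$((i % 10 + 1))" "tree/link$i"; done

# Archive the directory, remove the original, then extract it again.
tar -cf tree.tar tree
rm -rf tree
tar -xf tree.tar    # peak RSS can be observed with e.g. /usr/bin/time -v

# Sanity check: all symlinks were recreated.
find tree -type l | wc -l
```

At the full scale from the bug report, the extraction step is where the
multi-gigabyte resident set was observed.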
Thanks for your help!

Best regards,
Ondrej Dubaj
