Patrick Schoenfeld <[EMAIL PROTECTED]> wrote:
> found 412688 6.10-6
> severity 412688 normal
>
> Package: coreutils
> Version: 6.10-6
> Followup-For: Bug #412688
>
> Hi,
>
> I tried to reproduce this bug with test data, because it happens on a
> backup system which copies a lot of files and cannot be used for
> testing. With test data consisting of merely a few thousand (empty)
> files and a few thousand deeply nested directories, I can measure a
> memory usage of ~35% on Etch and ~45% on Sid. The system has 768MB of
> RAM, so in both cases more than 250MB of RAM are used for as little as
> 100MB of test data.
>
> The real data where we see this problem is several gigabytes in size
> and has some millions of files. It is already using more than 90% of
> RAM on a 1GB system. One can easily imagine that the situation will
> get worse in the future, because the number of files/directories is
> likely to increase over time. But it is not really feasible to think
> about an upgrade to Lenny, because it seems the situation got even
> worse with the current coreutils version. In both cases OOM
> situations are expected soon.
>
> Lucas, why did you downgrade this bug? Severity normal is already
> quite conservative for a bug that easily causes OOM situations, and
> I'm tempted to upgrade it to important, but minor is some kind of
> understatement...
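(For reference, since the report doesn't spell out the tree shape: one way to build test data along these lines is a single deep linear chain plus a few files at the bottom. The depth, names, and layout below are assumptions for illustration, not the reporter's actual setup.)

```shell
# Hypothetical reproduction setup: a single linear tree
# testtree/a/a/.../a to a depth of 1000, with a couple of
# empty files at the deepest level.
depth=1000
path=testtree
for i in $(seq "$depth"); do
  path="$path/a"
done
mkdir -p "$path"
touch "$path/file1" "$path/file2"
```

One could then run `cp -al testtree copy` against this tree while watching RSS (e.g. via `/usr/bin/time -v`) to see how memory scales with depth versus breadth.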
Please describe precisely the set-up required to demonstrate the
problem. For example, I've just done the following (10k empty files
and 5000 empty directories, all at the same level -- true, this is not
"deep" as you said, but what does your "deep" mean? A single linear
tree, a/a/a/a/.../a to a depth of 1000? Or many trees, each to a depth
of 50?):

  (mkdir a && cd a && seq 10000 | xargs touch && seq 20000 25000 | xargs mkdir)

Then ran this, which shows it allocated 25MB total:

  $ valgrind cp -al a b
  ...
  ==6374== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 4 from 1)
  ==6374== malloc/free: in use at exit: 0 bytes in 0 blocks.
  ==6374== malloc/free: 63,242 allocs, 63,242 frees, 24,784,149 bytes allocated.
  ==6374== For counts of detected errors, rerun with: -v
  ==6374== All heap blocks were freed -- no leaks are possible.

FYI, cp has to keep track of a lot of dev/inode/name info. However,
if you don't need to preserve hard-link relationships, use these
options in place of "-al":

  -R --link --no-dereference

(and preserve what you can, without preserving "links"):

  --preserve=mode,ownership,timestamps,context

Though you'd use "context" only if you're on a system with SELinux.
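To illustrate the suggested alternative, here is a small sketch (the `demo/` tree and file names are made up for the example) that copies a tree with `-R --link --no-dereference` and then shows that source and destination share inodes, i.e. no file data was duplicated. `context` is omitted since the sketch assumes a non-SELinux system.

```shell
# Hypothetical example tree; names are illustrative only.
mkdir -p demo/src/sub
echo hello > demo/src/sub/f

# Copy by creating hard links instead of duplicating data,
# preserving mode/ownership/timestamps but not tracking
# hard-link relationships (so cp needs far less bookkeeping).
cp -R --link --no-dereference \
   --preserve=mode,ownership,timestamps demo/src demo/dst

# Both paths now refer to the same inode.
stat -c %i demo/src/sub/f demo/dst/sub/f
```

Directories are created anew in the destination; only regular files are hard-linked, which is why this form avoids the dev/inode tracking that `-al` requires.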

