On Wednesday, 17 April 2019 at 16:27:02 UTC, Adam D. Ruppe wrote:
D programs are a vital part of my home computer infrastructure. I run some 60 D processes at almost any time... and have recently been running out of memory.

Each individual process eats ~30-100 MB, but that times 60 = trouble. They start off small, like 5 MB, and grow over weeks or months, so it isn't something I can easily isolate in a debugger after recompiling.

I'm pretty sure this is the result of wasteful code somewhere in my underlying libraries, but nothing is obviously jumping out at me in the code. So I want to look at some of my existing processes and see just what is responsible for this.

I tried attaching to one and running `call gc_stats()` in gdb... and it segfaulted. Whoops.
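
Druntime exposes the same numbers from inside the process through core.memory.GC.stats(), so a future build could just log them itself instead of poking the GC from gdb. A rough sketch, where the log path, the one-minute interval, and the helper name are all made up:

```d
import core.memory : GC;
import core.thread : Thread;
import core.time : seconds;
import std.stdio : File;

/// Hypothetical helper: append used/free GC heap sizes to a log once a minute.
void startGcLogger(string path = "/tmp/gcstats.log")
{
    auto t = new Thread({
        for (;;)
        {
            auto s = GC.stats();   // druntime's own statistics struct
            File(path, "a").writefln("used=%s free=%s", s.usedSize, s.freeSize);
            Thread.sleep(60.seconds);
        }
    });
    t.isDaemon = true;             // don't let the logger keep the process alive
    t.start();
}
```

Calling startGcLogger() early in main leaves a growth trail to line up against whatever the process was doing at the time.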

I am willing to recompile and run again, though I need to actually use the programs, so if instrumenting makes them unusable it won't really help. Is there a magic --DRT- argument perhaps? Or some trick with gdb attaching to a running process I don't know?
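
On the --DRT- front: --DRT-gcopt=profile:1 is accepted on the command line of any normally built D program, so it needs a restart but not a recompile, and if memory serves it prints a GC summary when the process terminates cleanly. The same option can be baked into a build at module scope; a minimal sketch:

```d
// Equivalent to launching with --DRT-gcopt=profile:1, assuming the runtime's
// command-line option handling hasn't been disabled. Goes at module scope.
extern(C) __gshared string[] rt_options = [ "gcopt=profile:1" ];
```

And when recompiling anyway, dmd's -profile=gc switch writes a profilegc.log with bytes allocated per call site, which gets close to a per-line picture, though it counts what was allocated rather than what stays live.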

What I'm hoping to do is get an idea of which line of code allocates the most that isn't subsequently freed.

One thing you can try, without recompiling, is using `pmap -x` on one of the bloated processes, then dumping a large memory region to a file and just looking at the binary.

It might be something obvious on visual inspection.

You can dump memory with

gdb -p $pid --eval-command 'dump binary memory file.bin 0xfromLL 0xtoLL' -batch
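
where the two addresses are the start and end of whichever large region pmap reported (the 0xfromLL/0xtoLL above are just placeholders). If the dump is too big to scan by eye, a strings(1)-style pass often makes the dominant data obvious; a rough sketch in D, with an arbitrary chunk size and 8-byte cutoff:

```d
import std.stdio : File, writeln;
import std.ascii : isPrintable;

// Print printable-ASCII runs of at least minLen bytes from the dumped region,
// strings(1)-style, to see what kind of data is filling the heap.
void main(string[] args)
{
    enum minLen = 8;
    auto path = args.length > 1 ? args[1] : "file.bin";
    char[] run;
    foreach (ubyte[] chunk; File(path).byChunk(64 * 1024))
    {
        foreach (b; chunk)
        {
            if (isPrintable(cast(char) b))
                run ~= cast(char) b;
            else
            {
                if (run.length >= minLen)
                    writeln(run);
                run.length = 0;
            }
        }
    }
    if (run.length >= minLen)
        writeln(run);
}
```

Piping its output through sort | uniq -c | sort -n tends to surface whatever is being accumulated.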
