Hi,

My app is a parser/filter for binary files that produces a bunch of ASCII files.
At the beginning of the parsing, the filtering step involves storing the positions of 32 objects, one sample per second for a whole day. That's 32 arrays of 86,400 elements each. During this step, the memory used by my image grows from 50 MB to ~500 MB. I find that far too large, since I'm pretty sure these arrays are the largest objects I create, and each should only weigh something like 300 kB.

Profiling the app shows that the footprint of the "old memory" went up by 350 MB, which I'm pretty sure is super bad. Perhaps as a consequence, after the parsing has finished, the memory footprint of the image stays at ~500 MB.

What tools do I have to find precisely where the memory usage explodes? For example, is it possible to browse the "old memory" objects to see which ones fail to get GC'ed?

Thanks in advance,
Thomas.
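For scale, here is a back-of-envelope sketch (written in Python just for the arithmetic) of where the gap between my 300 kB estimate and the observed growth might come from. The header and slot sizes are assumptions for a 64-bit object memory, not measured values, and the "boxed point" scenario is purely hypothetical:

```python
# Hypothetical object-memory layout: 8-byte slots, 16-byte object headers.
ARRAYS = 32
ELEMS = 86_400          # one position sample per second for a day
SLOT = 8                # assumed pointer-sized slot, 64-bit image
HEADER = 16             # assumed per-object header

# Cost of the bare arrays themselves (one slot per element).
array_cost = ARRAYS * (HEADER + ELEMS * SLOT)
print(f"bare arrays: {array_cost / 1e6:.1f} MB")

# If each element is a fresh boxed object (e.g. a point holding two boxed
# floats), every element drags in extra headers and slots:
point_cost = ARRAYS * ELEMS * (HEADER + 2 * SLOT)
float_cost = ARRAYS * ELEMS * 2 * (HEADER + SLOT)
total = array_cost + point_cost + float_cost
print(f"with boxed points: {total / 1e6:.1f} MB")
```

Under these assumptions the bare arrays already cost ~22 MB (not 300 kB), and boxing every sample pushes the total past 200 MB, which is at least the right order of magnitude for the growth I'm seeing.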