bcraig added a comment.

In http://reviews.llvm.org/D20933#452854, @dcoughlin wrote:

> A 6% speed improvement could be a big win! Do we have a sense of what the 
> expected increased memory cost (as a % of average use over the lifetime of 
> the process) is? My guess is it would be relatively low. I suspect most 
> analyzer users run relatively few concurrent 'clang' processes -- so this 
> might be well worth it.


If the underlying allocator does a poor job of reusing freed memory, then 
trivial functions will use about 1 MB more than before, then free that memory 
immediately.  On the other hand, functions that hit the max step count will use 
about 1 MB less memory than before.  The thought experiments get difficult when 
the underlying allocator is good at reusing freed memory.

I got some memory numbers for the .C file that saw the 6% speedup and has 37% 
of its functions hitting the maximum step count.
From /usr/bin/time -v,
Before: Maximum resident set size (kbytes): 498688
After: Maximum resident set size (kbytes): 497872

Valgrind's massif tool reported the peak usage as 14696448 (0xE04000) bytes in 
both the before and after runs.

> I do think we should make sure the user can't shoot themselves in the foot by 
> pre-reserving space for an absurdly high maximum step count. We might want to 
> clamp this reservation to something that is not outrageous even when the 
> maximum step count is huge.


Sure.  I'll switch to reserving the smaller of Steps and 4 million.  At 8 bytes 
per entry, that means the most memory we will pre-reserve is 32MB.
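
For concreteness, here is a sketch of the clamp (illustrative names only, not 
the actual patch; the pointer-sized element is an assumption that makes 
4 million entries come out to 32MB):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Sketch: cap the pre-reservation at 4 million entries so an
    // absurdly high maximum step count can't force an outrageous
    // up-front allocation.  With 8-byte entries, the reserve is
    // bounded at 32MB no matter how large Steps is.
    void reserveSteps(std::vector<void *> &WorkList, std::size_t Steps) {
      const std::size_t MaxReservedSteps = 4000000; // 4M * 8 bytes = 32MB
      WorkList.reserve(std::min(Steps, MaxReservedSteps));
    }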


http://reviews.llvm.org/D20933


