2006/6/23, Steven Bosscher <[EMAIL PROTECTED]>:
> Don't write off Boehm's GC just yet. You can't expect to beat something that has seen a lot of tuning for GCC with something that you got working only a few days ago. There are a lot of special tricks especially in ggc-page that may put it at an advantage, but with some tuning perhaps you can get Boehm's to perform better for GCC.
But of course we are limited to tweaking the usage of Boehm's external collector API, while internal collectors can have their internals hacked to best support GCC's needs. Nevertheless, I will continue tweaking Boehm's GC: incremental collection, different allocation routines for large pointer-free objects, weak pointer support, excluding roots for large static data...
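For reference, the knobs above map to standard Boehm GC API calls. A minimal sketch, assuming libgc is installed (link with `-lgc`); the static table and identifier payload are hypothetical examples, not actual GCC data structures:

```c
/* Sketch of the Boehm GC tuning knobs mentioned above.
   Assumes libgc; the data here is illustrative, not GCC's. */
#include <gc.h>
#include <string.h>

/* Hypothetical large static table that contains no heap pointers,
   so the collector should not scan it for roots. */
static char big_static_table[1 << 20];

int main(void)
{
    GC_INIT();

    /* Incremental collection: shorter pauses, some throughput cost. */
    GC_enable_incremental();

    /* Exclude large pointer-free static data from root scanning. */
    GC_exclude_static_roots(big_static_table,
                            big_static_table + sizeof big_static_table);

    /* Pointer-free objects (e.g. string payloads) can come from the
       atomic allocator, so their interiors are never scanned. */
    char *ident = GC_MALLOC_ATOMIC(64);
    strcpy(ident, "example_identifier");

    return 0;
}
```

Keeping pointer-free data out of the scanned set should help both collection time and cache behaviour, since the collector never touches those pages during marking.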
> For the locality thing: Have you already tried using something like cachegrind or oprofile to compare the cache behavior of gcc with Boehm's and gcc with ggc?
An excellent suggestion. My primary working platform is valgrind-less Cygwin, but I will find a way to gather cache usage data.
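On a Linux box, the comparison could look roughly like this. The build-tree paths are hypothetical; cc1 (the C compiler proper) is profiled directly rather than the gcc driver, so cachegrind measures the compiler itself:

```shell
# Hypothetical build directories for the two collector configurations.
valgrind --tool=cachegrind ./boehm-build/gcc/cc1 -O2 test.i
valgrind --tool=cachegrind ./ggc-build/gcc/cc1 -O2 test.i

# Summarise per-function cache-miss counts from the output files
# (cachegrind writes cachegrind.out.<pid> by default).
cg_annotate cachegrind.out.<pid>
```

Comparing the D1/LL miss rates between the two runs on the same input file would show whether the allocation strategy is actually hurting locality.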
> What about allocation strategies? Perhaps that's another thing you could toy with to improve the peak memory usage issue. I don't know how Boehm's GC works, but in ggc-page e.g. all binary expression 'tree's are allocated on the same bag of pages, which may help (or not, dunno).
There might be some options here: for objects that do not contain pointers, a special API can be used instead of the generic one. Moreover, I think peak memory usage can be reduced by using Boehm's weak pointer facilities where appropriate: I suspect that some things are not collected only because they are cached.

Thanks for your comments,

-- Laurynas