Branden wrote:
> Ken Fox wrote:
> > Some researchers have estimated that 90% or
> > more of all allocated data dies (becomes unreachable) before the
> > next collection. A ref count system has to work on every object,
> > but smarter collectors only work on 10% of the objects.
>
> Does this 90/10 ratio mean that the memory usage is actually 10 times
> what it needs to be? (if it were even _possible_ to pack all the data without
> fragmentation problems)
The general rule is that the more space you "waste", the faster the
collector is. If you have memory to spare, then don't run the garbage
collector as often and your program will spend less total time garbage
collecting. A tracing collector's work is proportional to the live data,
not to everything you ever allocated, so as the heap grows the
collection cost per allocated object approaches zero.
If you "need" to go faster, then waste more memory.
If you "need" to use less memory, then go slower and collect more
frequently.
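To put rough (made-up) numbers on it, assuming a copying collector whose
work is proportional to the live data: say 1 MB of data is live at any
moment. With a 2 MB heap you can allocate about 1 MB between
collections, so every 1 MB allocated pays for one full copy of the 1 MB
of survivors. With a 10 MB heap you allocate about 9 MB for the same
copying work, so the cost per allocated byte drops roughly nine-fold,
and it keeps shrinking as the heap grows. That is also where the 90/10
figure comes in: the collector only ever touches the small fraction that
survives, no matter how much was allocated in between.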
When comparing the memory management efficiency of different approaches,
it's very important to account for all of the costs each approach carries.
C-style malloc has quite a bit of overhead per object and tends to
fragment the heap. Many garbage collectors don't have either of these
problems.
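As a minimal sketch of why that is (illustrative only; the layout and
numbers are assumptions, not a description of any particular collector):
allocation in a collected nursery can be a single pointer bump with no
per-object bookkeeping, whereas a typical malloc keeps a header on every
block and manages free lists that fragment over time.

    #include <stddef.h>

    /* Bump-pointer allocation in a hypothetical copying-collector
       nursery.  No per-object header, no free list to search; running
       out of room is where a real collector would trigger a collection. */
    static char  nursery[1 << 20];   /* 1 MB nursery (arbitrary size) */
    static char *bump = nursery;

    void *gc_alloc(size_t size)
    {
        if (bump + size > nursery + sizeof nursery)
            return NULL;             /* a real allocator would collect and retry */
        void *obj = bump;
        bump += size;
        return obj;
    }

    /* A C-style malloc, by contrast, typically prepends a size/flags
       header to every block and threads freed blocks onto free lists,
       so small objects carry fixed overhead and mixed-size frees
       fragment the heap. */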
Garbage collectors are very good from an efficiency perspective, but
tend to be unreliable in a mixed language environment and sometimes
impose really nasty usage requirements.
- Ken