Dan Sugalski <[EMAIL PROTECTED]> wrote:

> The freelist used to be an array that held free PMC pointers. I
> presume this changed?

Must have been changed before my days then. The freelist is a linked
list of pointers:

get_free_object ...

    ptr = pool->free_list;
    pool->free_list = *(void **)ptr;    /* the first word of a free
                                           object holds the next pointer */

A generation list would either use extra memory in each PMC or be an
array of pointers in the object's arena. An array of pointers would
need allocation and resizing whenever an object's generation changes.

>>But anyway, we don't need[1] different schemes for PMCs: The copying
>>collection of variable sized Buffer memory is the problem for
>>multi-threading.

> No, it isn't. For several reasons, not the least of it is the fact
> that we're going to have to shift to a non-copying collector in the
> threaded case.

Ok. But then let's start here. This *is* the current issue.

> ... The bigger problem is tracing the live set in the face
> of multiple simultaneous mutators, which the incremental gc systems
> handle much more pleasantly and with less overall impact on the
> system.

I doubt that it has less overall impact. Anyway, an incremental GC isn't
the only solution. It could be ok if all PMCs were shared. But this
isn't true: logically, temporary PMCs are *not* shared. Treating all
temps of all threads as shared (and managing all PMCs of all threads
in one PMC pool) is surely suboptimal. Freelist handling could suffer
high contention in this case.

And an incremental GC must always run faster than the allocation of new
PMCs. This can't be achieved if all threads use the one and only pool
for their PMCs.

Incremental GC has a lot of overhead from tracking pointer updates. If
temps are automatically shared, we add locking plus this tracking cost
to every PMC. This scheme can't win (or will need a ~16-CPU machine to
achieve anything).

So it could work like this:
- temps aren't shared - they live in their interpreter's private pool
- shared PMCs live in exactly one pool (the "main" interpreter's)
- any thread can DOD its own PMCs - shared PMCs are marked live by
  that process, but never freed, because that interpreter doesn't own
  the pool. They aren't appended to the next_for_GC list either.

- shared PMC DOD needs:
  - suspend all interpreters that might have a shared PMC from this
    pool
  - the main interpreter marks the root set
  - all other interpreters run the mark phase, one by one, and append
    live shared PMCs to the next_for_GC pointer (anchored at the main
    interpreter)
  - the main interpreter continues/finishes marking phase 2
  - all interpreters can now continue to free_objects (if their
    statistics indicate that they are low on free objects)
  - and in parallel the main interpreter can free shared objects.

If someone implements a truly parallel incremental GC later, and *if*
that's faster, we can switch to it. The above scheme works (almost :) now.

leo
