> I agree we need an overall architectural solution. Setting and clearing
> bits manually is error-prone but fast, as you said. It's identical to
> the malloc()/free() situation, which is one of the primary reasons we
> use garbage collection in the first place, so why reinvent the same
> situation with different syntax?

Generally, malloc/free are used in more complex situations than just
stack-based memory management. But I see your point.

> malloc/free is vulnerable to:
> 1) leakage (forgot to free)

If you remember to mark it as used, you're pretty much guaranteed to mark
it as unused at the end of the same function.

> 2) double deallocation (freed an already freed buffer)

In general, this can't happen with setting bits. We can't unset a bit
twice, since we only do this on buffers returned by a function, and only
for the duration of the function that received them. If we in turn return
a buffer to our caller, the caller sets the bit and clears it on its own.
Agreed, it's confusing, which is the reason for this whole discussion. :)
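
To make that concrete, here's roughly the pattern I mean. This is only a
minimal sketch; the names (Buffer, BUFFER_live_FLAG, get_string) are made
up for illustration and aren't real API:

#include <stddef.h>

typedef struct Buffer {
    unsigned int flags;   /* GC flag bits */
    void        *data;
    size_t       len;
} Buffer;

#define BUFFER_live_FLAG 0x01        /* "in use, don't collect" */

extern Buffer *get_string(void);        /* returns a GC-able buffer */
extern void    do_something(Buffer *);  /* may trigger a GC run */

void example_caller(void)
{
    Buffer *b = get_string();

    /* the callee has already returned, so only this function owns
       the bit: set it once on receipt... */
    b->flags |= BUFFER_live_FLAG;

    do_something(b);    /* a collection here must not reclaim b */

    /* ...and clear it once when we're done.  Each function only
       clears bits it set itself, so the double-free analogue
       (clearing twice) doesn't arise. */
    b->flags &= ~BUFFER_live_FLAG;
}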

> I suppose a variation of the scratch-pad that might be more on the
> performance line that you are thinking could be similar to the
> scope tracking that compilers do when gathering symbols into
> symbol tables.

Ahhh, that's what I missed. I was assuming you'd have to push variables
onto this stack either in buffer_allocate or at the allocation site, and
pop them all off with an end_GC_function, roughly like the sketch below,
which I considered to be 'just as much work'.
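
In other words, I was picturing something like this (buffer_allocate and
end_GC_function are the names from above; everything else is invented,
and error checks are omitted):

#include <stdlib.h>

typedef struct Buffer {
    void  *data;
    size_t len;
} Buffer;

typedef struct Interp {
    Buffer **scratch;        /* stack of buffers protected from GC */
    size_t   scratch_top;
    size_t   scratch_size;
} Interp;

Buffer *buffer_allocate(Interp *interp, size_t size)
{
    Buffer *b = calloc(1, sizeof(*b));   /* stand-in for the real allocator */
    b->data = malloc(size);
    b->len  = size;

    /* push: the collector treats anything on the scratch stack as live */
    if (interp->scratch_top == interp->scratch_size) {
        interp->scratch_size = interp->scratch_size
                                 ? interp->scratch_size * 2 : 16;
        interp->scratch = realloc(interp->scratch,
                                  interp->scratch_size * sizeof(Buffer *));
    }
    interp->scratch[interp->scratch_top++] = b;
    return b;
}

void end_GC_function(Interp *interp, size_t saved_top)
{
    /* pop everything pushed since the caller noted saved_top */
    interp->scratch_top = saved_top;
}

The per-allocation push and the explicit end_GC_function call are the
bookkeeping I was objecting to; your scope counter avoids both.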

> So a GC-able buffer gets created with an initial scope of cur_interp->scope,
> hidden in the allocator, and the collector skips collect on any
> buffer with scope <= the cur_scope.

I'm a bit confused. Say function A calls B and then C. Function C will
have the same scope as B did. If C triggers a GC run, anything allocated
in B will have the same scope as C, so how will the GC system know that
it can mark those buffers as dead? Granted, your scheme is safe, but it
seems a little *too* safe.
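
Here's roughly how I'm picturing your scheme and the case that worries
me. The struct layout is invented; only the scope comparison comes from
your description:

typedef struct Buffer {
    int   scope;     /* value of cur_interp->scope at allocation time */
    void *data;
} Buffer;

typedef struct Interp {
    int scope;       /* bumped on function entry, dropped on exit */
} Interp;

static int buffer_is_collectable(const Interp *interp, const Buffer *b)
{
    /* "the collector skips collect on any buffer with scope <= cur_scope" */
    return b->scope > interp->scope;
}

/* The case I'm confused about:
 *   A runs at scope 1 and calls B, then C, both at scope 2.
 *   B's garbage is tagged scope 2, but when C later triggers a run we
 *   are back at scope 2, so B's buffers still satisfy scope <= cur_scope
 *   and get skipped, even though B has returned and nothing refers to
 *   them any more. */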

> And there is no stack churn.

I like that part, tho. :)

Mike Lambert

