On Sat, Apr 30, 2005 at 05:02:54PM -0400, Dan Sugalski wrote:
> destroy. There's a vtable method that's called by the GC system when 
> an object is no longer reachable from the root set.

Actually, not when, but some (indefinite) time after that has happened,
right?

> > And if so, what
> >would the purpose of them be?

First, let me define "destructor" as something that runs very soon after
an object is no longer reachable (the perl5 sense), and "finalizer" as
something that gets called before the memory of an object gets reused
(the java sense).

The first quoted sentence obviously defined "destructors", as they get
called when an object is no longer reachable (I understand "when" as "as
soon as" here, which is not necessarily what you meant). I suspect that
reality will actually match the java sense: when a GC runs (at some
indefinite time - there is no need to run the GC as long as memory is
still available, and that's the only definite GC-triggering event), it
detects dead objects and finalizes them, so their memory can be reused
later (on the next GC run, for example).

> It's there so that if there's any sort of cleanup that needs doing -- 
> closing DB handles or files, destroying windows, updating stats -- 
> can be done.

Yes, that's actually the point - many java programs do it that way, and
this leads to a number of problems. For example, a program that works
fine on a 1GB machine can suddenly stop working on a 4GB machine: the
larger heap means the GC runs less often, so finalizers run later, and
the program hits a filehandle resource limit first. Often these
filehandles control resources outside the knowledge of the virtual
machine, for example database connections.

It's a pretty common bug in java to close database handles in the
finalizer: databases often limit the number of handles they will grant,
the program has no knowledge of that limit, and so one program can keep
unrelated programs from connecting to the database while it holds
handles that are only released whenever its finalizers happen to run.
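The bug pattern described above can be sketched like this (all names
are hypothetical; the static counter stands in for the database's
handle limit, which the JVM knows nothing about):

```java
// Sketch of the anti-pattern: releasing a scarce external resource
// (a database handle) only in finalize(). Hypothetical class names.
public class FinalizerLeak {
    static class LeakyConnection {
        // Stand-in for the database's global handle count, which the
        // JVM cannot see and therefore cannot use to trigger a GC.
        static int openHandles = 0;
        private boolean closed = false;

        LeakyConnection() { openHandles++; }

        void close() {
            if (!closed) { closed = true; openHandles--; }
        }

        // The bug: the handle is released only when (and if) the GC
        // collects this object and runs its finalizer. The GC is
        // driven by memory pressure, not handle pressure, so on a
        // machine with plenty of RAM this may take arbitrarily long.
        @Override
        protected void finalize() { close(); }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            new LeakyConnection();  // opened, then dropped unclosed
        }
        // The three objects are now unreachable, but no GC has run,
        // so the external handles are still held.
        System.out.println("open handles: " + LeakyConnection.openHandles);
    }
}
```

Nothing in this program ever forces the finalizers to run; with enough
free memory, the "open handles" count only ever grows.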

The examples you gave are considered bugs in other languages that have
tracing GCs, exactly because of this "action at a distance". In java,
for example, there is no guarantee that a finalizer will ever be called
for an object, and I don't think parrot will guarantee it either (simply
because defining such a guarantee is difficult).

> You don't have to free memory unless you've managed to 
> allocate it in a way that parrot's not tracking it.

Exactly: a GC (of any kind) manages memory for the program, and nothing
else. (One can extend this by running a memory-GC when filehandles run
out, hoping that finalizing unused objects will also free up some
filehandles, but that only helps for resources the GC explicitly knows
about, and it's pretty inefficient, as the GC is optimized for memory.)
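That extension looks roughly like this in Java (the retry helper is a
hypothetical sketch; on Unix a "too many open files" failure surfaces
as a FileNotFoundException):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;

// Hypothetical sketch of the workaround: if opening a file fails,
// run the memory GC, let pending finalizers release handles, and
// retry once. Inefficient, and it only helps for handles that some
// finalizer happens to hold.
public class GcRetry {
    static FileInputStream openWithRetry(String path) throws IOException {
        try {
            return new FileInputStream(path);
        } catch (FileNotFoundException e) {
            System.gc();               // collect dead objects...
            System.runFinalization();  // ...and run their finalizers
            return new FileInputStream(path);  // hope a handle was freed
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("gcretry", ".tmp");
        FileInputStream in = openWithRetry(f.getPath());
        System.out.println("opened: " + (in != null));
        in.close();
        f.delete();
    }
}
```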

I was asking the question about a useful example of what can be done in
a finalizer because I, frankly, still don't know of a useful example.

The "obvious" examples like closing filehandles or database handles are
actually bugs, because you have no guarantee about when (or whether)
they will be freed.

For these cases, the workaround is easy: explicit resource management,
such as "auto $a = ..." or the using() statement of C#.

This is a valid solution, but it doesn't rely on finalizers, but
on manual resource management - the "close/dispose/etc." function gets
called explicitly.
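In Java terms, that explicit style is the familiar try/finally idiom
(the Handle class here is a hypothetical stand-in for a file or
database handle):

```java
// Explicit resource management: the release happens at a known point
// in the program, not "whenever the GC gets around to it".
public class ExplicitClose {
    static class Handle {
        static int openHandles = 0;
        Handle() { openHandles++; }
        void use() { /* do some work with the handle */ }
        void close() { openHandles--; }
    }

    public static void main(String[] args) {
        Handle h = new Handle();
        try {
            h.use();
        } finally {
            h.close();  // runs immediately, even if use() throws
        }
        System.out.println("open handles: " + Handle.openHandles);  // 0
    }
}
```

No finalizer is involved; the close is deterministic and independent of
how much memory the machine has.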

That's why I wonder whether valid examples for finalizers exist - so far
I can only imagine finalizers for objects that have non-GC'ed memory
attached, such as C library objects. I can't imagine a use for
finalizers within the language that uses the GC itself, because all such
uses seem to result in the same class of spurious, memory-dependent
bugs as in the examples above, where the GC ends up managing
non-memory resources.

In summary: the GC only manages the memory "resource", and the only
resource that can be managed with finalizers is also "memory".
Everything else needs to be managed explicitly, as the GC has no
knowledge of it and cannot be taught about it - database connection
limits, for example (in that case the GC would need an event from the
database to notify it of the need to GC, which is obviously obscure).

As finalizers in a tracing-GC language are only useful for memory, they
are not useful at all, as memory is already being taken care of (at
least I have never heard of a counterexample).

I think the java community realized this a long time ago, and I think
that m$ tried to rectify it by introducing the IDisposable interface
and the using() statement in C#.

However, you don't need finalizers for that style of management.

(I'm not arguing for timely destruction, I am just wondering about the
theoretical usefulness of a finalizer method, and cannot find any.)

Most of this thought is from portland pattern repository's wiki:
http://c2.com/cgi/wiki?FinalizeInsteadOfProperDestructor


robin

-- 
[EMAIL PROTECTED] / [EMAIL PROTECTED]
Robin Redeker
