On Mon, Dec 18, 2006 at 12:02:35PM +0000, Blair Sutton wrote:
: Dear all
: I hope I am sending this to the correct place.
: I regularly use the RAII idiom in Perl 5 and C++ to automatically clean 
: up resources during object destruction.
: I recently read a mail thread "Is RAII possible in Python?" at 
: http://www.thescripts.com/forum/thread25072.html and "Perl vs Python - 
: GC and Reference Counting" at 
: http://discuss.joelonsoftware.com/default.asp?joel.3.323675.32.
: The above threads suggest that Python cannot do RAII since its garbage 
: collection does not offer deterministic finalization. I understand that 
: Parrot will not use reference counting for garbage collection and so may 
: suffer from the same problem. Are my fears warranted?

Short answer: absolutely.

Long answer: emphatically not.  I believe you have a bit of an XY
problem here.  You're wanting transactional security and timely
destruction but have predisposed yourself not to accept them unless
they happen to look like deterministic reference counting.  Perl 6
offers the former without offering you the latter, because Perl 6 must
run on a variety of platforms that may or may not support deterministic
reference counting semantics easily.  Perl 6 is a language, not an
implementation.  As such, it officially Doesn't Care what Parrot does
about GC under the hood.

But the problem goes much deeper than that, and has to do with
the necessary virtualization of time as we scale things up and
distribute responsibility for them.  In this day of multiple,
distributed processing units of uncertain proximity, it's going
to get harder and harder to say what a deterministic reference is.
What if a bit of your program is running on a different processor and it
goes down?  You can't know that till the next GC run at the appropriate
granularity fails to sync with the down processor.  A GC solution can
finesse around such problems, up to a point, whereas a deterministic
solution really has only one approach.  In the larger and wider scale
of things, you can't know whether a reference is valid till you try
it--see the Web for an example.  The Web would never have taken off with
perfect accountability.  (That's why Xanadu didn't work, basically.)

All of computing is getting more like that.

For instance, people are now finding that deterministic locking
solutions don't scale up either.  STM (software transactional memory)
seems to be a way to get around that, and that's partly because a
transactional memory manager can virtualize time much like a garbage
collector can defer decisions it isn't sure of.  Two transactions
can be in conflict in real time, which would prevent at least one
of them from happening under deterministic locking.  But an STM
manager can decide the fate of the transactions based *only* on the
data dependencies, ignoring the bogus realtime dependency, and maybe
decide to allow both transactions if a consistent state is maintained.
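
To give that some flavor: the contend/defer notation mentioned below
is still entirely speculative, so treat this as pseudocode for the
idea rather than a committed API.  Imagine the two blocks running in
separate threads:

    my $balance = 1000;

    contend {
        $balance -= 100;            # transaction A: withdraw
    }

    contend {
        defer if $balance < 50;     # retry later if funds are short
        $balance -= 50;             # transaction B: withdraw
    }

Under deterministic locking, A and B would have to serialize on a lock
whether they needed to or not.  The STM manager looks only at what
each transaction actually read and wrote, and may commit both.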

In short, programs written in languages that require unnecessary
determinism will not escape the von Neumann bottleneck as easily
as programs written in languages that do not.  Determinism is fine
when you need it, and a good language will provide adequate ways to
express that when you do need it.  (Perl 6 provides several ways that
are much handier than try/finally, and just about as handy as RAII.)
But baking such handicaps into every object merely guarantees it
will not scale well in the real world.  The real world itself avoids
computing deterministically most of the time!  And if anything could
manage determinism, it'd probably be the real world!  Languages that
require determinism will scale up only when the entire world is
running one huge completely entangled quantum computer.  Determinism
is not interoperable.
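
For instance, one of those handier ways is the LEAVE block from S04,
which queues up cleanup code to run whenever its enclosing block is
exited, no matter how.  A minimal sketch (the particular file calls
are just illustrative):

    sub first-line ($filename) {
        my $fh = open $filename, :r;
        LEAVE $fh.close;    # runs on any exit: return, exception, whatever
        return $fh.get;     # read one line; the close still happens
    }

That gets you the timely cleanup RAII gives you, scoped to the block
rather than to an object's lifetime, without promising anything at
all about when the garbage collector runs.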

Anyway, that's the trend I see.  And that's why Perl 6 has lots of
ways to promise you Don't Care about various dependencies, but Do Care
about others.  That's why Perl 6 has pipes and lazy lists, junctions
and hyperoperators, contend and defer.  These are for building scalable
solutions to run on cell processors and GPUs and clusters of servers
and corporate intranets.  And maybe even trigger a Singularity or two.  :-)
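
A few of those in action, with the usual caveat that the syntax is
still subject to change:

    # Lazy list: the infinite series costs you only what you demand.
    my @evens = 0, 2, 4 ... *;
    say @evens[0..4];               # (0 2 4 6 8)

    # Hyperoperator: promises the elementwise additions are independent,
    # so an implementation is free to spread them across processors.
    my @a = 1, 2, 3;
    my @b = 10, 20, 30;
    my @c = @a »+« @b;              # (11 22 33)

    # Junction: "any of these", with no promise about evaluation order.
    my $n = 2;
    say "small" if $n == any(1, 2, 3);

Each of these is a way of telling the compiler which dependencies you
Do Care about, and leaving it free to reorder the rest.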

Larry
