On Tue, 28 Aug 2001 19:04:20 -0700, Hong Zhang <[EMAIL PROTECTED]>
wrote:

>Normally, GC is more efficient than ref counting, since you have many
>advanced GC algorithms to choose from and don't have to pay the malloc
>overhead.

You still need to malloc() your memory; I realize, however, that the
allocator can be *really* fast here.  But you give a lot of that gain back
during the mark-and-sweep phase, especially if you also move/compact the
memory.
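
To be concrete about the "really fast" part: the allocator I have in mind
is a plain bump-pointer allocator, the kind a copying/compacting collector
can afford.  A rough sketch in C (names and sizes are made up, and a real
collector would run a collection instead of returning NULL):

    /* Minimal bump-pointer allocator, as used in copying/compacting GCs. */
    #include <stddef.h>

    static char  heap[1 << 20];          /* one 1 MB region              */
    static char *heap_ptr = heap;
    static char *heap_end = heap + sizeof heap;

    void *gc_alloc(size_t n)
    {
        n = (n + 7) & ~(size_t)7;        /* 8-byte alignment             */
        if (heap_ptr + n > heap_end)
            return NULL;                 /* would normally trigger a GC  */
        void *p = heap_ptr;
        heap_ptr += n;                   /* allocation is just this bump */
        return p;
    }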

The big gain only comes in when your program is small/quick enough to
actually finish before the GC kicks in the first time (think CGI).  In
that case you just discard the whole heap instead of doing a proper
garbage collection (unless, of course, someone thought they could still do
something inside a finalizer during global destruction, in which case you
still need to finalize every other object on your heap :).
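
Roughly what "discard the whole heap" means in practice (all names made
up; the second variant is the one you're stuck with if finalizers must
still run during global destruction):

    #include <stdlib.h>

    struct obj  { void (*finalize)(struct obj *); struct obj *next; };
    struct heap { char *region; struct obj *objects; };

    void heap_discard(struct heap *h)        /* think CGI: just exit     */
    {
        free(h->region);                     /* whole heap goes at once  */
    }

    void heap_finalize_and_discard(struct heap *h)
    {
        for (struct obj *o = h->objects; o; o = o->next)
            if (o->finalize)
                o->finalize(o);              /* per-object work is back  */
        free(h->region);
    }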

>On an MP machine, ref counting is really slow because of the atomic
>instructions, which are very slow. I measured atomic x86 instructions
>such as "LOCK INC DWORD PTR [ECX]" a long time ago. I believe each
>instruction takes about 10 to 30 clock cycles.

Don't even dream of accessing Perl scalars simultaneously from multiple
threads without some kind of locking.  To keep their internal caching
behavior consistent, you'll need to lock them for even the simplest
operations (see the failure of the Perl 5.005 thread model).
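
For the curious, here's roughly what the per-reference cost looks like in
C11 atomics; on x86 the fetch_add/fetch_sub compile to LOCK-prefixed
instructions, i.e. exactly the cycles measured above.  The struct and
function names are made up (not Perl's real SV layout), and this only
protects the count itself, not the cached representations:

    #include <stdatomic.h>
    #include <stdlib.h>

    struct sv {
        atomic_long refcnt;
        /* ... plus the cached string/number representations ... */
    };

    static void sv_incref(struct sv *sv)
    {
        /* every aliasing assignment pays for one of these */
        atomic_fetch_add_explicit(&sv->refcnt, 1, memory_order_relaxed);
    }

    static void sv_decref(struct sv *sv)
    {
        if (atomic_fetch_sub_explicit(&sv->refcnt, 1,
                                      memory_order_acq_rel) == 1)
            free(sv);                        /* last reference gone */
    }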

But even if you give up the caching behavior, what about strings?  Atomic
updates, eh?  Welcome to the world of immutable strings.  Just allocate a
new string every time you need to modify it and update the string
reference atomically.  You want a modifiable buffer?  Get a StringBuilder
object and lock it on every access. :)  We could just as well switch to
Java or C#.
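
Roughly what "update the string reference atomically" would look like in
C11, as a sketch only: readers load the pointer, writers build a new
immutable string and swap it in.  Note that freeing the old string safely
is the part that actually hurts; it needs refcounting or some deferred
reclamation scheme, which I gloss over here:

    #include <stdatomic.h>
    #include <stdlib.h>
    #include <string.h>

    static _Atomic(char *) value;            /* the shared "scalar" */

    const char *read_value(void)
    {
        return atomic_load_explicit(&value, memory_order_acquire);
    }

    void write_value(const char *s)
    {
        char *copy = strdup(s);              /* new immutable string */
        char *old  = atomic_exchange_explicit(&value, copy,
                                              memory_order_acq_rel);
        (void)old;  /* can't free(old) here without knowing no reader
                       still holds it -- hence the pain */
    }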

-Jan
