Chaim Frenkel wrote:

> You aren't being clear here.
> 
>         fetch($a)               fetch($a)
>         fetch($b)               ...
>         add                     ...
>         store($a)               store($a)
> 
> Now all of the perl internals are done 'safely' but the result is garbage.
> You don't even know the result of the addition.

Sorry, you are right - I wasn't clear.  The final value of $a will
depend on the exact ordering of the FETCHes and STOREs in the
two threads.  As I said - tough.  The problem is that defining a
'statement' is hard.  Does map or grep constitute a single statement?  I
bet most perl programmers would say 'Yes'.  However I suspect it
wouldn't be practical to make it auto-locking in the manner you
describe.  In that case you aren't actually making anyone's life easier
by adding auto-locking, as they now have a whole new problem to solve -
remembering which operations are and aren't auto-locking.  Explicit
locks don't require a feat of memory - they are there for all to see in
the code.
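
To be concrete, with an explicit lock the quoted fetch/add/store race
simply goes away, because the lock covers the whole read-modify-write.
This is only a sketch - it uses the lock() operator and a :shared
attribute in the spirit of what's being discussed, and the module and
method names are purely for illustration:

    use threads;
    use threads::shared;   # illustration only - whatever ends up
                           # providing :shared and lock()

    my $a :shared;
    $a = 0;

    sub add_one {
        lock($a);          # the whole fetch/add/store is now serialised
        $a = $a + 1;
    }                      # lock released when add_one's scope exits

    my @workers = map { threads->create(\&add_one) } 1 .. 10;
    $_->join for @workers;
    print "$a\n";          # always 10 - the quoted interleaving can't happen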

The other issue is that auto-locking operations will inevitably be done
inside explicitly locked sections.  This is firstly inefficient, as it
adds another level of locking, and secondly it is prone to causing
deadlocks.
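
For example (with explicit locks standing in for the hypothetical
auto-locks), if the auto-lock on @queue is taken while my explicit lock
on %h is held, and another thread acquires the same two locks in the
opposite order, you get a classic lock-ordering deadlock:

    use threads;
    use threads::shared;

    my %h     :shared;
    my @queue :shared;

    # Thread 1: my explicit lock on %h, then the lock that a push onto
    # @queue would implicitly take behind my back.
    my $t1 = threads->create(sub {
        lock(%h);
        sleep 1;             # widen the window, purely for demonstration
        lock(@queue);        # stands in for the implicit auto-lock
        push @queue, 'one';
    });

    # Thread 2: the same two locks, acquired in the opposite order.
    my $t2 = threads->create(sub {
        lock(@queue);
        sleep 1;
        lock(%h);            # stands in for the implicit auto-lock
        $h{two} = 1;
    });

    $_->join for ($t1, $t2); # never returns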

> AB> I think you are getting confused between the locking needed within the
> AB> interpreter to ensure that its internal state is always consistent and
> AB> sane, and the explicit application-level locking that will have to be in
> AB> multithreaded perl programs to make them function correctly.
> AB> Interpreter consistency and application correctness are *not* the same
> AB> thing.
> 
> I just said the same thing to someone else. I've been assuming that
> perl would make sure it doesn't dump core. I've been arguing for having
> perl do a minimal guarantee at the user level.

Right - I think everyone is in agreement that there are two types of
locking under discussion, and that the first - internal locking to
ensure interpreter consistency - is a must.  The debate is now over how
much we try to do automatically at the application level.

> Sorry, internal consistency isn't enough.
> 
> Doing that store of a value in %h, or pushing something onto @queue
> is going to be a complex operation.  If you are going to keep a lock
> on %h while the entire expression/statement completes, then you have
> essentially given me an atomic operation which is what I would like.

And you have given me something that I don't like, which is to make
every shared hash a serialisation point.  If I'm thinking of speeding up
an app that uses a shared hash by threading it, I'll see limited speedup
because under your scheme all accesses will be serialised by that damn
automatic lock that I DON'T WANT!  A more common approach to locking
hashes is to have a lock per chain - this allows concurrent updates to
happen as long as they are on different chains.  Also, I'm not clear
what type of automatic lock you are intending to cripple me with - an
exclusive lock or a read/write lock, for example.  My shared variable
might be mainly read-only, so automatically taking out an exclusive lock
every time I fetch its value isn't really helping me much.  What I'm
trying to say is: please stop trying to be helpful by adding
auto-locking, because in most cases it will just get in the way.
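
And if I do want finer granularity than one lock over the whole hash I
can build it myself with explicit locks, for example by striping the
data across several shared sub-hashes so that keys in different stripes
don't contend.  Again this is only a sketch, using today's
threads::shared idioms - the stripe count and the checksum are
arbitrary:

    use threads;
    use threads::shared;
    use constant STRIPES => 4;

    # One logical hash split across STRIPES shared sub-hashes; accesses
    # to keys that land in different stripes can proceed concurrently.
    my @stripe :shared;
    @stripe = map { &share({}) } 1 .. STRIPES;

    sub stripe_of {
        my ($key) = @_;
        return $stripe[ unpack('%32C*', $key) % STRIPES ];
    }

    sub store_key {
        my ($key, $val) = @_;
        my $s = stripe_of($key);
        lock($s);            # serialises this stripe, not the whole hash
        $s->{$key} = $val;
    }

    sub fetch_key {
        my ($key) = @_;
        my $s = stripe_of($key);
        lock($s);
        return $s->{$key};
    }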

If you *really* desperately want it, I think it should be optional, e.g.
   my $a : shared, auto lock;
or somesuch.  This will probably be fine for those people who are using
threads but who don't actually understand what they are doing.  However,
I still think that you haven't fully addressed the issue of what
constitutes an atomic operation.

> I think we all would agree that an op is atomic. +, op=, push, delete
> exists, etc. Yes?

Sorry - no, I don't agree.  As I said, what about map, grep, or sort?
I have an alternative proposal - anything that can be the target of a
tie is atomic, i.e. for scalars STORE, FETCH, DESTROY and so on.
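
To illustrate the granularity I mean, here is roughly what a hand-rolled
'atomic' scalar looks like - a tie class whose FETCH and STORE each take
a lock internally.  The class name is made up, and this glosses over how
tie magic interacts with thread creation:

    use threads;
    use threads::shared;

    package AtomicScalar;            # hypothetical name, illustration only

    sub TIESCALAR {
        my ($class, $init) = @_;
        my $val :shared;
        $val = $init;
        return bless \$val, $class;
    }

    sub FETCH {
        my $self = shift;
        lock($$self);                # atomic read
        return $$self;
    }

    sub STORE {
        my ($self, $new) = @_;
        lock($$self);                # atomic write
        $$self = $new;
    }

    sub DESTROY { }

    package main;

    tie my $counter, 'AtomicScalar', 0;
    $counter = $counter + 1;         # each FETCH and STORE is atomic, but
                                     # the read-modify-write as a whole
                                     # still isn't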

-- 
Alan Burlison
