I found this presentation pretty enlightening for understanding why 
non-blocking concurrent algorithms can be more efficient than locking ones:
http://www.infoq.com/presentations/LMAX-Disruptor-100K-TPS-at-Less-than-1ms-Latency

I'd watch it from the start for the background, but section 6, from 29:40, 
explains how to "take the right approach to concurrency", briefly comparing 
lock-based vs atomic/CAS (compare-and-swap) based approaches.

It boils down to the cost of locks being "phenomenal", since waiting for a 
contended lock can cause a thread to go to sleep, which is relatively 
expensive and may cause other issues like polluting the cache. The cost of 
retrying a CAS operation a few times is relatively trivial.
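
To make the contrast concrete, here's a rough sketch in Clojure (my own 
illustration, not from the talk): both functions bump a shared counter, but 
the first serialises every thread on a monitor, while the second just retries 
a cheap CAS via swap!.

```clojure
;; Illustration only, not from the talk. Two ways to increment a shared
;; counter: a lock serialises threads on a monitor (a contended thread
;; may be put to sleep), while an atom retries a cheap CAS instead.

(def lock (Object.))
(def boxed (long-array 1))   ; plain mutable cell, guarded by the lock

(defn locked-inc []
  (locking lock
    (aset boxed 0 (inc (aget boxed 0)))))

(def cas-counter (atom 0))

(defn cas-inc []
  (swap! cas-counter inc))   ; compare-and-set, retried on contention
```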

Of course, these guys are talking about a particular use case which may or 
may not apply to you, and most of us don't have these sorts of performance 
requirements, but I found the explanation very useful: it accessibly 
explains the issue at quite a low level.


Martin.

On Wednesday, 18 July 2012 12:55:13 UTC+1, Brian Hurt wrote:
>
> Accesses to atoms are just wrappers around atomic compare and swap 
> instructions at the hardware level.  Locking an object also uses an atomic 
> compare and swap, but piles other stuff on top of it, making it more 
> expensive.  So atoms are useful in situations where there is likely not 
> going to be much in the way of contention (multiple writes happening at the 
> same time), so there won't be that many retries, and where the cost of 
> redoing the computation is low enough that it's cheaper to simply redo an 
> occasional update than pay the cost of locking.  So, a classic example of a 
> good use of an atom is assigning ids, like:
>
> (def counter (atom 0))
>
> (defn get-id [] (swap! counter inc))
>
> Incrementing an integer is about as cheap as computations get.  Even if 
> some unlucky thread had to redo the computation dozens of times before 
> "winning", that isn't that high of a cost.  And even if you're getting 
> millions of ids a second, the number of collisions you have will be low.  
> So this is a good use for atoms.  Any more advanced locking behavior would 
> simply slow things down.
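>
> A quick way to convince yourself (just a sketch): hammer get-id from 
> several threads and check that every id is handed out exactly once.
>
> ```clojure
> ;; Sketch: 10 threads each take 1000 ids; swap!'s CAS retries
> ;; guarantee no id is ever handed out twice.
> (def counter (atom 0))
> (defn get-id [] (swap! counter inc))
>
> (let [ids (->> (repeatedly 10 #(future (doall (repeatedly 1000 get-id))))
>                doall
>                (mapcat deref))]
>   (assert (= 10000 (count (distinct ids)))))
> ```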
>
> Holding hash maps in atoms is a borderline case.  I tend to only do it 
> when I know the update rate will be low, and thus collisions very rare.
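>
> For example (a sketch), a small registry that's written rarely and read 
> often:
>
> ```clojure
> ;; Sketch: a map in an atom is fine when updates (and thus collisions)
> ;; are rare -- e.g. a registry populated at startup, read afterwards.
> (def registry (atom {}))
>
> (defn register! [k v]
>   (swap! registry assoc k v))
>
> (register! :port 8080)
> (assert (= 8080 (:port @registry)))
> ```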
>
> For situations where collisions are a lot more common, or where the update 
> is a lot more expensive, I'd use a ref cell and a dosync block.  Of course, 
> if you need atomic updates to multiple different cells, there is no 
> replacement for dosync blocks.
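>
> For instance (a sketch), a transfer between two accounts: with refs, both 
> balances change in one transaction, or neither does.
>
> ```clojure
> ;; Sketch: coordinated update of two refs inside one transaction.
> (def checking (ref 100))
> (def savings  (ref 0))
>
> (defn transfer! [amount]
>   (dosync
>     (alter checking - amount)
>     (alter savings  + amount)))
>
> (transfer! 30)
> (assert (= 70 @checking))
> (assert (= 30 @savings))
> ```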
>
> Brian
>
> On Tue, Jul 17, 2012 at 6:57 PM, Warren Lynn wrote:
>
>> I have a hard time understanding why there is a need to retry when doing 
>> "swap!" on an atom. Why doesn't Clojure just lock the atom up-front and do 
>> the update? I ask because I don't see any benefit of the current "just try, 
>> and re-try if needed" (STM?) approach for atoms (maybe it's OK for refs, 
>> because you cannot attach a lock to unknown ref combinations in a "dosync" 
>> clause). Right now I have an atom in my program with two "swap!" functions 
>> on it. One may take a (relatively) long time, and the other is short. I 
>> don't want the long "swap!" function to retry just because at the last 
>> minute the short one sneaked in and changed the atom value. I can do the 
>> up-front lock myself, but I wonder why this is not already so in the 
>> language. Thank you for any enlightenment.
>>
>> -- 
>>
>
>

-- 
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to clojure@googlegroups.com
Note that posts from new members are moderated - please be patient with your 
first post.
To unsubscribe from this group, send email to
clojure+unsubscr...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/clojure?hl=en
