Michael Matz wrote:
Hi,

On Wed, 12 May 2010, Andrew MacLeod wrote:

Well, you get the same thing you get today. Any synchronization done via a function call will tend to be correct, since we never move shared memory operations across calls. Depending on your application, the types of data races the options deal with may not be an issue. Using the options removes having to think about whether they are issues or not, at a (hopefully) small cost.
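
To make that concrete, the kind of data race these options deal with is one the optimizer itself can introduce; a sketch (the function here is just an illustration):

    int flag;   /* shared with other threads; written only when 'set' is true */

    void
    update (int set, int value)
    {
      if (set)
        flag = value;
    }

    /* An optimizer that speculates the store, e.g. rewriting the body as

         int tmp = flag;
         flag = set ? value : tmp;

       introduces an unconditional write to 'flag'.  A thread that never
       passes 'set' as true can now race with that store, even though the
       unoptimized program never wrote 'flag' on that path.  */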

Since the atomic operations are being built into the compiler, the intent is to eventually optimize and inline them for speed, so that in the best case they simply result in a load or store. That's further work of course, but these options are laying some of the groundwork.
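
As a rough sketch of that best case on a target like x86-64 (the __atomic_* builtin names are only illustrative of where this could end up, not something the compiler provides today):

    int shared_flag;

    int
    read_flag (void)
    {
      /* A sequentially consistent load of an aligned int can be a plain
         mov on x86-64 once the compiler knows the operation is atomic.  */
      return __atomic_load_n (&shared_flag, __ATOMIC_SEQ_CST);
    }

    void
    write_flag (int v)
    {
      /* The store still needs ordering: mov plus mfence, or an xchg.  */
      __atomic_store_n (&shared_flag, v, __ATOMIC_SEQ_CST);
    }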

Are you and the other proponents of that memory model seriously proposing it as an alternative to explicit locking via atomic builtins (that map to some form of atomic instructions)?

Proposing what as an alternative?

These optimization restrictions defined by the memory model are there to create predictable memory behaviour across threads. This applies when you use the atomic built-ins for locking, especially once the atomic operation is inlined. One goal is for the unoptimized program's behaviour to be consistent with the optimized version; if the optimizers introduce new data races, the two can behave differently.
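
For the locking case, picture something as small as a spinlock written directly over the builtins (just a sketch, the names are mine). Once acquire and release are inlined there is no call left to block code motion, so the memory model rules are what keep shared accesses from migrating past them:

    static volatile int lock;

    void
    spin_lock (void)
    {
      /* __sync_lock_test_and_set acts as an acquire barrier.  */
      while (__sync_lock_test_and_set (&lock, 1))
        while (lock)
          ;   /* spin until the lock looks free, then retry the exchange */
    }

    void
    spin_unlock (void)
    {
      /* __sync_lock_release writes 0 with release semantics.  */
      __sync_lock_release (&lock);
    }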

Lock-free data structures, which use the atomic built-ins but do not require explicit locking, are potential applications built on top of that.
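
A simple instance would be a compare-and-swap based stack push with no lock anywhere, again only a sketch using the existing __sync builtin:

    struct node { struct node *next; int value; };
    static struct node *top;

    void
    push (struct node *n)
    {
      struct node *old;
      do
        {
          old = top;
          n->next = old;
        }
      while (!__sync_bool_compare_and_swap (&top, old, n));
    }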

Andrew
