http://gcc.gnu.org/bugzilla/show_bug.cgi?id=51766
--- Comment #6 from Richard Guenther <rguenth at gcc dot gnu.org> 2012-01-10 14:48:54 UTC ---
(In reply to comment #5)
> I understand that fixing __sync_* is a hassle. This is why I opened a
> separate bug for libstdc++.
>
> While __sync_* is deprecated in favor of __atomic_*, use of __sync_* for
> portability is fairly pervasive in FOSS applications that need it because of
> its implementation in GCC. Most programmers do not know about memory models
> and do not care about memory models. And it will take time for programmers to
> switch to __atomic_*, if they even bother to choose a memory model and don't
> introduce a bug.
>
> The basic problem is MEMMODEL_SEQ_CST only makes a performance difference for
> POWER and developers are going to continue to use __sync_* builtins for a
> while. This change in default behavior only hurts performance for
> applications on POWER relative to all other architectures, which sucks. :-(

Yes, I see that.  But my question is - did a developer reading the
documentation get _correct_ code on POWER (which uses a laxer memory model
than documented!) in all cases?  Or can you construct a testcase that works
fine on IA64 while surprisingly (after reading docs) does not work on POWER?
Thus, didn't we simply fix a wrong-code bug (albeit by producing slower code)?
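
A minimal sketch of the kind of testcase being asked about (hypothetical, not
taken from this report): a Dekker-style store/load pattern built on a __sync_*
RMW builtin.  The documentation describes __sync_* builtins as full barriers,
under which the outcome r0 == 0 && r1 == 0 should be impossible.  If a target
expands the builtin with a laxer barrier than documented (e.g. one that does
not order a prior store against a later load, as on POWER), that outcome can
become observable there while never appearing on IA64.

  /* dekker_sync.c - hypothetical example, variable names are made up */
  #include <pthread.h>
  #include <stdio.h>

  static int flag0, flag1;   /* each thread sets one, reads the other */
  static int r0, r1;         /* what each thread observed */

  static void *thread0(void *arg)
  {
      (void)arg;
      /* Documented as a full barrier: the store to flag0 must be globally
         visible before the following load of flag1 is performed. */
      __sync_fetch_and_add(&flag0, 1);
      r0 = flag1;
      return NULL;
  }

  static void *thread1(void *arg)
  {
      (void)arg;
      __sync_fetch_and_add(&flag1, 1);
      r1 = flag0;
      return NULL;
  }

  int main(void)
  {
      pthread_t t0, t1;
      pthread_create(&t0, NULL, thread0, NULL);
      pthread_create(&t1, NULL, thread1, NULL);
      pthread_join(t0, NULL);
      pthread_join(t1, NULL);
      /* Under documented full-barrier semantics this never prints "0 0". */
      printf("%d %d\n", r0, r1);
      return 0;
  }

A single run will not necessarily expose the reordering, of course - the point
is only that the documented semantics forbid the 0/0 outcome, so a developer
who relied on the documentation would have gotten wrong code on a target whose
expansion is weaker than a full barrier.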