On Mon, Oct 22, 2007 at 18:15:35 +0200, Michael Matz wrote:
> > I'd rather wish the optimization would be done differently.  Currently
> > we have:
> >
> >                                        mem -> reg;
> >   loop                                 loop
> >     if (condition)    =>  optimize =>    if (condition)
> >       val -> mem;                          val -> reg;
> >                                        reg -> mem;
> >
> > But it could use additional register and be:
> >
> >   0 -> flag_reg;
> >   loop
> >     if (condition)
> >       val -> reg;
> >       1 -> flag_reg;
> >   if (flag_reg == 1)
> >     reg -> mem;
>
> That could be done but would be besides the point.  You traded one
> conditional store with another one, so you've gained nothing in that
> transformation.
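To make the two variants concrete, here is the pattern written out in C,
as I understand it (a minimal sketch, not actual GCC output; 'shared',
'f' and the other names are made up):

    /* Shared with other threads; assume the program's own protocol
       guarantees that no other thread touches 'shared' while
       'condition' is false for us.  The compiler doesn't see that. */
    int shared;

    /* Original code: 'shared' is written only when the condition
       holds for some element. */
    void f(int *p, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            if (p[i] != 0)
                shared = p[i];   /* conditional store, in the loop */
    }

    /* What the current optimization effectively produces: the load
       and the final store are unconditional, so 'shared' is written
       (with its own old value) even when no p[i] is nonzero -- that
       store races with the other threads. */
    void f_optimized(int *p, int n)
    {
        int tmp = shared;        /* mem -> reg */
        int i;
        for (i = 0; i < n; i++)
            if (p[i] != 0)
                tmp = p[i];      /* val -> reg */
        shared = tmp;            /* reg -> mem, unconditionally */
    }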
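And the variant with the extra flag register, written out the same way
(again only a sketch, using the same hypothetical 'shared'):

    /* With an extra flag the final store stays conditional: 'shared'
       is never touched unless the original code would have touched
       it, so no new race is introduced. */
    void f_flagged(int *p, int n)
    {
        int tmp = 0;
        int written = 0;         /* 0 -> flag_reg */
        int i;
        for (i = 0; i < n; i++)
            if (p[i] != 0) {
                tmp = p[i];      /* val -> reg */
                written = 1;     /* 1 -> flag_reg */
            }
        if (written)             /* if (flag_reg == 1) */
            shared = tmp;        /* reg -> mem, still conditional */
    }

The loop body still has no store to memory at all; the cost is one
extra register and one compare outside the loop.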
Rather, I traded possibly many conditional stores inside a loop for one
conditional store outside the loop.  And that is exactly the point of
the discussion: you can't go further.  The moment you replace a
conditional store with an unconditional one, you introduce a race that
wasn't in the original code.

Several people have already suggested using volatile for shared data.
Yes, that would help, because we know it disables all access
optimizations, including the thread-unaware ones.  But I don't want to
disable _all_ optimizations; I'd rather vote for thread-aware
optimizations.  There is no requirement in POSIX that all shared data
be volatile.  As the article referenced in this thread explains, POSIX
and C/C++ do not agree on memory access semantics.  So should that be
fixed in the compiler (as the article suggests), or should every piece
of shared data in every threaded program be declared volatile, just in
case?  I have never seen the latter approach in any Open Source project
(though I didn't look for it specifically), and many of them are
considered quite portable.

Again, we are not discussing some particular code sample and how it
might be fixed, but the problem in general: should GCC perform
thread-unsafe optimizations, or not?

-- 
Tomash Brechko