https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83853
--- Comment #4 from rene.r...@fu-berlin.de ---
(In reply to Jonathan Wakely from comment #3)
> (In reply to rene.rahn from comment #2)
> > It basically says that while T2 is currently destroying the condition
> > variable, T1 is still accessing it by calling notify on it???
> 
> Yes.
> 
> > Note that if the condition variable is put before the while loop and
> > reset in every iteration, it performs without warnings on Unix, but
> > still deadlocks on Mac.
> > If I put the notify of T1 inside of the lock_guard it will also run
> > without data race warnings and does not deadlock on Mac.
> 
> That seems like the correct fix. The publisher can destroy the condition
> variable as soon as Job::wait() returns, which can happen before the
> notify_all() call begins. That's a logic error in your program. Calling
> notify_all() while the mutex is still locked prevents that.

Alright, I am truly sorry. I somehow missed the last paragraph in the
documentation of
http://en.cppreference.com/w/cpp/thread/condition_variable/notify_one,
which describes literally the behaviour I am modelling. So this is
definitely a bug in the user code and not in the library, and the same fix
should also resolve the problem on OS X.

> > So there is a little error that might happen in the following situation.
> > After T1 updates the shared bool, it leaves the lock.
> > So here it could potentially happen that T1 is preempted and T2 has a
> > spurious wakeup and now acquires the lock, as the predicate returns true
> > now. In this case T2 can continue in the loop and destroy the condition
> > variable, and T1 will call notify on it while T2 already destroys it.
> 
> Yes, that is one explanation. Even if it didn't happen often, your code
> allows it to happen sometimes, which is a bug.

Again, I totally agree.

> > Having said that, this seems unlikely to happen so often, does it? I
> > mean I am not very confident about how often something like a spurious
> > wakeup really happens, especially as between wait and notify there are
> > only a few cycles.
> 
> It can still happen without a spurious wake up. The call to
> pthread_cond_broadcast isn't an atomic operation, it can wake the waiting
> thread (which then destroys the pthread_cond_t) and then perform further
> reads from the variable before returning.
> 
> > I would really like to know whether it is in general forbidden that the
> > waiting thread owns the condition variable, as its stack memory might
> > become inaccessible when it is put into wait. I couldn't find any
> > information regarding this.
> 
> I'm not sure I understand what you're asking, because your code doesn't
> destroy the condition variable until after the wait returns. The object
> isn't invalidated when "put into wait", only after the wait returns.
> 
> It's not generally forbidden for the waiting thread to own the condition
> variable, you just need to ensure no other thread is calling a function on
> an object being destroyed (which is generally true for any object, although
> std::condition_variable does allow destruction before all waiters have
> returned, as long as they've been notified, see
> http://en.cppreference.com/w/cpp/thread/condition_variable/~condition_variable
> -- i.e. std::condition_variable relaxes the usual rules about accessing an
> object being destroyed).
> 
> > Or is there a general user error on my side, which I might not see
> > (because it is not a trivial case)? I am truly sorry if that would be
> > the case.
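Yes, exactly that user error. For the record, here is a minimal sketch of
the broken pattern and of the fix suggested above; the class and member
names are invented for illustration and are not taken from my actual
reproducer:

    #include <condition_variable>
    #include <mutex>

    struct Job {                      // names invented for illustration
        std::mutex m;
        std::condition_variable cv;
        bool done = false;

        // T2: waits, then may destroy the whole Job (and thus cv)
        // immediately after wait() returns.
        void wait() {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [this] { return done; });
        }

        // Broken T1: the lock is released before notify_all(), so T2 can
        // observe done == true (after a spurious wakeup, or while
        // pthread_cond_broadcast is still in progress), return from
        // wait() and destroy cv while notify_all() is still accessing it.
        void signal_broken() {
            {
                std::lock_guard<std::mutex> lk(m);
                done = true;
            }
            cv.notify_all();          // cv may already be gone here
        }

        // Fix from comment #3: notify while the mutex is still locked.
        // T2 cannot return from wait() until it reacquires m, i.e. not
        // before notify_all() has finished and the lock is released.
        void signal_fixed() {
            std::lock_guard<std::mutex> lk(m);
            done = true;
            cv.notify_all();
        }
    };

With notify_all() executed while the mutex is still held, T2 cannot return
from wait() (and therefore cannot destroy the condition variable) until it
has reacquired the mutex, which only happens after notify_all() has
finished and T1 has released the lock.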
> > Or is it possible that there is kind of a bug in the condition variable
> > implementation, such that the signalling is not broadcast correctly?
> 
> I think that is very unlikely. Especially if you're seeing the bug on
> GNU/Linux and OS X, because they have different implementations of
> pthread_cond_t (and libstdc++ just uses that).
> 
> Also you didn't state which standard library you're using with clang; if
> it's not libstdc++ then it's even more unlikely that you're seeing the same
> bug with two entirely different implementations of std::condition_variable
> and pthread_cond_t.
> 
> I don't see any evidence of a bug here.

Yes, me neither! Sorry again for bothering you, and thanks for the very
competent answers. They helped a lot.