Hi Ronald,

On Wed, Jul 5, 2017 at 7:24 PM, Ronald S. Bultje <rsbul...@gmail.com> wrote:
> Hi Wan-Teh,
>
> On Wed, Jul 5, 2017 at 8:08 PM, Wan-Teh Chang <w...@google.com> wrote:
>>
>> Hi Ronald,
>>
>> On Wed, Jul 5, 2017 at 3:31 PM, Ronald S. Bultje <rsbul...@gmail.com>
>> wrote:
>> > Hi Wan-Teh,
>> >
>> > On Wed, Jul 5, 2017 at 6:12 PM, Wan-Teh Chang <w...@google.com> wrote:
>> >>
>> >> Thank you for all the tsan warning fixes. In the meantime, it would be
>> >> good to revert 2e664b9c1e73c80aab91070c1eb7676f04bdd12d to avoid
>> >> confusion.
>> >
>> >
>> > Why? I believe it fixes a subset of the issue.
>>
>> Could you explain why it fixes a subset of the issue?
>
> From what I remember, I was (before this patch) semi-reliably able to
> reproduce the issue locally, and it went away after. I've observed the same
> thing in the relevant fate station, where before the patch, this warning
> occurred semi-reliably in 3-4 tests per run, and after the patch it
> sometimes occurs in 1 test per run. I understand this doesn't satisfy you
> but I currently don't want to dig through the code to figure out why I
> thought this was a good idea, I have other things that take priority.
> Reverting the patch will require me to dig through this code also, and I
> really don't see the gain.
In addition to avoiding confusion, reverting the patch would move the
av_log() statements outside the lock. Since I don't use those two av_log()
statements, I won't insist on reverting the patch.

> Some more thoughts:
> - I don't think we want to grab a second lock for something trivial like
> this (e.g. grabbing progress_mutex when copying the debug field in
> update_context_from_user);
> - making the debug flags field atomic will be difficult because it's public
> API, and I don't think we're ready to expose C11 types in our public API;
> - I suppose you could make a private (inside PerThreadContext, which allows
> it to be atomic) copy of debug intended for cross-threading purposes and
> use that here.

After studying this problem for two more days, I implemented a variant of
your last suggestion:
http://ffmpeg.org/pipermail/ffmpeg-devel/2017-July/213384.html

Let's continue the discussion in that email thread. (A rough sketch of the
general idea, with illustrative names, is appended at the end of this
message.)

Wan-Teh Chang
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
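
Appendix: a minimal, self-contained sketch of the "private atomic copy of
debug" idea discussed above, assuming C11 <stdatomic.h>. All struct, field,
and function names here are made up for illustration; they are not the
identifiers used in pthread_frame.c and this is not the patch linked above.

/*
 * Illustrative sketch only -- not FFmpeg code.  A public context keeps a
 * plain int "debug" field (its ABI cannot change), while a private
 * per-thread context keeps an atomic shadow copy that worker threads read.
 */
#include <stdatomic.h>
#include <stdio.h>

typedef struct UserContext {
    int debug;                  /* written by the user's thread only */
} UserContext;

typedef struct WorkerContext {
    atomic_int debug;           /* shadow copy, read by worker threads */
} WorkerContext;

/* User thread: publish the current debug flags; no extra lock is taken. */
static void sync_debug_from_user(WorkerContext *w, const UserContext *u)
{
    atomic_store_explicit(&w->debug, u->debug, memory_order_relaxed);
}

/* Worker thread: read the shadow copy instead of the public field. */
static int worker_wants_debug_output(WorkerContext *w)
{
    return atomic_load_explicit(&w->debug, memory_order_relaxed) != 0;
}

int main(void)
{
    UserContext   u = { .debug = 1 };
    WorkerContext w;

    atomic_init(&w.debug, 0);
    sync_debug_from_user(&w, &u);
    if (worker_wants_debug_output(&w))
        printf("debug output enabled\n");
    return 0;
}

In this sketch memory_order_relaxed is enough because the flag only decides
whether to emit log output and nothing else is ordered against it; a plain
atomic_store()/atomic_load() (seq_cst) would also be correct, just more
conservative.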