To the best of my understanding, the underlying race condition that trips the
assert has not been solved. For now I've simply removed the assert so I can
keep my development going, but that's only a workaround.
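
To make the failure mode concrete, here is a rough sketch of the interleaving
as I read it from the thread below. This is plain C11 with made-up stand-ins
for v_holdcnt, v_iflag and VI_FREE -- it is not the atomic(9) code in
vfs_subr.c, just an illustration of the ordering problem. A fence-based
variant follows after the quoted thread.

/*
 * Illustrative stand-ins only: not struct vnode, not atomic(9).
 * "holdcnt" plays the role of v_holdcnt, "iflag"/"IFLAG_FREE" the role
 * of the interlock-protected v_iflag/VI_FREE.
 */
#include <assert.h>
#include <stdatomic.h>

#define	IFLAG_FREE	0x0001

struct vn {
	atomic_uint	holdcnt;
	atomic_uint	iflag;
};

/*
 * Thread A: the 0->1 transition.  In the real code this runs with the
 * vnode interlock held, but the interlock orders nothing for a thread
 * that never takes it.
 */
void
hold_zero_to_one(struct vn *vp)
{
	/* Take the vnode off the free list, then publish the reference. */
	atomic_fetch_and_explicit(&vp->iflag, ~IFLAG_FREE,
	    memory_order_relaxed);
	atomic_fetch_add_explicit(&vp->holdcnt, 1, memory_order_relaxed);
}

/*
 * Thread B: the lockless "count already > 0" path with the assert that
 * fires.  (A plain fetch_add stands in for acquire-if-not-zero.)
 */
void
hold_lockless(struct vn *vp)
{
	unsigned old;

	old = atomic_fetch_add_explicit(&vp->holdcnt, 1,
	    memory_order_relaxed);
	assert(old > 0);
	/*
	 * Nothing orders the load below against the stores in thread A.
	 * B can observe A's increment and yet read a stale iflag with
	 * IFLAG_FREE still set, so the assert can fire spuriously.
	 */
	assert((atomic_load_explicit(&vp->iflag, memory_order_relaxed) &
	    IFLAG_FREE) == 0);
}
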
- Justin

On Thu, Jul 19, 2018 at 2:09 PM Bryan Drewery <bdrew...@freebsd.org> wrote:
>
> Did this issue get resolved?
>
> On 6/8/2018 11:37 AM, Konstantin Belousov wrote:
> > On Fri, Jun 08, 2018 at 02:30:10PM -0400, Mark Johnston wrote:
> >> On Fri, Jun 08, 2018 at 08:37:55PM +0300, Konstantin Belousov wrote:
> >>> On Thu, Jun 07, 2018 at 11:02:29PM -0700, Ryan Libby wrote:
> >>>> On Thu, Jun 7, 2018 at 10:03 PM, Mateusz Guzik <mjgu...@gmail.com> wrote:
> >>>>> Checking it without any locks is perfectly valid in this case. It is
> >>>>> done after v_holdcnt gets bumped from a non-zero value. So at that
> >>>>> time it is at least two. Of course that result is stale, as an
> >>>>> arbitrary number of other threads could have bumped and dropped the
> >>>>> ref past that point. The minimum value is 1 since we hold the ref.
> >>>>> But this means the vnode must not be on the free list, and that's
> >>>>> what the assertion is verifying.
> >>>>>
> >>>>> The problem is indeed lack of ordering against the code clearing the
> >>>>> flag for the case where 2 threads do vhold and one does the 0->1
> >>>>> transition.
> >>>>>
> >>>>> That said, the fence is required for the assertion to work.
> >>>>>
> >>>>
> >>>> Yeah, I agree with this logic. What I mean is that reordering between
> >>>> the v_holdcnt 0->1 transition and v_iflag is normally settled by the
> >>>> release and acquisition of the vnode interlock, which we are supposed
> >>>> to hold for v_*i*flag. A quick scan seems to show that all of the
> >>>> checks of VI_FREE that are not asserts do hold the vnode interlock.
> >>>> So, I'm just saying that I don't think the possible reordering
> >>>> affects them.
> >>>
> >>> But do we know that only VI_FREE checks are affected?
> >>>
> >>> My concern is that users of _vhold() rely on seeing up-to-date state
> >>> of the vnode, and VI_FREE is only an example of the problem. Most
> >>> likely, the code which fetched the vnode pointer before the _vhold()
> >>> call should guarantee visibility.
> >>
> >> Wouldn't this be a problem only if we permit lockless accesses of vnode
> >> state outside of _vhold() and other vnode subroutines? The current
> >> protocol requires that the interlock be held, and this synchronizes
> >> with code which performs 0->1 and 1->0 transitions of the hold count.
> >> If this requirement is relaxed in the future, then fences would indeed
> >> be needed.
> >
> > I do not claim that my concern is a real problem. I stated it as a
> > thing to look at when deciding whether the fences should be added
> > (unconditionally?).
> >
> > If your argument is that the only current lock-less protocol for the
> > struct vnode state is the v_holdcnt transitions for > 1, then I can
> > agree with it.
> >
>
> --
> Regards,
> Bryan Drewery
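
PS: in case anyone picks this up rather than just deleting the assert like I
did: as I understand it, the fence Mateusz mentions above would pair an
acquire on the lockless path with release ordering on the 0->1 increment.
Again a sketch with the same C11 stand-ins as above, not a patch against
vfs_subr.c:

/* Thread A: publish the reference with release semantics... */
void
hold_zero_to_one_fixed(struct vn *vp)
{
	atomic_fetch_and_explicit(&vp->iflag, ~IFLAG_FREE,
	    memory_order_relaxed);
	atomic_fetch_add_explicit(&vp->holdcnt, 1, memory_order_release);
}

/* ...and thread B: acquire before looking at anything else in the vnode. */
void
hold_lockless_fixed(struct vn *vp)
{
	unsigned old;

	old = atomic_fetch_add_explicit(&vp->holdcnt, 1,
	    memory_order_relaxed);
	assert(old > 0);
	/*
	 * The acquire fence pairs with the releasing increment above:
	 * once we have observed the count above zero, the earlier clearing
	 * of IFLAG_FREE is guaranteed visible, so the assert is safe.
	 */
	atomic_thread_fence(memory_order_acquire);
	assert((atomic_load_explicit(&vp->iflag, memory_order_relaxed) &
	    IFLAG_FREE) == 0);
}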