On Fri, Nov 22, 2019 at 10:07:13AM +0000, Stefan Hajnoczi wrote:
> On Fri, Nov 15, 2019 at 03:55:40PM -0500, Vivek Goyal wrote:
> > diff --git a/contrib/virtiofsd/passthrough_ll.c b/contrib/virtiofsd/passthrough_ll.c
> > index bc214df0c7..028e7da273 100644
> > --- a/contrib/virtiofsd/passthrough_ll.c
> > +++ b/contrib/virtiofsd/passthrough_ll.c
> > @@ -936,6 +936,14 @@ static void put_shared(struct lo_data *lo, struct lo_inode *inode)
> >      }
> >  }
> >  
> > +static void release_plock(gpointer data)
> 
> The name posix_locks_value_destroy() would be clearer because it matches
> g_hash_table_new_full() terminology and the function cannot be confused
> with a lock acquire/release operation.
Ok, will use this name.

> 
> This patch conflicts with the cleanups that are currently being made to
> virtiofsd:
> https://github.com/stefanha/qemu/commit/1e493175feca58a81a2d0cbdac93b92e5425d850#diff-ca2dea995d1e6cdb95c8a47c7cca51ceR773

Yes, it will. I see you are removing the element from the hash table in
lo_flush(). That works fine today, but with waiting locks we drop
inode->plock_mutex and then wait for the lock, expecting the
"lo_inode_plock" structure not to go away underneath us. So I don't think
you can remove the element from the hash table in lo_flush().

Maybe we can refcount the lo_inode_plock structure instead. lo_flush()
would first release all the locks using setlk(F_UNLCK) and then drop its
reference; if that is the last reference, the structure is freed. The
waiting-lock code would obtain a reference under inode->plock_mutex and
then wait for the lock outside the mutex. A rough sketch of what I have
in mind is below.

IOW, I would say don't do this lookup + remove optimization, because it
will not work with blocking locks.
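Here is an untested sketch of the refcounting scheme. The plock_ref(),
plock_unref() and plock_wait_lock() names and the refcount field are made
up for illustration, and the structs are trimmed down to just the fields
the sketch touches:

  #include <errno.h>
  #include <fcntl.h>
  #include <pthread.h>
  #include <stdint.h>
  #include <stdlib.h>
  #include <unistd.h>

  /* Minimal stand-ins for the virtiofsd types. */
  struct lo_inode {
      pthread_mutex_t plock_mutex;  /* protects posix_locks and refcounts */
      /* GHashTable *posix_locks: lock owner -> struct lo_inode_plock */
  };

  struct lo_inode_plock {
      uint64_t lock_owner;
      int fd;        /* fd used for setlk()/setlkw() */
      int refcount;  /* protected by inode->plock_mutex */
  };

  /* Take a reference. Caller must hold inode->plock_mutex. */
  static struct lo_inode_plock *plock_ref(struct lo_inode_plock *plock)
  {
      plock->refcount++;
      return plock;
  }

  /* Drop a reference; free on the last one. Caller must hold
   * inode->plock_mutex and must already have removed the entry from
   * inode->posix_locks if it could be the last reference. */
  static void plock_unref(struct lo_inode_plock *plock)
  {
      if (--plock->refcount == 0) {
          close(plock->fd);
          free(plock);
      }
  }

  /* Blocking lock path: pin the structure with a reference so it cannot
   * be freed while we sleep in fcntl() outside the mutex. */
  static int plock_wait_lock(struct lo_inode *inode,
                             struct lo_inode_plock *plock, struct flock *fl)
  {
      int ret;

      pthread_mutex_lock(&inode->plock_mutex);
      plock_ref(plock);
      pthread_mutex_unlock(&inode->plock_mutex);

      /* May sleep; safe because we hold a reference. */
      ret = fcntl(plock->fd, F_SETLKW, fl);

      pthread_mutex_lock(&inode->plock_mutex);
      plock_unref(plock);
      pthread_mutex_unlock(&inode->plock_mutex);

      return ret == -1 ? -errno : 0;
  }

lo_flush() would then release all locks with setlk(F_UNLCK) and call
plock_unref() instead of freeing unconditionally. The key point is that
fcntl(F_SETLKW) sleeps without holding plock_mutex, so a concurrent
lo_flush() can run, and the reference keeps the structure alive until the
waiter wakes up.

Thanks
Vivek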