> From: Alex Williamson
> Sent: Wednesday, January 18, 2023 5:23 AM
> 
> On Fri, 13 Jan 2023 19:03:51 -0500
> Matthew Rosato <mjros...@linux.ibm.com> wrote:
> 
> >  void vfio_device_group_close(struct vfio_device *device)
> >  {
> > +   void (*put_kvm)(struct kvm *kvm);
> > +   struct kvm *kvm;
> > +
> >     mutex_lock(&device->group->group_lock);
> > +   kvm = device->kvm;
> > +   put_kvm = device->put_kvm;
> >     vfio_device_close(device, device->group->iommufd);
> > +   if (kvm == device->kvm)
> > +           kvm = NULL;
> 
> Hmm, so we're using whether the device->kvm pointer gets cleared in
> last_close to detect whether we should put the kvm reference.  That's a
> bit obscure.  Our get and put is also asymmetric.
> 
> Did we decide that we couldn't do this via a schedule_work() from the
> last_close function, ie. implementing our own version of an async put?
> It seems like that potentially has a cleaner implementation, symmetric
> call points, handling all the storing and clearing of kvm related
> pointers within the get/put wrappers, passing only a vfio_device to the
> put wrapper, using the "vfio_device_" prefix for both.  Potentially
> we'd just want an unconditional flush outside of lock here for
> deterministic release.
> 
> What's the downside?  Thanks,
> 

Btw, I guess this can also be fixed by Yi's work here:

https://lore.kernel.org/kvm/20230117134942.101112-6-yi.l....@intel.com/

With set_kvm(NULL) moved to the release callback of the kvm_vfio device,
the circular lock dependency can be avoided as well.
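
Fwiw, a rough sketch of the schedule_work()-based async put Alex describes
above could look something like the below.  The field and function names
(kvm_put_work, vfio_device_put_kvm, ...) are purely illustrative, not taken
from an existing series; it assumes struct vfio_device grows a work_struct
plus the kvm pointer and the symbol_get()'d put function, as in the patch
under discussion:

	/*
	 * Sketch only: assumes struct vfio_device gains
	 *
	 *	struct work_struct	kvm_put_work;
	 *	struct kvm		*kvm;
	 *	void (*put_kvm)(struct kvm *kvm);
	 *
	 * with INIT_WORK(&device->kvm_put_work, vfio_device_kvm_put_fn)
	 * done at device init, and put_kvm obtained via
	 * symbol_get(kvm_put_kvm) in the corresponding get helper.
	 */
	static void vfio_device_kvm_put_fn(struct work_struct *work)
	{
		struct vfio_device *device =
			container_of(work, struct vfio_device, kvm_put_work);

		/* Runs from the workqueue, never under the group lock. */
		if (device->put_kvm) {
			device->put_kvm(device->kvm);
			symbol_put(kvm_put_kvm);
		}

		device->kvm = NULL;
		device->put_kvm = NULL;
	}

	/* Symmetric wrapper: callers only pass the vfio_device. */
	static void vfio_device_put_kvm(struct vfio_device *device)
	{
		if (!device->kvm)
			return;
		schedule_work(&device->kvm_put_work);
	}

The last_close path would then just call vfio_device_put_kvm(), and
vfio_device_group_close() could do an unconditional
flush_work(&device->kvm_put_work) after dropping the group lock to get the
deterministic release Alex mentions.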
