On Tue, 1 Jun 2021 10:36:36 +0800, Jason Wang wrote:

> On 2021/5/31 4:41 PM, Liu Yi L wrote:
> >> I guess VFIO_ATTACH_IOASID will fail if the underlying layer doesn't support
> >> hardware nesting. Or is there a way to detect the capability beforehand?
> > I think it could fail at IOASID_CREATE_NESTING. If the gpa_ioasid
> > cannot support nesting, that call should fail.
> >  
> >> I think GET_INFO only works after the ATTACH.  
> > Yes. After attaching to gpa_ioasid, userspace could call GET_INFO on the
> > gpa_ioasid and check whether nesting is supported. Right?
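
To make the detection flow concrete, here is a minimal userspace sketch.
VFIO_ATTACH_IOASID is the ioctl from this proposal; the info ioctl is
spelled IOASID_GET_INFO here, and struct ioasid_info plus the
IOASID_INFO_NESTING flag are placeholders invented purely for
illustration, not defined anywhere yet:

  #include <sys/ioctl.h>
  #include <linux/types.h>

  /* Placeholder layout; the real GET_INFO structure is TBD. */
  struct ioasid_info {
          __u32   ioasid;         /* in: IOASID to query */
          __u32   flags;          /* out: capability flags */
  };
  #define IOASID_INFO_NESTING     (1 << 0)        /* hypothetical flag */

  int nesting_supported(int ioasid_fd, int device_fd, __u32 gpa_ioasid)
  {
          struct ioasid_info info = { .ioasid = gpa_ioasid };

          /* GET_INFO only reports meaningfully after the attach */
          if (ioctl(device_fd, VFIO_ATTACH_IOASID, &gpa_ioasid) < 0)
                  return -1;
          if (ioctl(ioasid_fd, IOASID_GET_INFO, &info) < 0)
                  return -1;
          return !!(info.flags & IOASID_INFO_NESTING);
  }
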
> 
> 
> Some more questions:
> 
> 1) Is the handle returned by IOASID_ALLOC an fd?

It's an ID so far in this proposal.

> 2) If yes, what's the reason for not simply using the fd opened from
> /dev/ioasid? (This is the question that was not answered.) And what happens
> if we call GET_INFO on the ioasid_fd?
> 3) If not, how does GET_INFO work?

Oh, I missed this question in my prior reply. Personally, there is no
special reason yet. But using an ID may give us the opportunity to
customize how the handle is managed. For one, better lookup efficiency by
using an xarray to store the allocated IDs. For two, the allocated IDs
could be categorized (parent or nested). GET_INFO just works with an
input FD and an ID, as sketched below.
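
As a rough sketch of what that could look like on the kernel side
(struct ioasid_ctx, struct ioasid_data, and ioasid_get_info() are
illustrative names, not code from this series; struct ioasid_info is the
placeholder from the sketch above):

  /* Illustrative kernel-side sketch only, not code from this series. */
  #include <linux/xarray.h>
  #include <linux/errno.h>

  struct ioasid_ctx {                     /* one per open ioasid_fd */
          struct xarray   ioasid_xa;      /* allocated ID -> ioasid_data */
  };

  struct ioasid_data {
          bool    nested;                 /* category: parent or nested */
          /* ... other per-IOASID state ... */
  };

  static int ioasid_get_info(struct ioasid_ctx *ctx, u32 id,
                             struct ioasid_info __user *uinfo)
  {
          struct ioasid_data *data;

          /* the xarray gives an efficient lookup keyed by the user's ID */
          data = xa_load(&ctx->ioasid_xa, id);
          if (!data)
                  return -ENOENT;
          /* ... fill a struct ioasid_info from *data, copy to uinfo ... */
          return 0;
  }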

> 
> >  
> >>>   /* Bind guest I/O page table  */
> >>>   bind_data = {
> >>>           .ioasid = giova_ioasid,
> >>>           .addr   = giova_pgtable,
> >>>           // and format information
> >>>   };
> >>>   ioctl(ioasid_fd, IOASID_BIND_PGTABLE, &bind_data);
> >>>
> >>>   /* Invalidate IOTLB when required */
> >>>   inv_data = {
> >>>           .ioasid = giova_ioasid,
> >>>           // granular information
> >>>   };
> >>>   ioctl(ioasid_fd, IOASID_INVALIDATE_CACHE, &inv_data);
> >>>
> >>>   /* See 5.6 for I/O page fault handling */
> >>>   
> >>> 5.5. Guest SVA (vSVA)
> >>> +++++++++++++++++++++
> >>>
> >>> After boot, the guest further creates a GVA address space (gpasid1) on
> >>> dev1. Dev2 is not affected (still attached to giova_ioasid).
> >>>
> >>> As explained in section 4, the user should avoid exposing ENQCMD on both
> >>> pdev and mdev.
> >>>
> >>> The sequence applies to all device types (pdev or mdev), except for
> >>> one additional step to call KVM for an ENQCMD-capable mdev:
> >> My understanding is ENQCMD is Intel-specific and not a requirement for
> >> having vSVA.
> > ENQCMD is not really Intel-specific, although only Intel supports it today.
> > The PCIe DMWr capability is what software enumerates to discover ENQCMD
> > support on the device side. Yes, it is not a requirement for vSVA; they
> > are orthogonal.
> 
> 
> Right, then it's better to mention DMWr instead of a vendor-specific
> instruction in a general framework like ioasid.

Good suggestion. :)
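
To add some concreteness on the enumeration side: software can discover
DMWr support from PCIe config space, roughly as sketched below. The
DevCap2 offset matches PCI_EXP_DEVCAP2 in the Linux uapi headers, but
the DMWr Completer bit position is an assumption here and must be
verified against the PCIe base spec:

  #include <stdint.h>
  #include <unistd.h>

  #define PCI_EXP_DEVCAP2         0x24    /* DevCap2 offset in PCIe cap */
  #define DEVCAP2_DMWR_COMPLETER  (1u << 26) /* ASSUMED bit; check the spec */

  /*
   * cfg_fd: fd on the device's config space; pcie_cap_off: offset of
   * the PCI Express capability, found by walking the capability list.
   */
  int dmwr_completer_supported(int cfg_fd, int pcie_cap_off)
  {
          uint32_t devcap2;

          if (pread(cfg_fd, &devcap2, sizeof(devcap2),
                    pcie_cap_off + PCI_EXP_DEVCAP2) != sizeof(devcap2))
                  return -1;
          return !!(devcap2 & DEVCAP2_DMWR_COMPLETER);
  }

Here cfg_fd would typically be opened on the device's config space,
e.g. /sys/bus/pci/devices/<BDF>/config.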

-- 
Regards,
Yi Liu