> > Can the user interfere with
> > other contexts or the general operation of the invalidation queue
> > if a user is effectively given direct access? Will the
> > invalidation data be sanitized by the iommu driver?
> >
> > > union intel_iommu_invalidate_data {
> > >         struct {
> > >                 __u64 low;
> > >                 __u64 high;
> > >         } invalidate_data;
> > >
> > >         struct {
> > >                 __u64 type: 4;
> > >                 __u64 gran: 2;
> > >                 __u64 rsv1: 10;
> > >                 __u64 did: 16;
> > >                 __u64 sid: 16;
> > >                 __u64 func_mask: 2;
> > >                 __u64 rsv2: 14;
> > >                 __u64 rsv3: 64;
> > >         } context_cache_inv;
> > >
> >
> > Here's part of the issue with not fully defining these: we have did,
> > sid, and func_mask. I think we're claiming that the benefit of
> > passing through the hardware data structure is performance, but the
> > user needs to replace these IDs to match the physical device rather
> > than the virtual device, perhaps even entirely recreating it
> > because there's not necessarily a 1:1 mapping of things like
> > func_mask between virtual and physical hardware topologies
> > (assuming I'm interpreting these fields correctly). Doesn't the
> > kernel also need to validate any such field to prevent the user
> > spoofing entries for other devices? Is there any actual
> > performance benefit remaining vs defining a generic interface after
> > multiple levels have manipulated, recreated, and sanitized these
> > structures? We can't evaluate these sorts of risks if we don't
> > define what we're passing through. Thanks,
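To make that concrete, the rewrite/validate step being asked about might
look roughly like the sketch below. This is only an illustration: the
vfio_device argument and the vdev_virt_sid()/vdev_phys_sid()/vdev_phys_did()
helpers are assumptions, not code from this series.

/*
 * Illustrative sketch only (not from this patch set): rewrite the
 * guest-provided IDs in a context-cache invalidation descriptor with
 * host values and reject anything that does not belong to the
 * assigned device.  All helpers used here are assumed to exist.
 */
static int sanitize_cc_inv(struct vfio_device *vdev,
                           union intel_iommu_invalidate_data *inv)
{
        /* The guest may only name the virtual device it actually owns. */
        if (inv->context_cache_inv.sid != vdev_virt_sid(vdev))
                return -EINVAL;

        /* Replace the virtual IDs with the physical ones. */
        inv->context_cache_inv.sid = vdev_phys_sid(vdev);
        inv->context_cache_inv.did = vdev_phys_did(vdev);

        /* func_mask may not map 1:1 across topologies; clear it. */
        inv->context_cache_inv.func_mask = 0;

        return 0;
}

Whether anything like this still leaves a measurable win over a generic,
fully defined interface is exactly the question raised above.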
>
> A potential proposal is to abstract the fields of the QI entry.
> However, there is a concern with that approach: different types of QI
> entries have different fields, so we would need a superset that
> includes all the possible fields. Presumably that set would keep
> growing as more QI types are introduced. I'm not sure that is an
> acceptable definition.
>
> Based on the latest spec, the vendor-specific fields may include the
> following (a rough superset sketch follows the list below):
>
> Global hint
> Drain read/write
> Source-ID
> MIP
> PFSID
>
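For comparison, the superset-style interface raised above would presumably
look something like the sketch below, with one member per field that any QI
entry type might carry. The struct name and exact layout are illustrative
only, drawn from the field list above rather than from any proposed patch.

/*
 * Illustrative only: a vendor-neutral "superset" invalidation request.
 * Every new QI entry type risks adding more members here, which is the
 * growth concern described above.
 */
struct iommu_generic_invalidate {
        __u32   type;           /* context cache, IOTLB, dev-IOTLB, ... */
        __u32   granularity;
        __u64   addr;
        __u64   size;
        __u32   pasid;
        __u16   source_id;      /* Source-ID */
        __u16   pfsid;          /* PF Source-ID (assumed meaning) */
        __u8    global_hint;
        __u8    drain_read;
        __u8    drain_write;
        __u8    mip;
        __u8    pad[4];
};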
My thinking was that as long as the risk of carrying some opaque data
is limited to the device that is already exposed to user space, it
should be fine. We have the model-specific IOMMU driver to sanitize
the data before putting the descriptor into hardware.
But I agree the overhead of disassembling/reassembling the descriptor
may not be significant. Though with a vIOMMU and caching mode = 1
(which requires explicit invalidation of caches whether or not the
entry is present, VT-d spec 6.1), we will see more invalidations than
in the native pIOMMU case. Anyway, we can run some micro-benchmarks to
measure the overhead.
> PRQ response is another topic. Not included here.
>
> Thanks,
> Yi L
>
[Jacob Pan]
On Fri, 12 May 2017 15:59:29 -0600
Alex Williamson wrote:
> > +	if (pasidt_binfo->size >= intel_iommu_get_pts(iommu)) {
> > +		pr_err("Invalid gPASID table size %llu, host size %lu\n",
> > +			pasidt_binfo->size,
> > +			intel_iommu_get_pts(iommu));
On Wed, 26 Apr 2017 17:56:45 +0100
Jean-Philippe Brucker wrote:
> Hi Yi, Jacob,
>
> On 26/04/17 11:11, Liu, Yi L wrote:
> > From: Jacob Pan
> >
> > Virtual IOMMU was proposed to support the Shared Virtual Memory (SVM)
> > use case in the guest:
> > https: