> From: Paul Durrant [mailto:paul.durr...@citrix.com]
> Sent: Friday, February 23, 2018 5:41 PM
> 
> > -----Original Message-----
> > From: Tian, Kevin [mailto:kevin.t...@intel.com]
> > Sent: 23 February 2018 05:17
> > To: Paul Durrant <paul.durr...@citrix.com>; xen-
> de...@lists.xenproject.org
> > Cc: Stefano Stabellini <sstabell...@kernel.org>; Wei Liu
> > <wei.l...@citrix.com>; George Dunlap <george.dun...@citrix.com>;
> > Andrew Cooper <andrew.coop...@citrix.com>; Ian Jackson
> > <ian.jack...@citrix.com>; Tim (Xen.org) <t...@xen.org>; Jan Beulich
> > <jbeul...@suse.com>; Daniel De Graaf <dgde...@tycho.nsa.gov>
> > Subject: RE: [Xen-devel] [PATCH 5/7] public / x86: introduce
> > __HYPERCALL_iommu_op
> >
> > > From: Paul Durrant [mailto:paul.durr...@citrix.com]
> > > Sent: Tuesday, February 13, 2018 5:23 PM
> > >
> > > > -----Original Message-----
> > > > From: Tian, Kevin [mailto:kevin.t...@intel.com]
> > > > Sent: 13 February 2018 06:43
> > > > To: Paul Durrant <paul.durr...@citrix.com>; xen-
> > > de...@lists.xenproject.org
> > > > Cc: Stefano Stabellini <sstabell...@kernel.org>; Wei Liu
> > > > <wei.l...@citrix.com>; George Dunlap <george.dun...@citrix.com>;
> > > > Andrew Cooper <andrew.coop...@citrix.com>; Ian Jackson
> > > > <ian.jack...@citrix.com>; Tim (Xen.org) <t...@xen.org>; Jan Beulich
> > > > <jbeul...@suse.com>; Daniel De Graaf <dgde...@tycho.nsa.gov>
> > > > Subject: RE: [Xen-devel] [PATCH 5/7] public / x86: introduce
> > > > __HYPERCALL_iommu_op
> > > >
> > > > > From: Paul Durrant
> > > > > Sent: Monday, February 12, 2018 6:47 PM
> > > > >
> > > > > This patch introduces the boilerplate for a new hypercall to allow a
> > > > > domain to control IOMMU mappings for its own pages.
> > > > > Whilst there is duplication of code between the native and compat
> > > entry
> > > > > points which appears ripe for some form of combination, I think it is
> > > > > better to maintain the separation as-is because the compat entry
> point
> > > > > will necessarily gain complexity in subsequent patches.
> > > > >
> > > > > NOTE: This hypercall is only implemented for x86 and is currently
> > > > >       restricted by XSM to dom0 since it could be used to cause
> IOMMU
> > > > >       faults which may bring down a host.
> > > > >
> > > > > Signed-off-by: Paul Durrant <paul.durr...@citrix.com>
> > > > [...]
> > > > > +
> > > > > +
> > > > > +static bool can_control_iommu(void)
> > > > > +{
> > > > > +    struct domain *currd = current->domain;
> > > > > +
> > > > > +    /*
> > > > > +     * IOMMU mappings cannot be manipulated if:
> > > > > +     * - the IOMMU is not enabled or,
> > > > > +     * - the IOMMU is passed through or,
> > > > > +     * - shared EPT configured or,
> > > > > +     * - Xen is maintaining an identity map.
> > > >
> > > > "for dom0"
> > > >
> > > > > +     */
> > > > > +    if ( !iommu_enabled || iommu_passthrough ||
> > > > > +         iommu_use_hap_pt(currd) || need_iommu(currd) )
> > > >
> > > > I guess it's clearer to directly check iommu_dom0_strict here
> > >
> > > Well, the problem with that is that it totally ties this interface to 
> > > dom0.
> > > Whilst, in practice, that is the case at the moment (because of the xsm
> > > check) I do want to leave the potential to allow other PV domains to
> control
> > > their IOMMU mappings, if that make sense in future.
> > >
> >
> > first it's inconsistent from the comments - "Xen is maintaining
> > an identity map" which only applies to dom0.
> 
> That's not true. If I assign a PCI device to an HVM domain, for instance,
> then need_iommu() is true for that domain and indeed Xen maintains a 1:1
> BFN:GFN map for that domain.
> 
> >
> > second I'm afraid !need_iommu is not an accurate condition to represent
> > PV domain. what about iommu also enabled for future PV domains?
> >
> 
> I don't quite follow... need_iommu is a per-domain flag, set for dom0 when
> in strict mode, set for others when passing through a device. Either way, if
> Xen is maintaining the IOMMU pagetables then it is clearly unsafe for the
> domain to also be messing with them.
> 

I don't think the guest would be messing with them. Xen always maintains
the IOMMU pagetables in the way the guest expects:

1) for dom0 (w/o pvIOMMU) in strict mode, it's an MFN:MFN identity mapping
2) for dom0 (w/ pvIOMMU), it's a BFN:MFN mapping
3) for HVM (w/o virtual VT-d) with a passthrough device, it's a GFN:MFN mapping
4) for HVM (w/ virtual VT-d) with a passthrough device, it's a BFN:MFN mapping

(From the IOMMU's point of view all four categories are BFN:MFN mappings.
I deliberately separate them from the usage point of view, where 'BFN'
denotes the cases in which the guest explicitly manages a new address
space, distinct from the physical address space as the guest sees it.)

In cases 2) and 4) there is an address-space switch when the vIOMMU is
enabled.

The above is why I don't follow the assumption that "Xen is maintaining
an identity map" is identical to need_iommu.

Thanks
Kevin
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
