On 8/12/19 2:56 PM, Roger Pau Monné wrote:
> On Mon, Aug 12, 2019 at 02:17:53PM +0100, George Dunlap wrote:
>> On 8/2/19 10:22 AM, Roger Pau Monne wrote:
>>> Switch rmrr_identity_mapping to use iommu_{un}map in order to
>>> establish RMRR mappings for PV domains, like it's done in
>>> arch_iommu_hwdom_init. This solves the issue of a PV hardware domain
>>> not getting RMRR mappings because {set/clear}_identity_p2m_entry was
>>> not properly updating the iommu page tables.
>>
>> Sorry, I think this description is somewhat backwards: you're saying
>> what you're doing first, and then saying what the problematic behavior
>> was, but not actually saying what was causing the problematic behavior.
>>
>> Why was {set,clear}_identity_p2m not updating the page tables?
>>
>> I agree with Jan, that figuring that out is a prerequisite for any kind
>> of fix: if `need_iommu_pt_sync()` is false at that point for the
>> hardware domain, then there's a bigger problem than RMRRs not being
>> adjusted properly.
> 
> need_iommu_pt_sync is indeed false for a PV hardware domain not
> running in strict mode, see:
> 
> http://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/drivers/passthrough/iommu.c;h=f8c3bf53bd1793df93d7ddea6564dc929f40c156;hb=HEAD#l192
> https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg01547.html
> 
> This is fine for a non-strict PV hardware domain, since it has all the
> host memory (minus memory used by Xen) mapped in the iommu page tables
> except for RMRR regions not marked as reserved in the e820 memory map,
> which are added using set_identity_p2m_entry.
> 
> The issue here is that this patch alone doesn't fix the issue for the
> reporter, and there seems to be an additional flush required in order
> to get the issue solved on his end:
> 
> https://lists.xenproject.org/archives/html/xen-devel/2019-08/msg00526.html
> 
> My theory is that the system Roman is using is booting with DMA
> remapping enabled in the iommu, and somehow the call to
> iommu_flush_all in intel_iommu_hwdom_init doesn't seem to work
> properly, while calling iommu_iotlb_flush_all does seem to do the
> right thing. I'm still waiting for Roman to come back with the result
> of my last debug patches in order to figure out whether my hypothesis
> above is true.
> 
> This however still won't explain the weird behaviour of
> iommu_flush_all, and further debugging will be required.
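
(For context, my reading of the above is that the non-translated branch
of set_identity_p2m_entry looks roughly like the sketch below -- a
paraphrase of the thread, not the exact source -- so for a non-strict
PV hardware domain the early return means the RMRR region never reaches
the IOMMU page tables:)

    /* Sketch of the PV (non-translated) path in set_identity_p2m_entry;
     * gfn_l is the guest frame being identity-mapped. */
    if ( !paging_mode_translate(d) )
    {
        if ( !need_iommu_pt_sync(d) )
            return 0; /* non-strict PV dom0: IOMMU page tables left alone */
        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l), PAGE_ORDER_4K,
                                IOMMUF_readable | IOMMUF_writable);
    }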

OK; so this patch has four effects, it looks like:

1. iommu_legacy_[un]map -> iommu_[un]map
2. Move iommu ops out of {set,clear}_identity_p2m for PV guests;
open-code them in rmrr_identity_mapping
3. For non-translated guests, do the operation unconditionally
4. Add a flush
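
If I'm reading the description right, the mapping side for a PV domain
then ends up looking roughly like the sketch below (my paraphrase of
effects #1, #3 and #4; exact flags, loop bounds and error handling are
guesses, not the actual patch):

    /* Sketch: map the RMRR 1:1 straight into the IOMMU page tables for a
     * non-translated (PV) domain, accumulating flush flags, and flush
     * explicitly at the end. */
    unsigned int flush_flags = 0;
    unsigned long pfn;

    for ( pfn = PFN_DOWN(rmrr->base_address);
          pfn < PFN_UP(rmrr->end_address); pfn++ )
    {
        int rc = iommu_map(d, _dfn(pfn), _mfn(pfn), PAGE_ORDER_4K,
                           IOMMUF_readable | IOMMUF_writable, &flush_flags);

        if ( rc )
            return rc;
    }

    return iommu_iotlb_flush_all(d, flush_flags);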

Regarding #3, the previous patch changed it from "if
need_iommu_pt_sync" to "if has_iommu_pt"; this one changes it to
"always".  Is that what you intended?

I don't really see the point of #2: from the device's perspective, the
"physmap" is the IOMMU for PV guests, and IOMMU+p2m for HVM guests; it
makes sense to have a single place to call for either type of guest,
rather than open-coding the distinction at every call site.
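
I.e. I'd rather keep a single helper along these lines (purely a
sketch; iommu_identity_map_pv() and p2m_identity_map_hvm() are made-up
names to show the shape, not existing functions):

    /* Sketch: one entry point that hides where the device-visible mapping
     * lives -- the IOMMU page tables for PV, the p2m (and the IOMMU via
     * the usual p2m machinery) for HVM. */
    int set_identity_p2m_entry(struct domain *d, unsigned long gfn,
                               p2m_access_t p2ma, unsigned int flag)
    {
        if ( !paging_mode_translate(d) )
            return iommu_identity_map_pv(d, gfn);        /* made-up helper */

        return p2m_identity_map_hvm(d, gfn, p2ma, flag); /* made-up helper */
    }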

Can't comment on the other aspects.

 -George

