On Tue, 3 Dec 2019 10:44:45 +0800
Lu Baolu <baolu...@linux.intel.com> wrote:

> Hi Jacob,
> 
> On 12/3/19 4:02 AM, Jacob Pan wrote:
> > On Fri, 22 Nov 2019 11:04:44 +0800
> > Lu Baolu<baolu...@linux.intel.com>  wrote:
> >   
> >> Intel VT-d 3.0 introduces more caches and interfaces for software
> >> to flush when it runs in the scalable mode. Currently various
> >> cache flush helpers are scattered around. This consolidates them
> >> by putting them in the existing iommu_flush structure.
> >>
> >> /* struct iommu_flush - Intel IOMMU cache invalidation ops
> >>  *
> >>  * @cc_inv: invalidate context cache
> >>  * @iotlb_inv: invalidate IOTLB and paging structure caches when
> >>  *             software has changed second-level tables
> >>  * @p_iotlb_inv: invalidate IOTLB and paging structure caches when
> >>  *               software has changed first-level tables
> >>  * @pc_inv: invalidate pasid cache
> >>  * @dev_tlb_inv: invalidate cached mappings used by
> >>  *               requests-without-PASID from the Device-TLB on an
> >>  *               endpoint device
> >>  * @p_dev_tlb_inv: invalidate cached mappings used by
> >>  *                 requests-with-PASID from the Device-TLB on an
> >>  *                 endpoint device
> >>  */
> >> struct iommu_flush {
> >>         void (*cc_inv)(struct intel_iommu *iommu, u16 did,
> >>                        u16 sid, u8 fm, u64 type);
> >>         void (*iotlb_inv)(struct intel_iommu *iommu, u16 did, u64 addr,
> >>                           unsigned int size_order, u64 type);
> >>         void (*p_iotlb_inv)(struct intel_iommu *iommu, u16 did, u32 pasid,
> >>                             u64 addr, unsigned long npages, bool ih);
> >>         void (*pc_inv)(struct intel_iommu *iommu, u16 did, u32 pasid,
> >>                        u64 granu);
> >>         void (*dev_tlb_inv)(struct intel_iommu *iommu, u16 sid, u16 pfsid,
> >>                             u16 qdep, u64 addr, unsigned int mask);
> >>         void (*p_dev_tlb_inv)(struct intel_iommu *iommu, u16 sid, u16 pfsid,
> >>                               u32 pasid, u16 qdep, u64 addr,
> >>                               unsigned long npages);
> >> };
> >>
> >> The name of each cache flush op is taken from spec section 6.5,
> >> so that it is easy to look them up in the spec.
> >>  
> > Nice consolidation. For nested SVM, I also introduced cache flush
> > helpers as needed.
> > https://lkml.org/lkml/2019/10/24/857
> > 
> > Should I wait for yours to be merged, or do you want to extend this
> > consolidation to cover the SVA/SVM cache flushes? I expect to send
> > my v8 shortly. 
> 
> Please base your v8 patch on this series, so it gets more chances
> for testing.
> 
Sounds good.

> I will queue this patch series for internal testing after 5.5-rc1,
> and if everything goes well, I will forward it to Joerg around rc4
> for linux-next.
> 
> Best regards,
> baolu

[Jacob Pan]
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
