On Thu, Sep 30, 2021 at 08:49:03AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <j...@nvidia.com>
> > Sent: Thursday, September 23, 2021 8:22 PM
> > 
> > > > These are different things and need different bits. Since the ARM path
> > > > has a lot more code supporting it, I'd suggest Intel should change
> > > > their code to use IOMMU_BLOCK_NO_SNOOP and abandon
> > IOMMU_CACHE.
> > >
> > > I didn't fully get this point. The end result is same, i.e. making the DMA
> > > cache-coherent when IOMMU_CACHE is set. Or if you help define the
> > > behavior of IOMMU_CACHE, what will you define now?
> > 
> > It is clearly specifying how the kernel API works:
> > 
> >  !IOMMU_CACHE
> >    must call arch cache flushers
> >  IOMMU_CACHE -
> >    do not call arch cache flushers
> >  IOMMU_CACHE|IOMMU_BLOCK_NO_SNOOP -
> >    do not call arch cache flushers, and ignore the no-snoop bit.
> 
> Who will set IOMMU_BLOCK_NO_SNOOP?

Basically only qemu due to specialized x86 hypervisor knowledge.

The only purpose of this attribute is to support a specific
virtualization use case where a whole bunch of stuff is broken
together:
 - the cache maintenance instructions are not available to a guest
 - the guest isn't aware that the instructions don't work and tells
   the device to issue no-snoop TLPs
 - the device ignores the 'disable no-snoop' flag in the PCIe config
   space

Thus things become broken.

Jason
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
