On Wed, 10 Jul 2019, Lendacky, Thomas wrote:

> From: Tom Lendacky <thomas.lenda...@amd.com>
> 
> If a device doesn't support DMA to a physical address that includes the
> encryption bit (currently bit 47, so 48-bit DMA), then the DMA must
> occur to unencrypted memory. SWIOTLB is used to satisfy that requirement
> if an IOMMU is not active (i.e. not enabled, or configured in passthrough
> mode).
> 
> However, commit fafadcd16595 ("swiotlb: don't dip into swiotlb pool for
> coherent allocations") modified the coherent allocation support in SWIOTLB
> to use the DMA direct coherent allocation support. When an IOMMU is not
> active, this resulted in dma_alloc_coherent() failing for devices that
> didn't support DMA addresses that included the encryption bit.
> 
> Addressing this requires changes to the force_dma_unencrypted() function
> in kernel/dma/direct.c. Since the function is now non-trivial and SME/SEV
> specific, update the DMA direct support to add an arch override for the
> force_dma_unencrypted() function. The arch override is selected when
> CONFIG_AMD_MEM_ENCRYPT is set. The arch override function resides in the
> arch/x86/mm/mem_encrypt.c file and forces unencrypted DMA when either SEV
> is active or SME is active and the device does not support DMA to physical
> addresses that include the encryption bit.
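
(Purely for illustration, the override described above amounts to roughly
the following sketch. sev_active(), sme_active() and sme_me_mask are the
existing x86 mem_encrypt helpers; the exact mask comparison below is my
reading of the description, not code quoted from the patch.)

	/* arch/x86/mm/mem_encrypt.c (sketch) */
	bool force_dma_unencrypted(struct device *dev)
	{
		/* SEV guests must always DMA to unencrypted memory. */
		if (sev_active())
			return true;

		/*
		 * With SME, unencrypted DMA is only needed when the device
		 * cannot address the encryption bit, i.e. its usable DMA
		 * mask is smaller than the position of sme_me_mask.
		 */
		if (sme_active()) {
			u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
			u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
							dev->bus_dma_mask);

			if (dma_dev_mask <= dma_enc_mask)
				return true;
		}

		return false;
	}
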
> 
> Fixes: fafadcd16595 ("swiotlb: don't dip into swiotlb pool for coherent allocations")
> Suggested-by: Christoph Hellwig <h...@lst.de>
> Signed-off-by: Tom Lendacky <thomas.lenda...@amd.com>
> ---
> 
> Based on tree git://git.infradead.org/users/hch/dma-mapping.git for-next
> 
>  arch/x86/Kconfig          |  1 +
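
(The one-line arch/x86/Kconfig change in the diffstat is presumably just a
select of the new arch-override symbol under CONFIG_AMD_MEM_ENCRYPT. On the
dma-direct side the hook would then look roughly like the sketch below; the
symbol name ARCH_HAS_FORCE_DMA_UNENCRYPTED is inferred from the description,
not quoted from the patch.)

	/* include/linux/dma-direct.h (sketch) */
	#ifdef CONFIG_ARCH_HAS_FORCE_DMA_UNENCRYPTED
	bool force_dma_unencrypted(struct device *dev);
	#else
	static inline bool force_dma_unencrypted(struct device *dev)
	{
		/* No arch override: DMA can stay encrypted. */
		return false;
	}
	#endif /* CONFIG_ARCH_HAS_FORCE_DMA_UNENCRYPTED */
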

For the x86 parts:

Acked-by: Thomas Gleixner <t...@linutronix.de>
