On Tuesday, 06.08.2019, 13:33 +0200, Christoph Hellwig wrote:
> On Tue, Aug 06, 2019 at 11:13:29AM +0200, Lucas Stach wrote:
> > Hi Christoph,
> >
> > I just found a regression where my NVMe device is no longer able to
> > set up its HMB.
> >
> > After subject commit dma_direct_alloc_pages() is no longer
> > initializing dma_handle properly when DMA_ATTR_NO_KERNEL_MAPPING is
> > set, as the function is now returning too early.
> >
> > Now this could easily be fixed by adding the phys_to_dma translation
> > to the NO_KERNEL_MAPPING code path, but I'm not sure how this
> > interacts with the memory encryption handling set up later in the
> > function, so I guess this should be looked at by someone with more
> > experience with this code than me.
>
> There is not much we can do about the memory encryption case here,

Which I would guess means we need to ignore DMA_ATTR_NO_KERNEL_MAPPING
in that case instead of dropping out early?

> as that requires a kernel address to mark the memory as unencrypted.
>
> So the obvious trivial fix is probably the right one:
>
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 59bdceea3737..c49120193309 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -135,6 +135,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
>  		if (!PageHighMem(page))
>  			arch_dma_prep_coherent(page, size);
>  		/* return the page pointer as the opaque cookie */
> +		*dma_handle = phys_to_dma(dev, page_to_phys(page));
>  		return page;
>  	}
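
For completeness, this is roughly how the DMA_ATTR_NO_KERNEL_MAPPING
branch in dma_direct_alloc_pages() would read with that one-liner
applied. A minimal sketch only: everything outside the lines quoted in
the hunk above (the enclosing condition in particular) is my
reconstruction, not verbatim kernel source:

	/*
	 * Sketch of the opaque-cookie path: no kernel mapping is set up
	 * and the caller only gets the struct page back, so the DMA
	 * address has to be filled in before returning.
	 */
	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) {
		if (!PageHighMem(page))
			arch_dma_prep_coherent(page, size);
		/* return the page pointer as the opaque cookie */
		*dma_handle = phys_to_dma(dev, page_to_phys(page));
		return page;
	}

And if the attribute really has to be ignored whenever the memory must
be marked unencrypted, this branch would presumably also need to be
skipped under that condition, which is exactly the open question above.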