On Thu, Jun 11, 2020 at 12:20:32PM -0700, David Rientjes wrote:
> When a coherent mapping is created in dma_direct_alloc_pages(), it needs
> to be decrypted if the device requires unencrypted DMA before returning.
> 
> Fixes: 3acac065508f ("dma-mapping: merge the generic remapping helpers
> into dma-direct")
> Cc: sta...@vger.kernel.org # 5.5+
> Signed-off-by: David Rientjes <rient...@google.com>
> ---
>  kernel/dma/direct.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -195,6 +195,12 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
>  					__builtin_return_address(0));
>  		if (!ret)
>  			goto out_free_pages;
> +		if (force_dma_unencrypted(dev)) {
> +			err = set_memory_decrypted((unsigned long)ret,
> +						   1 << get_order(size));
> +			if (err)
> +				goto out_free_pages;
> +		}
Note that ret is a vmalloc address here. Does set_memory_decrypted() work for that case? Again, this should be mostly theoretical, so I'm not too worried for now.
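If it turned out that set_memory_decrypted() did not handle a vmalloc alias, one fallback would be to walk the backing pages and change the attribute through their direct-map addresses instead. The sketch below is purely illustrative and untested (the helper name decrypt_vmalloc_range() is made up, and whether the encryption attribute of the vmalloc alias itself would also need updating is exactly the open question here):

```c
/*
 * Illustrative sketch only, not a tested patch: decrypt the pages
 * backing a vmalloc'ed range via their direct-map addresses, one
 * page at a time.
 */
static int decrypt_vmalloc_range(void *vaddr, size_t size)
{
	unsigned long addr = (unsigned long)vaddr;
	unsigned long end = addr + PAGE_ALIGN(size);
	int err;

	for (; addr < end; addr += PAGE_SIZE) {
		struct page *page = vmalloc_to_page((void *)addr);

		if (!page)
			return -EINVAL;
		/* change the attribute on the direct-map alias of this page */
		err = set_memory_decrypted((unsigned long)page_address(page), 1);
		if (err)
			return err;
	}
	return 0;
}
```

Error unwinding (re-encrypting pages already converted before a mid-loop failure) is omitted for brevity, but a real version would need it before freeing the pages back to the allocator.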