On Fri, 20 May 2022, Robin Murphy wrote:
> The original x86 sev_alloc() only called set_memory_decrypted() on
> memory returned by alloc_pages_node(), so the page order calculation
> fell out of that logic. However, the common dma-direct code has several
> potential allocators, not all of which ar
On Tue, 19 Oct 2021, Christoph Hellwig wrote:
> Split the code for DMA_ATTR_NO_KERNEL_MAPPING allocations into a separate
> helper to make dma_direct_alloc a little more readable.
>
> Signed-off-by: Christoph Hellwig
Acked-by: David Rientjes
(I think my name got mangled in your To: field on t
On Tue, 19 Oct 2021, Christoph Hellwig wrote:
> We must never let unencrypted memory go back into the general page pool.
> So if we fail to set it back to encrypted when freeing DMA memory, leak
> the memory instead and warn the user.
>
> Signed-off-by: Christoph Hellwig
> ---
> kernel/dma/direct.c |
On Tue, 19 Oct 2021, Christoph Hellwig wrote:
> Factor out helpers that make dealing with memory encryption a little less
> cumbersome.
>
> Signed-off-by: Christoph Hellwig
> ---
> kernel/dma/direct.c | 55 +
> 1 file changed, 25 insertions(+), 30 dele
On Sun, 7 Feb 2021, Song Bao Hua (Barry Song) wrote:
> The NUMA balancer is just one of many reasons for page migration. Even one
> simple alloc_pages() can cause memory migration in just a single NUMA
> node or UMA system.
>
> The other reasons for page migration include but are not limited to:
> * me
On Mon, 3 Aug 2020, Christoph Hellwig wrote:
> On Sun, Aug 02, 2020 at 09:14:41PM -0700, David Rientjes wrote:
> > Christoph: since we have atomic DMA coherent pools in 5.8 now, recall our
> > previous discussions, including Greg KH, regarding backports to stable
> > trees (we are interested in
On Sun, 2 Aug 2020, Amit Pundir wrote:
> > > > Hi, I found the problematic memory region. It was a memory
> > > > chunk reserved/removed in the downstream tree but was
> > > > seemingly reserved upstream for different drivers. I failed to
> > > > calculate the length of the total region reserved d
On Fri, 31 Jul 2020, David Rientjes wrote:
> > > Hi Nicolas, Christoph,
> > >
> > > Just out of curiosity, I'm wondering if we can restore the earlier
> > > behaviour and make DMA atomic allocation configured thru platform
> > > specific device tree instead?
> > >
> > > Or if you can allow a mor
On Fri, 31 Jul 2020, Christoph Hellwig wrote:
> > Hi Nicolas, Christoph,
> >
> > Just out of curiosity, I'm wondering if we can restore the earlier
> > behaviour and make DMA atomic allocation configured thru platform
> > specific device tree instead?
> >
> > Or if you can allow a more hackish a
On Thu, 9 Jul 2020, Nicolas Saenz Julienne wrote:
> The function is only used once and can be simplified to a one-liner.
>
> Signed-off-by: Nicolas Saenz Julienne
I'll leave this one to Christoph to decide on. One thing I really liked
about hacking around in kernel/dma is the coding style, it
On Wed, 8 Jul 2020, Christoph Hellwig wrote:
> On Wed, Jul 08, 2020 at 06:00:35PM +0200, Nicolas Saenz Julienne wrote:
> > On Wed, 2020-07-08 at 17:35 +0200, Christoph Hellwig wrote:
> > > On Tue, Jul 07, 2020 at 02:28:04PM +0200, Nicolas Saenz Julienne wrote:
> > > > When allocating atomic DMA me
On Wed, 8 Jul 2020, Nicolas Saenz Julienne wrote:
> There is no guarantee about CMA's placement, so allocating a zone-specific
> atomic pool from CMA might return memory from a completely different
> memory zone. So stop using it.
>
> Fixes: c84dc6e68a1d ("dma-pool: add additional coherent pools to
On Sun, 21 Jun 2020, Guenter Roeck wrote:
> > When a DMA coherent pool is depleted, allocation failures may or may not
> > get reported in the kernel log depending on the allocator.
> >
> > The admin does have a workaround, however, by using coherent_pool= on the
> > kernel command line.
> >
> >
When a DMA coherent pool is depleted, allocation failures may or may not
get reported in the kernel log depending on the allocator.
The admin does have a workaround, however, by using coherent_pool= on the
kernel command line.
Provide some guidance on the failure and a recommended minimum size fo
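
For illustration only (this is not part of the message above): the workaround
referred to is the existing coherent_pool= kernel command line parameter,
which overrides the default pool size at boot, for example:

    coherent_pool=256K

A suitable value is workload dependent; 256K here is only an example, not a
recommendation from the thread.
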
On Sun, 21 Jun 2020, Guenter Roeck wrote:
> >> This patch results in a boot failure in some of my powerpc boot tests,
> >> specifically those testing boots from mptsas1068 devices. Error message:
> >>
> >> mptsas 0000:00:02.0: enabling device (0000 -> 0002)
> >> mptbase: ioc0: Initiating bringup
>
On Fri, 19 Jun 2020, Roman Gushchin wrote:
> > > [ 40.287524] BUG: unable to handle page fault for address:
> > > a77b833df000
> > > [ 40.287529] #PF: supervisor write access in kernel mode
> > > [ 40.287531] #PF: error_code(0x000b) - reserved bit violation
> > > [ 40.287532] PGD 40d1
On Fri, 19 Jun 2020, Roman Gushchin wrote:
> [ 40.287524] BUG: unable to handle page fault for address: a77b833df000
> [ 40.287529] #PF: supervisor write access in kernel mode
> [ 40.287531] #PF: error_code(0x000b) - reserved bit violation
> [ 40.287532] PGD 40d14e067 P4D 40d14e067 PUD
On Thu, 18 Jun 2020, Christoph Hellwig wrote:
> The dma coherent pool code needs genalloc. Move the select over
> from DMA_REMAP, which doesn't actually need it.
>
> Fixes: dbed452a078d ("dma-pool: decouple DMA_REMAP from DMA_COHERENT_POOL")
> Reported-by: kernel test robot
Acked-by: David Rie
dma_alloc_contiguous() does size >> PAGE_SHIFT and set_memory_decrypted()
works at page granularity. It's necessary to page align the allocation
size in dma_direct_alloc_pages() for consistent behavior.
This also fixes an issue when arch_dma_prep_coherent() is called on an
unaligned allocation si
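
A minimal sketch of the idea (not the literal patch): align the request up
front so that the page count derived from size >> PAGE_SHIFT and the page
count passed to set_memory_decrypted() describe the same region.

    /* illustrative fragment, early in dma_direct_alloc_pages() */
    size = PAGE_ALIGN(size);
    /* ... later, both of these now refer to the same whole pages ... */
    nr_pages = size >> PAGE_SHIFT;
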
When a coherent mapping is created in dma_direct_alloc_pages(), it needs
to be decrypted if the device requires unencrypted DMA before returning.
Fixes: 3acac065508f ("dma-mapping: merge the generic remapping helpers
into dma-direct")
Cc: stable@vger.kernel.org # 5.5+
Signed-off-by: David Rientjes
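
A hedged sketch of what "decrypted before returning" looks like in code; the
names follow kernel/dma/direct.c conventions but this is not the literal fix:

    /* fragment: after allocating the pages backing the coherent mapping */
    if (force_dma_unencrypted(dev)) {
            err = set_memory_decrypted((unsigned long)page_address(page),
                                       PAGE_ALIGN(size) >> PAGE_SHIFT);
            if (err)
                    goto out_free_pages;
    }
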
If arch_dma_set_uncached() fails after memory has been decrypted, it needs
to be re-encrypted before freeing.
Fixes: fa7e2247c572 ("dma-direct: make uncached_kernel_address more
general")
Cc: stable@vger.kernel.org # 5.7
Signed-off-by: David Rientjes
---
kernel/dma/direct.c | 6 +-
1 file ch
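
Sketch of the error path being described, with illustrative variable names
(not the literal patch):

    /* fragment: the uncached remap failed after the pages were decrypted */
    ret = arch_dma_set_uncached(ret, size);
    if (IS_ERR(ret)) {
            if (force_dma_unencrypted(dev))
                    set_memory_encrypted((unsigned long)page_address(page),
                                         PAGE_ALIGN(size) >> PAGE_SHIFT);
            goto out_free_pages;
    }
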
While debugging recently reported issues concerning DMA allocation
practices when CONFIG_AMD_MEM_ENCRYPT is enabled, some curiosities arose
when looking at dma_direct_alloc_pages() behavior.
Fix these up. These are likely all stable material, so proposing for 5.8.
---
kernel/dma/direct.c | 42 ++
__change_page_attr() can fail, which will cause set_memory_encrypted() and
set_memory_decrypted() to return non-zero.
If the device requires unencrypted DMA memory and decryption fails, simply
free the memory and fail.
If attempting to re-encrypt in the failure path and that encryption fails,
ther
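
A condensed, illustrative sketch of the failure handling described above
(labels and names mirror kernel/dma/direct.c style, but this is not the
actual patch):

    if (force_dma_unencrypted(dev) &&
        set_memory_decrypted((unsigned long)ret, nr_pages))
            goto out_free_pages;            /* decryption failed: just free */

    /* ... on a later failure, unwind via: ... */

    out_encrypt_pages:
            if (force_dma_unencrypted(dev) &&
                set_memory_encrypted((unsigned long)ret, nr_pages))
                    return NULL;            /* cannot re-encrypt: leak, never free */
    out_free_pages:
            __free_pages(page, get_order(size));
            return NULL;
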
On Mon, 8 Jun 2020, Geert Uytterhoeven wrote:
> On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA
> memory pools are much larger than intended (e.g. 2 MiB instead of 128
> KiB on a 256 MiB system).
>
> Fix this by correcting the calculation of the number of GiBs of RAM in
> th
On Fri, 17 Apr 2020, Christoph Hellwig wrote:
> So modulo a few comments that I can fix up myself this looks good. Unless
> you want to resend for some reason I'm ready to pick this up once I open
> the dma-mapping tree after -rc2.
>
Yes, please do, and thanks to both you and Thomas for the gui
When a device requires unencrypted memory and the context does not allow
blocking, memory must be returned from the atomic coherent pools.
This avoids the remap when CONFIG_DMA_DIRECT_REMAP is not enabled and the
config only requires CONFIG_DMA_COHERENT_POOL. This will be used for
CONFIG_AMD_MEM_
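
The gating condition amounts to something like the following sketch (a hunk
along these lines is quoted later in this thread); the pool helper's name and
signature here reflect the code as merged around that time and may differ:

    /* fragment from the top of a dma_direct_alloc()-style path */
    if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
        force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp)) {
            /* take memory that was already decrypted at pool creation */
            ret = dma_alloc_from_pool(dev, size, &page, gfp);
            if (!ret)
                    return NULL;
            goto done;
    }
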
DMA atomic pools will be needed beyond only CONFIG_DMA_DIRECT_REMAP so
separate them out into their own file.
This also adds a new Kconfig option that can be subsequently used for
options, such as CONFIG_AMD_MEM_ENCRYPT, that will utilize the coherent
pools but do not have a dependency on direct r
When an atomic pool becomes fully depleted because it is now relied upon
for all non-blocking allocations through the DMA API, allow background
expansion of each pool by a kworker.
When an atomic pool has less than the default size of memory left, kick
off a kworker to dynamically expand the pool
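
A minimal sketch of the trigger, assuming illustrative identifiers
(atomic_pool_size, atomic_pool_work, atomic_pool_resize are stand-ins, not
necessarily the names used in the patch):

    #include <linux/genalloc.h>
    #include <linux/workqueue.h>

    static size_t atomic_pool_size;             /* boot-time default size */

    static void atomic_pool_resize(struct work_struct *work)
    {
            /* grow each depleted pool back up to atomic_pool_size (not shown) */
    }
    static DECLARE_WORK(atomic_pool_work, atomic_pool_resize);

    /* after a pool allocation: expand in the background if running low */
    static void atomic_pool_kick_expand(struct gen_pool *pool)
    {
            if (gen_pool_avail(pool) < atomic_pool_size)
                    schedule_work(&atomic_pool_work);
    }
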
When CONFIG_AMD_MEM_ENCRYPT is enabled and a device requires unencrypted
DMA, all non-blocking allocations must originate from the atomic DMA
coherent pools.
Select CONFIG_DMA_COHERENT_POOL for CONFIG_AMD_MEM_ENCRYPT.
Signed-off-by: David Rientjes
---
arch/x86/Kconfig | 1 +
1 file changed, 1 i
set_memory_decrypted() may block, so it is not possible to do non-blocking
allocations through the DMA API for devices that require unencrypted
memory.
The solution is to expand the atomic DMA pools for the various possible
gfp requirements as a means to prevent an unnecessary depletion of lowmem.
When AMD memory encryption is enabled, some devices may use more than
256KB/sec from the atomic pools. It would be more appropriate to scale
the default size based on memory capacity unless the coherent_pool
option is used on the kernel command line.
This provides a slight optimization on initial
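
The scaling works out to roughly 128 KiB of pool per GiB of RAM; a hedged
sketch of the calculation (constants and clamping are illustrative):

    #include <linux/mm.h>
    #include <linux/sizes.h>

    static unsigned long dma_atomic_pool_default_size(void)
    {
            /* 128 KiB per GiB of system memory, never less than 128 KiB */
            unsigned long pages = totalram_pages() / (SZ_1G / SZ_128K);

            pages = min_t(unsigned long, pages, MAX_ORDER_NR_PAGES);
            return max_t(unsigned long, pages << PAGE_SHIFT, SZ_128K);
    }
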
The atomic DMA pools can dynamically expand based on non-blocking
allocations that need to use it.
Export the sizes of each of these pools, in bytes, through debugfs for
measurement.
Suggested-by: Christoph Hellwig
Signed-off-by: David Rientjes
---
kernel/dma/pool.c | 41 ++
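
A sketch of what such a debugfs export might look like; the directory and
file names are illustrative rather than taken from the patch:

    #include <linux/debugfs.h>
    #include <linux/init.h>

    static unsigned long pool_size_dma;         /* updated as each pool grows */
    static unsigned long pool_size_dma32;
    static unsigned long pool_size_kernel;

    static int __init dma_atomic_pool_debugfs_init(void)
    {
            struct dentry *root = debugfs_create_dir("dma_pools", NULL);

            debugfs_create_ulong("pool_size_dma", 0400, root, &pool_size_dma);
            debugfs_create_ulong("pool_size_dma32", 0400, root, &pool_size_dma32);
            debugfs_create_ulong("pool_size_kernel", 0400, root, &pool_size_kernel);
            return 0;
    }
    late_initcall(dma_atomic_pool_debugfs_init);
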
The single atomic pool is allocated from the lowest zone possible since
it is guaranteed to be applicable for any DMA allocation.
Devices may allocate through the DMA API but not have a strict reliance
on GFP_DMA memory. Since the atomic pool will be used for all
non-blockable allocations, return
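
In rough terms, the pool is chosen from the device's coherent mask instead of
always taking the lowest zone. A simplified sketch, assuming three
illustrative per-zone pools (the real boundary checks are more careful):

    #include <linux/device.h>
    #include <linux/dma-mapping.h>
    #include <linux/genalloc.h>

    static struct gen_pool *atomic_pool_dma, *atomic_pool_dma32, *atomic_pool_kernel;

    static struct gen_pool *dma_guess_pool_for(struct device *dev)
    {
            u64 mask = dev->coherent_dma_mask;

            if (IS_ENABLED(CONFIG_ZONE_DMA) && mask < DMA_BIT_MASK(32))
                    return atomic_pool_dma;         /* needs ZONE_DMA memory */
            if (IS_ENABLED(CONFIG_ZONE_DMA32) && mask < DMA_BIT_MASK(64))
                    return atomic_pool_dma32;       /* 32-bit addressable */
            return atomic_pool_kernel;              /* no zone restriction */
    }
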
On Tue, 14 Apr 2020, Christoph Hellwig wrote:
> > I'll rely on Christoph to determine whether it makes sense to add some
> > periodic scavening of the atomic pools, whether that's needed for this to
> > be merged, or wheter we should enforce some maximum pool size.
>
> I don't really see the po
On Tue, 14 Apr 2020, Christoph Hellwig wrote:
> > + /*
> > +  * Unencrypted memory must come directly from DMA atomic pools if
> > +  * blocking is not allowed.
> > +  */
> > + if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
> > +     force_dma_unencrypted(dev) && !gfpflags_allow_blocking(
On Thu, 9 Apr 2020, Tom Lendacky wrote:
> > When a device required unencrypted memory and the context does not allow
>
> required => requires
>
Fixed, thanks.
> > blocking, memory must be returned from the atomic coherent pools.
> >
> > This avoids the remap when CONFIG_DMA_DIRECT_REMAP is no
On Fri, 10 Apr 2020, Hillf Danton wrote:
>
> On Wed, 8 Apr 2020 14:21:06 -0700 (PDT) David Rientjes wrote:
> >
> > When an atomic pool becomes fully depleted because it is now relied upon
> > for all non-blocking allocations through the DMA API, allow background
> > expansion of each pool by a k
set_memory_decrypted() may block, so it is not possible to do non-blocking
allocations through the DMA API for devices that require unencrypted
memory.
The solution is to expand the atomic DMA pools for the various possible
gfp requirements as a means to prevent an unnecessary depletion of lowmem.
When AMD memory encryption is enabled, some devices may use more than
256KB/sec from the atomic pools. It would be more appropriate to scale
the default size based on memory capacity unless the coherent_pool
option is used on the kernel command line.
This provides a slight optimization on initial
When an atomic pool becomes fully depleted because it is now relied upon
for all non-blocking allocations through the DMA API, allow background
expansion of each pool by a kworker.
When an atomic pool has less than the default size of memory left, kick
off a kworker to dynamically expand the pool
When CONFIG_AMD_MEM_ENCRYPT is enabled and a device requires unencrypted
DMA, all non-blocking allocations must originate from the atomic DMA
coherent pools.
Select CONFIG_DMA_COHERENT_POOL for CONFIG_AMD_MEM_ENCRYPT.
Signed-off-by: David Rientjes
---
arch/x86/Kconfig | 1 +
1 file changed, 1 i
The single atomic pool is allocated from the lowest zone possible since
it is guaranteed to be applicable for any DMA allocation.
Devices may allocate through the DMA API but not have a strict reliance
on GFP_DMA memory. Since the atomic pool will be used for all
non-blockable allocations, return
DMA atomic pools will be needed beyond only CONFIG_DMA_DIRECT_REMAP so
separate them out into their own file.
This also adds a new Kconfig option that can be subsequently used for
options, such as CONFIG_AMD_MEM_ENCRYPT, that will utilize the coherent
pools but do not have a dependency on direct r
When a device requires unencrypted memory and the context does not allow
blocking, memory must be returned from the atomic coherent pools.
This avoids the remap when CONFIG_DMA_DIRECT_REMAP is not enabled and the
config only requires CONFIG_DMA_COHERENT_POOL. This will be used for
CONFIG_AMD_MEM_
On Thu, 5 Mar 2020, Christoph Hellwig wrote:
> On Sun, Mar 01, 2020 at 04:05:23PM -0800, David Rientjes wrote:
> > When AMD memory encryption is enabled, all non-blocking DMA allocations
> > must originate from the atomic pools depending on the device and the gfp
> > mask of the allocation.
> >
>
On Sun, 1 Mar 2020, David Rientjes wrote:
> When an atomic pool becomes fully depleted because it is now relied upon
> for all non-blocking allocations through the DMA API, allow background
> expansion of each pool by a kworker.
>
> When an atomic pool has less than the default size of memory lef
When allocating non-blockable memory, determine the optimal gfp mask of
the device and use the appropriate atomic pool.
The coherent DMA mask will remain the same between allocation and free
and, thus, memory will be freed to the same atomic pool it was allocated
from.
Signed-off-by: David Rien
When an atomic pool becomes fully depleted because it is now relied upon
for all non-blocking allocations through the DMA API, allow background
expansion of each pool by a kworker.
When an atomic pool has less than the default size of memory left, kick
off a kworker to dynamically expand the pool
When AMD memory encryption is enabled, some devices may use more than
256KB/sec from the atomic pools. Double the default size to make the
original size and expansion more appropriate.
This provides a slight optimization on initial expansion and is deemed
appropriate for all configs with CONFIG_
When AMD memory encryption is enabled, all non-blocking DMA allocations
must originate from the atomic pools depending on the device and the gfp
mask of the allocation.
Keep all memory in these pools unencrypted.
Signed-off-by: David Rientjes
---
arch/x86/Kconfig| 1 +
kernel/dma/direct.c |
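
Keeping the pools unencrypted means decryption happens once, when pages are
added to a pool, rather than at GFP_ATOMIC allocation time. A hedged sketch
of that step (fragment, illustrative variable names):

    /* inside the pool-expansion path, after allocating 'page' of 'order' */
    ret = set_memory_decrypted((unsigned long)page_to_virt(page), 1 << order);
    if (ret)
            goto free_page;
    ret = gen_pool_add_virt(pool, (unsigned long)page_to_virt(page),
                            page_to_phys(page), pool_size, NUMA_NO_NODE);
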
set_memory_decrypted() may block, so it is not possible to do non-blocking
allocations through the DMA API for devices that require unencrypted
memory.
The solution is to expand the atomic DMA pools for the various possible
gfp requirements as a means to prevent an unnecessary depletion of lowmem.
The single atomic pool is allocated from the lowest zone possible since
it is guaranteed to be applicable for any DMA allocation.
Devices may allocate through the DMA API but not have a strict reliance
on GFP_DMA memory. Since the atomic pool will be used for all
non-blockable allocations, return
This augments the dma_{alloc,free}_from_pool() functions with a pointer
to the struct device of the allocation. This introduces no functional
change and will be used later to determine the optimal gfp mask to
allocate memory from.
dma_in_atomic_pool() is not used outside kernel/dma/remap.c, so r
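
The interface change amounts to threading the device through the pool helpers
so the pool matching its coherent mask can be picked; roughly (a sketch of
the prototypes, which may not match the final patch exactly):

    void *dma_alloc_from_pool(struct device *dev, size_t size,
                              struct page **ret_page, gfp_t flags);
    bool dma_free_from_pool(struct device *dev, void *start, size_t size);
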
On Fri, 17 Jan 2020, Tom Lendacky wrote:
> On 12/31/19 7:54 PM, David Rientjes wrote:
> > Christoph, Thomas, is something like this (without the diagnostic
> > information included in this patch) acceptable for these allocations?
> > Adding expansion support when the pool is half depleted wouldn
On Thu, 9 Jan 2020, Christoph Hellwig wrote:
> > I'll rely on Thomas to chime in if this doesn't make sense for the SEV
> > usecase.
> >
> > I think the sizing of the single atomic pool needs to be determined. Our
> > peak usage that we have measured from NVMe is ~1.4MB and atomic_pool is
> >
On Tue, 7 Jan 2020, Christoph Hellwig wrote:
> > On 01/01/2020 1:54 am, David Rientjes via iommu wrote:
> >> Christoph, Thomas, is something like this (without the diagnostic
> >> information included in this patch) acceptable for these allocations?
> >> Adding e
Christoph, Thomas, is something like this (without the diagnostic
information included in this patch) acceptable for these allocations?
Adding expansion support when the pool is half depleted wouldn't be *that*
hard.
Or are there alternatives we should consider? Thanks!
When AMD SEV is ena
On Thu, 12 Dec 2019, David Rientjes wrote:
> Since all DMA must be unencrypted in this case, what happens if all
> dma_direct_alloc_pages() calls go through the DMA pool in
> kernel/dma/remap.c when force_dma_unencrypted(dev) == true since
> __PAGE_ENC is cleared for these ptes? (Ignoring for
On Thu, 12 Dec 2019, David Rientjes wrote:
> Since all DMA must be unencrypted in this case, what happens if all
> dma_direct_alloc_pages() calls go through the DMA pool in
> kernel/dma/remap.c when force_dma_unencrypted(dev) == true since
> __PAGE_ENC is cleared for these ptes? (Ignoring for
On Thu, 28 Nov 2019, Christoph Hellwig wrote:
> > So we're left with making dma_pool_alloc(GFP_ATOMIC) actually be atomic
> > even when the DMA needs to be unencrypted for SEV. Christoph's suggestion
> > was to wire up dmapool in kernel/dma/remap.c for this. Is that necessary
> > to be done f
On Wed, 18 Sep 2019, Christoph Hellwig wrote:
> On Tue, Sep 17, 2019 at 06:41:02PM +0000, Lendacky, Thomas wrote:
> > > diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> > > --- a/drivers/nvme/host/pci.c
> > > +++ b/drivers/nvme/host/pci.c
> > > @@ -1613,7 +1613,8 @@ static int nvme
On Thu, 5 Sep 2019, Christoph Hellwig wrote:
> > Hi Christoph, Jens, and Ming,
> >
> > While booting a 5.2 SEV-enabled guest we have encountered the following
> > WARNING that is followed up by a BUG because we are in atomic context
> > while trying to call set_memory_decrypted:
>
> Well, this
On Tue, 24 Apr 2018, Christoph Hellwig wrote:
> On Tue, Apr 24, 2018 at 11:54:26PM -0700, David Rientjes wrote:
> > Shouldn't that test for dev->coherent_dma_mask < DMA_BIT_MASK(32) be more
> > accurately <=?
>
> No, it should really be <. The exactly 32-bit case is already covered
> with GFP_
On Wed, 25 Apr 2018, Christoph Hellwig wrote:
> The following changes since commit 6d08b06e67cd117f6992c46611dfb4ce267cd71e:
>
> Linux 4.17-rc2 (2018-04-22 19:20:09 -0700)
>
> are available in the Git repository at:
>
> git://git.infradead.org/users/hch/dma-mapping.git tags/dma-mapping-4.17