On 14/08/2019 at 15:22, Christoph Hellwig wrote:
This switches to using common code for the DMA allocations, including
potential use of the CMA allocator if configured.

Switching to the generic code enables DMA allocations from atomic
context, which is required by the DMA API documentation, and also
adds various other minor features drivers have started relying upon.  It
also makes sure we have one tested code base for all architectures
that require uncached pte bits for coherent DMA allocations.

Another advantage is that consistent memory allocations now share
the general vmalloc pool instead of needing an explicit carveout
from it.

Signed-off-by: Christoph Hellwig <h...@lst.de>
---
  arch/powerpc/Kconfig                         |  12 -
  arch/powerpc/include/asm/book3s/32/pgtable.h |  12 +-
  arch/powerpc/include/asm/nohash/32/pgtable.h |  12 +-
  arch/powerpc/mm/dma-noncoherent.c            | 318 +------------------
  arch/powerpc/mm/mem.c                        |   4 -
  arch/powerpc/mm/ptdump/ptdump.c              |   9 -
  arch/powerpc/platforms/Kconfig.cputype       |   2 +
  7 files changed, 17 insertions(+), 352 deletions(-)


[...]

diff --git a/arch/powerpc/mm/dma-noncoherent.c b/arch/powerpc/mm/dma-noncoherent.c
index c617282d5b2a..4272ca5e8159 100644
--- a/arch/powerpc/mm/dma-noncoherent.c
+++ b/arch/powerpc/mm/dma-noncoherent.c

[...]

@@ -408,23 +116,15 @@ void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
        __dma_sync_page(paddr, size, dir);
  }
-/*
- * Return the PFN for a given cpu virtual address returned by arch_dma_alloc.
- */
-long arch_dma_coherent_to_pfn(struct device *dev, void *vaddr,
-               dma_addr_t dma_addr)
+void arch_dma_prep_coherent(struct page *page, size_t size)
  {
-       /* This should always be populated, so we don't test every
-        * level. If that fails, we'll have a nice crash which
-        * will be as good as a BUG_ON()
-        */
-       unsigned long cpu_addr = (unsigned long)vaddr;
-       pgd_t *pgd = pgd_offset_k(cpu_addr);
-       pud_t *pud = pud_offset(pgd, cpu_addr);
-       pmd_t *pmd = pmd_offset(pud, cpu_addr);
-       pte_t *ptep = pte_offset_kernel(pmd, cpu_addr);
+       unsigned long kaddr = (unsigned long)page_address(page);
-       if (pte_none(*ptep) || !pte_present(*ptep))
-               return 0;
-       return pte_pfn(*ptep);
+       flush_dcache_range(kaddr, kaddr + size);
+}
+
+static int __init atomic_pool_init(void)
+{
+       return dma_atomic_pool_init(GFP_KERNEL, pgprot_noncached(PAGE_KERNEL));
  }
+postcore_initcall(atomic_pool_init);

[...]

diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 56a7c814160d..afe71b89dec3 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -450,8 +450,10 @@ config NOT_COHERENT_CACHE
        depends on 4xx || PPC_8xx || E200 || PPC_MPC512x || \
                GAMECUBE_COMMON || AMIGAONE
        select ARCH_HAS_DMA_COHERENT_TO_PFN

You drop arch_dma_coherent_to_pfn(), so it's surprising to see that ARCH_HAS_DMA_COHERENT_TO_PFN remains. At first I thought I'd get a build failure.

After looking more closely, I see there is an arch_dma_coherent_to_pfn()
defined in kernel/dma/remap.c when DMA_DIRECT_REMAP is selected.

I think the naming is not really consistent and should be fixed somehow, because it is misleading to have an arch_something() function that is common to all architectures.

Christophe

+       select ARCH_HAS_DMA_PREP_COHERENT
        select ARCH_HAS_SYNC_DMA_FOR_DEVICE
        select ARCH_HAS_SYNC_DMA_FOR_CPU
+       select DMA_DIRECT_REMAP
        default n if PPC_47x
        default y
