On 06/19/2018 04:47 PM, Michael Ellerman wrote:
"Aneesh Kumar K.V" <aneesh.ku...@linux.ibm.com> writes:
With a 4K page size, hugetlb hugepage directories are allocated from their own
slab cache. With commit 0c4d26802 ("powerpc/book3s64/mm: Simplify the rcu
callback for page table free") we missed freeing these hugepd tables.
Update pgtable_free to handle the hugetlb hugepd directory tables.
Fixes: 0c4d26802 ("powerpc/book3s64/mm: Simplify the rcu callback for page table free")
Signed-off-by: Aneesh Kumar K.V <aneesh.ku...@linux.ibm.com>
---
arch/powerpc/include/asm/book3s/32/pgalloc.h | 1 +
.../include/asm/book3s/64/pgtable-4k.h | 21 +++++++++++++++++++
.../include/asm/book3s/64/pgtable-64k.h | 9 ++++++++
arch/powerpc/include/asm/book3s/64/pgtable.h | 5 +++++
arch/powerpc/include/asm/nohash/32/pgalloc.h | 1 +
arch/powerpc/include/asm/nohash/64/pgalloc.h | 1 +
arch/powerpc/mm/hugetlbpage.c | 3 ++-
arch/powerpc/mm/pgtable-book3s64.c | 12 +++++++++++
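
(For context, my reading of the diffstat is that the pgtable-4k.h change maps
the page-table level a hugepd hangs off to its dedicated hugepd cache index,
roughly along the lines below. The helper name get_hugepd_cache_index() and
the exact shape are my guess, not the quoted patch; only the HTLB_16M_INDEX /
HTLB_16G_INDEX names and the pud/pgd placement come from the hunk further
down.)

/*
 * Rough sketch only: map the level of the table being freed to the
 * hugepd slab cache index, so pgtable_free() can pick the right
 * kmem_cache.  Helper name and shape are assumptions, not the hunk.
 */
static inline int get_hugepd_cache_index(int index)
{
	switch (index) {
	case PUD_INDEX:		/* 16M hugepd directory at pud level */
		return HTLB_16M_INDEX;
	case PGD_INDEX:		/* 16G hugepd directory at the pgd level */
		return HTLB_16G_INDEX;
	default:
		BUG();
	}
}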
Fails with 4K=y HUGETLBFS=n:
arch/powerpc/mm/pgtable-book3s64.c:415:16: error: ‘H_16M_CACHE_INDEX’
undeclared (first use in this function); did you mean ‘H_PUD_CACHE_INDEX’?
...
diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c
index c1f4ca45c93a..468c3d83a2aa 100644
--- a/arch/powerpc/mm/pgtable-book3s64.c
+++ b/arch/powerpc/mm/pgtable-book3s64.c
@@ -409,6 +409,18 @@ static inline void pgtable_free(void *table, int index)
case PUD_INDEX:
kmem_cache_free(PGT_CACHE(PUD_CACHE_INDEX), table);
break;
+#ifdef CONFIG_PPC_4K_PAGES
+ /* 16M hugepd directory at pud level */
+ case HTLB_16M_INDEX:
+ BUILD_BUG_ON(H_16M_CACHE_INDEX <= 0);
+ kmem_cache_free(PGT_CACHE(H_16M_CACHE_INDEX), table);
+ break;
+ /* 16G hugepd directory at the pgd level */
+ case HTLB_16G_INDEX:
+ BUILD_BUG_ON(H_16G_CACHE_INDEX <= 0);
+ kmem_cache_free(PGT_CACHE(H_16G_CACHE_INDEX), table);
+ break;
+#endif
Because this isn't protected by CONFIG_HUGETLBFS.
I assume this is correct?
diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c
index 468c3d83a2aa..9b7007fd075e 100644
--- a/arch/powerpc/mm/pgtable-book3s64.c
+++ b/arch/powerpc/mm/pgtable-book3s64.c
@@ -409,7 +409,7 @@ static inline void pgtable_free(void *table, int index)
case PUD_INDEX:
kmem_cache_free(PGT_CACHE(PUD_CACHE_INDEX), table);
break;
-#ifdef CONFIG_PPC_4K_PAGES
+#if defined(CONFIG_PPC_4K_PAGES) && defined(CONFIG_HUGETLBFS)
/* 16M hugepd directory at pud level */
case HTLB_16M_INDEX:
BUILD_BUG_ON(H_16M_CACHE_INDEX <= 0);
cheers
Sorry, I missed that. Can we use #ifdef CONFIG_HUGETLB_PAGE instead? That is
what we use to protect it in pgtable-4k.h.
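
i.e. the same hunk, just keyed off CONFIG_HUGETLB_PAGE (untested sketch,
nothing else changed):

#if defined(CONFIG_PPC_4K_PAGES) && defined(CONFIG_HUGETLB_PAGE)
	/* 16M hugepd directory at pud level */
	case HTLB_16M_INDEX:
		BUILD_BUG_ON(H_16M_CACHE_INDEX <= 0);
		kmem_cache_free(PGT_CACHE(H_16M_CACHE_INDEX), table);
		break;
	/* 16G hugepd directory at the pgd level */
	case HTLB_16G_INDEX:
		BUILD_BUG_ON(H_16G_CACHE_INDEX <= 0);
		kmem_cache_free(PGT_CACHE(H_16G_CACHE_INDEX), table);
		break;
#endif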
-aneesh