Since SLOB was removed, it is not necessary to use call_rcu
when the callback only performs kmem_cache_free. Use
kfree_rcu() directly.
The changes were done using the following Coccinelle semantic patch.
This semantic patch is designed to ignore cases where the callback
function is used in another
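For illustration, a minimal sketch of the conversion; "foo" and "foo_cachep"
are hypothetical names, not taken from the actual patches:

  struct foo {
          struct rcu_head rcu;
          int data;
  };

  static struct kmem_cache *foo_cachep;

  /* Before: a callback whose only job is to free the object. */
  static void foo_free_rcu(struct rcu_head *head)
  {
          struct foo *f = container_of(head, struct foo, rcu);

          kmem_cache_free(foo_cachep, f);
  }

  static void foo_release_old(struct foo *f)
  {
          call_rcu(&f->rcu, foo_free_rcu);
  }

  /* After: with SLOB gone, kfree() can free slab cache objects, so the
   * dedicated callback is no longer needed. */
  static void foo_release_new(struct foo *f)
  {
          kfree_rcu(f, rcu);
  }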
This series has reached sufficient maturity that it is no longer sent as an RFC.
Only the book3s/64 part may need more attention. Alternatively,
we could simply disable huge pages on book3s/64 in hash-4k mode if
we want to be on the safe side.
Also see https://github.com/linuxppc/issues/issues/483
Unlike mos
On powerpc 8xx, when a page is 8M in size, the information is in the PMD
entry. So allow architectures to provide __pte_leaf_size() instead of
pte_leaf_size() and provide the PMD entry to that function.
When __pte_leaf_size() is not defined, define it as pte_leaf_size()
so that architectures not in
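A minimal sketch of the fallback described above (illustrative only; the exact
form in the generic headers may differ):

  /* Architectures that do not care about the PMD entry keep providing
   * pte_leaf_size(); the PMD argument is simply ignored. */
  #ifndef __pte_leaf_size
  #define __pte_leaf_size(pmd, pte)       pte_leaf_size(pte)
  #endif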
From: Michael Ellerman
This is a squash of the series from Michael:
https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20240524073141.1637736-1-...@ellerman.id.au/
The nohash HTW_IBM (Hardware Table Walk) code is unused since support
for A2 was removed in commit fb5a515704d7 ("powerpc: Remove p
On powerpc 8xx huge_ptep_get() will need to know whether the given
ptep is a PTE entry or a PMD entry. This cannot be known from the
entry itself because there is no easy way to deduce it from the
content of the entry.
So huge_ptep_get() will need to know either the size of the page
or get the p
The _PAGE_PSIZE macro is never used outside the place where it is defined,
and only on 8xx and e500.
To remove the indirection, remove the macro and use its content directly.
Signed-off-by: Christophe Leroy
Reviewed-by: Oscar Salvador
---
arch/powerpc/include/asm/nohash/32/pte-40x.h | 3 ---
arch/powerpc/incl
Building on 32 bits with pmd_leaf() not always returning false leads
to the following error:
CC arch/powerpc/mm/pgtable.o
arch/powerpc/mm/pgtable.c: In function '__find_linux_pte':
arch/powerpc/mm/pgtable.c:506:1: error: function may return address of local
variable [-Werror=return-local-a
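For context, a minimal illustration (hypothetical code, not the kernel's) of
the kind of pattern that gcc's -Wreturn-local-addr diagnoses:

  typedef unsigned long pte_t;

  /* Returning the address of an on-stack copy of an entry instead of a
   * pointer into the page table is what triggers the warning. */
  static pte_t *bad_lookup(unsigned long entry)
  {
          pte_t pte = (pte_t)entry;  /* local variable on the stack */

          return &pte;               /* address of a local escapes */
  }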
In preparation of implementing huge pages on powerpc 8xx
without hugepd, enclose hugepd related code inside an
ifdef CONFIG_ARCH_HAS_HUGEPD
This also allows removing some stubs.
Signed-off-by: Christophe Leroy
Reviewed-by: Oscar Salvador
---
v3:
- Prepare huge_pte_alloc() for full standard topo
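The shape of the change is roughly as follows (illustrative only, not the
actual diff); the helper name is made up:

  #ifdef CONFIG_ARCH_HAS_HUGEPD
  static void hugepd_only_helper(void)
  {
          /* ... hugepd handling ... */
  }
  #endif /* CONFIG_ARCH_HAS_HUGEPD */

  /* The empty stubs previously provided for the !CONFIG_ARCH_HAS_HUGEPD
   * case can then be dropped, since callers sit inside the same #ifdef. */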
set_huge_pte_at() expects the size of the hugepage as an int, not the
psize, which is the index of the page definition in the mmu_psize_defs[] table.
Fixes: 935d4f0c6dc8 ("mm: hugetlb: add huge page size param to set_huge_pte_at()")
Signed-off-by: Christophe Leroy
Reviewed-by: Oscar Salvador
---
arc
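A hedged sketch of the distinction; the wrapper function is hypothetical, and
the size derivation assumes the usual mmu_psize_defs[].shift field:

  static void example_set_huge_pte(struct mm_struct *mm, unsigned long addr,
                                   pte_t *ptep, pte_t pte, int psize)
  {
          /* Wrong: passes the psize index (a small integer). */
          /* set_huge_pte_at(mm, addr, ptep, pte, psize); */

          /* Right: passes the actual huge page size in bytes. */
          set_huge_pte_at(mm, addr, ptep, pte,
                          1UL << mmu_psize_defs[psize].shift);
  }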
In order to fit better with standard Linux page tables layout, add
support for 8M pages using contiguous PTE entries in a standard
page table. Page tables will then be populated with 1024 similar
entries and two PMD entries will point to that page table.
The PMD entries also get a flag to tell it
On 8xx, only the shift field is used in struct mmu_psize_def.
Remove other fields and related macros.
Signed-off-by: Christophe Leroy
Reviewed-by: Oscar Salvador
---
arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 9 ++---
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/arch/po
The enc field is hidden behind the BOOK3E_PAGESZ_XX macros, and when you look
closer you realise that this field is nothing else than the value of
shift minus ten.
So remove enc field and calculate tsize from shift field.
Also remove inc field which is unused.
Signed-off-by: Christophe Leroy
Reviewed-b
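A minimal sketch of the relationship described above (the helper name is
illustrative):

  /* enc/tsize is just "shift minus ten", e.g. 4K (shift 12) -> 2,
   * 16K (shift 14) -> 4, 64K (shift 16) -> 6, so it can be computed
   * on the fly instead of being stored in mmu_psize_defs[]. */
  static inline unsigned int shift_to_tsize(unsigned int shift)
  {
          return shift - 10;
  }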
At the time being, when CONFIG_PTE_64BIT is selected, PTE entries are
64 bits but PGD entries are still 32 bits.
In order to allow leaf PMD entries, switch the PGD to 64-bit entries.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/pgtable-types.h | 4
arch/powerpc/kernel/head
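A rough sketch of the idea (simplified; not the actual patch):

  /* When PTEs are 64-bit, make PGD entries 64-bit as well so that a
   * leaf PMD entry can carry a full 64-bit PTE-like value. */
  #ifdef CONFIG_PTE_64BIT
  typedef struct { unsigned long long pgd; } pgd_t;
  #else
  typedef struct { unsigned long pgd; } pgd_t;
  #endif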
Use PTE page size bits to encode the hugepage size, with the following
format corresponding to the values expected in bits 52-55 of the MAS1
register. Those bits are called TSIZE:
0001  4 Kbyte
0010  16 Kbyte
0011  64 Kbyte
0100  256 Kbyte
0101  1 Mbyte
Don't pre-check write access on read-only pages on data TLB error.
Load the TLB anyway and take a DSI exception when it happens. This
avoids reading SPRN_ESR at every data TLB error exception.
Signed-off-by: Christophe Leroy
---
v5: New
---
arch/powerpc/kernel/head_85xx.S | 15 ---
Move r13 load after the call to FIND_PTE, and use r13 instead of
r10 for storing fault address. This will allow using r10 freely
in FIND_PTE in following patch to handle hugepage size.
Signed-off-by: Christophe Leroy
---
v5: New
---
arch/powerpc/kernel/head_85xx.S | 30 --
e500 supports many page sizes, among which the following sizes are
implemented in the kernel at the time being: 4M, 16M, 64M, 256M, 1G.
On e500, TLB miss for hugepages is exclusively handled by SW even
on e6500 which has HW assistance for 4k pages, so there are no
constraints like on the 8xx.
On e5
On book3s/64, the only user of hugepd is hash in 4k mode.
All other setups (hash-64k, radix-4k, radix-64k) use leaf PMD/PUD.
Rework hash-4k to use contiguous PMD and PUD instead.
In that setup there are only two huge page sizes: 16M and 16G.
16M sits at PMD level and 16G at PUD level.
pte_update
All targets have now opted out of CONFIG_ARCH_HAS_HUGEPD, so
remove the leftover code.
Signed-off-by: Christophe Leroy
Acked-by: Oscar Salvador
---
v5: Fix a forgotten #endif which ended up in following patch
---
arch/powerpc/include/asm/hugetlb.h | 7 -
arch/powerpc/include/asm/page.h
powerpc was the only user of CONFIG_ARCH_HAS_HUGEPD and doesn't
use it anymore, so remove all related code.
Signed-off-by: Christophe Leroy
Acked-by: Oscar Salvador
---
v4: Rebased on v6.10-rc1
---
include/linux/hugetlb.h | 6 --
mm/Kconfig | 10 ---
mm/gup.c | 18
On Thu, Jun 06, 2024 at 12:56:33PM +0530, Manivannan Sadhasivam wrote:
> Hi,
>
> This series includes patches that were left over from previous series [1] for
> making the host reboot handling robust in endpoint framework.
>
> When the above mentioned series got merged to pci/endpoint, we got a b