On 3/3/25 18:32, Aneesh Kumar K.V wrote:
Donet Tom <donet...@linux.ibm.com> writes:
A vmemmap altmap is a device-provided region used as backing
storage for struct pages. The altmap for each namespace should
belong to that same namespace. If the namespaces are created
unaligned, the section vmemmap start address can also end up
unaligned. When that happens, the altmap page allocated from the
current namespace may also be used by the previous namespace.
During the free operation, because the altmap page is shared
between two namespaces, the previous namespace may detect that the
page does not belong to its altmap, incorrectly assume it is a
normal page, and attempt to free it as one, which leads to a
kernel crash.
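
To make the failure concrete, here is a minimal userspace sketch of
the arithmetic involved; the 64K page size and the addresses below are
made-up values for illustration, not taken from an actual report:

#include <stdio.h>

#define PAGE_SIZE        0x10000UL	/* assume 64K pages (radix) */
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

int main(void)
{
	/* hypothetical, unaligned vmemmap start for NS2's first section */
	unsigned long ns2_vmemmap_start = 0xc00c000000018000UL;

	/* a PTE can only map a whole page, so the mapping is rounded down */
	unsigned long mapped = ALIGN_DOWN(ns2_vmemmap_start, PAGE_SIZE);

	printf("NS2 vmemmap start: 0x%lx\n", ns2_vmemmap_start);
	printf("page mapped at:    0x%lx\n", mapped);
	/*
	 * The range [mapped, ns2_vmemmap_start) still holds NS1's
	 * struct pages, so a page taken from NS2's altmap ends up
	 * backing part of NS1's vmemmap as well.
	 */
	return 0;
}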
This patch aligns the section vmemmap start address down to
PAGE_SIZE. After alignment, the start address is no longer part of
the current namespace, so a normal page is allocated for the
vmemmap mapping of the current section. For the remaining
sections, altmap pages are allocated as before. During the free
operation, the normal page is then freed correctly.
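
For reference, this is roughly where the change lands in
radix__vmemmap_populate() (a condensed sketch with the loop body
elided, not the full upstream function):

int __meminit radix__vmemmap_populate(unsigned long start, unsigned long end,
				      int node, struct vmem_altmap *altmap)
{
	unsigned long addr, next;

	/*
	 * Round down first, so the altmap boundary check and the
	 * eventual PTE both see the same page-aligned address.
	 */
	start = ALIGN_DOWN(start, PAGE_SIZE);

	for (addr = start; addr < end; addr = next) {
		next = pmd_addr_end(addr, end);
		/* ... allocate page tables and populate the PTEs ... */
	}
	return 0;
}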
Without this patch
==================
NS1 start NS2 start
_________________________________________________________
| NS1 | NS2 |
---------------------------------------------------------
| Altmap| Altmap | .....|Altmap| Altmap | ...........
| NS1 | NS1 | | NS2 | NS2 |
^^^ this should be allocated in RAM?
Yes, it should be allocated from RAM. However, in the current
implementation an altmap page gets allocated, because the NS2
vmemmap section's start address is unaligned. There is an
altmap_cross_boundary() check: starting from the vmemmap section
start, it identifies the namespace start and checks whether that
start is within the altmap boundary. Since the unaligned start is
within the boundary, the check returns false, causing an altmap
page to be allocated. During the PTE update, however, the vmemmap
start address is aligned down to PAGE_SIZE before the PTE is set.
As a result, the altmap page is shared between the current and
previous namespaces.
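
Conceptually, the decision looks something like this (a simplified
sketch of altmap_cross_boundary(), not its exact upstream body):

static bool altmap_cross_boundary(struct vmem_altmap *altmap,
				  unsigned long start, int page_size)
{
	unsigned long start_pfn = page_to_pfn((struct page *)start);

	/*
	 * If the struct pages backed by this vmemmap page start below
	 * the altmap's own base, the backing page would also serve the
	 * previous namespace, so the caller must use a normal page.
	 */
	if (start_pfn < altmap->base_pfn)
		return true;

	return false;
}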
If we had aligned the vmemmap start address first,
altmap_cross_boundary() would return true, because the aligned
section start address belongs to the previous namespace, and a
normal page would be allocated instead. During the PTE set
operation, since the address is already aligned, the PTE is
updated as-is.
In the above scenario, NS1 and NS2 are two namespaces. The vmemmap
for NS1 comes from Altmap NS1, which belongs to NS1, and the
vmemmap for NS2 comes from Altmap NS2, which belongs to NS2.
The vmemmap start for NS2 is not aligned, so a page from Altmap
NS2 is shared by both NS1 and NS2. During NS1's free operation,
that page is not part of NS1's altmap, so NS1 treats it as a
normal page and attempts to free it, which is invalid.
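
The free side then makes roughly the following range check (a
simplified sketch with a made-up helper name, not the exact upstream
code):

static void free_vmemmap_page(struct page *page,
			      struct vmem_altmap *altmap, int order)
{
	unsigned long pfn = page_to_pfn(page);

	if (altmap && pfn >= altmap->base_pfn &&
	    pfn < altmap->base_pfn + altmap->reserve + altmap->free) {
		/* the page came from this namespace's altmap */
		vmem_altmap_free(altmap, 1 << order);
		return;
	}

	/*
	 * Otherwise the page is assumed to be a normal page. Before
	 * the fix, a shared altmap page belonging to the next
	 * namespace lands here and is handed to the page allocator,
	 * which crashes.
	 */
	free_pages((unsigned long)page_address(page), order);
}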
With this patch
===============
NS1 start NS2 start
_________________________________________________________
| NS1 | NS2 |
---------------------------------------------------------
| Altmap| Altmap | .....| Normal | Altmap | Altmap |.......
| NS1 | NS1 | | Page | NS2 | NS2 |
If the vmemmap start for NS2 is not aligned, we now allocate a
normal page for that first section. The NS1 and NS2 vmemmaps will
then be freed correctly.
Fixes: 368a0590d954 ("powerpc/book3s64/vmemmap: switch radix to use a different vmemmap handling function")
Co-developed-by: Ritesh Harjani (IBM) <ritesh.l...@gmail.com>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.l...@gmail.com>
Signed-off-by: Donet Tom <donet...@linux.ibm.com>
---
arch/powerpc/mm/book3s64/radix_pgtable.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 311e2112d782..b22d5f6147d2 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -1120,6 +1120,8 @@ int __meminit radix__vmemmap_populate(unsigned long start, unsigned long end, in
 	pmd_t *pmd;
 	pte_t *pte;
 
+	start = ALIGN_DOWN(start, PAGE_SIZE);
+
 	for (addr = start; addr < end; addr = next) {
 		next = pmd_addr_end(addr, end);
--
2.43.5