On 2/20/25 05:54, Ilias Apalodimas wrote:
The ARM ARM, section 8.17.1, describes the cases where
break-before-make (BBM) is required when changing live page tables.
Since we can use this function to tweak block and page permissions,
where BBM is not required, add an extra argument to the function.
While at it, add a function description.
Signed-off-by: Ilias Apalodimas <ilias.apalodi...@linaro.org>
---
arch/arm/cpu/armv8/cache_v8.c | 6 +++++-
arch/arm/cpu/armv8/fsl-layerscape/cpu.c | 10 +++++-----
arch/arm/include/asm/system.h | 11 ++++++++++-
arch/arm/mach-snapdragon/board.c | 2 +-
4 files changed, 21 insertions(+), 8 deletions(-)
diff --git a/arch/arm/cpu/armv8/cache_v8.c b/arch/arm/cpu/armv8/cache_v8.c
index c4b3da4a8da7..670379e17b7a 100644
--- a/arch/arm/cpu/armv8/cache_v8.c
+++ b/arch/arm/cpu/armv8/cache_v8.c
@@ -972,11 +972,14 @@ void mmu_set_region_dcache_behaviour(phys_addr_t start,
size_t size,
* The process is break-before-make. The target region will be marked as
* invalid during the process of changing.
*/
-void mmu_change_region_attr(phys_addr_t addr, size_t siz, u64 attrs)
+void mmu_change_region_attr(phys_addr_t addr, size_t siz, u64 attrs, bool bbm)
{
int level;
u64 r, size, start;
+ if (!bbm)
+ goto skip_break;
+
start = addr;
size = siz;
/*
@@ -1001,6 +1004,7 @@ void mmu_change_region_attr(phys_addr_t addr, size_t siz,
u64 attrs)
gd->arch.tlb_addr + gd->arch.tlb_size);
__asm_invalidate_tlb_all();
+skip_break:
/*
* Loop through the address range until we find a page granule that fits
* our alignment constraints, then set it to the new cache attributes
Because the 'bbm' argument is always constant in the callers, it
would be better to split the function in two. Perhaps:
mmu_change_region_attr
mmu_change_region_attr_nobreak
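
The split suggested above could keep the boolean internal to a single static helper and expose two thin wrappers, so callers never pass a constant flag. This is only a sketch of that shape, not U-Boot code: the `__mmu_change_region_attr` helper name is hypothetical, the U-Boot types are stubbed out with typedefs, and the real TLB/page-table work is replaced by a counter so the dispatch can be demonstrated standalone.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-ins for U-Boot's types; the real definitions live in U-Boot headers. */
typedef uint64_t phys_addr_t;
typedef uint64_t u64;

/* Counts how many times the break-before-make sequence ran (stub for the
 * real invalidate + TLB-flush work, so the sketch is testable). */
static int bbm_sequences;

/* Hypothetical internal helper holding the current function body,
 * including the bbm/skip_break logic from the patch. */
static void __mmu_change_region_attr(phys_addr_t addr, size_t siz,
				     u64 attrs, bool bbm)
{
	if (bbm) {
		/* Real code: mark region invalid, flush dcache range,
		 * __asm_invalidate_tlb_all(), per ARM ARM 8.17.1. */
		bbm_sequences++;
	}
	/* Real code: walk the range and apply the new attributes. */
	(void)addr;
	(void)siz;
	(void)attrs;
}

/* Public API as suggested in the review: no boolean in either signature. */
void mmu_change_region_attr(phys_addr_t addr, size_t siz, u64 attrs)
{
	__mmu_change_region_attr(addr, siz, attrs, true);
}

void mmu_change_region_attr_nobreak(phys_addr_t addr, size_t siz, u64 attrs)
{
	__mmu_change_region_attr(addr, siz, attrs, false);
}
```

This keeps each call site self-documenting (`_nobreak` says what it skips) and avoids `true`/`false` literals whose meaning is invisible at the caller.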