Christophe Leroy <christophe.le...@csgroup.eu> writes:
> Commit 1f9ad21c3b38 ("powerpc/mm: Implement set_memory() routines")
> added a spin_lock() to change_page_attr() in order to safely perform
> the three-step operations. But then commit 9f7853d7609d ("powerpc/mm:
> Fix set_memory_*() against concurrent accesses") modified it to use
> pte_update() and do the operation atomically.

It's not really atomic, it's just safe against concurrent access. We
still do a read / modify / write of the pte value, which isn't safe
against concurrent calls to change_page_attr() for the same address.

But maybe that's OK? AFAICS other architectures (eg. arm64) have no
protection against concurrent callers; I think the assumption is that
higher-level code ensures there's only a single caller at a time.

On the other hand, x86 and s390 do have locking (cpa_lock / cpa_mutex),
but it seems that's mostly there to protect against splitting of page
tables, which we aren't doing.

We'd be a bit safer if we used pte_update() "properly", like I did in:

  https://lore.kernel.org/linuxppc-dev/20210817132552.3375738-1-...@ellerman.id.au/
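
Concretely, "properly" means handing pte_update() the bits to clear and
set, so the read / modify / write of the PTE happens inside its atomic
update rather than around it. A rough sketch of the idea (not the actual
patch from that link, and the _PAGE_WRITE / _PAGE_EXEC names are
illustrative, they vary between powerpc platforms):

/*
 * Sketch of change_page_attr() in arch/powerpc/mm/pageattr.c using
 * pte_update() with explicit clear/set masks. The PTE bits are then
 * modified atomically inside pte_update(), instead of via a racy
 * ptep_get() + modify + write-back sequence.
 */
static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
{
	long action = (long)data;
	unsigned long clr = 0, set = 0;

	switch (action) {
	case SET_MEMORY_RO:
		clr = _PAGE_WRITE;	/* illustrative bit name */
		break;
	case SET_MEMORY_RW:
		set = _PAGE_WRITE;
		break;
	case SET_MEMORY_NX:
		clr = _PAGE_EXEC;
		break;
	case SET_MEMORY_X:
		set = _PAGE_EXEC;
		break;
	default:
		return 0;	/* nothing to do */
	}

	/* atomically clear 'clr' and set 'set' in a single PTE update */
	pte_update(&init_mm, addr, ptep, clr, set, 0);

	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

	return 0;
}

With that shape, two concurrent callers flipping different bits on the
same PTE can't lose each other's update, though two callers flipping the
same bit in opposite directions would still need serialising above us.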
cheers

> In the meantime, Maxime reported some spinlock recursion.
>
> [   15.351649] BUG: spinlock recursion on CPU#0, kworker/0:2/217
> [   15.357540]  lock: init_mm+0x3c/0x420, .magic: dead4ead, .owner: kworker/0:2/217, .owner_cpu: 0
> [   15.366563] CPU: 0 PID: 217 Comm: kworker/0:2 Not tainted 5.15.0+ #523
> [   15.373350] Workqueue: events do_free_init
> [   15.377615] Call Trace:
> [   15.380232] [e4105ac0] [800946a4] do_raw_spin_lock+0xf8/0x120 (unreliable)
> [   15.387340] [e4105ae0] [8001f4ec] change_page_attr+0x40/0x1d4
> [   15.393413] [e4105b10] [801424e0] __apply_to_page_range+0x164/0x310
> [   15.400009] [e4105b60] [80169620] free_pcp_prepare+0x1e4/0x4a0
> [   15.406045] [e4105ba0] [8016c5a0] free_unref_page+0x40/0x2b8
> [   15.411979] [e4105be0] [8018724c] kasan_depopulate_vmalloc_pte+0x6c/0x94
> [   15.418989] [e4105c00] [801424e0] __apply_to_page_range+0x164/0x310
> [   15.425451] [e4105c50] [80187834] kasan_release_vmalloc+0xbc/0x134
> [   15.431898] [e4105c70] [8015f7a8] __purge_vmap_area_lazy+0x4e4/0xdd8
> [   15.438560] [e4105d30] [80160d10] _vm_unmap_aliases.part.0+0x17c/0x24c
> [   15.445283] [e4105d60] [801642d0] __vunmap+0x2f0/0x5c8
> [   15.450684] [e4105db0] [800e32d0] do_free_init+0x68/0x94
> [   15.456181] [e4105dd0] [8005d094] process_one_work+0x4bc/0x7b8
> [   15.462283] [e4105e90] [8005d614] worker_thread+0x284/0x6e8
> [   15.468227] [e4105f00] [8006aaec] kthread+0x1f0/0x210
> [   15.473489] [e4105f40] [80017148] ret_from_kernel_thread+0x14/0x1c
>
> Remove the spin_lock() in change_page_attr().
>
> Reported-by: Maxime Bizon <mbi...@freebox.fr>
> Link: https://lore.kernel.org/all/20211212112152.GA27070@sakura/
> Cc: Russell Currey <rus...@russell.cc>
> Signed-off-by: Christophe Leroy <christophe.le...@csgroup.eu>
> ---
>  arch/powerpc/mm/pageattr.c | 4 ----
>  1 file changed, 4 deletions(-)
>
> diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
> index edea388e9d3f..308adc51da9d 100644
> --- a/arch/powerpc/mm/pageattr.c
> +++ b/arch/powerpc/mm/pageattr.c
> @@ -30,8 +30,6 @@ static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
>  	long action = (long)data;
>  	pte_t pte;
>  
> -	spin_lock(&init_mm.page_table_lock);
> -
>  	pte = ptep_get(ptep);
>  
>  	/* modify the PTE bits as desired, then apply */
> @@ -61,8 +59,6 @@ static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
>  
>  	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
>  
> -	spin_unlock(&init_mm.page_table_lock);
> -
>  	return 0;
>  }
>
> -- 
> 2.33.1