On 04/12/2021 at 15:10, Maxime Bizon wrote:
> 
> On Saturday 04 Dec 2021 at 10:01:07 (+0000), Christophe Leroy wrote:
> 
>> In fact BAT4 is wrong. Both virtual and physical address of a 64M BAT
>> must be 64M aligned. I think the display is wrong as well (You took it
> 
> oh so hardware does simple bitmask after all
> 
> I got fooled by the lack of guard in the bat setup code, so I assumed
> magical hardware
> 
> I guess all the guard is in the comment ...
> 
> /*
>  * Set up one of the I/D BAT (block address translation) register pairs.
>  * The parameters are not checked; in particular size must be a power
>  * of 2 between 128k and 256M.
>  */
> void __init setbat(int index, unsigned long virt, phys_addr_t phys,
>                    unsigned int size, pgprot_t prot)
> 
>> from ptdump ?), BEPI and BRPN must be anded with complement of BL.
> 
> yes that was ptdump code with seq_printf replaced by printk
> 
> ptdump code is correct but only if the bat addresses are correctly
> aligned, maybe add a safeguard like this ?
> 
> index 85062ce2d849..f7c5cf62ef41 100644
> --- a/arch/powerpc/mm/book3s32/mmu.c
> +++ b/arch/powerpc/mm/book3s32/mmu.c
> @@ -275,6 +279,10 @@ void __init setbat(int index, unsigned long virt, phys_addr_t phys,
>  		       (unsigned long long)phys);
>  		return;
>  	}
> +
> +	WARN_ON(!is_power_of_2(size));
> +	WARN_ON((phys & (size - 1)));
> +	WARN_ON((virt & (size - 1)));
>  	bat = BATS[index];
> 

Yes we could add some check although I'd go for a 'pr_err()' like when
no BAT is available (something along the lines of the sketch at the end
of this mail).

> 
>> So here your 64M BAT maps 0xf8000000-0xfbffffff, therefore the address
>> 0xfd3fce00 is not mapped by any BAT hence the OOPS.
> 
> ok I think I found the issue:
> 
> diff --git a/arch/powerpc/mm/kasan/book3s_32.c b/arch/powerpc/mm/kasan/book3s_32.c
> index 35b287b0a8da..fcbb9a136c1a 100644
> --- a/arch/powerpc/mm/kasan/book3s_32.c
> +++ b/arch/powerpc/mm/kasan/book3s_32.c
> @@ -12,14 +12,14 @@ int __init kasan_init_region(void *start, size_t size)
>  	unsigned long k_end = (unsigned long)kasan_mem_to_shadow(start + size);
>  	unsigned long k_cur = k_start;
>  	int k_size = k_end - k_start;
> -	int k_size_base = 1 << (ffs(k_size) - 1);
> +	int k_size_base = 1 << (fls(k_size) - 1);
>  	int ret;
>  	void *block;
> 
>  	block = memblock_alloc(k_size, k_size_base);
> 
>  	if (block && k_size_base >= SZ_128K && k_start == ALIGN(k_start, k_size_base)) {
> -		int shift = ffs(k_size - k_size_base);
> +		int shift = fls(k_size - k_size_base);
>  		int k_size_more = shift ? 1 << (shift - 1) : 0;
> 
>  		setbat(-1, k_start, __pa(block), k_size_base, PAGE_KERNEL);
> 

Not sure it is that simple. I'm cooking a patch reusing the
block_size() function in mm/book3s32/mmu.c (rough ideas sketched below).

Christophe
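
To make the "simple bitmask" point concrete, here is a tiny standalone
illustration (userspace C, not kernel code) of why a 64M BAT based at
0xf8000000 cannot cover 0xfd3fce00: the match is a mask-and-compare on
the block-aligned effective address.

/* Illustration only: a BAT match is a mask-and-compare, so a 64M block
 * at 0xf8000000 covers 0xf8000000-0xfbffffff and misses 0xfd3fce00.
 */
#include <stdio.h>

int main(void)
{
	unsigned long bepi = 0xf8000000UL;	/* BAT effective base (64M aligned) */
	unsigned long size = 0x04000000UL;	/* 64M block */
	unsigned long ea   = 0xfd3fce00UL;	/* address from the oops */

	if ((ea & ~(size - 1)) == bepi)
		printf("0x%lx is covered by the BAT\n", ea);
	else
		printf("0x%lx is outside 0x%lx-0x%lx\n",
		       ea, bepi, bepi + size - 1);

	return 0;
}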
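
For the setbat() guard, what I mean is roughly the following (untested
sketch, not a patch): reject a bad mapping with a pr_err() the same way
the "no BAT available" case does, instead of a WARN_ON(). It assumes
is_power_of_2() from linux/log2.h, like in your diff.

	/* Untested sketch: would sit right before "bat = BATS[index];" in
	 * setbat(). Refuse misaligned or non power-of-2 mappings with a
	 * pr_err(), like the existing "no BAT available" error path.
	 */
	if (!is_power_of_2(size) || (virt & (size - 1)) || (phys & (size - 1))) {
		pr_err("%s: invalid BAT %d: virt=0x%lx phys=0x%llx size=0x%x\n",
		       __func__, index, virt, (unsigned long long)phys, size);
		return;
	}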
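
And to illustrate the block_size() idea (this is not the actual patch,
just a rough sketch of the shape it could take, assuming block_size()
from mm/book3s32/mmu.c is made visible to kasan/book3s_32.c): split the
shadow range into naturally aligned power-of-two blocks, back each one
with its own BAT, and let the remainder fall back to page tables.

/* Rough sketch only (untested): core of kasan_init_region() mapping the
 * shadow with as many naturally aligned BAT blocks as possible. Assumes
 * block_size() from mm/book3s32/mmu.c is usable here.
 */
unsigned long k_start = (unsigned long)kasan_mem_to_shadow(start);
unsigned long k_end = (unsigned long)kasan_mem_to_shadow(start + size);
unsigned long k_cur = k_start;

while (k_cur < k_end) {
	unsigned int k_size = block_size(k_cur, k_end);
	void *block;

	if (k_size < SZ_128K)
		break;			/* too small for a BAT */

	block = memblock_alloc(k_size, k_size);
	if (!block)
		break;

	/* index -1 lets setbat() pick a free BAT; a real patch would also
	 * have to check that one is actually available.
	 */
	setbat(-1, k_cur, __pa(block), k_size, PAGE_KERNEL);
	k_cur += k_size;
}

/* whatever is left in [k_cur, k_end) still gets page tables, as today */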