On 07/09/21(Tue) 14:19, Patrick Wildt wrote:
> Hi,
> 
> I was playing around a little with the mutex code and found that on
> arm64 there are some uninitialized mutexes out there.
> 
> I think the arm64-specific one is comparatively easy to solve.  We
> either initialize the mtx when we initialize the rest of the pmap, or
> we move the initialization into the global definitions.  I opted for
> the former.

Is the kernel pmap mutex supposed to be used?  On i386 it isn't, so the
mutex's IPL is set to -1 and we added a KASSERT() in splraise() to spot
any mistake.
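
For illustration, that guard boils down to something like this (a
sketch, not the actual i386 code; the exact KASSERT() bounds are my
assumption):

	/*
	 * The kernel pmap mutex is never supposed to be taken:
	 * poison its IPL at initialization time.
	 */
	mtx_init(&pmap_kernel()->pm_mtx, -1);

	/*
	 * In splraise(), reject impossible levels.  An accidental
	 * mtx_enter() on the poisoned mutex ends up calling
	 * splraise(-1) and trips the assertion.
	 */
	KASSERT(ncpl >= IPL_NONE && ncpl <= IPL_HIGH);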

> The other one probably needs more discussion/debugging.  uvm_init()
> first calls pmap_init() and then uvm_km_page_init().  The latter
> initializes the mutex, but arm64's pmap_init() already uses pools,
> which use km_alloc(), which in turn uses that mutex.  One easy fix
> would be to initialize the mutex statically in the definition instead
> of at runtime.
> 
> But the question remains whether arm64's pmap is allowed to use pools
> and km_alloc() during pmap_init().

That's a common question for the family of pmaps that call
pool_setlowat() in pmap_init(); that is where pool_prime() is called
from.
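
To make the chain concrete, the pattern is roughly the following (a
sketch; the pool arguments and watermark value are illustrative, not
the actual arm64 code):

	void
	pmap_init(void)
	{
		pool_init(&pmap_pmap_pool, sizeof(struct pmap), 0, IPL_VM,
		    0, "pmap", NULL);
		/*
		 * Raising the low watermark primes the pool:
		 * pool_setlowat() -> pool_prime() -> pool_p_alloc() ->
		 * km_alloc(), which grabs uvm_km_pages.mtx before
		 * uvm_km_page_init() has initialized it.
		 */
		pool_setlowat(&pmap_pmap_pool, 2);
	}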

> #0  0xffffff800073f984 in mtx_enter (mtx=0xffffff8000f3b048 <uvm_km_pages>) at /usr/src/sys/kern/kern_lock.c:281
> #1  0xffffff8000937e6c in km_alloc (sz=<error reading variable: Unhandled dwarf expression opcode 0xa3>, kv=0xffffff8000da6a30 <kv_page>, kp=0xffffff8000da6a48 <kp_dirty>, kd=0xffffff8000e934d8) at /usr/src/sys/uvm/uvm_km.c:899
> #2  0xffffff800084d804 in pool_page_alloc (pp=<error reading variable: Unhandled dwarf expression opcode 0xa3>, flags=<error reading variable: Unhandled dwarf expression opcode 0xa3>, slowdown=<error reading variable: Unhandled dwarf expression opcode 0xa3>) at /usr/src/sys/kern/subr_pool.c:1633
> #3  0xffffff800084f8dc in pool_allocator_alloc (pp=0xffffff8000ea6e40 <pmap_pmap_pool>, flags=65792, slowdown=0xffffff80026cd098) at /usr/src/sys/kern/subr_pool.c:1602
> #4  0xffffff800084ef08 in pool_p_alloc (pp=0xffffff8000ea6e40 <pmap_pmap_pool>, flags=2, slowdown=0xffffff8000e9359c) at /usr/src/sys/kern/subr_pool.c:926
> #5  0xffffff800084f808 in pool_prime (pp=<optimized out>, n=<error reading variable: Unhandled dwarf expression opcode 0xa3>) at /usr/src/sys/kern/subr_pool.c:896
> #6  0xffffff800048c20c in pmap_init () at /usr/src/sys/arch/arm64/arm64/pmap.c:1682
> #7  0xffffff80009384dc in uvm_init () at /usr/src/sys/uvm/uvm_init.c:118
> #8  0xffffff800048e664 in main (framep=<error reading variable: Unhandled dwarf expression opcode 0xa3>) at /usr/src/sys/kern/init_main.c:235
> 
> diff --git a/sys/arch/arm64/arm64/pmap.c b/sys/arch/arm64/arm64/pmap.c
> index 79a344cc84e..f070f4540ec 100644
> --- a/sys/arch/arm64/arm64/pmap.c
> +++ b/sys/arch/arm64/arm64/pmap.c
> @@ -1308,10 +1308,12 @@ pmap_bootstrap(long kvo, paddr_t lpt1, long kernelstart, long kernelend,
>       pmap_kernel()->pm_vp.l1 = (struct pmapvp1 *)va;
>       pmap_kernel()->pm_privileged = 1;
>       pmap_kernel()->pm_asid = 0;
> +     mtx_init(&pmap_kernel()->pm_mtx, IPL_VM);
>  
>       pmap_tramp.pm_vp.l1 = (struct pmapvp1 *)va + 1;
>       pmap_tramp.pm_privileged = 1;
>       pmap_tramp.pm_asid = 0;
> +     mtx_init(&pmap_tramp.pm_mtx, IPL_VM);
>  
>       /* Mark ASID 0 as in-use. */
>       pmap_asid[0] |= (3U << 0);
> diff --git a/sys/uvm/uvm_km.c b/sys/uvm/uvm_km.c
> index 4a60377e9d7..e77afeda832 100644
> --- a/sys/uvm/uvm_km.c
> +++ b/sys/uvm/uvm_km.c
> @@ -644,7 +644,7 @@ uvm_km_page_lateinit(void)
>   * not zero filled.
>   */
>  
> -struct uvm_km_pages uvm_km_pages;
> +struct uvm_km_pages uvm_km_pages = { .mtx = MUTEX_INITIALIZER(IPL_VM) };
>  
>  void uvm_km_createthread(void *);
>  void uvm_km_thread(void *);
> @@ -664,7 +664,6 @@ uvm_km_page_init(void)
>       int     len, bulk;
>       vaddr_t addr;
>  
> -     mtx_init(&uvm_km_pages.mtx, IPL_VM);
>       if (!uvm_km_pages.lowat) {
>               /* based on physmem, calculate a good value here */
>               uvm_km_pages.lowat = physmem / 256;
> 
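
For reference, the "move it into the global definition" variant from
the first mail would look like this for e.g. pmap_tramp (a sketch,
same idea as the uvm_km_pages hunk above):

	struct pmap pmap_tramp = {
		.pm_mtx = MUTEX_INITIALIZER(IPL_VM),
	};

A designated initializer zero-fills the remaining members, matching
the current BSS contents, so only the mutex gains a static value.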
