I'm running a v3.2-rc1 kernel with CONFIG_PROVE_LOCKING enabled on imx6
and imx5 boards with a Linaro rootfs (nano, developer), and I'm seeing a
circular locking dependency warning.

---8<----
[    3.520947] Freeing init memory: 260K
Loading, please wait...
[    3.713140] 
[    3.714647] ======================================================
[    3.720836] [ INFO: possible circular locking dependency detected ]
[    3.727112] 3.2.0-rc1+ #54
[    3.729825] -------------------------------------------------------
[    3.736102] udevd/368 is trying to acquire lock:
[    3.740728]  (&sig->cred_guard_mutex){+.+.+.}, at: [<8013c688>] lock_trace+0x28/0x5c
[    3.748544] 
[    3.748547] but task is already holding lock:
[    3.754398]  (&sb->s_type->i_mutex_key#6){+.+.+.}, at: [<800fc954>] do_lookup+0x1e4/0x32c
[    3.762665] 
[    3.762668] which lock already depends on the new lock.
[    3.762674] 
[    3.770879] 
[    3.770882] the existing dependency chain (in reverse order) is:
[    3.778383] 
[    3.778386] -> #1 (&sb->s_type->i_mutex_key#6){+.+.+.}:
[    3.785170]        [<8006a7d8>] __lock_acquire+0x14c4/0x1a04
[    3.790871]        [<8006b34c>] lock_acquire+0x124/0x148
[    3.796211]        [<8047999c>] mutex_lock_nested+0x5c/0x394
[    3.801903]        [<800fc954>] do_lookup+0x1e4/0x32c
[    3.806982]        [<800fcc6c>] link_path_walk+0x1d0/0x7d0
[    3.812495]        [<800fe750>] path_openat+0xac/0x374
[    3.817661]        [<800feb48>] do_filp_open+0x3c/0x88
[    3.822826]        [<800f5e98>] open_exec+0x2c/0x100
[    3.827818]        [<800f7544>] do_execve+0xd4/0x2a8
[    3.832810]        [<80011790>] sys_execve+0x44/0x64
[    3.837815]        [<8000db00>] ret_fast_syscall+0x0/0x3c
[    3.843245] 
[    3.843248] -> #0 (&sig->cred_guard_mutex){+.+.+.}:
[    3.849667]        [<80067aa4>] print_circular_bug+0x68/0x2b8
[    3.855445]        [<8006a5ac>] __lock_acquire+0x1298/0x1a04
[    3.861132]        [<8006b34c>] lock_acquire+0x124/0x148
[    3.866470]        [<804791a8>] mutex_lock_killable_nested+0x60/0x464
[    3.872940]        [<8013c688>] lock_trace+0x28/0x5c
[    3.877932]        [<8013e79c>] proc_lookupfd_common+0x60/0xc8
[    3.883795]        [<8013e844>] proc_lookupfd+0x1c/0x24
[    3.889047]        [<800fa96c>] d_alloc_and_lookup+0x54/0x70
[    3.894738]        [<800fc978>] do_lookup+0x208/0x32c
[    3.899816]        [<800fd434>] path_lookupat+0xfc/0x6c8
[    3.905155]        [<800fda2c>] do_path_lookup+0x2c/0x68
[    3.910495]        [<800feab0>] user_path_at_empty+0x68/0x98
[    3.916181]        [<800feb04>] user_path_at+0x24/0x2c
[    3.921346]        [<800f4eb4>] vfs_fstatat+0x44/0x74
[    3.926425]        [<800f4f40>] vfs_stat+0x2c/0x30
[    3.931243]        [<800f5164>] sys_stat64+0x24/0x40
[    3.936234]        [<8000db00>] ret_fast_syscall+0x0/0x3c
[    3.941662] 
[    3.941666] other info that might help us debug this:
[    3.941671] 
[    3.949702]  Possible unsafe locking scenario:
[    3.949708] 
[    3.955644]        CPU0                    CPU1
[    3.960181]        ----                    ----
[    3.964718]   lock(&sb->s_type->i_mutex_key);
[    3.969114]                                lock(&sig->cred_guard_mutex);
[    3.975853]                                lock(&sb->s_type->i_mutex_key);
[    3.982766]   lock(&sig->cred_guard_mutex);
[    3.986984] 
[    3.986987]  *** DEADLOCK ***
[    3.986991] 
[    3.992941] 1 lock held by udevd/368:
[    3.996609]  #0:  (&sb->s_type->i_mutex_key#6){+.+.+.}, at: [<800fc954>] do_lookup+0x1e4/0x32c
[    4.005314] 
[    4.005317] stack backtrace:
[    4.009712] [<800151dc>] (unwind_backtrace+0x0/0xec) from [<80477088>] (dump_stack+0x20/0x24)
[    4.018260] [<80477088>] (dump_stack+0x20/0x24) from [<80067ca8>] (print_circular_bug+0x26c/0x2b8)
[    4.027242] [<80067ca8>] (print_circular_bug+0x26c/0x2b8) from [<8006a5ac>] (__lock_acquire+0x1298/0x1a04)
[    4.036918] [<8006a5ac>] (__lock_acquire+0x1298/0x1a04) from [<8006b34c>] (lock_acquire+0x124/0x148)
[    4.046072] [<8006b34c>] (lock_acquire+0x124/0x148) from [<804791a8>] (mutex_lock_killable_nested+0x60/0x464)
[    4.056007] [<804791a8>] (mutex_lock_killable_nested+0x60/0x464) from [<8013c688>] (lock_trace+0x28/0x5c)
[    4.065593] [<8013c688>] (lock_trace+0x28/0x5c) from [<8013e79c>] (proc_lookupfd_common+0x60/0xc8)
[    4.074572] [<8013e79c>] (proc_lookupfd_common+0x60/0xc8) from [<8013e844>] (proc_lookupfd+0x1c/0x24)
[    4.083812] [<8013e844>] (proc_lookupfd+0x1c/0x24) from [<800fa96c>] (d_alloc_and_lookup+0x54/0x70)
[    4.092878] [<800fa96c>] (d_alloc_and_lookup+0x54/0x70) from [<800fc978>] (do_lookup+0x208/0x32c)
[    4.101771] [<800fc978>] (do_lookup+0x208/0x32c) from [<800fd434>] (path_lookupat+0xfc/0x6c8)
[    4.110317] [<800fd434>] (path_lookupat+0xfc/0x6c8) from [<800fda2c>] (do_path_lookup+0x2c/0x68)
[    4.119123] [<800fda2c>] (do_path_lookup+0x2c/0x68) from [<800feab0>] (user_path_at_empty+0x68/0x98)
[    4.128275] [<800feab0>] (user_path_at_empty+0x68/0x98) from [<800feb04>] (user_path_at+0x24/0x2c)
[    4.137253] [<800feb04>] (user_path_at+0x24/0x2c) from [<800f4eb4>] (vfs_fstatat+0x44/0x74)
[    4.145621] [<800f4eb4>] (vfs_fstatat+0x44/0x74) from [<800f4f40>] (vfs_stat+0x2c/0x30)
[    4.153642] [<800f4f40>] (vfs_stat+0x2c/0x30) from [<800f5164>] (sys_stat64+0x24/0x40)
[    4.161579] [<800f5164>] (sys_stat64+0x24/0x40) from [<8000db00>] (ret_fast_syscall+0x0/0x3c)
[    4.173173] udevd[369]: starting version 173
[    4.585095] mmc0: new high speed SDHC card at address aaaa
[    4.591330] mmcblk0: mmc0:aaaa SD04G 3.69 GiB 
[    4.597998]  mmcblk0: p1 p2 p3
[    5.384460] mmc1: host does not support reading read-only switch. assuming write-enable.
[    5.796502] mmc1: new high speed SDHC card at address 1234
[    5.803063] mmcblk1: mmc1:1234 SA04G 3.68 GiB 
[    5.810156]  mmcblk1: p1 p2 p3
[    6.032487] EXT4-fs (mmcblk0p3): INFO: recovery required on readonly filesystem
[    6.039849] EXT4-fs (mmcblk0p3): write access will be enabled during recovery
[    6.064099] EXT4-fs (mmcblk0p3): recovery complete
[    6.074078] EXT4-fs (mmcblk0p3): mounted filesystem with ordered data mode. Opts: (null)

Last login: Thu Jan  1 00:00:27 UTC 1970 on tty1
Welcome to Linaro 11.10 (development branch) (GNU/Linux 3.2.0-rc1+ armv7l)

 * Documentation:  https://wiki.linaro.org/
root@linaro-nano:~# 
---->8---

Since I see this only on the v3.2-rc1 kernel, I suspect it is a v3.2-rc1
kernel issue, and I have reported it on LKML [1].  I'm posting it here for
information, in the hope that people running other platforms can try to
reproduce the issue and confirm it is a common rather than imx-only issue.

Regards,
Shawn

[1] https://lkml.org/lkml/2011/11/11/120


_______________________________________________
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev
