Takashi,
I've been seeing this on one machine since around 3.3 (perhaps earlier; I forget).
I reported it a while ago, and you had me test some patch that didn't make any
difference; then it fell off my radar.

        Dave

=============================================
[ INFO: possible recursive locking detected ]
3.6.0+ #31 Not tainted
---------------------------------------------
pulseaudio/1022 is trying to acquire lock:
blocked:  (&(&substream->self_group.lock)->rlock/1){......}, instance: ffff88009befc140, at: [<ffffffffa04ba173>] snd_pcm_action_group+0xa3/0x240 [snd_pcm]

but task is already holding lock:
held:     (&(&substream->self_group.lock)->rlock/1){......}, instance: ffff88009bee2190, at: [<ffffffffa04ba173>] snd_pcm_action_group+0xa3/0x240 [snd_pcm]

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&(&substream->self_group.lock)->rlock/1);
  lock(&(&substream->self_group.lock)->rlock/1);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

4 locks on stack by pulseaudio/1022:
 #0: held:     (snd_pcm_link_rwlock){......}, instance: ffffffffa04c8138, at: [<ffffffffa04baee2>] snd_pcm_drop+0x62/0x110 [snd_pcm]
 #1: held:     (&(&substream->self_group.lock)->rlock){......}, instance: ffff88009bee60f0, at: [<ffffffffa04baeea>] snd_pcm_drop+0x6a/0x110 [snd_pcm]
 #2: blocked:  (&(&substream->group->lock)->rlock){......}, instance: ffff8800b4e0dde8, at: [<ffffffffa04ba4ae>] snd_pcm_action+0x3e/0xb0 [snd_pcm]
 #3: held:     (&(&substream->self_group.lock)->rlock/1){......}, instance: ffff88009bee2190, at: [<ffffffffa04ba173>] snd_pcm_action_group+0xa3/0x240 [snd_pcm]

stack backtrace:
Pid: 1022, comm: pulseaudio Not tainted 3.6.0+ #31
Call Trace:
 [<ffffffff810d8395>] __lock_acquire+0x6f5/0x1b80
 [<ffffffff810d7fa7>] ? __lock_acquire+0x307/0x1b80
 [<ffffffff81021dd3>] ? native_sched_clock+0x13/0x80
 [<ffffffff810d9ef1>] lock_acquire+0xa1/0x1f0
 [<ffffffffa04ba173>] ? snd_pcm_action_group+0xa3/0x240 [snd_pcm]
 [<ffffffff816c9e24>] _raw_spin_lock_nested+0x44/0x80
 [<ffffffffa04ba173>] ? snd_pcm_action_group+0xa3/0x240 [snd_pcm]
 [<ffffffffa04ba173>] snd_pcm_action_group+0xa3/0x240 [snd_pcm]
 [<ffffffffa04ba4e1>] snd_pcm_action+0x71/0xb0 [snd_pcm]
 [<ffffffffa04ba53a>] snd_pcm_stop+0x1a/0x20 [snd_pcm]
 [<ffffffffa04baf04>] snd_pcm_drop+0x84/0x110 [snd_pcm]
 [<ffffffffa04bccd8>] snd_pcm_common_ioctl1+0x4f8/0xc00 [snd_pcm]
 [<ffffffff810d4f4f>] ? lock_release_holdtime.part.26+0xf/0x180
 [<ffffffffa04bd770>] snd_pcm_playback_ioctl1+0x60/0x2e0 [snd_pcm]
 [<ffffffffa04bda24>] snd_pcm_playback_ioctl+0x34/0x40 [snd_pcm]
 [<ffffffff811d9cf9>] do_vfs_ioctl+0x99/0x5a0
 [<ffffffff812d9b07>] ? file_has_perm+0x97/0xb0
 [<ffffffff811da291>] sys_ioctl+0x91/0xb0
 [<ffffffff816d4290>] tracesys+0xdd/0xe2
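
For reference, the "missing lock nesting notation" hint is about the
spin_lock_nested() subclass annotation. Judging by the "/1" suffix on
the lock class above, snd_pcm_action_group() already annotates the
inner locks with subclass 1; the pattern is roughly the following
(a paraphrased sketch, not verbatim kernel code):

	/* lock every other substream in the group at subclass 1 */
	snd_pcm_group_for_each_entry(s, substream) {
		if (do_lock && s != substream)
			/* subclass 1 -- the "/1" in the report */
			spin_lock_nested(&s->self_group.lock,
					 SINGLE_DEPTH_NESTING);
		...
	}

A subclass only distinguishes one extra lock of a given class, though:
with three or more substreams linked, two self_group.lock instances
(ffff88009bee2190 held as lock #3, ffff88009befc140 blocked) end up
taken at subclass 1 at the same time, and lockdep cannot tell that
apart from re-acquiring the same lock, even though the instances differ.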
