On Mon, 25 Mar 2013 16:21:22 -0400 Sasha Levin <sasha.le...@oracle.com> wrote:
> On 03/20/2013 03:55 PM, Rik van Riel wrote:
> > Include lkml in the CC: this time... *sigh*
> >
> > ---8<---
> >
> > This series makes the sysv semaphore code more scalable,
> > by reducing the time the semaphore lock is held, and making
> > the locking more scalable for semaphore arrays with multiple
> > semaphores.
>
> Hi Rik,
>
> I'm getting the following false positives from lockdep:

Does this patch fix it?

Andrew, this looks like another one for the queue...

---8<---
Subject: [PATCH -mm -next] ipc,sem: fix lockdep false positive

When locking all the semaphores inside a sem_array, the kernel ends up
locking a large number of locks with identical lockdep status. This
trips up lockdep. Annotate the code to prevent such warnings.

Signed-off-by: Rik van Riel <r...@redhat.com>
---
 ipc/sem.c | 4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/ipc/sem.c b/ipc/sem.c
index 450248e..f46441a 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -357,7 +357,7 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
 		spin_lock(&sma->sem_perm.lock);
 		for (i = 0; i < sma->sem_nsems; i++) {
 			struct sem *sem = sma->sem_base + i;
-			spin_lock(&sem->lock);
+			spin_lock_nested(&sem->lock, SINGLE_DEPTH_NESTING);
 		}
 		locknum = -1;
 	}
@@ -558,7 +558,7 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 	for (i = 0; i < nsems; i++) {
 		INIT_LIST_HEAD(&sma->sem_base[i].sem_pending);
 		spin_lock_init(&sma->sem_base[i].lock);
-		spin_lock(&sma->sem_base[i].lock);
+		spin_lock_nested(&sma->sem_base[i].lock, SINGLE_DEPTH_NESTING);
 	}

 	sma->complex_count = 0;
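For context, here is a minimal sketch of the locking pattern the patch
annotates. The toy_array, toy_sem, toy_init, toy_lock_all and
toy_unlock_all names are hypothetical and exist only for illustration;
only spin_lock_nested() and SINGLE_DEPTH_NESTING come from the patch
itself.

/*
 * Hypothetical sketch (toy_* names are made up, not from ipc/sem.c)
 * of an outer lock guarding an array of per-element spinlocks.
 */
#include <linux/spinlock.h>

#define TOY_NSEMS	4

struct toy_sem {
	spinlock_t lock;
};

struct toy_array {
	spinlock_t global_lock;		/* plays the role of sem_perm.lock */
	struct toy_sem sems[TOY_NSEMS];	/* per-element locks, one lockdep class */
};

static void toy_init(struct toy_array *a)
{
	int i;

	spin_lock_init(&a->global_lock);
	/* all per-element locks are initialised at the same call site,
	 * so lockdep puts them in the same class */
	for (i = 0; i < TOY_NSEMS; i++)
		spin_lock_init(&a->sems[i].lock);
}

/*
 * Taking several locks of the same class in a row looks like recursive
 * locking to lockdep, even though the outer lock serialises this path.
 * The patch's annotation, spin_lock_nested(..., SINGLE_DEPTH_NESTING),
 * marks the acquisition as intentional nesting, which is what it relies
 * on to suppress the warning.
 */
static void toy_lock_all(struct toy_array *a)
{
	int i;

	spin_lock(&a->global_lock);
	for (i = 0; i < TOY_NSEMS; i++)
		spin_lock_nested(&a->sems[i].lock, SINGLE_DEPTH_NESTING);
}

static void toy_unlock_all(struct toy_array *a)
{
	int i;

	/* release in reverse order, then drop the outer lock */
	for (i = TOY_NSEMS - 1; i >= 0; i--)
		spin_unlock(&a->sems[i].lock);
	spin_unlock(&a->global_lock);
}

The annotation only changes what lockdep is told about the nesting; the
actual locking order is unchanged, since the outer lock is still taken
before any of the per-element locks.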