Nathan Lynch <nath...@linux.ibm.com> writes:
> Michael Ellerman <m...@ellerman.id.au> writes:
>
>> Nathan Lynch via B4 Relay <devnull+nathanl.linux.ibm....@kernel.org>
>> writes:
>>> From: Nathan Lynch <nath...@linux.ibm.com>
>>>
>>> On RTAS platforms there is a general restriction that the OS must not
>>> enter RTAS on more than one CPU at a time. This low-level
>>> serialization requirement is satisfied by holding a spin
>>> lock (rtas_lock) across most RTAS function invocations.
>> ...
>>> diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
>>> index 1fc0b3fffdd1..52f2242d0c28 100644
>>> --- a/arch/powerpc/kernel/rtas.c
>>> +++ b/arch/powerpc/kernel/rtas.c
>>> @@ -581,6 +652,28 @@ static const struct rtas_function *rtas_token_to_function(s32 token)
>>>  	return NULL;
>>>  }
>>>
>>> +static void __rtas_function_lock(struct rtas_function *func)
>>> +{
>>> +	if (func && func->lock)
>>> +		mutex_lock(func->lock);
>>> +}
>>
>> This is obviously going to defeat most static analysis tools.
>
> I guess it's not that obvious to me :-) Is it because the mutex_lock()
> is conditional? I'll improve this if it's possible.
Well maybe I'm not giving modern static analysis tools enough credit :)

But what I mean is that it's not easy to reason about what the function
does in isolation. ie. all you can say is that it may or may not lock a
mutex, and you can't say which mutex.

>> I assume lockdep is OK with it though?
>
> Seems to be, yes.

OK.

cheers