On Thu, Oct 27, 2016 at 12:15:52PM -0700, Tony Lindgren wrote:
> * Johan Hovold <[email protected]> [161027 11:46]:
> > On Thu, Oct 27, 2016 at 10:40:17AM -0700, Tony Lindgren wrote:
> > > diff --git a/drivers/usb/musb/musb_gadget.c b/drivers/usb/musb/musb_gadget.c
> > > --- a/drivers/usb/musb/musb_gadget.c
> > > +++ b/drivers/usb/musb/musb_gadget.c
> > > @@ -1222,6 +1222,13 @@ void musb_ep_restart(struct musb *musb, struct musb_request *req)
> > >  		rxstate(musb, req);
> > >  }
> > > rxstate(musb, req);
> > > }
> > >
> > > +void musb_ep_restart_resume_work(struct musb *musb, void *data)
> > > +{
> > > +	struct musb_request *req = data;
> > > +
> > > +	musb_ep_restart(musb, req);
> >
> > This one is supposed to be called with musb->lock held (according to the
> > function header anyway).
>
> Good point, yeah that calls the monster functions txstate and rxstate.
>
> > > static int musb_gadget_queue(struct usb_ep *ep, struct usb_request *req,
> > > gfp_t gfp_flags)
> > > {
> > > @@ -1255,7 +1262,7 @@ static int musb_gadget_queue(struct usb_ep *ep, struct usb_request *req,
> > >
> > >  	map_dma_buffer(request, musb, musb_ep);
> > >
> > > -	pm_runtime_get_sync(musb->controller);
> > > +	pm_runtime_get(musb->controller);
> > >  	spin_lock_irqsave(&musb->lock, lockflags);
> > >
> > > /* don't queue if the ep is down */
> > > @@ -1271,8 +1278,13 @@ static int musb_gadget_queue(struct usb_ep *ep, struct usb_request *req,
> > >  	list_add_tail(&request->list, &musb_ep->req_list);
> > >
> > >  	/* it this is the head of the queue, start i/o ... */
> > > -	if (!musb_ep->busy && &request->list == musb_ep->req_list.next)
> > > -		musb_ep_restart(musb, request);
> > > +	if (!musb_ep->busy && &request->list == musb_ep->req_list.next) {
> > > +		if (pm_runtime_active(musb->controller))
> > > +			musb_ep_restart(musb, request);
> > > +		else
> > > +			musb_queue_on_resume(musb, musb_ep_restart_resume_work,
> > > +					     request);
> > > +	}
> >
> > But then this looks like it could trigger an ABBA deadlock as musb->lock
> > is held while queue_on_resume() takes musb->list_lock, and
> > musb_run_pending() would take the same locks in the reverse order.
>
> It seems we can avoid that by locking only list_add_tail() and list_del():
>
> 	list_for_each_entry_safe(w, _w, &musb->resume_work, node) {
> 		spin_lock_irqsave(&musb->list_lock, flags);
> 		list_del(&w->node);
> 		spin_unlock_irqrestore(&musb->list_lock, flags);
> 		if (w->callback)
> 			w->callback(musb, w->data);
> 		devm_kfree(musb->controller, w);
> 	}
I think you still need to hold the lock while traversing the list (even
if you temporarily release it during the callback). With the lock
dropped mid-iteration, a concurrent list_add_tail() can change the links
the iterator is about to follow; list_for_each_entry_safe() only guards
against a list_del() of the current entry, not against unlocked
traversal.

But this is related to another issue: what if a new callback is
registered while musb_runtime_resume() is running? It seems we could
end up queueing work which might never be executed (e.g. if it is
queued just after musb_run_pending() is done, or while the last
callback is being processed).
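Purely for illustration (not actual musb code, and all names here are
hypothetical): the usual way to make such a drain safe is to pop one
entry at a time under the lock and invoke each callback with the lock
dropped, re-reading the head each time around. A userspace sketch, with
a pthread mutex standing in for the spinlock and a hand-rolled singly
linked list standing in for list_head:

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the musb resume-work structures. */
struct resume_work {
	struct resume_work *next;
	void (*callback)(void *data);
	void *data;
};

struct work_queue {
	pthread_mutex_t lock;		/* plays the role of musb->list_lock */
	struct resume_work *head;
};

/* Append a work item; may run concurrently with drain_pending(). */
static int queue_on_resume(struct work_queue *q,
			   void (*cb)(void *), void *data)
{
	struct resume_work *w = malloc(sizeof(*w));
	struct resume_work **p;

	if (!w)
		return -1;
	w->callback = cb;
	w->data = data;
	w->next = NULL;

	pthread_mutex_lock(&q->lock);
	for (p = &q->head; *p; p = &(*p)->next)
		;			/* find the tail */
	*p = w;
	pthread_mutex_unlock(&q->lock);
	return 0;
}

/*
 * Pop one entry at a time under the lock; run the callback unlocked.
 * The list is never traversed without the lock held, and because the
 * head is re-read after each callback, entries queued while a callback
 * runs are still picked up before the loop exits.
 */
static void drain_pending(struct work_queue *q)
{
	pthread_mutex_lock(&q->lock);
	while (q->head) {
		struct resume_work *w = q->head;

		q->head = w->next;
		pthread_mutex_unlock(&q->lock);
		if (w->callback)
			w->callback(w->data);
		free(w);
		pthread_mutex_lock(&q->lock);
	}
	pthread_mutex_unlock(&q->lock);
}

/* Demo callback: increments the int that data points to. */
static void add_to_sum(void *data)
{
	*(int *)data += 1;
}
```

Note the sketch still leaves the window described above if a producer
can queue an entry after drain_pending() returns; in the driver that
would have to be closed separately, e.g. by only queueing work while
runtime-suspended under the same lock.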
> Or do you have some better ideas?
Not right now, sorry, and I'll be away from keyboard for the rest of
the day, I'm afraid.
Johan