On Thu, Aug 02, 2018 at 01:21:22AM +0100, Al Viro wrote:
> So what happens if
>       * we call aio_poll(), add the sucker to queue and see that we need
> to wait
>       * add to ->active_refs just as the wakeup comes

active_reqs, I guess.

>       * wakeup removes from queue and hits schedule_work()
>       * io_cancel() is called, triggering aio_poll_cancel(), which sees that
> we are not from queue and buggers off.  We are gone from ->active_refs.
>       * aio_poll_complete_work() is called, sees no ->cancelled
>       * aio_poll_complete_work() calls vfs_poll(), sees nothing interesting
> and puts us back on the queue.

So let me draw this up; we start with the following:

THREAD 1                                        THREAD 2

aio_poll
  vfs_poll(...)
    add_wait_queue()

  (no pending mask)

  spin_lock_irq(&ctx->ctx_lock);
  list_add_tail(..., &ctx->active_reqs)         aio_poll_wake
  spin_unlock_irq(&ctx->ctx_lock);

                                                (spin_trylock failed)
                                                list_del_init(&req->wait.entry);
                                                schedule_work(&req->work);
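
(For reference, a rough sketch of the wake-side path in the right-hand
column -- helper and field names are my approximation of the patch, not
quoted from it, and locking details are elided:)

static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode,
		int sync, void *key)
{
	struct poll_iocb *req = container_of(wait, struct poll_iocb, wait);
	struct aio_kiocb *iocb = container_of(req, struct aio_kiocb, poll);

	/*
	 * Called under the waitqueue lock, so ctx_lock can only be taken
	 * opportunistically.  In the diagram above aio_poll() still holds
	 * it for the list_add_tail(), so the trylock fails.
	 */
	if (spin_trylock(&iocb->ki_ctx->ctx_lock)) {
		/* inline completion: drop from ->active_reqs and complete */
		list_del(&iocb->ki_list);
		spin_unlock(&iocb->ki_ctx->ctx_lock);
		list_del_init(&req->wait.entry);
		aio_poll_complete(iocb, key_to_poll(key));
		return 1;
	}

	/* Contended: leave the waitqueue and punt completion to a worker. */
	list_del_init(&req->wait.entry);
	schedule_work(&req->work);
	return 1;
}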

Now switching to two new threads:

io_cancel thread                        worker thread

                                        vfs_poll()
                                          (mask = 0)

aio_poll_cancel
  (not on waitqueue, done)
  remove from active_reqs

                                          add_wait_queue()
                                          iocb still around
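
In code terms the hole looks roughly like this (again a sketch from my
reading of the patch, names approximate and locking elided) -- nothing
the worker does re-checks the iocb's ->active_reqs membership once the
cancel has given up:

static int aio_poll_cancel(struct kiocb *iocb)
{
	struct aio_kiocb *aiocb = container_of(iocb, struct aio_kiocb, rw);
	struct poll_iocb *req = &aiocb->poll;

	/*
	 * aio_poll_wake() already took us off the waitqueue before
	 * punting to the worker, so the cancel finds nothing to undo
	 * and never gets to set ->cancelled.  The io_cancel() caller
	 * still drops the iocb from ->active_reqs afterwards.
	 */
	if (list_empty(&req->wait.entry))
		return 0;

	list_del_init(&req->wait.entry);
	req->cancelled = true;
	schedule_work(&req->work);
	return 0;
}

static void aio_poll_complete_work(struct work_struct *work)
{
	struct poll_iocb *req = container_of(work, struct poll_iocb, work);
	struct aio_kiocb *iocb = container_of(req, struct aio_kiocb, poll);
	__poll_t mask;

	if (req->cancelled) {		/* not set in the scenario above */
		aio_poll_complete(iocb, 0);
		return;
	}

	/*
	 * Nothing pending, so vfs_poll() re-registers us on the
	 * waitqueue via the poll table callback -- but the iocb is no
	 * longer on ->active_reqs, so a later io_cancel() can never
	 * reach it again.
	 */
	mask = vfs_poll(req->file, &req->table.pt);
	if (!mask)
		return;

	aio_poll_complete(iocb, mask);
}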

> 
> Unless I'm misreading it, cancel will end up with iocb still around and now
> impossible to cancel...  What am I missing?

Yes, I think you are right. I'll see how I can handle that case.
One of the easiest options would be to only support aio poll on
file ops that support keyed wakeups; we'd just need to pass that
information up.
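
For what it's worth, "keyed" here means the waker passes the ready
events as the wake_up key, so the wake callback can trust
key_to_poll(key) instead of having to re-poll the file.  Purely
illustrative, with a made-up driver:

/*
 * Driver side (hypothetical example): a keyed wakeup publishes the
 * ready events as the wake_up key.
 */
struct some_driver {
	wait_queue_head_t waitq;
	/* ... */
};

static void some_driver_data_ready(struct some_driver *drv)
{
	/* data became readable; tell waiters which events are pending */
	wake_up_interruptible_poll(&drv->waitq, EPOLLIN | EPOLLRDNORM);
}

With that, the aio wake callback would see the pending mask directly in
key_to_poll(key) and could complete (or ignore) the wakeup without ever
going back through vfs_poll() -- which is the information we'd need to
pass up.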
