On Sun, May 20, 2018 at 06:32:25AM +0100, Al Viro wrote:
> > +   spin_lock_irqsave(&ctx->ctx_lock, flags);
> > +   list_add_tail(&aiocb->ki_list, &ctx->delayed_cancel_reqs);
> > +   spin_unlock_irqrestore(&ctx->ctx_lock, flags);
> 
> ... and io_cancel(2) comes, finds it and inhume^Wcompletes it, leaving us 
> to...
> 
> > +   spin_lock(&req->head->lock);
> 
> ... get buggered on an attempt to dereference a pointer fetched from a
> freed and reused object.

FWIW, how painful would it be to pull the following trick:
        * insert into the wait queue under ->ctx_lock
        * have the wakeup do schedule_work(), with aio_complete() done from
the work function
        * have ->ki_cancel() grab the queue lock, remove the entry from the
queue and use the same schedule_work()

That way you'd get ->ki_cancel() with the same semantics it originally had
for everything - "ask politely to finish ASAP" - and it would be called in
the same locking environment for everyone: under ->ctx_lock.  The queue
lock nests inside ->ctx_lock; no magical flags, etc.
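
FWIW, a sketch of the shape I have in mind (struct layout and all the
names below - poll_iocb, aio_poll_wake() and friends - are made up for
illustration, not taken from the patch; the real thing would live in
fs/aio.c next to aio_complete() and the kioctx definition, and I haven't
tried to compile it):

#include <linux/list.h>
#include <linux/poll.h>
#include <linux/spinlock.h>
#include <linux/wait.h>
#include <linux/workqueue.h>

/* illustrative only; assumes the fs/aio.c internals are in scope */
struct poll_iocb {
	struct aio_kiocb	*iocb;	/* hypothetical backpointer */
	struct wait_queue_head	*head;	/* queue we are parked on */
	__poll_t		events;	/* mask seen at wakeup time */
	struct wait_queue_entry	wait;
	struct work_struct	work;
};

/* completion runs from process context, with no locks held */
static void aio_poll_work(struct work_struct *work)
{
	struct poll_iocb *req = container_of(work, struct poll_iocb, work);

	aio_complete(req->iocb, mangle_poll(req->events), 0);
}

/* wakeup callback; called with req->head->lock already held */
static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode,
			 int sync, void *key)
{
	struct poll_iocb *req = container_of(wait, struct poll_iocb, wait);

	req->events = key_to_poll(key);
	list_del_init(&wait->entry);	/* wake vs cancel: first one wins */
	schedule_work(&req->work);
	return 1;
}

/*
 * ->ki_cancel(); called with ->ctx_lock held.  Shown taking the
 * poll_iocb directly for brevity - the real one would dig it out
 * of the aio_kiocb.
 */
static int aio_poll_cancel(struct poll_iocb *req)
{
	spin_lock(&req->head->lock);	/* nests inside ->ctx_lock */
	if (!list_empty(&req->wait.entry)) {
		list_del_init(&req->wait.entry);
		schedule_work(&req->work);
	}
	spin_unlock(&req->head->lock);
	return 0;
}

/* submission side: publish under ->ctx_lock so io_cancel() can find it */
static void aio_poll_queue(struct kioctx *ctx, struct poll_iocb *req)
{
	init_waitqueue_func_entry(&req->wait, aio_poll_wake);
	INIT_WORK(&req->work, aio_poll_work);

	spin_lock_irq(&ctx->ctx_lock);
	/* list_add_tail(&aiocb->ki_list, ...) goes here, as in the patch */
	add_wait_queue(req->head, &req->wait);	/* queue lock nests inside */
	spin_unlock_irq(&ctx->ctx_lock);
}

Whichever of wakeup and ->ki_cancel() gets to list_del_init() first owns
the completion; both funnel into the same work item, so the use-after-free
above can't happen.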

The cost is a schedule_work() for each async poll-related completion, same
as you already have for fsync.  I don't know whether that's too costly or
not; it certainly simplifies things, but whether it's OK performance-wise...

Comments?
