On Thu, Oct 21, 2010 at 1:10 PM, Arun R Bharadwaj
<a...@linux.vnet.ibm.com> wrote:
> static ssize_t qemu_paio_return(struct qemu_paiocb *aiocb)
> {
>     ssize_t ret;
>
> -    mutex_lock(&lock);
>     ret = aiocb->ret;
> -    mutex_unlock(&lock);
> -
>     return ret;
> }
>
> @@ -536,14 +442,15 @@ static void paio_cancel(BlockDriverAIOCB *blockacb)
>     struct qemu_paiocb *acb = (struct qemu_paiocb *)blockacb;
>     int active = 0;
>
> -    mutex_lock(&lock);
>     if (!acb->active) {
> -        QTAILQ_REMOVE(&request_list, acb, node);
> -        acb->ret = -ECANCELED;
> +        if (!deque_threadletwork(&acb->work)) {
> +            acb->ret = -ECANCELED;
> +        } else {
> +            active = 1;
> +        }
>     } else if (acb->ret == -EINPROGRESS) {
>         active = 1;
>     }
> -    mutex_unlock(&lock);
>
>     if (active) {
>         /* fail safe: if the aio could not be canceled, we wait for
Here is the assembly listing of what happens next:

454:posix-aio-compat.c ****         while (qemu_paio_error(acb) == EINPROGRESS)
 539 0347 48F7D8             negq %rax
 540 034a 83F873             cmpl $115, %eax
 541 034d 7581               jne .L46
 543                     .L58:
 545 0350 EBFD               jmp .L58

This while loop is an infinite loop.  The compiler doesn't need to
load acb->ret from memory on each iteration.  The reason this loop
worked before threadlets is that qemu_paio_return() used to acquire a
lock to access acb->ret, forcing the value to be reloaded from memory
every time around the loop:

static ssize_t qemu_paio_return(struct qemu_paiocb *aiocb)
{
    ssize_t ret;

    mutex_lock(&lock);
    ret = aiocb->ret;
    mutex_unlock(&lock);

    return ret;
}

Please use synchronization to wait on the request like Anthony
suggested.

Stefan