On 07/09/2012 17:06, Bharata B Rao wrote:
> qemu_gluster_aio_event_reader() is the node->io_read in qemu_aio_wait().
>
> qemu_aio_wait() calls node->io_read(), which calls
> qemu_gluster_complete_aio().  Before we return to qemu_aio_wait(),
> many other things happen:
>
> bdrv_close() gets called from qcow2_create2().  This closes the
> gluster connection, closes the pipe, and does
> qemu_aio_set_fd_handler(read_pipe_fd, NULL, NULL, NULL, NULL), which
> results in the AioHandler node being deleted from the aio_handlers list.
>
> Now qemu_gluster_aio_event_reader() (node->io_read), which was called
> from qemu_aio_wait(), finally completes, goes ahead and accesses "node",
> which has already been deleted.  This causes a segfault.
>
> So I think option 1 (scheduling a BH from node->io_read) would be
> better for gluster.
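
For reference, "option 1" would look something like the sketch below.
GlusterAIOCB, acb->bh, s->fds[GLUSTER_FD_READ] and the exact signature of
qemu_gluster_complete_aio() are only my guesses at the driver's internals;
the point is that node->io_read just drains the pipe and schedules a
bottom half, so the completion runs after qemu_aio_wait() has finished
walking aio_handlers:

static void qemu_gluster_aio_bh(void *opaque)
{
    GlusterAIOCB *acb = opaque;

    qemu_bh_delete(acb->bh);
    acb->bh = NULL;

    /* Runs from qemu_bh_poll(), outside the aio_handlers walk, so a
     * bdrv_close() triggered by the completion callback can no longer
     * pull the AioHandler out from under qemu_aio_wait().
     */
    qemu_gluster_complete_aio(acb);
}

static void qemu_gluster_aio_event_reader(void *opaque)
{
    BDRVGlusterState *s = opaque;
    GlusterAIOCB *acb;
    ssize_t ret;

    /* Drain one completed request (an acb pointer) from the pipe... */
    do {
        ret = read(s->fds[GLUSTER_FD_READ], &acb, sizeof(acb));
    } while (ret < 0 && errno == EINTR);

    if (ret != sizeof(acb)) {
        return;
    }

    /* ...and defer the actual completion to a bottom half. */
    acb->bh = qemu_bh_new(qemu_gluster_aio_bh, acb);
    qemu_bh_schedule(acb->bh);
}
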
This is a bug that has to be fixed anyway.  There are provisions for it
in aio.c, but apparently they are broken.  Can you try this:

diff --git a/aio.c b/aio.c
index 0a9eb10..99b8b72 100644
--- a/aio.c
+++ b/aio.c
@@ -119,7 +119,7 @@ bool qemu_aio_wait(void)
         return true;
     }
 
-    walking_handlers = 1;
+    walking_handlers++;
 
     FD_ZERO(&rdfds);
     FD_ZERO(&wrfds);
@@ -147,7 +147,7 @@ bool qemu_aio_wait(void)
         }
     }
 
-    walking_handlers = 0;
+    walking_handlers--;
 
     /* No AIO operations? Get us out of here */
     if (!busy) {
@@ -159,7 +159,7 @@ bool qemu_aio_wait(void)
 
     /* if we have any readable fds, dispatch event */
     if (ret > 0) {
-        walking_handlers = 1;
+        walking_handlers++;
 
         /* we have to walk very carefully in case
          * qemu_aio_set_fd_handler is called while we're walking */
@@ -187,7 +187,7 @@ bool qemu_aio_wait(void)
             }
         }
 
-        walking_handlers = 0;
+        walking_handlers--;
     }
 
     return true;

Paolo
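
P.S. To spell out what those provisions are supposed to do: from memory,
the removal path in qemu_aio_set_fd_handler() already tries to defer the
free while a walk is in progress, roughly like this (don't take the exact
code literally):

    if (!io_read && !io_write) {
        if (node) {
            if (walking_handlers) {
                /* A walk is in progress: only mark the node as deleted
                 * and let the walker free it when it is done.
                 */
                node->deleted = 1;
            } else {
                /* No walk in progress: safe to free right away. */
                QLIST_REMOVE(node, node);
                g_free(node);
            }
        }
    }

With walking_handlers as a plain 0/1 flag, I suspect a nested
qemu_aio_wait() or qemu_aio_flush() somewhere on the bdrv_close() path
resets it to 0 on the way out; at that point the removal above frees the
node immediately even though the outer qemu_aio_wait() still holds a
pointer to it.  With a counter the value stays nonzero until the
outermost walk has finished, and in the meantime the node is only marked
deleted.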