On Thu, Jul 10, 2014 at 01:04:54PM +0300, Chrysostomos Nanakos wrote:
> On 07/10/2014 03:23 AM, Jeff Cody wrote:
> >On Fri, Jun 27, 2014 at 11:24:08AM +0300, Chrysostomos Nanakos wrote:
> >>+err_exit:
> >>+    __sync_add_and_fetch(&segreq->failed, 1);
> >>+    if (segments_nr == 1) {
> >>+        if (__sync_add_and_fetch(&segreq->ref, -1) == 0) {
> >>+            g_free(segreq);
> >>+        }
> >>+    } else {
> >>+        if ((__sync_add_and_fetch(&segreq->ref, -segments_nr + i)) == 0) {
> >>+            g_free(segreq);
> >>+        }
> >>+    }
> >Don't we run the risk of leaking segreq here? The other place this is
> >freed is in xseg_request_handler(), but could we run into a race
> >condition where 's->stopping' is true, or even xseg_receive() just does
> >not return a request?
>
> If 's->stopping' is true, it means that _close() has been invoked. How
> does QEMU handle unserviced requests when someone invokes _close() in the
> meantime? Does it wait for the requests to finish and then exit, or does
> it exit silently without checking for pending requests?
>
> If xseg_receive() does not return an already submitted request, then the
> problem lies in the Archipelago stack. Someone should check why the
> pending requests are not being serviced and resolve the problem. The
> question here is the same as before: how does QEMU handle pending
> requests when _close() is invoked in the meantime?
>
> Until all pending requests are serviced, successfully or not, the segreq
> allocations will remain and not be freed. Another approach could have
> been a linked list that tracks all submitted requests and handles them
> accordingly on _close().
>
> Suggestions here are more than welcome!
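For what it's worth, the linked-list idea could look roughly like this.
This is only a sketch: the QLIST macros and QemuMutex are the usual QEMU
helpers, but the list head, lock, and helper names below are made up for
illustration and are not code from this patch:

  #include <glib.h>
  #include "qemu/queue.h"
  #include "qemu/thread.h"

  typedef struct ArchipelagoSegmentedRequest {
      size_t count;
      size_t total;
      int ref;
      int failed;
      /* hypothetical linkage so the request can sit on a driver-wide list */
      QLIST_ENTRY(ArchipelagoSegmentedRequest) next;
  } ArchipelagoSegmentedRequest;

  typedef struct BDRVArchipelagoState {
      /* hypothetical fields; inflight_lock would be initialized in
       * .bdrv_file_open() since the xseg thread completes requests
       * concurrently with the submission path */
      QemuMutex inflight_lock;
      QLIST_HEAD(, ArchipelagoSegmentedRequest) inflight;
      /* ... existing driver state ... */
  } BDRVArchipelagoState;

  static void track_request(BDRVArchipelagoState *s,
                            ArchipelagoSegmentedRequest *segreq)
  {
      qemu_mutex_lock(&s->inflight_lock);
      QLIST_INSERT_HEAD(&s->inflight, segreq, next);
      qemu_mutex_unlock(&s->inflight_lock);
  }

  static void untrack_request(BDRVArchipelagoState *s,
                              ArchipelagoSegmentedRequest *segreq)
  {
      qemu_mutex_lock(&s->inflight_lock);
      QLIST_REMOVE(segreq, next);
      qemu_mutex_unlock(&s->inflight_lock);
  }

  /* called from .bdrv_close() after the xseg thread has been stopped,
   * freeing anything that was never reaped by xseg_request_handler() */
  static void reap_inflight_requests(BDRVArchipelagoState *s)
  {
      ArchipelagoSegmentedRequest *req, *next_req;

      qemu_mutex_lock(&s->inflight_lock);
      QLIST_FOREACH_SAFE(req, &s->inflight, next, next_req) {
          QLIST_REMOVE(req, next);
          g_free(req);
      }
      qemu_mutex_unlock(&s->inflight_lock);
  }

That said, I don't think the extra bookkeeping is needed here: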
bdrv_close() drains all requests before invoking .bdrv_close(), so I think
there is no race condition in that case. Roughly, the generic block layer
behaves like this (a simplified sketch, not verbatim block.c):
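  void bdrv_close(BlockDriverState *bs)
  {
      /* complete all in-flight requests before the driver is torn down */
      bdrv_drain_all();
      bdrv_flush(bs);
      bdrv_drain_all();

      if (bs->drv) {
          /* only now does the driver's close callback run, so by the
           * time qemu_archipelago_close() sets s->stopping there are no
           * pending segreq allocations left to leak */
          bs->drv->bdrv_close(bs);
      }
      /* ... */
  }

Stefan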