On Tue, Nov 16, 2010 at 8:54 AM, M. Mohan Kumar <mo...@in.ibm.com> wrote:
> +static int read_openrequest(int sockfd, V9fsOpenRequest *request)
> +{
> +    int bytes, retval;
> +    retval = recv(sockfd, request, sizeof(request->data), 0);
> +    if (retval <= 0) {
> +        return -1;
> +    }
> +    bytes = retval;
> +    request->path.path = qemu_mallocz(request->data.path_len + 1);

Leaked on error.
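Something along these lines would avoid it (rough, untested sketch
only; I'm assuming qemu_free() is the matching release for
qemu_mallocz() and reusing the field names from the hunk above):

    request->path.path = qemu_mallocz(request->data.path_len + 1);
    retval = recv(sockfd, (void *)request->path.path,
                  request->data.path_len, 0);
    if (retval <= 0) {
        /* don't leak the partially built request on the error path */
        qemu_free((void *)request->path.path);
        request->path.path = NULL;
        return -1;
    }

Every later error return in this function needs the same cleanup for
whatever has been allocated so far.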
> +    retval = recv(sockfd, (void *)request->path.path,
> +                  request->data.path_len, 0);
> +    if (retval <= 0) {
> +        return -1;
> +    }
> +    bytes += retval;
> +    if (request->data.oldpath_len) {
> +        request->path.old_path =
> +            qemu_mallocz(request->data.oldpath_len + 1);

Leaked on error.

send/recv/read/write could be interrupted by a signal.  The patch does
not handle this.  There is a qemu_write_full() function that writes the
requested number of bytes and handles EINTR; the reads here need the
same treatment (a rough sketch follows below).

Speaking of signals, what about signal handlers that the main qemu
process has set up?  If a signal comes in then we'll start executing
the qemu signal handler code, which is wrong.  The subprocess needs to
either block signals or ignore all the possibly registered signals.

atexit(3) handlers will run when the forked process exits.  This could
also lead to weird behavior.

How does the subprocess terminate?  I only see error exit cases that
print something to stderr in the v9fs_chroot() main loop.

Stefan
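P.S. Here is a rough, untested sketch of the EINTR and signal points,
in case it helps.  recv_full() and chroot_child_setup() are names I
made up for this mail, not existing qemu helpers:

    #include <errno.h>
    #include <signal.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Keep reading until count bytes have arrived, retrying when
     * recv() is interrupted by a signal (EINTR). */
    static ssize_t recv_full(int sockfd, void *buf, size_t count)
    {
        size_t done = 0;

        while (done < count) {
            ssize_t n = recv(sockfd, (char *)buf + done, count - done, 0);
            if (n < 0) {
                if (errno == EINTR) {
                    continue;
                }
                return -1;
            }
            if (n == 0) {
                break;      /* peer closed the socket */
            }
            done += n;
        }
        return done;
    }

    /* Called in the forked helper before its main loop: drop every
     * handler the main qemu process may have installed so qemu's
     * handler code never runs in this process.  (Blocking them with
     * sigprocmask() would be the other option.) */
    static void chroot_child_setup(void)
    {
        int sig;

        for (sig = 1; sig < NSIG; sig++) {
            signal(sig, SIG_DFL);
        }
    }

The n == 0 case would also give the child a clean way to terminate:
when qemu closes its end of the socket the main loop can _exit(0) (not
exit(), so qemu's atexit handlers don't run) instead of only bailing
out on errors.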