On Tue, Sep 17, 2013 at 07:10:44AM -0700, Mark Trumpold wrote:
> I am using the kernel functionality directly with the commands:
> echo platform >/sys/power/disk
> echo disk >/sys/power/state
>
> The following appears in dmesg when I attempt to hibernate:
>
> ====================================================
> [   38.881397] nbd (pid 1473: qemu-nbd) got signal 0
> [   38.881401] block nbd0: shutting down socket
> [   38.881404] block nbd0: Receive control failed (result -4)
> [   38.881417] block nbd0: queue cleared
> [   87.463133] block nbd0: Attempted send on closed socket
> [   87.463137] end_request: I/O error, dev nbd0, sector 66824
> ====================================================
>
> My environment:
> Debian: 6.0.5
> Kernel: 3.3.1
> Qemu userspace: 1.2.0
This could be a bug in the nbd client kernel module.
drivers/block/nbd.c:sock_xmit() does the following:

    result = kernel_recvmsg(sock, &msg, &iov, 1, size,
                            msg.msg_flags);

    if (signal_pending(current)) {
        siginfo_t info;
        printk(KERN_WARNING "nbd (pid %d: %s) got signal %d\n",
               task_pid_nr(current), current->comm,
               dequeue_signal_lock(current, &current->blocked, &info));
        result = -EINTR;
        sock_shutdown(nbd, !send);
        break;
    }

The signal number in the log output looks bogus: we shouldn't get 0,
since sock_xmit() blocks all signals except SIGKILL before calling
kernel_recvmsg().  I guess this is an artifact of the suspend-to-disk
operation; maybe the signal-pending flag is set on the process without
an actual signal being queued.  Perhaps someone with a better
understanding of the kernel internals can check this?  What happens
next is that the nbd kernel module shuts down the NBD connection.

As a workaround, please try running a separate nbd-client(1) process
and drop the qemu-nbd -c command-line argument.  This way the nbd
kernel module is driven by nbd-client(1) instead of the qemu-nbd
process, and you'll get the benefit of nbd-client's automatic
reconnect.

Stefan
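P.S. A minimal sketch of the workaround, in case it helps; the port,
image path, and nbd device below are placeholders, not taken from your
setup:

```shell
# Export the image over TCP instead of binding it with qemu-nbd -c.
# -t keeps qemu-nbd running across client disconnects, -p sets the port.
qemu-nbd -t -p 10809 /path/to/image.qcow2 &

# Attach the export through the kernel nbd driver using nbd-client,
# which will reconnect automatically if the connection drops.
nbd-client localhost 10809 /dev/nbd0

# ...and to detach the device again later:
nbd-client -d /dev/nbd0
```

Both commands need root for the /dev/nbd0 side, and qemu-nbd must be
reachable on the port you chose.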