On 27.02.2020 at 11:28, Coiby Xu wrote:
> > > we still need a customized vu_message_read because libvhost-user
> > > assumes we will always get a full-size VhostUserMsg and hasn't
> > > taken care of the short read case. I will improve libvhost-user's
> > > vu_message_read by making it keep reading from the socket until it
> > > gets enough bytes. I assume a short read is a rare case, so the
> > > introduced performance penalty would be negligible.
> >
> > In any case, please make sure that we use the QIOChannel functions
> > called from a coroutine in QEMU so that it will never block, but the
> > coroutine can just yield while it's waiting for more bytes.
>
> But if I am not wrong, libvhost-user is supposed to be independent of
> the main QEMU code. So it can't use the QIOChannel functions if we
> simply modify the existing vu_message_read to address the short read
> issue. In v3 & v4, I extended libvhost-user to allow vu_message_read
> to be replaced by one which depends on the main QEMU code. I'm not
> sure which way is better.

The way your latest patches have it, with a separate read function,
works for me. You could probably change libvhost-user to reimplement
the same functionality, and it might be an improvement for other users
of the library, but it's also code duplication and doesn't provide more
value in the context of the vhost-user export in QEMU.

The point that's really important to me is that we must never block
when we run inside QEMU, because that would actually stall the guest.
This means that busy waiting in a tight loop until read() returns
enough bytes is not acceptable in QEMU.
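To make that concrete, here is a rough, untested sketch of the kind of
read loop I have in mind. The function name read_vhost_user_header and
the VHOST_USER_HDR_SIZE constant for the fixed-size message header are
just placeholders for illustration; the part that matters is yielding
on QIO_CHANNEL_ERR_BLOCK instead of blocking or spinning:

#include "qemu/osdep.h"
#include "io/channel.h"
#include "qemu/coroutine.h"
#include "contrib/libvhost-user/libvhost-user.h"

/*
 * Sketch: read a full vhost-user message header from a non-blocking
 * QIOChannel. Must run in coroutine context. On a short read we
 * yield until the channel becomes readable again, so the thread is
 * never blocked and the guest is never stalled.
 */
static bool coroutine_fn read_vhost_user_header(QIOChannel *ioc,
                                                VhostUserMsg *msg)
{
    char *buf = (char *)msg;
    size_t remaining = VHOST_USER_HDR_SIZE; /* placeholder */

    while (remaining > 0) {
        struct iovec iov = { .iov_base = buf, .iov_len = remaining };
        ssize_t n = qio_channel_readv(ioc, &iov, 1, NULL);

        if (n == QIO_CHANNEL_ERR_BLOCK) {
            /* Nothing to read right now: yield, don't spin */
            qio_channel_yield(ioc, G_IO_IN);
            continue;
        }
        if (n <= 0) {
            /* 0 means EOF (client went away), negative is an error */
            return false;
        }
        buf += n;
        remaining -= n;
    }
    return true;
}

A real version also has to collect any file descriptors passed with the
message (qio_channel_readv_full() with fds/nfds) and read the payload
with the same loop, but the yield-on-short-read pattern stays the same.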
Kevin

> On Thu, Feb 27, 2020 at 6:02 PM Kevin Wolf <kw...@redhat.com> wrote:
> >
> > On 27.02.2020 at 10:53, Coiby Xu wrote:
> > > Thank you for reminding me of this socket short read issue! It
> > > seems we still need a customized vu_message_read because
> > > libvhost-user assumes we will always get a full-size VhostUserMsg
> > > and hasn't taken care of the short read case. I will improve
> > > libvhost-user's vu_message_read by making it keep reading from the
> > > socket until it gets enough bytes. I assume a short read is a rare
> > > case, so the introduced performance penalty would be negligible.
> >
> > In any case, please make sure that we use the QIOChannel functions
> > called from a coroutine in QEMU so that it will never block, but the
> > coroutine can just yield while it's waiting for more bytes.
> >
> > Kevin
> >
> > > On Thu, Feb 27, 2020 at 3:41 PM Stefan Hajnoczi
> > > <stefa...@redhat.com> wrote:
> > > >
> > > > On Wed, Feb 26, 2020 at 11:18:41PM +0800, Coiby Xu wrote:
> > > > > Hi Stefan,
> > > > >
> > > > > Thank you for reviewing my code!
> > > > >
> > > > > I tried to reach you on IRC. But somehow either you missed my
> > > > > message or I missed your reply. So I will reply by email
> > > > > instead.
> > > > >
> > > > > If we use qio_channel_set_aio_fd_handler to monitor the G_IO_IN
> > > > > event, i.e. use vu_dispatch as the read handler, then we can
> > > > > re-use vu_message_read. And "removing the blocking recv from
> > > > > libvhost-user" isn't necessary because "the operation of poll()
> > > > > and ppoll() is not affected by the O_NONBLOCK flag", even
> > > > > though we use qio_channel_set_blocking before calling
> > > > > qio_channel_set_aio_fd_handler to make recv non-blocking.
> > > >
> > > > I'm not sure I understand. poll() just says whether the file
> > > > descriptor is readable. It does not say whether enough bytes are
> > > > readable :). So our callback will be invoked if there is 1 byte
> > > > ready, but when we try to read 20 bytes, either it will block
> > > > (without O_NONBLOCK) or return only 1 byte (with O_NONBLOCK).
> > > > Neither case is okay, so I expect that code changes will be
> > > > necessary.
> > > >
> > > > But please go ahead and send the next revision and I'll take a
> > > > look.
> > > >
> > > > Stefan
> > >
> > > --
> > > Best regards,
> > > Coiby
>
> --
> Best regards,
> Coiby