Hi,

I'm now implementing vhost-user block device backend
https://patchew.org/QEMU/20200309100342.14921-1-coiby...@gmail.com/
and want to use chardev to help manage vhost-user client connections
and read socket message. However there are two issues that need to be
addressed.

Firstly, chardev isn't suitable for the case where the exported drive
runs in an IOThread, because for now chardev uses GSource to dispatch
socket fd events. So I have to specify which IOThread the exported
drive is using when launching the vhost-user block device backend,
for example with the following syntax:

  -drive file=file.img,id=disk \
  -device virtio-blk,drive=disk,iothread=iothread0 \
  -object vhost-user-blk-server,node-name=disk,chardev=mon1,iothread=iothread0 \
  -object iothread,id=iothread0 \
  -chardev socket,id=mon1,path=/tmp/vhost-user-blk_vhost.socket,server,nowait

Then iothread_get_g_main_context(IOThread *iothread) has to be called
so that the chardev's GMainContext runs in the IOThread. If we used
AioContext to dispatch socket fd events instead, we wouldn't need to
specify the IOThread twice. Besides, aio_poll() is faster than
g_main_loop_run(). A rough sketch of both approaches follows.
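
To make this concrete, here is a minimal sketch against the current
QEMU APIs (the vu_blk_* callbacks and the fields of s are made-up
names for illustration; the signatures of qemu_chr_fe_set_handlers()
and aio_set_fd_handler() are as I read them in the tree, so please
double-check). Today the backend has to attach the chardev to the
IOThread's GMainContext explicitly:

  /* GSource-based dispatch: fetch the IOThread's GMainContext and hand
   * it to qemu_chr_fe_set_handlers() so the chardev callbacks fire in
   * that thread rather than in the main loop. */
  GMainContext *ctx = iothread_get_g_main_context(s->iothread);
  qemu_chr_fe_set_handlers(&s->chr, vu_blk_can_read, vu_blk_read,
                           vu_blk_event, NULL, s, ctx, true);

With AioContext-based dispatch, the fd handler could instead follow
whatever context the drive already runs in, so the backend object
wouldn't need its own iothread= property:

  /* AioContext-based dispatch (what I'm proposing): register the
   * socket fd with the drive's own AioContext, obtained from the
   * BlockBackend, instead of a separate GMainContext. */
  AioContext *ctx = blk_get_aio_context(s->blk);
  aio_set_fd_handler(ctx, s->fd, false /* is_external */,
                     vu_blk_read_cb, NULL, NULL, s);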

Secondly, socket chardev's async read handler (set through
qemu_chr_fe_set_handlers) doesn't take the case of a socket short read
into consideration. I plan to add one that makes use of
qio_channel_yield(), along the lines of the sketch below.
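
Here is roughly what I have in mind (a sketch only; the function name
is made up, but qio_channel_read(), qio_channel_yield() and
QIO_CHANNEL_ERR_BLOCK are the existing QIOChannel APIs): a coroutine
keeps reading until the whole buffer is filled, yielding back to the
event loop whenever the channel would block, so a short read just
means another loop iteration:

  /* Read exactly len bytes, tolerating short reads: yield to the event
   * loop when the channel would block and resume once it is readable
   * again. Must be called from coroutine context. */
  static ssize_t coroutine_fn vu_blk_read_full(QIOChannel *ioc,
                                               char *buf, size_t len)
  {
      size_t done = 0;

      while (done < len) {
          ssize_t n = qio_channel_read(ioc, buf + done, len - done, NULL);
          if (n == QIO_CHANNEL_ERR_BLOCK) {
              qio_channel_yield(ioc, G_IO_IN); /* wait until readable */
              continue;
          }
          if (n <= 0) {
              return -1; /* EOF or error */
          }
          done += n;
      }
      return done;
  }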

According to
[1] Improving the QEMU Event Loop - Linux Foundation Events
http://events17.linuxfoundation.org/sites/events/files/slides/Improving%20the%20QEMU%20Event%20Loop%20-%203.pdf

"Convert chardev GSource to aio or an equivalent source" (p.30) should have
been finished. I'm curious why the plan didn't continue. If it's desirable,
I'm going to finish the leftover work to resolve the aforementioned two issues.

Any suggestions would be appreciated.
Thank you!
