On 2018/10/17 17:51, Jason Wang wrote:
> 
> On 2018/10/17 5:39 PM, Jason Wang wrote:
>>> Hi Jason and Stefan,
>>>
>>> Maybe I have found the reason for the bad performance.
>>>
>>> I found that pkt_len is limited to VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE (4K),
>>> which limits the bandwidth to 500~600MB/s. Once I increase it to 64K,
>>> the bandwidth improves about 3 times (~1500MB/s).
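
(For context, the limit I hit looks roughly like this in my tree; the exact
values and call site may differ across kernel versions, so please take this
as a sketch rather than a patch:)

/* include/linux/virtio_vsock.h (as I read it; values may differ) */
#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE	(1024 * 4)
#define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE		(1024 * 64)

/*
 * In the virtio transport send path (net/vmw_vsock/virtio_transport_common.c
 * in my tree) each send is clamped to the default rx buffer size, so a 64K
 * write from the application ends up split into 4K packets:
 */
if (pkt_len > VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE)
	pkt_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
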
>>
>>
>> Looks like the value was chosen as a balance between rx buffer size and
>> performance. Always allocating 64K, even for small packets, is wasteful
>> and puts pressure on guest memory. Virtio-net avoids this by introducing
>> the mergeable rx buffer, which allows a big packet to be scattered into
>> several buffers. We could reuse this idea, or revisit the idea of using
>> virtio-net/vhost-net as a transport for vsock.
>>
>> What's interesting is that the performance is still behind vhost-net.
>>
>> Thanks
>>
>>>
>>> By the way, I now send 64K from the application in a single call, and I
>>> don't use sg_init_one; I rewrote the function to pack the sg list,
>>> because pkt_len now spans multiple pages.
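
Concretely, the sg packing I mean looks something like the sketch below.
This is illustrative only (made-up function name, and it assumes the packet
buffer is vmalloc()ed and page-aligned), not the exact code I'm running:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/vmalloc.h>

/*
 * Pack a multi-page buffer into an sg table instead of using sg_init_one(),
 * since a 64K pkt_len covers several pages.
 */
static void vsock_pkt_to_sg(struct scatterlist *sg, void *buf, size_t len)
{
	unsigned int nents = DIV_ROUND_UP(len, PAGE_SIZE);
	unsigned int i;
	size_t off = 0;

	sg_init_table(sg, nents);
	for (i = 0; i < nents; i++) {
		size_t chunk = min_t(size_t, len - off, PAGE_SIZE);

		/* each page of the vmalloc()ed buffer becomes one sg entry */
		sg_set_page(&sg[i], vmalloc_to_page(buf + off), chunk, 0);
		off += chunk;
	}
}
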
>>>
>>> Thanks,
>>> Yiwen. 
> 
> 
> Btw, if you're using vsock for transferring large files, maybe it's more
> efficient to implement sendpage() for vsock so that sendfile()/splice() can work.
> 
> Thanks
>

I can't agree more.

Why is vhost_vsock still behind vhost_net?
Because I used sendfile() to test performance at first, and then I found
that vsock doesn't implement sendpage(), so the bandwidth couldn't be
increased. I therefore replaced sendfile() with read() and send(), which
adds extra switches between kernel and user mode, whereas sendfile() can
do zero copy. I think this is the main reason.
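
For reference, the two userspace paths I compared look roughly like the
sketch below. It is simplified, error handling is minimal, the AF_VSOCK
socket setup/connect is not shown, and the function names are mine:

#include <sys/types.h>
#include <sys/stat.h>
#include <sys/socket.h>
#include <sys/sendfile.h>
#include <unistd.h>

/*
 * Path 1: sendfile() -- the data moves file -> socket inside the kernel,
 * with no copy through a userspace buffer.  Getting the full benefit
 * depends on the socket side supporting sendpage(), which vsock doesn't.
 */
static ssize_t send_with_sendfile(int sock, int fd)
{
	struct stat st;
	off_t off = 0;

	if (fstat(fd, &st) < 0)
		return -1;
	while (off < st.st_size) {
		ssize_t n = sendfile(sock, fd, &off, st.st_size - off);

		if (n <= 0)
			return -1;
	}
	return (ssize_t)off;
}

/*
 * Path 2: read() + send() -- every block is copied kernel -> user and
 * then user -> kernel again, with two syscalls per block instead of one.
 */
static ssize_t send_with_read_send(int sock, int fd)
{
	static char buf[64 * 1024];	/* 64K per call, matching the test above */
	ssize_t total = 0, n;

	while ((n = read(fd, buf, sizeof(buf))) > 0) {
		ssize_t sent = 0;

		while (sent < n) {
			ssize_t m = send(sock, buf + sent, n - sent, 0);

			if (m < 0)
				return -1;
			sent += m;
		}
		total += sent;
	}
	return n < 0 ? -1 : total;
}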

Thanks.


