On 01/16/2018 01:33 PM, Jason Wang wrote:
On 01/15/2018 06:43 PM, Wei Wang wrote:
On 01/15/2018 04:34 PM, Jason Wang wrote:
On 01/15/2018 03:59 PM, Wei Wang wrote:
On 01/15/2018 02:56 PM, Jason Wang wrote:
On 01/12/2018 06:18 PM, Stefan Hajnoczi wrote:
I just fail to understand why we can't do software-defined networking or
storage with the existing virtio devices/drivers (or are there any
shortcomings that force us to invent new infrastructure).
Existing virtio-net works through a centralized vSwitch on the host, which
has the following disadvantages:
1) a long code/data path;
2) poor scalability; and
3) host CPU cycles sacrificed to run the vSwitch.
Please show me the numbers.
Sure. For 64B packet transmission between two VMs: vhost-user reports
~6.8 Mpps, and vhost-pci reports ~11 Mpps, which is ~1.62x faster.
This result is incomplete, so many questions are still left:
- What's the configuration of the vhost-user setup?
- What's the result with, e.g., 1500-byte packets?
- You said it improves scalability, but I can't reach that
conclusion just from what you provide here.
- You point to the long code/data path, but give no latency numbers to prove it.
We had an offline meeting with Jason. Future discussion will focus more
on the design.
Here is a summary of the additional results we collected for 64B packet
transmission, compared to ovs-dpdk (although we compare against ovs-dpdk
here, vhost-pci isn't meant to replace it: vhost-pci handles inter-VM
communication, and packets going to the outside world still go through a
traditional backend like ovs-dpdk):
1) 2-VM communication: over 1.6x higher throughput;
2) 22% shorter latency; and
3) in the 5-VM chain communication tests, ~6.5x higher throughput thanks
to vhost-pci's better scalability (a rough illustrative model of this
effect is sketched below).
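
To make the scalability point in 3) concrete, here is a rough
back-of-envelope model, not measured data: the Mpps figures in it are
made-up placeholders, and the real ratio depends on the vSwitch
configuration. The idea is that when every hop of an N-VM chain hairpins
through one central vSwitch, the vSwitch's fixed forwarding budget is
split across all hops, while each vhost-pci hop is an independent
point-to-point link.

/*
 * Illustrative model only -- NOT measured data.  It assumes a
 * centralized vSwitch whose fixed forwarding capacity is shared by
 * every hop of an N-VM chain, while vhost-pci gives each hop an
 * independent VM-to-VM link.  The capacities below are placeholders.
 */
#include <stdio.h>

int main(void)
{
    const double vswitch_capacity_mpps = 12.0; /* assumed total vSwitch budget */
    const double per_link_mpps         = 10.0; /* assumed per vhost-pci link   */

    for (int vms = 2; vms <= 5; vms++) {
        int hops = vms - 1;

        /* Every hop of the chain crosses the same central vSwitch,
         * so the vSwitch budget is split across all hops. */
        double central = vswitch_capacity_mpps / hops;

        /* Each vhost-pci hop is a separate point-to-point link, so
         * the chain rate stays close to a single link's rate. */
        double direct = per_link_mpps;

        printf("%d VMs (%d hops): central vSwitch ~%.1f Mpps, "
               "direct links ~%.1f Mpps (%.1fx)\n",
               vms, hops, central, direct, direct / central);
    }
    return 0;
}

With real per-hop capacities plugged in, the gap naturally widens as the
chain grows, which matches the trend (though not necessarily the exact
6.5x figure) we observed in the 5-VM tests.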
We'll provide 1500B test results later.
Best,
Wei