Hi all,
I am testing the performance of the Xen netfront/netback driver with
multi-queue support. The throughput from a domU to a remote dom0 is 9.2 Gb/s,
but the throughput from a domU to a remote domU is only 3.6 Gb/s, so I
suspected the bottleneck was the path from the receiving dom0 to its local
domU. However, in our tests the throughput from dom0 to a local domU is
5.8 Gb/s.
Furthermore, if we send packets from one domU to three domUs on a different
host simultaneously, the aggregate throughput reaches 9 Gb/s. It seems the
bottleneck is on the receive side.
After some analysis, I found that even with max_queues for netfront/netback
set to 4, there are some strange results (a script to check the domU side is
sketched right after this list):
1. In the domU, only one RX queue handles the softirq load.
2. In dom0, only two of the four netback queue processes are ever scheduled;
the other two are never scheduled.
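To see which RX queues are actually taking interrupts, here is a minimal
sketch of the check I mean (it assumes the netfront queue IRQs appear in
/proc/interrupts with names like "eth0-q0-rx"; adjust the pattern to
whatever your domU actually shows):

#!/usr/bin/env python
# Snapshot /proc/interrupts twice and print the per-CPU deltas for the
# netfront queue IRQs, to see which RX queues actually fire.
# NOTE: the "eth0-q" name pattern is an assumption -- adjust it to what
# `grep eth0 /proc/interrupts` shows on your domU.
import sys
import time

PATTERN = sys.argv[1] if len(sys.argv) > 1 else "eth0-q"
INTERVAL = 5  # seconds between snapshots

def snapshot():
    counts = {}
    with open("/proc/interrupts") as f:
        cpus = len(f.readline().split())  # header row: CPU0 CPU1 ...
        for line in f:
            fields = line.split()
            if PATTERN in fields[-1]:
                counts[fields[-1]] = [int(x) for x in fields[1:1 + cpus]]
    return counts

before = snapshot()
time.sleep(INTERVAL)
after = snapshot()

for name in sorted(after):
    deltas = [a - b for a, b in zip(after[name], before[name])]
    print("%-16s %s" % (name, " ".join("%8d" % d for d in deltas)))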
Are there any issues in my test? In theory, can we achieve 9~10 Gb/s between
domUs on different hosts using netfront/netback?
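As far as I understand, netfront/netback select a queue per flow by hashing,
so a single TCP stream only ever exercises one queue pair. To rule that out
on the single-receiver path, here is a minimal sketch that drives several
parallel netperf streams at one receiver and sums the results (the receiver
address and stream count below are placeholders):

#!/usr/bin/env python
# Run N parallel netperf TCP_STREAM tests against the same receiver and
# sum the reported throughput.  Each netperf process gets its own source
# port, so the flows should hash onto different netfront/netback queues.
import subprocess
import sys

HOST = sys.argv[1] if len(sys.argv) > 1 else "192.168.0.2"  # receiver domU (placeholder)
STREAMS = 4    # match the configured queue count
DURATION = 30  # seconds

procs = [subprocess.Popen(["netperf", "-H", HOST, "-t", "TCP_STREAM",
                           "-l", str(DURATION), "-P", "0"],
                          stdout=subprocess.PIPE, universal_newlines=True)
         for _ in range(STREAMS)]

total = 0.0
for p in procs:
    out, _ = p.communicate()
    # With -P 0 netperf prints a single result line; the throughput
    # (in 10^6 bits/s) is the last field.
    total += float(out.split()[-1])

print("aggregate throughput: %.1f Mb/s" % total)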
The testing environment details are as follows:
1. Hardware:
a. CPU: Intel(R) Xeon(R) CPU E5645 @ 2.40GHz, 2 sockets x 6 cores, with
Hyper-Threading enabled
b. NIC: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection
(rev 01)
2. Software:
a. HostOS: SLES 12 (kernel 3.16-7, git commit
d0335e4feea0d3f7a8af3116c5dc166239da7521)
b. NIC driver: ixgbe 3.21.2
c. OVS: 2.1.3
d. MTU: 1600
e. Dom0: 6 vCPUs, 6 GB RAM
f. Queue number: 4 (a verification sketch follows section 4 below)
g. Xen: 4.4
h. DomU: 4 vCPUs, 4 GB RAM
3. Networking Environment:
a. All network flows are transmitted/received through OVS.
b. The sender and receiver servers are connected directly via their 10GbE
NICs.
4. Testing Tools:
a. Sender: netperf
b. Receiver: netserver
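For completeness, here is a minimal sketch of how the queue configuration
above could be verified; the /sys/module parameter paths and the
"vifX.Y-qZ" thread-name pattern are assumptions based on the 3.16
multi-queue patches, so they may need adjusting on other kernels:

#!/usr/bin/env python
# Confirm that the multi-queue setup actually took effect.  Run the
# module-parameter check in the relevant domain (netfront in the domU,
# netback in dom0); the thread count only applies in dom0.
import glob
import os.path

def read_param(path):
    if os.path.exists(path):
        with open(path) as f:
            print("%s = %s" % (path, f.read().strip()))
    else:
        print("%s not present" % path)

# Advertised queue limits (module parameters).
read_param("/sys/module/xen_netfront/parameters/max_queues")  # domU
read_param("/sys/module/xen_netback/parameters/max_queues")   # dom0

# In dom0, each netback queue runs its own kernel thread(s), with names
# like "vif1.0-q0" (assumed pattern); count them per vif.
threads = {}
for comm in glob.glob("/proc/[0-9]*/comm"):
    try:
        with open(comm) as f:
            name = f.read().strip()
    except IOError:
        continue
    if name.startswith("vif") and "-q" in name:
        vif = name.split("-q")[0]
        threads[vif] = threads.get(vif, 0) + 1

for vif, n in sorted(threads.items()):
    print("%s: %d queue thread(s)" % (vif, n))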
----------
zhangleiqiang (Trump)
Best Regards