On 08/01/2015 02:33, trump_zhang wrote:
Hi,
I am trying to test the single-queue networking performance for
netback/netfront in upstream. My testing environment is as follows:
1. Using pkt-gen to send a single UDP flow from one host to a vm
which runs on another XEN host. The two hosts are connected by a 10GE
network (Intel 82599 NIC, which has multiqueue support)
2. The receiver XEN host uses xen 4.4, and dom0's OS is Linux
3.17.4 which already has multiqueue netback support
3. The receiver XEN host is a NUMA machine, and cpu0-cpu7 belong to the
same node
4. The receiver Dom0 has 6 vcpus and 6G memory, and the vcpus/memory are
"pinned" to the host by using the "dom0_vcpu_num=4 dom0_vcpus_pin
dom0_mem=6144M" args during boot
I guess you wanted to say dom0_vcpu_num=6.
5. The receiver VM has 4 vcpus and 4G memory, and vcpu0 is pinned to
pcpu 6, which means that vcpu0 runs on the same NUMA node as Dom0
It could be that Dom0 is using the same pcpu; see my previous mail:
http://lists.xen.org/archives/html/xen-devel/2015-01/msg00564.html
During testing, I have modified the sender host's IP address so
that the receiver host and vm behave as follows:
1. Queue 5 of the NIC in Dom0 handles the flow, which can be
confirmed from "/proc/interrupts". As a result, the "ksoftirqd/5"
process runs with some cpu usage during testing, which can be
confirmed from the output of the "top" command (a rough sketch of why
changing the sender IP selects a different NIC queue follows this list)
2. Queue 2 of the netback vif handles the flow, which can be
confirmed from the output of the "top" command in Dom0 (which shows that the
"vif1.0-guest-q2" process runs with some cpu usage)
3. The "ksfotirqd/0" (which running on vcpu0) in DomU runs with
some high cpu usage, and it seems that this process handles the soft
interrupt of RX
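For illustration, here is a rough, purely illustrative user-space sketch of
why modifying the sender's IP address steers the flow to a different NIC RX
queue: RSS-capable NICs such as the 82599 hash the flow tuple and use the
result to pick a receive queue. The hash function, addresses, and queue count
below are placeholders, not the keyed Toeplitz hash the hardware actually uses.

#include <stdint.h>
#include <stdio.h>

#define NUM_RX_QUEUES 16        /* assumed queue count, not necessarily the 82599's configuration */

/* Placeholder flow hash; real RSS hardware uses a keyed Toeplitz hash. */
static uint32_t demo_flow_hash(uint32_t saddr, uint32_t daddr,
                               uint16_t sport, uint16_t dport)
{
        uint32_t h = saddr;

        h = h * 31 + daddr;
        h = h * 31 + (((uint32_t)sport << 16) | dport);
        return h;
}

int main(void)
{
        uint32_t daddr = 0x0a000002;    /* hypothetical receiver address */

        /* The same UDP flow always hashes to the same RX queue... */
        printf("queue = %u\n",
               (unsigned)(demo_flow_hash(0x0a000001, daddr, 9, 9) % NUM_RX_QUEUES));
        /* ...while a different sender address changes the hash, and hence the queue. */
        printf("queue = %u\n",
               (unsigned)(demo_flow_hash(0x0a000003, daddr, 9, 9) % NUM_RX_QUEUES));
        return 0;
}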
However, I find some strange phenomena, as follows:
1. All RX interrupts of the netback vif are sent only to queue 0, which
can be confirmed from the content of the "/proc/interrupts" file in Dom0.
However, it seems that "ksoftirqd/2" rather than "ksoftirqd/0" handles
the soft interrupt and runs with high cpu usage when the
"vif1.0-guest-q2" process runs on vcpu2. Why doesn't ksoftirqd/0
handle the soft interrupt, given that the RX interrupts are sent to queue 0?
Can you check in /proc/interrupts which interrupts are served on vcpu#2
in excessive numbers? Do you have irqbalance running and shifting
interrupts around?
I have also tried to make another queue of the netback VIF (e.g.
vif1.0-guest-q1, vif1.0-guest-q3, etc.) handle the flow, but all RX
interrupts are still sent only to vcpu0. I am wondering about the reason for this.
The NIC driver's receive function usually calls into the
bridge/openvswitch code (depending on what you have), and that calls
xenvif_start_xmit directly. There, skb_get_queue_mapping(skb) tells
netback which queue should be used. It seems to me NIC drivers set this
value to the receive queue number, or not at all, in which case it defaults to
0. Check your NIC driver and look for "queue_mapping".
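To make that path a bit more concrete, here is a rough sketch (not the literal
upstream netback/driver code) of the two sides involved. skb_record_rx_queue(),
skb_get_queue_mapping() and real_num_tx_queues are real kernel symbols; the two
functions themselves are illustrative only.

#include <linux/skbuff.h>
#include <linux/netdevice.h>

/* RX path of a hypothetical multiqueue NIC driver: some drivers stamp the
 * hardware RX queue on the skb before it is handed up to the bridge/OVS. */
static void demo_nic_stamp_rx_queue(struct sk_buff *skb, u16 hw_rx_queue)
{
        /* records the RX queue (stored internally as hw_rx_queue + 1) in
         * skb->queue_mapping */
        skb_record_rx_queue(skb, hw_rx_queue);
}

/* Simplified view of the vif side: the queue is read back from the skb; if
 * nothing was recorded, or the value is out of range for the vif, the frame
 * effectively ends up on queue 0. */
static u16 demo_vif_pick_queue(struct net_device *vif_dev, struct sk_buff *skb)
{
        u16 queue = skb_get_queue_mapping(skb);

        if (queue >= vif_dev->real_num_tx_queues)
                queue = 0;
        return queue;
}

If the driver never records an RX queue, every frame keeps queue_mapping 0 and
all traffic lands on vif1.0-guest-q0, regardless of which NIC queue received it.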
2. The RX interrupts are sent to queue 2 of the vnic in DomU, but it
seems that "ksoftirqd/0" rather than "ksoftirqd/2" handles the
soft interrupt.
Again, check which interrupts are served on vcpu#0 in the guest.
3. If and only if I "pin" the "vif1.0-guest-q2" process to
vcpu0 of Dom0, the throughput of the flow is higher (it can be improved
by as much as double). This is the strangest phenomenon I have found, and I am
wondering about the reason for it.
Could anyone give me some hints about this? If anything is
unclear, please tell me and I will give a more detailed description. Thanks.
-------------------
Best Regards
Trump
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel