On 04.03.2016 13:50, Wang, Zhihong wrote:
> 
> 
>> -----Original Message-----
>> From: Ilya Maximets [mailto:i.maxim...@samsung.com]
>> Sent: Friday, March 4, 2016 6:00 PM
>> To: Wang, Zhihong <zhihong.w...@intel.com>; dev@openvswitch.org
>> Cc: Flavio Leitner <f...@redhat.com>; Traynor, Kevin 
>> <kevin.tray...@intel.com>;
>> Dyasly Sergey <s.dya...@samsung.com>
>> Subject: Re: vhost-user invalid txqid cause discard of packets
>>
>> Hi, Zhihong.
>> I can't reproduce this in my environment.
>> Could you please provide ovs-vswitchd.log with VLOG_DBG enabled
>> for netdev-dpdk and the outputs of the following commands:
>> # ovs-vsctl show
>> # ovs-appctl dpctl/show
>> # ovs-appctl dpif-netdev/pmd-rxq-show
>> in 'good' and 'bad' states?
>>
>> Also, are you sure that the VM started with exactly 4 queues?
> 
> 
> Yes, it's exactly 4 queues.
> Please see command output below.
> 
> In the "bad" case only vhost txqs 0 and 1 are sending packets; I believe the
> other 2 become -1 after the lookup.

I don't think so. The reconfiguration code can't affect the vHost queue mapping.
Only the distribution of queues between threads changed here.
Can you reproduce this issue with another cpu-mask?
For example: '00ff0' and '003f0'.
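
For reference, the map itself works like this: each vHost device txq has a
'map' entry pointing to an enabled guest queue, or -1 if none is available,
and the send path drops packets when the lookup yields -1. Below is a toy
model of that behaviour (a simplified, standalone sketch for illustration,
not the actual netdev-dpdk.c code):

------------------------------------------------------
/* Toy model of the vHost txq map in netdev-dpdk (simplified sketch,
 * not the actual OVS code). */
#include <stdio.h>

#define N_TXQ 4

struct txq {
    int map;                /* Enabled guest queue to use, or -1. */
};

int main(void)
{
    struct txq tx_q[N_TXQ];
    int real_n_txq = N_TXQ;
    int i, qid;

    /* Guest enabled all 4 queues: each queue maps to itself. */
    for (i = 0; i < real_n_txq; i++) {
        tx_q[i].map = i;
    }

    /* If the guest had enabled only queues 0 and 1, the other two
     * would have been remapped to -1. */
    tx_q[2].map = -1;
    tx_q[3].map = -1;

    /* Same lookup as in __netdev_dpdk_vhost_send(). */
    for (qid = 0; qid < 2 * real_n_txq; qid++) {
        int mapped = tx_q[qid % real_n_txq].map;
        printf("txq %d -> %d%s\n", qid, mapped,
               mapped == -1 ? " (packets discarded)" : "");
    }
    return 0;
}
------------------------------------------------------

Only queue state changes coming from the guest update this map; changing
pmd-cpu-mask does not touch it.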

We will see the exact mapping in ovs-vswitchd.log with VLOG_DBG enabled for
netdev-dpdk (the -vdpdk:all:dbg option to ovs-vswitchd).
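If ovs-vswitchd is already running, the same can be done at runtime with the
vlog facility (assuming the module name 'dpdk', as in the option above):
# ovs-appctl vlog/set dpdk:file:dbg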

One more comment inline below.

> 
> "good":
> ------------------------------------------------------
> [20160301]# ./ovs/utilities/ovs-vsctl show
> a71febbd-fc2b-4a0a-beb2-d6fe0ae68d58
>     Bridge "ovsbr0"
>         Port "ovsbr0"
>             Interface "ovsbr0"
>                 type: internal
>         Port "vhost-user1"
>             Interface "vhost-user1"
>                 type: dpdkvhostuser
>                 options: {n_rxq="4"}
>         Port "dpdk0"
>             Interface "dpdk0"
>                 type: dpdk
>                 options: {n_rxq="4"}
> [20160301]# ./ovs/utilities/ovs-appctl dpctl/show
> netdev@ovs-netdev:
>         lookups: hit:2642744165 missed:8 lost:0
>         flows: 8
>         port 0: ovs-netdev (internal)
>         port 1: ovsbr0 (tap)
>         port 2: vhost-user1 (dpdkvhostuser: configured_rx_queues=4, 
> configured_tx_queues=4, requested_rx_queues=4, requested_tx_queues=73)
>         port 3: dpdk0 (dpdk: configured_rx_queues=4, configured_tx_queues=64, 
> requested_rx_queues=4, requested_tx_queues=73)
> [20160301]# ./ovs/utilities/ovs-appctl dpif-netdev/pmd-rxq-show
> pmd thread numa_id 0 core_id 16:
>         port: dpdk0     queue-id: 2
> pmd thread numa_id 0 core_id 10:
>         port: vhost-user1       queue-id: 0
> pmd thread numa_id 0 core_id 12:
>         port: vhost-user1       queue-id: 2
> pmd thread numa_id 0 core_id 13:
>         port: vhost-user1       queue-id: 3
> pmd thread numa_id 0 core_id 14:
>         port: dpdk0     queue-id: 0
> pmd thread numa_id 0 core_id 15:
>         port: dpdk0     queue-id: 1
> pmd thread numa_id 0 core_id 11:
>         port: vhost-user1       queue-id: 1
> pmd thread numa_id 0 core_id 17:
>         port: dpdk0     queue-id: 3
> ------------------------------------------------------
> 
> "bad":
> ------------------------------------------------------
> [20160301]# ./ovs/utilities/ovs-vsctl set Open_vSwitch . 
> other_config:pmd-cpu-mask=0x3f000
> 2016-03-04T03:33:30Z|00041|ovs_numa|WARN|Invalid cpu mask: x

Just in case, have you tried to use the valid mask
(other_config:pmd-cpu-mask=3f000)? For example:
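# ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=3f000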

> 2016-03-04T03:33:30Z|00042|dpif_netdev|INFO|Created 6 pmd threads on numa 
> node 0
> [20160301]# ./ovs/utilities/ovs-vsctl show
> a71febbd-fc2b-4a0a-beb2-d6fe0ae68d58
>     Bridge "ovsbr0"
>         Port "ovsbr0"
>             Interface "ovsbr0"
>                 type: internal
>         Port "vhost-user1"
>             Interface "vhost-user1"
>                 type: dpdkvhostuser
>                 options: {n_rxq="4"}
>         Port "dpdk0"
>             Interface "dpdk0"
>                 type: dpdk
>                 options: {n_rxq="4"}
> [20160301]# ./ovs/utilities/ovs-appctl dpctl/show
> netdev@ovs-netdev:
>         lookups: hit:181693955 missed:7 lost:0
>         flows: 6
>         port 0: ovs-netdev (internal)
>         port 1: ovsbr0 (tap)
>         port 2: vhost-user1 (dpdkvhostuser: configured_rx_queues=4, 
> configured_tx_queues=4, requested_rx_queues=4, requested_tx_queues=73)
>         port 3: dpdk0 (dpdk: configured_rx_queues=4, configured_tx_queues=64, 
> requested_rx_queues=4, requested_tx_queues=73)
> [20160301]# ./ovs/utilities/ovs-appctl dpif-netdev/pmd-rxq-show
> pmd thread numa_id 0 core_id 13:
>         port: vhost-user1       queue-id: 1
>         port: dpdk0     queue-id: 3
> pmd thread numa_id 0 core_id 14:
>         port: vhost-user1       queue-id: 2
> pmd thread numa_id 0 core_id 16:
>         port: dpdk0     queue-id: 0
> pmd thread numa_id 0 core_id 17:
>         port: dpdk0     queue-id: 1
> pmd thread numa_id 0 core_id 12:
>         port: vhost-user1       queue-id: 0
>         port: dpdk0     queue-id: 2
> pmd thread numa_id 0 core_id 15:
>         port: vhost-user1       queue-id: 3
> ------------------------------------------------------
> 
> 
>>
>> Best regards, Ilya Maximets.
>>
>> On 03.03.2016 18:24, Wang, Zhihong wrote:
>>> Hi,
>>>
>>> I ran an OVS multiqueue test with a very simple traffic topology, basically
>>> 2 ports, each with 4 queues, 8 rxqs in total, like below:
>>>
>>> Pktgen <=4q=> PHY <=4q=> OVS <=4q=> testpmd in the guest
>>>
>>> First I set pmd-cpu-mask to 8 cores, and everything works fine: each rxq
>>> gets a core, and all txqids are valid.
>>>
>>> Then I set pmd-cpu-mask to 6 cores, and 2 txqids become invalid in
>>> __netdev_dpdk_vhost_send():
>>> qid = vhost_dev->tx_q[qid % vhost_dev->real_n_txq].map;
>>>
>>> The lookup returns -1, which leads to the packets being discarded.
>>>
>>> Consequently, in testpmd in the VM we see only 2 queues working, and
>>> throughput drops by more than half.
>>>
>>> It works again when I set pmd-cpu-mask to 4 cores.
>>>
>>> My OVS and DPDK code were pulled from the repos on March 1st, 2016.
>>>
>>> Let me know if you need more info to reproduce this issue.
>>>
>>>
>>> Thanks
>>> Zhihong
> 
> 

Best regards, Ilya Maximets.