Can you try to capture "show hardw" with 18.10?

Looks like ThunderX is not acting as a PCI device, so part of the output is 
suppressed in 18.07; we changed that behaviour in 18.10.

I'm looking for something like:

    rss avail:         ipv4 ipv4-tcp ipv4-udp ipv6 ipv6-tcp ipv6-udp ipv6-ex
                       ipv6-tcp-ex ipv6-udp-ex
    rss active:        none
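
(For context on the imbalance itself: which rx queue a flow lands on comes down to the NIC's RSS hash — conventionally a Toeplitz hash over the IP tuple, with the low-order bits selecting the queue. Below is a minimal standalone sketch using the widely published default 40-byte RSS key; it is an illustration of the mechanism, not VPP or ThunderX code, and real NICs map the hash through an indirection table rather than a plain modulo.)

```python
# Toeplitz RSS hash sketch: how a NIC typically maps a flow to an rx queue.
RSS_KEY = bytes([
    0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
    0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0,
    0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4,
    0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c,
    0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa,
])

def toeplitz_hash(key: bytes, data: bytes) -> int:
    """XOR a sliding 32-bit window of the key for every set bit of the input."""
    key_int = int.from_bytes(key, "big")
    key_bits = len(key) * 8
    result = 0
    for i, byte in enumerate(data):
        for b in range(8):
            if byte & (0x80 >> b):
                result ^= (key_int >> (key_bits - 32 - (i * 8 + b))) & 0xFFFFFFFF
    return result

def rx_queue(src_ip: bytes, dst_ip: bytes, n_queues: int) -> int:
    """Simplified queue pick: hash the ipv4 src+dst pair, reduce to a queue index."""
    return toeplitz_hash(RSS_KEY, src_ip + dst_ip) % n_queues

# Flows that differ only in src ip should spread across both queues:
queues = {rx_queue(bytes([10, 0, i >> 8, i & 0xFF]), bytes([192, 168, 1, 1]), 2)
          for i in range(1024)}
print(queues)  # → {0, 1}
```

When "rss active" shows "none", the NIC is not hashing at all, and everything defaults to queue 0 — which would explain thread 2 seeing only stray packets.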


-- 
Damjan

> On 19 Dec 2018, at 08:26, mik...@yeah.net wrote:
> 
> vpp v18.07.1-10~gc548f5d-dirty 
> 
> mik...@yeah.net
> 
> From: Damjan Marion <dmar...@me.com>
> Date: 2018-12-19 15:21
> To: mik...@yeah.net
> CC: vpp-dev <vpp-dev@lists.fd.io>
> Subject: Re: [vpp-dev] dpdk-input : serious load imbalance
> 
> What version of VPP do you use?
> I'm missing some outputs in "show hardware"...
> 
> -- 
> Damjan
> 
>> On 19 Dec 2018, at 02:19, mik...@yeah.net wrote:
>> 
>> The "show hardw" is as follow, the statistics may be different from 
>> yesterday.
>> 
>> vpp# show hardware-interfaces 
>>               Name                Idx   Link  Hardware
>> VirtualFunctionEthernet5/0/2       1     up   VirtualFunctionEthernet5/0/2
>>   Ethernet address 72:62:8a:40:43:12
>>   Cavium ThunderX
>>     carrier up full duplex speed 10000 mtu 9190 
>>     flags: admin-up pmd maybe-multiseg
>>     rx queues 2, rx desc 1024, tx queues 2, tx desc 1024
>>     cpu socket 0
>> 
>>     tx frames ok                                      268302
>>     tx bytes ok                                     74319654
>>     rx frames ok                                     4000000
>>     rx bytes ok                                    688000000
>>     extended stats:
>>       rx good packets                                4000000
>>       tx good packets                                 268302
>>       rx good bytes                                688000000
>>       tx good bytes                                 74319654
>>       rx q0packets                                   3999976
>>       rx q0bytes                                   687995872
>>       rx q1packets                                        24
>>       rx q1bytes                                        4128
>>       tx q0packets                                        12
>>       tx q0bytes                                        3324
>>       tx q1packets                                    268290
>>       tx q1bytes                                    74316330
>> VirtualFunctionEthernet5/0/3       2    down  VirtualFunctionEthernet5/0/3
>>   Ethernet address 2a:f2:d5:47:67:f1
>>   Cavium ThunderX
>>     carrier down 
>>     flags: pmd maybe-multiseg
>>     rx queues 2, rx desc 1024, tx queues 2, tx desc 1024
>>     cpu socket 0
>> 
>> local0                             0    down  local0
>>   local
>> 
>> mik...@yeah.net
>> 
>> From: Damjan Marion <dmar...@me.com>
>> Date: 2018-12-18 20:38
>> To: mik...@yeah.net
>> CC: vpp-dev@lists.fd.io
>> Subject: Re: [vpp-dev] dpdk-input : serious load imbalance
>> 
>> What kind of NIC do you have? Can you capture "show hardw"?
>> 
>> -- 
>> Damjan
>> 
>>> On 18 Dec 2018, at 04:03, mik...@yeah.net wrote:
>>> 
>>> Hi,
>>>    I configured 2 worker threads and 2 dpdk rx-queues in startup.conf. Then I 
>>> forged 4,000,000 packets and sent them to a single dpdk interface. It turns 
>>> out that the second thread received only 24 packets. I tested this several 
>>> times and the results are almost the same. Why did this happen?
>>> 
>>>    Here are the relevant config and "show" outputs:
>>> VPP : 18.07
>>> startup.conf:
>>> cpu {
>>>         main-core 1
>>>         corelist-workers 2,3
>>> }
>>> 
>>> dpdk {
>>>          dev default {
>>>                  num-rx-queues 2
>>>                  num-tx-queues 2
>>>          }
>>> }
>>> 
>>> packets:
>>> these pkts share the same src mac, dst mac and ipv4 payload; only the ipv4 
>>> src ip and dst ip differ from packet to packet.
>>> <Catch.jpg>
>>> 
>>> ------------------------------------------------------------------
>>> # sh runtime
>>> Thread 1 vpp_wk_0 (lcore 2)
>>> Time 69.8, average vectors/node 1.02, last 128 main loops 0.00 per node 0.00
>>>   vector rates in 5.7911e4, out 5.0225e2, drop 5.7783e4, punt 0.0000e0
>>>              Name                 State         Calls          Vectors        Suspends         Clocks       Vectors/Call
>>> dpdk-input                       polling       104246554         3999630             0          7.84e2             .04
>>> ---------------
>>> Thread 2 vpp_wk_1 (lcore 3)
>>> Time 69.8, average vectors/node 1.00, last 128 main loops 0.00 per node 0.00
>>>   vector rates in 5.1557e-1, out 1.7186e-1, drop 5.1557e-1, punt 0.0000e0
>>>              Name                 State         Calls          Vectors        Suspends         Clocks       Vectors/Call
>>> dpdk-input                       polling       132390000              24             0          1.59e8            0.00
>>> -----------------------------------------------------------------
>>> # show interface rx-placement 
>>> Thread 1 (vpp_wk_0):
>>>   node dpdk-input:
>>>     VirtualFunctionEthernet5/0/2 queue 0 (polling)
>>>     VirtualFunctionEthernet5/0/3 queue 0 (polling)
>>> Thread 2 (vpp_wk_1):
>>>   node dpdk-input:
>>>     VirtualFunctionEthernet5/0/2 queue 1 (polling)
>>>     VirtualFunctionEthernet5/0/3 queue 1 (polling)
>>> -----------------------------------------------------------------
>>> vpp# show interface 
>>>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count     
>>> VirtualFunctionEthernet5/0/2      1      up          9000/0/0/0     rx packets               3999654
>>>                                                                     rx bytes               687940488
>>>                                                                     tx packets                 35082
>>>                                                                     tx bytes                 9647550
>>>                                                                     rx-miss                      346
>>> VirtualFunctionEthernet5/0/3      2     down         9000/0/0/0     
>>> local0                            0     down          0/0/0/0       
>>> 
>>> 
>>> Thanks in advance.
>>> Mikado
>>> mik...@yeah.net
>>> -=-=-=-=-=-=-=-=-=-=-=-
>>> Links: You receive all messages sent to this group.
>>> 
>>> View/Reply Online (#11675): https://lists.fd.io/g/vpp-dev/message/11675
>>> Mute This Topic: https://lists.fd.io/mt/28791566/675642
>>> Group Owner: vpp-dev+ow...@lists.fd.io
>>> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [dmar...@me.com]
>>> -=-=-=-=-=-=-=-=-=-=-=-

