Hmm,

 - I assume you run VPP as root and not in a container?
 - if you are on CentOS/RHEL, can you try disabling SELinux ('setenforce 0')?
 - can you share the output of Linux dmesg and VPP 'show pci'? (a quick sketch of these checks follows)
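
A quick sketch of these checks, assuming a CentOS/RHEL host and vfio in
No-IOMMU mode as your log suggests (the PCI address below is taken from your
output; adjust it to your system):

  # check and temporarily disable SELinux
  getenforce
  setenforce 0
  # confirm vfio no-iommu mode and see which driver owns the NIC
  cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
  lspci -ks 0000:13:00.0
  # recent kernel messages, then the VPP view of the device
  dmesg | tail -n 50
  vppctl show pci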

Best
ben

> -----Original Message-----
> From: chetan bhasin <chetan.bhasin...@gmail.com>
> Sent: Monday, 13 January 2020 15:51
> To: Benoit Ganne (bganne) <bga...@cisco.com>
> Cc: vpp-dev <vpp-dev@lists.fd.io>
> Subject: Re: [vpp-dev] (Vpp 19,08)Facing issue with vmxnet3 with 1rx/tx queue
> 
> Hi Benoit,
> 
> Thanks for your prompt response.
> 
> We are migrating from VPP 18.01 to VPP 19.08, so we want minimal changes
> to our build system, and we want to keep using DPDK as we did before.
> 
> DBGvpp# show log
> 2020/01/13 14:44:42:014 notice     dhcp/client    plugin initialized
> 2020/01/13 14:44:42:051 warn       dpdk           EAL init args: -c 14 -n 4 --in-memory --log-level debug --file-prefix vpp -w 0000:1b:00.0 -w 0000:13:00.0 --master-lcore 4
> 2020/01/13 14:44:42:603 notice     dpdk           DPDK drivers found 2 ports...
> 2020/01/13 14:44:42:622 notice     dpdk           EAL: Detected 6 lcore(s)
> 2020/01/13 14:44:42:622 notice     dpdk           EAL: Detected 1 NUMA nodes
> 
> 2020/01/13 14:44:42:623 notice     dpdk           EAL: PCI device 0000:13:00.0 on NUMA socket -1
> 2020/01/13 14:44:42:623 notice     dpdk           EAL:   Invalid NUMA socket, default to 0
> 2020/01/13 14:44:42:623 notice     dpdk           EAL:   probe driver: 15ad:7b0 net_vmxnet3
> 2020/01/13 14:44:42:623 notice     dpdk           EAL:   using IOMMU type 8 (No-IOMMU)
> 2020/01/13 14:44:42:623 notice     dpdk           EAL: Ignore mapping IO port bar(3)
> 2020/01/13 14:44:42:623 notice     dpdk           EAL: PCI device 0000:1b:00.0 on NUMA socket -1
> 2020/01/13 14:44:42:623 notice     dpdk           EAL:   Invalid NUMA socket, default to 0
> 2020/01/13 14:44:42:623 notice     dpdk           EAL:   probe driver: 15ad:7b0 net_vmxnet3
> 2020/01/13 14:44:42:623 notice     dpdk           EAL: Ignore mapping IO port bar(3)
> 2020/01/13 14:45:02:475 err        dpdk           Interface GigabitEthernet13/0/0 error 1: Operation not permitted
> 2020/01/13 14:45:02:475 notice     dpdk           vmxnet3_v4_rss_configure(): Set RSS fields (v4) failed: 1
> 2020/01/13 14:45:02:475 notice     dpdk           vmxnet3_dev_start(): Failed to configure v4 RSS
> 
> Thanks,
> Chetan Bhasin
> 
> On Mon, Jan 13, 2020 at 7:58 PM Benoit Ganne (bganne) <bga...@cisco.com> wrote:
> 
> 
>       Hi Chetan,
> 
>       Any reason for not using the VPP built-in vmxnet3 driver instead of
> DPDK? It should give you better performance and would be easier for us to
> debug. See https://docs.fd.io/vpp/20.01/d2/d1a/vmxnet3_doc.html (and the
> sketch below).
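> 
>       A minimal sketch of the native-driver setup, assuming the device is
> bound to vfio-pci. The blacklist line is an assumption on my side, it only
> keeps the DPDK plugin away from the vmxnet3 devices:
> 
>       # startup.conf: keep vmxnet3 devices away from the DPDK plugin
>       dpdk {
>         blacklist 15ad:07b0
>       }
> 
>       # then, from the VPP CLI
>       create interface vmxnet3 0000:13:00.0
>       set interface state vmxnet3-0/13/0/0 up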
> 
>       Otherwise, can you share 'show logging' output?
> 
>       Ben
> 
>       > -----Original Message-----
>       > From: vpp-dev@lists.fd.io <mailto:vpp-dev@lists.fd.io>  <vpp-
> d...@lists.fd.io <mailto:vpp-dev@lists.fd.io> > On Behalf Of chetan bhasin
>       > Sent: lundi 13 janvier 2020 15:20
>       > To: vpp-dev <vpp-dev@lists.fd.io <mailto:vpp-dev@lists.fd.io> >
>       > Subject: [vpp-dev] (Vpp 19,08)Facing issue with vmxnet3 with
> 1rx/tx queue
>       >
>       > Hello Everyone,
>       >
>       > I am facing an issue while bringing up VPP with fewer than 2 rx and
>       > 2 tx queues. I am using VPP 19.08. I have configured the PCI devices
>       > under the dpdk section as below:
>       >
>       > 1)
>       > dpdk {
>       >   # dpdk-config
>       >   dev default {
>       >     num-rx-desc 1024
>       >     num-rx-queues 1
>       >     num-tx-desc 1024
>       >     num-tx-queues 1
>       >     # vlan-strip-offload off
>       >   }
>       >   dev 0000:1b:00.0 {
>       >   }
>       >   dev 0000:13:00.0 {
>       >   }
>       > }
>       >
>       > When I bring the interface up, "show hardware-interfaces" reports an
>       > error:
>       >
>       > DBGvpp# set interface state GigabitEthernet13/0/0 up
>       > DBGvpp# show hardware-interfaces
>       >               Name                Idx   Link  Hardware
>       > GigabitEthernet13/0/0              1    down  GigabitEthernet13/0/0
>       >   Link speed: 10 Gbps
>       >   Ethernet address 00:50:56:9b:f5:c5
>       >   VMware VMXNET3
>       >     carrier down
>       >     flags: admin-up pmd maybe-multiseg rx-ip4-cksum
>       >     Devargs:
>       >     rx: queues 1 (max 16), desc 1024 (min 128 max 4096 align 1)
>       >     tx: queues 1 (max 8), desc 1024 (min 512 max 4096 align 1)
>       >     pci: device 15ad:07b0 subsystem 15ad:07b0 address 0000:13:00.00 numa 0
>       >     max rx packet len: 16384
>       >     promiscuous: unicast off all-multicast off
>       >     vlan offload: strip off filter off qinq off
>       >     rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
>       >                        vlan-filter jumbo-frame scatter
>       >     rx offload active: ipv4-cksum jumbo-frame scatter
>       >     tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
>       >                        multi-segs
>       >     tx offload active: multi-segs
>       >     rss avail:         ipv4-tcp ipv4-udp ipv4 ipv6-tcp ipv6-udp ipv6
>       >     rss active:        none
>       >     tx burst function: vmxnet3_xmit_pkts
>       >     rx burst function: vmxnet3_recv_pkts
>       >   Errors:
>       >     rte_eth_dev_start[port:0, errno:1]: Operation not permitted
>       >
>       > 2) When I bring up the system without the "dev default" section, I
>       > still hit the same issue; this time the defaults apply [rx-queues is
>       > 1 and tx-queues is 2 (main thread + 1 worker; a matching cpu sketch
>       > follows the output below)]:
>       >
>       > DBGvpp# show hardware-interfaces
>       >               Name                Idx   Link  Hardware
>       > GigabitEthernet13/0/0              1    down  GigabitEthernet13/0/0
>       >   Link speed: 10 Gbps
>       >   Ethernet address 00:50:56:9b:f5:c5
>       >   VMware VMXNET3
>       >     carrier down
>       >     flags: admin-up pmd maybe-multiseg rx-ip4-cksum
>       >     Devargs:
>       >     rx: queues 1 (max 16), desc 1024 (min 128 max 4096 align 1)
>       >     tx: queues 2 (max 8), desc 1024 (min 512 max 4096 align 1)
>       >     pci: device 15ad:07b0 subsystem 15ad:07b0 address 0000:13:00.00 numa 0
>       >     max rx packet len: 16384
>       >     promiscuous: unicast off all-multicast off
>       >     vlan offload: strip off filter off qinq off
>       >     rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
>       >                        vlan-filter jumbo-frame scatter
>       >     rx offload active: ipv4-cksum jumbo-frame scatter
>       >     tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
>       >                        multi-segs
>       >     tx offload active: multi-segs
>       >     rss avail:         ipv4-tcp ipv4-udp ipv4 ipv6-tcp ipv6-udp ipv6
>       >     rss active:        none
>       >     tx burst function: vmxnet3_xmit_pkts
>       >     rx burst function: vmxnet3_recv_pkts
>       >   Errors:
>       >     rte_eth_dev_start[port:0, errno:1]: Operation not permitted
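>       >
>       > For reference, a cpu section like the following would produce the
>       > EAL core arguments seen in the log (-c 14, --master-lcore 4, i.e.
>       > main core 4 plus one worker on core 2). This is a sketch
>       > reconstructed from the log, not necessarily the exact config:
>       >
>       > cpu {
>       >   main-core 4
>       >   corelist-workers 2
>       > }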
>       >
>       >
>       > Thanks,
>       > Chetan Bhasin
>       >
> 
> 
