Hi Siddarth,

The linked document is quite old and I don't think it can be relied on in its current state. I am using VPP (22.06-20220307) in Azure with external DPDK from Debian 11 (20.11), and it's working well except for an issue with larger buffer sizes.
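(For context on that caveat: by "larger buffer sizes" I mean raising the buffer data size in the buffers stanza of startup.conf above the 2048-byte default, e.g. for jumbo frames. A minimal sketch of the stanza I'm referring to, assuming the stock startup.conf layout; 9216 is just an illustrative value:

  buffers {
    ## Default data-size is 2048; the issue I mention shows up when this
    ## is raised to larger values (9216 here is only an example).
    default data-size 9216
  }
)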
What sort of errors are you seeing in the VPP log? Are you sure that the interfaces you want VPP to control are DOWN when VPP starts? Depending on how you've deployed the VM, DHCP may be running on the accelerated-networking (AN) interfaces that you want to be VPP-owned, which would block VPP from using them (a couple of commands that may help check this are sketched after the config below).

Another point is that I've had more success binding devices using their VMBUS ID instead of the PCI ID, for example:

dpdk {
  # VMBUS UUID.
  dev 6045bd81-248e-6045-bd81-248e6045bd81 {
    num-rx-queues 4
    num-tx-queues 4
    name GigabitEthernet1
  }
  # VMBUS UUID.
  dev 6045bd81-2600-6045-bd81-26006045bd81 {
    num-rx-queues 4
    num-tx-queues 4
    name GigabitEthernet2
  }
}
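In case it helps, the VMBUS UUIDs can be read straight from sysfs on the guest. A rough sketch of how I'd look them up (eth1 is just an example interface name, and the lsvmbus helper comes from the kernel's tools/hv, so how it's packaged varies by distro):

  # List the VMBUS device UUIDs the guest knows about; each synthetic
  # NIC appears as one directory named after its UUID.
  ls /sys/bus/vmbus/devices/

  # Map a specific netvsc interface to its VMBUS UUID via its device symlink.
  readlink /sys/class/net/eth1/device

  # lsvmbus prints the same information in a friendlier form, if installed.
  lsvmbus

  # And, per the point above, make sure the kernel isn't still holding the
  # interface up (or running DHCP on it) before VPP starts.
  ip link set dev eth1 down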
VPP is up and running:

$ sudo vppctl sh hard
              Name                Idx   Link  Hardware
GigabitEthernet1                   1     up   GigabitEthernet1
  Link speed: 50 Gbps
  RX Queues:
    queue thread         mode
    0     main (0)       polling
    1     main (0)       polling
    2     main (0)       polling
    3     main (0)       polling
  Ethernet address 60:45:bd:81:24:8e
  Microsoft Hyper-V Netvsc
    carrier up full duplex max-frame-size 0
    flags: admin-up tx-offload rx-ip4-cksum
    Devargs:
    rx: queues 4 (max 64), desc 1024 (min 0 max 65535 align 1)
    tx: queues 4 (max 64), desc 1024 (min 1 max 4096 align 1)
    max rx packet len: 65536
    promiscuous: unicast off all-multicast on
    vlan offload: strip off filter off qinq off
    rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum rss-hash
    rx offload active: ipv4-cksum
    tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso multi-segs
    tx offload active: ipv4-cksum udp-cksum tcp-cksum multi-segs
    rss avail:         ipv4-tcp ipv4-udp ipv4 ipv6-tcp ipv6
    rss active:        ipv4-tcp ipv4 ipv6-tcp ipv6
    tx burst function: (not available)
    rx burst function: (not available)

    tx frames ok                                      263350
    tx bytes ok                                     41871764
    rx frames ok                                      253901
    rx bytes ok                                     33154932
    extended stats:
      rx_good_packets                                 253901
      tx_good_packets                                 263350
      rx_good_bytes                                 33154932
      tx_good_bytes                                 41871764
      rx_q0_packets                                     4849
      rx_q0_bytes                                     298233
      rx_q1_packets                                     1345
      rx_q1_bytes                                     147229
      rx_q2_packets                                    32967
      rx_q2_bytes                                    3625057
      rx_q3_packets                                     1330
      rx_q3_bytes                                     145796
      rx_q0_good_packets                                4849
      rx_q0_good_bytes                                298233
      rx_q0_undersize_packets                           3462
      rx_q0_size_65_127_packets                         1066
      rx_q0_size_128_255_packets                         318
      rx_q0_size_256_511_packets                           3
      rx_q1_good_packets                                1345
      rx_q1_good_bytes                                147229
      rx_q1_undersize_packets                              2
      rx_q1_size_65_127_packets                         1035
      rx_q1_size_128_255_packets                          307
      rx_q1_size_1024_1518_packets                          1
      rx_q2_good_packets                               32967
      rx_q2_good_bytes                               3625057
      rx_q2_multicast_packets                          31662
      rx_q2_undersize_packets                              1
      rx_q2_size_65_127_packets                        32663
      rx_q2_size_128_255_packets                          303
      rx_q3_good_packets                                1330
      rx_q3_good_bytes                                145796
      rx_q3_size_65_127_packets                         1016
      rx_q3_size_128_255_packets                          313
      rx_q3_size_256_511_packets                            1
      vf_rx_good_packets                              213410
      vf_tx_good_packets                              263350
      vf_rx_good_bytes                              28938617
      vf_tx_good_bytes                              41871764
      vf_rx_q0_packets                                    16
      vf_rx_q0_bytes                                    5964
      vf_rx_q1_packets                                213394
      vf_rx_q1_bytes                                28932653
      vf_tx_q0_packets                                263350
      vf_tx_q0_bytes                                41871764
      vf_rx_unicast_packets                           213410
      vf_rx_unicast_bytes                           28938617
      vf_tx_unicast_packets                           262385
      vf_tx_unicast_bytes                            41860696
      vf_tx_multicast_packets                            960
      vf_tx_multicast_bytes                             81772
      vf_tx_broadcast_packets                              5
      vf_tx_broadcast_bytes                              1146
GigabitEthernet2                   2     up   GigabitEthernet2
  Link speed: 50 Gbps
  RX Queues:
    queue thread         mode
    0     main (0)       polling
    1     main (0)       polling
    2     main (0)       polling
    3     main (0)       polling
  Ethernet address 60:45:bd:81:26:00
  Microsoft Hyper-V Netvsc
    carrier up full duplex max-frame-size 0
    flags: tx-offload rx-ip4-cksum
    Devargs:
    rx: queues 4 (max 64), desc 1024 (min 0 max 65535 align 1)
    tx: queues 4 (max 64), desc 1024 (min 1 max 4096 align 1)
    max rx packet len: 65536
    promiscuous: unicast off all-multicast off
    vlan offload: strip off filter off qinq off
    rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum rss-hash
    rx offload active: ipv4-cksum
    tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso multi-segs
    tx offload active: ipv4-cksum udp-cksum tcp-cksum multi-segs
    rss avail:         ipv4-tcp ipv4-udp ipv4 ipv6-tcp ipv6
    rss active:        ipv4-tcp ipv4 ipv6-tcp ipv6
    tx burst function: (not available)
    rx burst function: (not available)

Thanks,
Peter.

________________________________
From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> on behalf of siddarth rai via lists.fd.io <siddsr=gmail....@lists.fd.io>
Sent: 04 July 2022 12:38
To: vpp-dev <vpp-dev@lists.fd.io>
Subject: [vpp-dev] VPP on Azure

Hello All,

I am trying to run VPP (version 22.02) on a Linux (CentOS 7.9.2009) VM on Azure. I have made sure that accelerated networking is enabled on the VM. However, when I start up VPP I notice that it is not able to bind to the interface whitelisted in the startup.conf.

I found this article in the fd.io docs: https://fd.io/docs/vpp/v2101/usecases/vppinazure.html. According to this, VPP 18.07 (with DPDK 18.02) was tried on Azure, and newer versions were causing issues. So I wanted to understand whether the newer versions of VPP (with DPDK) are supported on Azure, and whether anyone has tried them recently?

Regards,
Siddarth