Hi,

That's also a source of slowdowns. Try configuring one worker.
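For example, a minimal cpu stanza in startup.conf along these lines should do it (the core numbers below are only placeholders, pick whichever cores are free on your host):

    cpu {
      main-core 1    # pin the VPP main thread to core 1
      workers 1      # spawn a single worker thread for packet processing
    }

With exactly one worker, both RX queues of each NIC end up polled by that worker instead of by the main thread.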
Regards,
Florin

> On May 5, 2022, at 6:40 PM, weizhen9...@163.com wrote:
>
> Hi,
> We have only one NUMA node, as the following picture shows.
> <dummyfile.0.part>
>
> vpp# sh hardware-interfaces verbose
>               Name                Idx   Link  Hardware
> ens1f0                             1     up   ens1f0
>   Link speed: 10 Gbps
>   RX Queues:
>     queue thread         mode
>     0     main (0)       polling
>     1     main (0)       polling
>   Ethernet address 00:13:95:0a:58:03
>   Intel 82599
>     carrier up full duplex max-frame-size 2056
>     flags: admin-up intel-phdr-cksum rx-ip4-cksum
>     Devargs:
>     rx: queues 2 (max 128), desc 512 (min 32 max 4096 align 8)
>     tx: queues 2 (max 64), desc 512 (min 32 max 4096 align 8)
>     pci: device 8086:10fb subsystem ffff:ffff address 0000:09:00.00 numa 0
>     max rx packet len: 15872
>     promiscuous: unicast off all-multicast on
>     vlan offload: strip off filter off qinq off
>     rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
>                        macsec-strip vlan-filter vlan-extend scatter security
>                        keep-crc rss-hash
>     rx offload active: ipv4-cksum
>     tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
>                        tcp-tso macsec-insert multi-segs security
>     tx offload active: none
>     rss avail:         ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp
>                        ipv6-udp ipv6-ex ipv6
>     rss active:        ipv4-tcp ipv4-udp ipv4
>     tx burst function: ixgbe_recv_scattered_pkts_vec
>     rx burst function: (not available)
>
>     rx frames ok                                           5
>     rx bytes ok                                          415
>     extended stats:
>       rx_good_packets                                      5
>       rx_good_bytes                                      415
>       rx_q0_packets                                        5
>       rx_q0_bytes                                        415
>       mac_remote_errors                                    1
>       rx_size_65_to_127_packets                            5
>       rx_multicast_packets                                 5
>       rx_total_packets                                     5
>       rx_total_bytes                                     415
> ens1f1                             2     up   ens1f1
>   Link speed: 10 Gbps
>   RX Queues:
>     queue thread         mode
>     0     main (0)       polling
>     1     main (0)       polling
>   Ethernet address 00:13:95:0a:58:04
>   Intel 82599
>     carrier up full duplex max-frame-size 2056
>     flags: admin-up intel-phdr-cksum rx-ip4-cksum
>     Devargs:
>     rx: queues 2 (max 128), desc 512 (min 32 max 4096 align 8)
>     tx: queues 2 (max 64), desc 512 (min 32 max 4096 align 8)
>     pci: device 8086:10fb subsystem ffff:ffff address 0000:09:00.01 numa 0
>     max rx packet len: 15872
>     promiscuous: unicast off all-multicast on
>     vlan offload: strip off filter off qinq off
>     rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
>                        macsec-strip vlan-filter vlan-extend scatter security
>                        keep-crc rss-hash
>     rx offload active: ipv4-cksum
>     tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
>                        tcp-tso macsec-insert multi-segs security
>     tx offload active: none
>     rss avail:         ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp
>                        ipv6-udp ipv6-ex ipv6
>     rss active:        ipv4-tcp ipv4-udp ipv4
>     tx burst function: ixgbe_recv_scattered_pkts_vec
>     rx burst function: (not available)
>
>     rx frames ok                                           5
>     rx bytes ok                                          415
>     extended stats:
>       rx_good_packets                                      5
>       rx_good_bytes                                      415
>       rx_q0_packets                                        5
>       rx_q0_bytes                                        415
>       mac_local_errors                                    28
>       mac_remote_errors                                    1
>       rx_size_65_to_127_packets                            5
>       rx_multicast_packets                                 5
>       rx_total_packets                                     5
>       rx_total_bytes                                     415
> local0                             0    down  local0
>   Link speed: unknown
>   local
>
> In addition, I don't run a worker on core 0. Instead, I don't configure any workers at all, so VPP has only one thread, as the following picture shows.
> <dummyfile.1.part>
>
> Thanks.