Fixing Maxime's mail address.

On Fri, Nov 24, 2023 at 10:18:27AM +0100, Edwin Brossette wrote:
> Hello,
> 
> I noticed a small inconsistency in the virtio pmd's xstats.
> The stat "rx_q0_errors" appears twice.
> I also think the stats "rx_q0_packets", "rx_q0_bytes", "tx_q0_packets" and
> "tx_q0_bytes" are duplicates of "rx_q0_good_packets", "rx_q0_good_bytes",
> "tx_q0_good_packets" and "tx_q0_good_bytes"
> 
> I believe this issue appeared with this commit:
> 
> f30e69b41f94: ethdev: add device flag to bypass auto-filled queue xstats
> http://scm.6wind.com/vendor/dpdk.org/dpdk/commit/?id=f30e69b41f949cd4a9afb6ff39de196e661708e2
> 
> From what I understand, the rx_q0_errors stat was originally reported by
> librte_ethdev. However, changes were made so that it is reported by the
> pmd instead. The flag RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS was temporarily
> set to keep the old behaviour, so that every pmd had time to adapt to the
> change. But it seems the flag was forgotten in the virtio pmd, and as a
> result, some stats are fetched in two different places when displaying
> xstats.
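As an illustration of the overlap described above, here is a small standalone
sketch (plain C, not DPDK code; the name tables are trimmed to a few entries,
and autofill stands in for the flag) of how the two name-generation passes
collide when the flag stays set while the pmd also reports its own per-queue
stats:

    #include <stdio.h>
    #include <stdbool.h>

    #define DIM(a) (sizeof(a) / sizeof((a)[0]))

    /* Trimmed stand-ins for eth_dev_rxq_stats_strings[] in librte_ethdev
     * and rte_virtio_rxq_stat_strings[] in the virtio pmd. */
    static const char * const ethdev_rxq_names[] = {
            "packets", "bytes", "errors"
    };
    static const char * const virtio_rxq_names[] = {
            "good_packets", "good_bytes", "errors"
    };

    int main(void)
    {
            bool autofill = true; /* RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS left set */
            unsigned int q = 0;
            size_t i;

            /* Pass 1: librte_ethdev, only when the auto-fill flag is set. */
            if (autofill)
                    for (i = 0; i < DIM(ethdev_rxq_names); i++)
                            printf("rx_q%u_%s\n", q, ethdev_rxq_names[i]);

            /* Pass 2: the pmd's own xstats callback, always. */
            for (i = 0; i < DIM(virtio_rxq_names); i++)
                    printf("rx_q%u_%s\n", q, virtio_rxq_names[i]);

            return 0; /* "rx_q0_errors" is printed twice */
    }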
> 
> First in librte_ethdev:
> https://git.dpdk.org/dpdk/tree/lib/ethdev/rte_ethdev.c#n3266
> (you can see the check on the RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS flag before
> the snprintf on eth_dev_rxq_stats_strings[])
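For reference, the guarded part on the ethdev side looks roughly like this
(paraphrased from the linked rte_ethdev.c; exact variable names and line
numbers vary between versions):

    if ((dev->data->dev_flags & RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS) != 0) {
            for (q = 0; q < nb_rxqs; q++) {
                    for (i = 0; i < RTE_NB_RXQ_STATS; i++) {
                            snprintf(xstats_names[cnt_used_entries].name,
                                     sizeof(xstats_names[0].name), "rx_q%u_%s",
                                     q, eth_dev_rxq_stats_strings[i].name);
                            cnt_used_entries++;
                    }
            }
            /* ... then the same loop over the TX queues ... */
    }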
> 
> And a second time in the virtio pmd:
> https://git.dpdk.org/dpdk/tree/drivers/net/virtio/virtio_ethdev.c#n705
> (see the snprintf on rte_virtio_rxq_stat_strings[])
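The pmd side builds its names from its own table, unconditionally, roughly
like this (again paraphrased from the linked virtio_ethdev.c, trimmed):

    /* note the "errors" entry, which collides with the auto-filled one */
    static const struct rte_virtio_xstats_name_off rte_virtio_rxq_stat_strings[] = {
            {"good_packets", offsetof(struct virtnet_rx, stats.packets)},
            {"good_bytes",   offsetof(struct virtnet_rx, stats.bytes)},
            {"errors",       offsetof(struct virtnet_rx, stats.errors)},
            /* ... multicast, broadcast and size-bucket counters ... */
    };

    /* in the xstats names callback, run for every RX queue i: */
    for (t = 0; t < VIRTIO_NB_RXQ_XSTATS; t++) {
            snprintf(xstats_names[count].name, sizeof(xstats_names[count].name),
                     "rx_q%u_%s", i, rte_virtio_rxq_stat_strings[t].name);
            count++;
    }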
> 
> This problem can be reproduced in testpmd simply by displaying the xstats
> of a port bound to the net_virtio driver:
> 
> Reproduction:
> ===========
> 
>     1) start dpdk-testpmd:
> 
> modprobe -a uio_pci_generic
> dpdk-devbind -b uio_pci_generic 03:00.0
> dpdk-devbind -b uio_pci_generic 04:00.0
> 
> dpdk-devbind -s
> 
> Network devices using DPDK-compatible driver
> ============================================
> 0000:03:00.0 'Virtio 1.0 network device 1041' drv=uio_pci_generic unused=vfio-pci
> 0000:04:00.0 'Virtio 1.0 network device 1041' drv=uio_pci_generic unused=vfio-pci
> [...]
> 
> dpdk-testpmd -a 0000:03:00.0 -a 0000:04:00.0 -- -i --rxq=1 --txq=1 --coremask=0x4 --total-num-mbufs=250000
> EAL: Detected CPU lcores: 3
> EAL: Detected NUMA nodes: 1
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: VFIO support initialized
> EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:03:00.0 (socket -1)
> EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:04:00.0 (socket -1)
> Interactive-mode selected
> Warning: NUMA should be configured manually by using --port-numa-config and
> --ring-numa-config parameters along with --numa.
> testpmd: create a new mbuf pool <mb_pool_0>: n=250000, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 0)
> Port 0: 52:54:00:B0:8F:88
> Configuring Port 1 (socket 0)
> Port 1: 52:54:00:EF:09:1F
> Checking link statuses...
> Done
> 
>     2) port info:
> 
> show port info 0
> 
> ********************* Infos for port 0  *********************
> MAC address: 52:54:00:B0:8F:88
> Device name: 0000:03:00.0
> Driver name: net_virtio
> Firmware-version: not available
> Devargs:
> Connect to socket: 0
> memory allocation on the socket: 0
> Link status: up
> Link speed: Unknown
> Link duplex: full-duplex
> Autoneg status: On
> MTU: 1500
> Promiscuous mode: enabled
> Allmulticast mode: disabled
> Maximum number of MAC addresses: 64
> Maximum number of MAC addresses of hash filtering: 0
> VLAN offload:
>   strip off, filter off, extend off, qinq strip off
> No RSS offload flow type is supported.
> Minimum size of RX buffer: 64
> Maximum configurable length of RX packet: 9728
> Maximum configurable size of LRO aggregated packet: 0
> Current number of RX queues: 1
> Max possible RX queues: 1
> Max possible number of RXDs per queue: 32768
> Min possible number of RXDs per queue: 32
> RXDs number alignment: 1
> Current number of TX queues: 1
> Max possible TX queues: 1
> Max possible number of TXDs per queue: 32768
> Min possible number of TXDs per queue: 32
> TXDs number alignment: 1
> Max segment number per packet: 65535
> Max segment number per MTU/TSO: 65535
> Device capabilities: 0x0( )
> Device error handling mode: none
> Device private info:
> guest_features: 0x110af8020
> vtnet_hdr_size: 12
> use_vec: rx-0 tx-0
> use_inorder: rx-0 tx-0
> intr_lsc: 1
> max_mtu: 9698
> max_rx_pkt_len: 1530
> max_queue_pairs: 1
> req_guest_features: 0x8000005f10ef8028
> 
>     3) show port xstats:
> 
> show port xstats 0
> ###### NIC extended statistics for port 0
> rx_good_packets: 0
> tx_good_packets: 0
> rx_good_bytes: 0
> tx_good_bytes: 0
> rx_missed_errors: 0
> rx_errors: 0
> tx_errors: 0
> rx_mbuf_allocation_errors: 0
> rx_q0_packets: 0
> rx_q0_bytes: 0
> rx_q0_errors: 0      <==================
> tx_q0_packets: 0
> tx_q0_bytes: 0
> rx_q0_good_packets: 0
> rx_q0_good_bytes: 0
> rx_q0_errors: 0      <==================
> rx_q0_multicast_packets: 0
> rx_q0_broadcast_packets: 0
> rx_q0_undersize_packets: 0
> rx_q0_size_64_packets: 0
> rx_q0_size_65_127_packets: 0
> rx_q0_size_128_255_packets: 0
> rx_q0_size_256_511_packets: 0
> rx_q0_size_512_1023_packets: 0
> rx_q0_size_1024_1518_packets: 0
> rx_q0_size_1519_max_packets: 0
> tx_q0_good_packets: 0
> tx_q0_good_bytes: 0
> tx_q0_multicast_packets: 0
> tx_q0_broadcast_packets: 0
> tx_q0_undersize_packets: 0
> tx_q0_size_64_packets: 0
> tx_q0_size_65_127_packets: 0
> tx_q0_size_128_255_packets: 0
> tx_q0_size_256_511_packets: 0
> tx_q0_size_512_1023_packets: 0
> tx_q0_size_1024_1518_packets: 0
> tx_q0_size_1519_max_packets: 0
> 
> You can see that the stat "rx_q0_errors" appears twice.
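If the analysis above is right, the fix is presumably to stop setting the
flag in the virtio init path, leaving the per-queue xstats to the pmd alone.
A sketch, assuming the flag is currently set in eth_virtio_dev_init() (the
exact location may differ):

    /* drivers/net/virtio/virtio_ethdev.c, eth_virtio_dev_init() --
     * assumed location; dropping this line should remove the duplicated
     * rx_q0_* / tx_q0_* entries, since the pmd already reports its own
     * per-queue stats: */
    eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;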
