I have run into a problem where the API command sw_interfaces_dump returns 
incorrect speed and duplex information, while the CLI command 'show 
hardware-interfaces' shows the correct values.
The system is running VPP 18.07.1 and the interfaces on it are two 10G Intel 
x552 NICs.
sw_interfaces_dump gets its information from the vnet_hw_interface_t data 
structure, whereas 'show hardware-interfaces' gets its info straight from DPDK.
Looking with gdb, I could see that the flags variable in the hardware interface 
data structure held incorrect data, while the DPDK information was correct.
The 32-bit flags variable contains: bit 0 = link_up, bits 1-2 = duplex, 
bits 3-14 = speed, bit 15 unused, bit 16 = interrupt_mode, 
bit 17 = supports_cksum_offload.
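For reference, here is that layout spelled out as C defines. These are 
placeholder names reconstructed from the description above and the hex values 
quoted further down, not the actual VNET_HW_INTERFACE_FLAG_* definitions from 
the tree:

      /* Placeholder reconstruction of the 18.07 flags layout described above. */
      #define HW_FLAG_LINK_UP        (1u << 0)      /* bit 0     = 0x1     */
      #define HW_FLAG_FULL_DUPLEX    (1u << 2)      /* bit 2     = 0x4     */
      #define HW_FLAG_DUPLEX_MASK    (0x3u << 1)    /* bits 1-2  = 0x6     */
      #define HW_FLAG_SPEED_10G      (1u << 8)      /* inferred from 0x20104 below */
      #define HW_FLAG_SPEED_MASK     (0xfffu << 3)  /* bits 3-14 = 0x7ff8  */
      #define HW_FLAG_INT_MODE       (1u << 16)     /* bit 16    = 0x10000 */
      #define HW_FLAG_CKSUM_OFFLOAD  (1u << 17)     /* bit 17    = 0x20000 */

With these, the 0x20104 and 0x20001 values that show up below decompose as 
CKSUM_OFFLOAD | SPEED_10G | FULL_DUPLEX and CKSUM_OFFLOAD | LINK_UP 
respectively.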
The public function that sets the flags is vnet_hw_interface_set_flags() in 
src/vnet/interface.c; it calls a local static helper in the same file, 
vnet_hw_interface_set_flags_helper().
The basic logic of the latter in 18.07 is:
src/vnet/interface.c:
      static clib_error_t *
      vnet_hw_interface_set_flags_helper (vnet_main_t * vnm, u32 hw_if_index,
                                          u32 flags, u32 helper_flags)
      {
        ...
        mask = (VNET_HW_INTERFACE_FLAG_LINK_UP |
                VNET_HW_INTERFACE_FLAG_DUPLEX_MASK |
                VNET_HW_INTERFACE_FLAG_SPEED_MASK);
        flags &= mask;        // clear all bits outside the mask in the incoming value
        ...
        hi->flags &= ~mask;   // existing link/duplex/speed flags are cleared
        hi->flags |= flags;   // set the new flags
        ...
      }
The issue I see is that this function clears the link/duplex/speed bits, and 
the many callers of this function don't provide the full set of active bits.  
Scenario:
1. VPP starts up, the hardware interfaces are created, and flags is set to 0x0.
2. dpdk_update_link_state() gets called and sets flags to 0x20104 = 
supports_cksum_offload/10G/full_duplex/link_down.
3. The application does an admin up, which ends up calling 
dpdk_interface_admin_up_down(); flags is passed in as 0x1 to 
vnet_hw_interface_set_flags(), and the implementation sets the hardware 
interface flags to 0x20001 = supports_cksum_offload/link_up.

The speed and duplex bits have been cleared!
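Here is a quick standalone reproduction of that sequence (plain C with the mask 
values written out as hex, mimicking the quoted logic rather than using the 
real VPP code), which shows the speed and duplex bits getting wiped:

      #include <stdio.h>

      /* Mimics the 18.07 helper logic quoted earlier: mask = link_up | duplex | speed. */
      static unsigned int
      set_flags_1807 (unsigned int hw_flags, unsigned int new_flags)
      {
        unsigned int mask = 0x1 | 0x6 | 0x7ff8;  /* link_up | duplex (bits 1-2) | speed (bits 3-14) */
        new_flags &= mask;      /* drop any bits outside the mask from the caller's value */
        hw_flags &= ~mask;      /* existing link/duplex/speed bits are cleared */
        hw_flags |= new_flags;  /* only what the caller passed in survives */
        return hw_flags;
      }

      int
      main (void)
      {
        unsigned int hw_flags = 0x20104;            /* cksum_offload | 10G | full_duplex, link down */
        hw_flags = set_flags_1807 (hw_flags, 0x1);  /* admin up path passes only link_up */
        printf ("0x%x\n", hw_flags);                /* prints 0x20001: speed and duplex are gone */
        return 0;
      }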

I took a look at the 19.01.1 code and it's been changed to this:

src/vnet/interface.c:
      static clib_error_t *
      vnet_hw_interface_set_flags_helper (vnet_main_t * vnm, u32 hw_if_index,
                                          u32 flags, u32 helper_flags)
      {
        ...
        mask = (VNET_HW_INTERFACE_FLAG_LINK_UP |
                VNET_HW_INTERFACE_FLAG_DUPLEX_MASK);
        ...
      }
This is due to a change that stores the speed in its own u32 field (in kbps) 
instead of as bits in flags:
   commit 5100aa9cb9e7acff35fa3bfde8aa95b5ace60344
   Author: Damjan Marion <damar...@cisco.com>
   Date:   Thu Nov 8 15:30:16 2018 +0100

       vnet: store hw interface speed in kbps instead of using flags

       Change-Id: Idd4471a3adf7023e48e85717f00c786b1dde0cca
       Signed-off-by: Damjan Marion <damar...@cisco.com>
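
Modelling the 19.01 mask the same way (again just a sketch of the quoted logic, 
not the actual source), the speed looks safe because it now lives outside 
flags, but a caller that only passes link_up would seemingly still wipe the 
duplex bit:

      /* 19.01-style mask: speed is no longer in flags, so only link_up and duplex remain. */
      static unsigned int
      set_flags_1901 (unsigned int hw_flags, unsigned int new_flags)
      {
        unsigned int mask = 0x1 | 0x6;  /* link_up (bit 0) | duplex (bits 1-2) */
        new_flags &= mask;
        hw_flags &= ~mask;              /* duplex is still cleared here... */
        hw_flags |= new_flags;          /* ...and only restored if the caller passes it back in */
        return hw_flags;
      }

Dropping this next to the 18.07 version above, set_flags_1901 (0x20004, 0x1) 
returns 0x20001: full_duplex is lost, while the speed (now a separate kbps 
field on the interface) is untouched.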

I'll retest when I can, and I'm guessing the speed will now be correct, but am 
I missing something? The same problem with the duplex information seems to 
still be there.

Ken Coulson