Thanks for the info, Daniel.

-Nitin

________________________________
From: Bernier, Daniel <daniel.bern...@bell.ca>
Sent: Wednesday, November 1, 2017 6:58 PM
To: Saxena, Nitin; Damjan Marion (damarion)
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP


Hi,



I have had the same issue and am still working on a fix with Mellanox for this one.

Damjan is right, it is just cosmetic (although annoying).



On the Linux kernel side, it implies moving to kernel 4.8 or above and a newer version of ethtool.

On the VPP side, it just requires Mellanox to advertise the speed capability correctly through DPDK, and I suppose that is still only half done.



Thanks,

----

Daniel Bernier | Bell Canada





From: "Saxena, Nitin" <nitin.sax...@cavium.com>
Date: Wednesday, November 1, 2017 at 8:54 AM
To: "Damjan Marion (damarion)" <damar...@cisco.com>
Cc: "Bernier, Daniel" <daniel.bern...@bell.ca>, "vpp-dev@lists.fd.io" 
<vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] 50GE interface support on VPP



I haven't run testpmd, but with VPP I am able to switch traffic between two ports, both in a VPP bridge. Seems fine, right?



-Nitin

________________________________

From: Damjan Marion (damarion) <damar...@cisco.com>
Sent: Wednesday, November 1, 2017 5:50:57 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP





Currently it is just cosmetic…



Does it work with testpmd?



—

Damjan



On 1 Nov 2017, at 13:14, Saxena, Nitin <nitin.sax...@cavium.com> wrote:



OK, thanks. I will debug where the problem lies.



However, is this just a display issue, or does the problem lie with the data path as well? I am able to receive packets into VPP from the outside world via this NIC. Any concern here?



-Nitin

________________________________

From: Damjan Marion (damarion) <damar...@cisco.com>
Sent: Wednesday, November 1, 2017 5:39:24 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP





The mlx5 DPDK driver is telling us that speed_capa = 0, so not much love here.



You should get at least the ETH_LINK_SPEED_50G bit set by the DPDK driver.



—

Damjan



On 1 Nov 2017, at 12:55, Saxena, Nitin <nitin.sax...@cavium.com> wrote:



Here are the details:



(gdb) p *dev_info
$1 = {pci_dev = 0x51c4e0, driver_name = 0xffff76eb38a8 "net_mlx5", if_index = 8,
  min_rx_bufsize = 32, max_rx_pktlen = 65536, max_rx_queues = 65535,
  max_tx_queues = 65535, max_mac_addrs = 128, max_hash_mac_addrs = 0, max_vfs = 0,
  max_vmdq_pools = 0, rx_offload_capa = 15, tx_offload_capa = 1679, reta_size = 512,
  hash_key_size = 40 '(', flow_type_rss_offloads = 0,
  default_rxconf = {rx_thresh = {pthresh = 0 '\000', hthresh = 0 '\000',
      wthresh = 0 '\000'}, rx_free_thresh = 0, rx_drop_en = 0 '\000',
    rx_deferred_start = 0 '\000'},
  default_txconf = {tx_thresh = {pthresh = 0 '\000', hthresh = 0 '\000',
      wthresh = 0 '\000'}, tx_rs_thresh = 0, tx_free_thresh = 0, txq_flags = 0,
    tx_deferred_start = 0 '\000'},
  vmdq_queue_base = 0, vmdq_queue_num = 0, vmdq_pool_base = 0,
  rx_desc_lim = {nb_max = 65535, nb_min = 0, nb_align = 1, nb_seg_max = 0,
    nb_mtu_seg_max = 0},
  tx_desc_lim = {nb_max = 65535, nb_min = 0, nb_align = 1, nb_seg_max = 0,
    nb_mtu_seg_max = 0},
  speed_capa = 0, nb_rx_queues = 0, nb_tx_queues = 0}



Thanks,

Nitin



________________________________

From: Damjan Marion (damarion) <damar...@cisco.com>
Sent: Wednesday, November 1, 2017 5:17 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP





Can you put a breakpoint on port_type_from_speed_capa and capture dev_info?



I.e.:



$ make build debug
(gdb) b port_type_from_speed_capa
(gdb) r
<wait for breakpoint>
(gdb) p *dev_info



—

Damjan



On 1 Nov 2017, at 12:34, Saxena, Nitin <nitin.sax...@cavium.com> wrote:



Please find the "show pci" output:



DBGvpp# show pci
Address        Sock VID:PID     Link Speed   Driver      Product Name                      Vital Product Data
0000:0b:00.0    0   14e4:16a1   8.0 GT/s x8  bnx2x       OCP 10GbE Dual Port SFP+ Adapter
0000:32:00.1    0   15b3:1013   8.0 GT/s x16 mlx5_core   CX416A - ConnectX-4 QSFP28
0000:13:00.1    0   8086:10c9   2.5 GT/s x4  igb
0000:0b:00.1    0   14e4:16a1   8.0 GT/s x8  bnx2x       OCP 10GbE Dual Port SFP+ Adapter
0000:32:00.0    0   15b3:1013   8.0 GT/s x16 mlx5_core   CX416A - ConnectX-4 QSFP28
0000:13:00.0    0   8086:10c9   2.5 GT/s x4  igb



Just FYI, I am running VPP on aarch64.



Thanks,

Nitin



________________________________

From: Damjan Marion (damarion) <damar...@cisco.com>
Sent: Wednesday, November 1, 2017 3:09 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP





Can you share “show pci” output from VPP?



—

Damjan



On 30 Oct 2017, at 14:22, Saxena, Nitin <nitin.sax...@cavium.com> wrote:



Hi Damjan,



I am still seeing an UnknownEthernet32/0/0/0 interface with the Mellanox ConnectX-4 NIC. I am using the VPP v17.10 tag, and I think the Gerrit patch specified in the mail below is part of the v17.10 release.



Attached logs.



Thanks,
Nitin



________________________________

From: vpp-dev-boun...@lists.fd.io <vpp-dev-boun...@lists.fd.io> on behalf of Damjan Marion (damarion) <damar...@cisco.com>
Sent: Wednesday, July 5, 2017 5:38 AM
To: Bernier, Daniel
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP



Hi Daniel,



Can you try with this patch?



https://gerrit.fd.io/r/#/c/7418/



Regards,



Damjan



On 4 Jul 2017, at 22:14, Bernier, Daniel <daniel.bern...@bell.ca> wrote:



Hi,



I have ConnectX-4 50GE interfaces running on VPP, and for some reason they appear as “Unknown” even when running at 40GE.



localadmin@sm981:~$ lspci | grep Mellanox
81:00.0 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]
81:00.1 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]



localadmin@sm981:~$ ethtool ens1f0
Settings for ens1f0:
        Supported ports: [ FIBRE Backplane ]
        Supported link modes:   1000baseKX/Full
                                10000baseKR/Full
                                40000baseKR4/Full
                                40000baseCR4/Full
                                40000baseSR4/Full
                                40000baseLR4/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Advertised link modes:  1000baseKX/Full
                                10000baseKR/Full
                                40000baseKR4/Full
                                40000baseCR4/Full
                                40000baseSR4/Full
                                40000baseLR4/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Speed: 40000Mb/s
        Duplex: Full
        Port: Direct Attach Copper
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
Cannot get wake-on-lan settings: Operation not permitted
        Current message level: 0x00000004 (4)
                               link
        Link detected: yes



localadmin@sm981:~$ sudo vppctl show interface
              Name               Idx       State          Counter          Count
UnknownEthernet81/0/0             1         up       rx packets              723257
                                                     rx bytes              68599505
                                                     tx packets               39495
                                                     tx bytes               2093235
                                                     drops                   723257
                                                     ip4                      48504
UnknownEthernet81/0/1             2         up       rx packets              723194
                                                     rx bytes              68592678
                                                     tx packets               39495
                                                     tx bytes               2093235
                                                     drops                   723194
                                                     ip4                      48504
local0                            0        down





Any ideas where this could be fixed?



Thanks,

----

Daniel Bernier | Bell Canada



_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev



<mlx_sh_int_error.txt>


