Hi Pat,

Are you using fiber or copper to connect? If you are using fiber, it seems to me it could work with three QSFP optical transceivers and three MTP/LC breakout fibers that you would connect via couplers, such that Host Lanes 0 and 1 go to the first N321's Lanes 0 and 1, while Host Lanes 2 and 3 go to the second N321's Lanes 0 and 1. The host NIC would be configured 1x4.

Note that I have not done this, but I am planning to purchase N321s soon, so I am interested in this topic.

Rob
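If that lane split works, each N321 should show up as its own UHD device. A minimal sketch of verifying that from the host, using UHD's standard N3xx transport arguments; all addresses below are placeholders, not values from this thread — substitute the management and SFP addresses of your own units:

```shell
# Probe the first N321: addr/second_addr name the two 10 GbE lanes
# the XQ image exposes, mgmt_addr the 1 GbE management interface.
# All three addresses here are hypothetical placeholders.
uhd_usrp_probe --args "mgmt_addr=192.168.1.100,addr=192.168.10.2,second_addr=192.168.20.2"

# Probe the second N321 the same way, with its own addresses.
uhd_usrp_probe --args "mgmt_addr=192.168.1.101,addr=192.168.30.2,second_addr=192.168.40.2"
```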
On Wed, Feb 3, 2021 at 5:34 PM Patrick Kane via USRP-users <usrp-users@lists.ettus.com> wrote:

> I still cannot use a single 2-port QSFP+ NIC to connect to 2x N321s. Using the Intel tools to set the NIC to 2x2x10, the NIC doesn't recognize the second physical port as a valid connection to a USRP, but it works as a loopback or as a connection to another NIC. I asked Intel about the issue to see if we were configuring the NIC incorrectly, and this is the response we got:
>
> *Response from Intel on XL710-QDA2*
>
> "When configured for 2x2x10 the lanes are split between the ports. The A configuration utilizes lanes 1 and 2 on the top port and lanes 3 and 4 on the bottom port. These correlate to the physical connections on the QSFP+ module. If you are able to configure the port of the other device to utilize lanes 3 and 4, then it should be able to connect to the bottom port in the A configuration."
>
> *Problem:* The XQ FPGA image doesn't accept connections from lanes 2 and 3, just 0 and 1. This prevents the bottom port on the Intel NIC from working in the 2x2x10 configuration:
>
> Interface     | HG     | XG     | WX           | XQ           | AQ
> SFP+ 0        | 1 GbE  | 10 GbE | White Rabbit | White Rabbit | 10 GbE
> SFP+ 1        | 10 GbE | 10 GbE | 10 GbE       | Unused       | 10 GbE
> QSFP+ lane 0  | Unused | Unused | Unused       | 10 GbE       | Aurora
> QSFP+ lane 1  | Unused | Unused | Unused       | 10 GbE       | Aurora
> QSFP+ lane 2  | Unused | Unused | Unused       | *Unused*     | Aurora
> QSFP+ lane 3  | Unused | Unused | Unused       | *Unused*     | Aurora
>
> In the 2x N321 configuration, lanes 2 and 3 are valid, but they point to a different USRP. I'm hoping there's a UHD or FW change that can avoid the need for a QSFP->2x10G breakout cable, because that defeats the purpose of using a QSFP+ NIC instead of a 4-port 10G NIC (Intel X710-DA4).
>
> Thanks,
> Pat
>
> On Tue, Nov 24, 2020 at 1:01 PM Michael Dickens <michael.dick...@ettus.com> wrote:
>
>> Hi Pat - I'm glad that info helped!
>>
>> Yes, I plan on adding this information into the N32x Getting Started Guide, once I have a better handle on it.
>> Right now I have just a few data points & those are not consistent! And I don't know why! Thus ...
>>
>> Which Intel QSFP+ utility did you end up using? There are 2 that I can find:
>>
>> 1) EPCT:
>> https://downloadcenter.intel.com/download/28933/Intel-Ethernet-Port-Configuration-Tool-Linux-
>>
>> This is the newer version that seems to work.
>>
>> 2) QCU:
>> https://downloadcenter.intel.com/download/25851/Intel-QSFP-Configuration-Utility-Linux-Final-Release?product=46828
>>
>> This one is deprecated, though it still works to some extent.
>>
>> ===
>>
>> When I execute (1), I get the following options: "4x10" and "2x2x10". I do not get an "A" or "B" or "LOM" or whatever. Just literally those 2 options.
>>
>> I think the first one means "1x(4x10)", meaning that just port0 is active & provides 4 data lanes. I was hoping this option would work with a 1:4 SFP+ breakout cable from FS.com, but to the best of my testing I can get just 1 of those 4 SFP+ links to work. Supposedly if one uses the Intel 1:4 breakout cable this will work ... but that's paying $350 for an otherwise $50 cable! I'm still investigating here. Ideally this NIC would provide "2x(4x10)" with 2 1:4 breakout cables, to get double the SFP+ density of current NICs (e.g., the X710-DA4 or X722-DA4).
>>
>> The second one implies to me that both ports are available & providing 2 data lanes each. The best I've been able to do is use "2x2x10" with port0; port1 doesn't seem to be working in this setting.
>>
>> Admittedly, I might need to update to the current Intel Linux drivers for the XL710 NIC. I usually let the OS handle this for me -- in this case, Ubuntu 20.04 latest. There are new Intel drivers from early November 2020, but I don't think the XL710 had any updates from the prior version.
>>
>> I'm curious what driver version & OS / version you're using ... maybe let's catch up off-list for a bit & see what we can figure out here. Cheers!
>> - MLD
>>
>> On Tue, Nov 24, 2020 at 9:06 AM Patrick Kane <prkan...@gmail.com> wrote:
>>
>>> Hi Mike,
>>>
>>> That seemed to do the trick, thanks for the info! At some point, can we make these steps part of the N32x getting started docs?
>>>
>>> Also, the config utility makes me choose 2x2x10 A, B, or LOM. Choosing A disables the second port on the QDA2, and B disables the first port. LOM disables both ports (expected, because it's not a motherboard NIC). My ideal case is using 2x N321s over QSFP on the same XL710-QDA2 NIC. Have you had any luck in this configuration?
>>>
>>> Thanks,
>>> Pat
>>>
>>> On Mon, Nov 23, 2020 at 9:23 PM Michael Dickens <michael.dick...@ettus.com> wrote:
>>>
>>>> Hi Pat - I recently verified that the N321 QSFP+ interface works with the UHD 4.0 release. I am also using an Intel XL710 (QDA2, but that probably doesn't matter too much). The trick for me was using the Intel QSFP+ NIC configuration tool to set the NIC to 2x(2x10 Gb) mode. This is the setting that the N321 requires, and one that the NIC provides. Once that was set, you configure the host and USRP network interfaces as you normally would. After all of that, the link worked very nicely! I hope this is useful! - MLD
>>>>
>>>> On Nov 23, 2020, at 4:44 PM, Patrick Kane via USRP-users <usrp-users@lists.ettus.com> wrote:
>>>>
>>>> I have an N321 connected to serial console and QSFP+ through an XL710 Intel NIC. With the default HG image, I can connect through 1G and serial as expected. I updated the filesystem to UHD 4.0.0.0 using mender, and the build artifact reflects that this was successful.
>>>> Then, after loading the XQ image (using 2x 10 Gb lanes through the QSFP+ port), I lose all Ethernet connectivity through the 1G port SFP0 (expected), but I get the following output in the console window:
>>>>
>>>> [ 451.560674] nixge 40000000.ethernet sfp0: Link is Up - 10Gbps/Full - flow control off
>>>> [ 453.800673] nixge 40000000.ethernet sfp0: Link is Down
>>>> [ 454.920676] nixge 40000000.ethernet sfp0: Link is Up - 10Gbps/Full - flow control off
>>>> [ 458.280672] nixge 40000000.ethernet sfp0: Link is Down
>>>> [ 459.400677] nixge 40000000.ethernet sfp0: Link is Up - 10Gbps/Full - flow control off
>>>> [ 462.760705] nixge 40000000.ethernet sfp0: Link is Down
>>>> [ 463.880678] nixge 40000000.ethernet sfp0: Link is Up - 10Gbps/Full - flow control off
>>>> [ 466.120673] nixge 40000000.ethernet sfp0: Link is Down
>>>>
>>>> uhd_usrp_probe:
>>>>
>>>>   _____________________________________________________
>>>>  /
>>>> |       Device: N300-Series Device
>>>> |   _____________________________________________________
>>>> |  /
>>>> | |     Mboard: ni-n3xx-31E00AC
>>>> | |   dboard_0_pid: 338
>>>> | |   dboard_0_serial: 31DB406
>>>> | |   dboard_1_pid: 338
>>>> | |   dboard_1_serial: 31DB407
>>>> | |   eeprom_version: 3
>>>> | |   fs_version: 20200914000806
>>>> | |   mender_artifact: v4.0.0.0_n3xx
>>>> | |   mpm_sw_version: 4.0.0.0-g90ce6062
>>>> | |   pid: 16962
>>>> | |   product: n320
>>>> | |   rev: 7
>>>> | |   rpc_connection: local
>>>> | |   serial: 31E00AC
>>>> | |   type: n3xx
>>>> | |   MPM Version: 3.0
>>>> | |   FPGA Version: 8.0
>>>> | |   FPGA git hash: be53058.clean
>>>> | |
>>>> | |   Time sources:  internal, external, gpsdo, sfp0
>>>> | |   Clock sources: external, internal, gpsdo
>>>> | |   Sensors: ref_locked, gps_locked, temp, fan, gps_gpgga, gps_sky, gps_time, gps_tpv
>>>>
>>>> Are there any configuration items needed to connect to the N321 through the QSFP+ port? Note that I only see eth0, sfp0, sfp1, and int0 in /etc/network/interfaces.
>>>>
>>>> Thanks,
>>>>
>>>> Pat
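For reference, the sfp0/sfp1 stanzas in /etc/network/interfaces on the device use plain ifupdown syntax, so a static configuration for the XQ image's two lanes might look like the sketch below. The addresses shown are assumed stock defaults on N3xx images, not values verified in this thread; check your own unit before editing:

```
# Sketch of an /etc/network/interfaces fragment (ifupdown syntax)
# for the two 10 GbE lanes the XQ image presents as sfp0 and sfp1.
# Addresses are assumed defaults; verify on your device first.
auto sfp0
iface sfp0 inet static
    address 192.168.10.2
    netmask 255.255.255.0
    mtu 9000

auto sfp1
iface sfp1 inet static
    address 192.168.20.2
    netmask 255.255.255.0
    mtu 9000
```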
_______________________________________________
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com
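As a footnote to the driver-version and interface-configuration discussion in the thread, the host-side checks and setup would typically be done with standard Linux tools. A sketch under stated assumptions: enp1s0f0/enp1s0f1 are placeholder interface names, and the device side is assumed to use 192.168.10.2 and 192.168.20.2:

```shell
# Report driver name, driver version, and NIC firmware version
# for the XL710 port (replace enp1s0f0 with your interface name).
ethtool -i enp1s0f0

# Version of the installed i40e driver module.
modinfo i40e | grep -i '^version'

# Give the host addresses on the same /24s as the two USRP lanes,
# enable jumbo frames, and bring the links up.
sudo ip addr add 192.168.10.1/24 dev enp1s0f0
sudo ip addr add 192.168.20.1/24 dev enp1s0f1
sudo ip link set dev enp1s0f0 mtu 9000
sudo ip link set dev enp1s0f1 mtu 9000
sudo ip link set dev enp1s0f0 up
sudo ip link set dev enp1s0f1 up
```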