It's the X520-2. Note that we see multiple queues when we use PCI passthrough of the whole NIC, but the moment we enable VFs, ixgbe disables multi-queue.
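For reference, here is a minimal sketch of what we try on the DPDK side: read the queue limits the ixgbevf PMD advertises via rte_eth_dev_info_get() and then request multiple queues with rte_eth_dev_configure(). The port id, queue counts and RSS hash fields below are placeholders for illustration, not our exact config:

    #include <stdio.h>
    #include <stdint.h>
    #include <rte_ethdev.h>

    /* Sketch only: port id, queue counts and RSS hash fields are
     * placeholders; whether the VF actually gets more than one queue
     * depends on what the PF driver / ixgbevf PMD allows. */
    static int
    request_vf_multiqueue(uint8_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
            struct rte_eth_dev_info dev_info;
            struct rte_eth_conf port_conf = {
                    .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
                    .rx_adv_conf = {
                            .rss_conf = { .rss_hf = ETH_RSS_IP },
                    },
            };

            /* See how many queues the PMD reports for this VF. */
            rte_eth_dev_info_get(port_id, &dev_info);
            printf("port %u: max_rx_queues=%u max_tx_queues=%u\n",
                   port_id, dev_info.max_rx_queues, dev_info.max_tx_queues);

            if (nb_rxq > dev_info.max_rx_queues ||
                nb_txq > dev_info.max_tx_queues)
                    return -1;

            /* Request multiple queues; rte_eth_rx_queue_setup(),
             * rte_eth_tx_queue_setup() and rte_eth_dev_start()
             * would follow here. */
            return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
    }

With the VFs enabled, dev_info only ever reports a single queue for us, which matches the "Multiqueue Disabled" messages from the PF driver below.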
Here are more details:

06:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
06:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

[root at localhost ~]# lspci -vv -s 06:00.0
06:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
        Subsystem: Intel Corporation Ethernet Server Adapter X520-2
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 42
        Region 0: Memory at 33fffd80000 (64-bit, prefetchable) [size=512K]
        Region 2: I/O ports at 2020 [disabled] [size=32]
        Region 4: Memory at 33fffe04000 (64-bit, prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=1 PME-
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
                Address: 0000000000000000  Data: 0000
                Masking: 00000000  Pending: 00000000
        Capabilities: [70] MSI-X: Enable+ Count=64 Masked-
                Vector table: BAR=4 offset=00000000
                PBA: BAR=4 offset=00002000
        Capabilities: [a0] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
                DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
                        MaxPayload 256 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
                LnkCap: Port #2, Speed 5GT/s, Width x8, ASPM L0s, Exit Latency L0s <1us, L1 <8us
                        ClockPM- Surprise- LLActRep- BwNot-
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not Supported
                DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis-, LTR-, OBFF Disabled
                LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
                         EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
        Capabilities: [100 v1] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt+ UnxCmplt+ RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES- TLP+ FCP+ CmpltTO+ CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
                CEMsk:  RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ NonFatalErr+
                AERCap: First Error Pointer: 00, GenCap+ CGenEn+ ChkCap+ ChkEn+
        Capabilities: [140 v1] Device Serial Number 90-e2-ba-ff-ff-a5-9d-94
        Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)
                ARICap: MFVC- ACS-, Next Function: 1
                ARICtl: MFVC- ACS-, Function Group: 0
        Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)
                IOVCap: Migration-, Interrupt Message Number: 000
                IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy+
                IOVSta: Migration-
                Initial VFs: 64, Total VFs: 64, Number of VFs: 0, Function Dependency Link: 00
                VF offset: 128, stride: 2, Device ID: 10ed
                Supported Page Size: 00000553, System Page Size: 00000001
                Region 0: Memory at 0000000092200000 (64-bit, non-prefetchable)
                Region 3: Memory at 0000000092300000 (64-bit, non-prefetchable)
                VF Migration: offset: 00000000, BIR: 0
        Kernel driver in use: ixgbe

On Wed, Feb 3, 2016 at 3:31 PM, Choi, Sy Jong <sy.jong.choi at intel.com> wrote:

> Hi Saurabh,
>
> May I know the model number of your physical nic?
>
> Regards,
> Choi, Sy Jong
> Platform Application Engineer
>
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Saurabh Mishra
> Sent: Thursday, February 04, 2016 3:47 AM
> To: dev at dpdk.org; users at dpdk.org
> Subject: [dpdk-dev] DPDK ixgbevf multi-queue disabled
>
> Is there any way to enable multi-queue for SR-IOV of ixgbe?
>
> I've seen that the PF driver automatically disables multi-queue when VFs
> are created from the host.
>
> We want to use multiple queues with DPDK in the case of ixgbevf too.
>
> [781203.692378] ixgbe 0000:06:00.0: Multiqueue Disabled: Rx Queue count = 1, Tx Queue count = 1
> [781203.699858] ixgbe 0000:06:00.0: registered PHC device on p5p1
> [781203.861774] ixgbe 0000:06:00.0 p5p1: detected SFP+: 5
> [781204.104038] ixgbe 0000:06:00.0 p5p1: NIC Link is Up 10 Gbps, Flow Control: RX/TX
> [781206.035467] ixgbe 0000:06:00.1 p5p2: SR-IOV enabled with 2 VFs
> [781206.136011] pci 0000:06:10.1: [8086:10ed] type 00 class 0x020000
> [781206.136375] pci 0000:06:10.3: [8086:10ed] type 00 class 0x020000
> [781206.136776] ixgbe 0000:06:00.1: removed PHC on p5p2
> [781206.227015] ixgbe 0000:06:00.1: irq 116 for MSI/MSI-X
> [781206.227046] ixgbe 0000:06:00.1: irq 117 for MSI/MSI-X
> [781206.227062] ixgbe 0000:06:00.1: Multiqueue Disabled: Rx Queue count = 1, Tx Queue count = 1
> [781206.235804] ixgbe 0000:06:00.1: registered PHC device on p5p2
> [781206.407537] ixgbe 0000:06:00.1 p5p2: detected SFP+: 6
> [781206.649795] ixgbe 0000:06:00.1 p5p2: NIC Link is Up 10 Gbps, Flow Control: RX/TX
>
> Thanks,
> /Saurabh
>