[dpdk-dev] Scenario of dpdk-iokit //Re: Fwd: dpdk-iokit: Turn dpdk into the IO field.

2013-11-19 Thread March Jane

 -- Today's dilemmas in enterprise-class storage systems & motivation -- 

The model of most storage systems is "front-end cards + CPU + back-end 
magnetic hard drives". In such a system, the hard drive is very slow in terms 
of random-access capability - roughly 200 IOs per second - whereas the processor 
is very fast, so many software stacks can tolerate wasting CPU cycles in order 
to exploit the capabilities of the hard drive. 

In addition, software for SAN is usually developed in kernel space in 
order to achieve low latency - a few milliseconds per request - and to sustain the 
IO pressure of heavy workloads. User-space software that goes through POSIX is 
usually slow. However, kernel-space development is a nightmare for engineers, even 
though we have approaches to simulate such code in user space. 

However, now that flash is becoming popular, the gap between CPU and media 
has been overturned: a single flash card can easily reach 1M IOPS, and if you plug 
10 such cards into a server, the processor can hardly keep up. Ironically, today's 
challenge in flash storage is to exploit the capability of the processor. Beyond 
that, moving to user space is also a motivation, provided the move is an OS-bypass 
rather than a move to POSIX.


Best

- March






On Nov 14, 2013, at 8:03 PM, St Leger, Jim  wrote:

> March:
> How long have you been working on the project?
> What is your goal for final implementation?
> Do you have any progress so far?
> Jim
>  
> From: March Jane 
> Date: November 12, 2013 at 2:27:37 AM PST
> To: 
> Subject: [dpdk-dev] Fwd: dpdk-iokit: Turn dpdk into the IO field.
> 
> Hi,
>  dpdk folks,
> I am working on a project that offers a complete IO stack for dpdk, 
> in particular a SCSI target stack in dpdk - dpdk-iokit. Once I finish my 
> initial code, I will upload it to github. I would like to know whether any 
> other people are working toward the same purpose? 
> 
> 
>  - march
> 



[dpdk-dev] Scenario of dpdk-iokit //Re: Fwd: dpdk-iokit: Turn dpdk into the IO field.

2013-11-19 Thread March Jane

Thank you for the reply. 

FCP and full-offload iSCSI would be nice, for example QLogic 16Gbps FCP.

But the IO scenario is very different from the IP scenario, as I understand it.

March   

BTW: Do you have people in the Bay Area?


PS:
I have already finished the header file for dpdk-iokit/libiokit_sctgt



On Nov 19, 2013, at 5:45 PM, Vincent JARDIN  wrote:

> (off list since it could become a troll ;) )
> 
> At 6WIND, we developed librte_crypto as a generic framework to manage any 
> crypto engine:
>  - Intel's QuickAssist
>  - Cavium's Nitrox II
>  - AES-NI SW crypto
> 
> It was required in order to manage high rates of PCI IO for cryptos (IPsec, 
> SSL, etc.).
> 
> So, counting librte_pmd_mlx4, Virtio and Vmxnet3, that's more than 6 ultra-low 
> latency/very efficient drivers that we have added to the DPDK and promoted.
> 
> Which storage drivers would you foresee being run in userland first?
> 
> Best regards,
> 
> 
> On 19/11/2013 02:22, March Jane wrote:



[dpdk-dev] Scenario of dpdk-iokit //Re: Fwd: dpdk-iokit: Turn dpdk into the IO field.

2013-11-19 Thread Vincent JARDIN
So, on the list if you prefer to do so ;) Did you try a "perf top" on 
your system under load?


On 19/11/2013 10:56, March Jane wrote:



[dpdk-dev] rte_ring_sc_dequeue returns 0 but sets packet to NULL

2013-11-19 Thread Jose Gavine Cueto
Hi,

I am encountering a strange behavior of rte_ring_sc_dequeue, though I'm not
yet sure what causes this.

I have the following code:

rc = rte_ring_sc_dequeue(fwdp->rxtx_rings->xmit_ring, &rpackets);

On the first dequeue, rpackets gets a valid rte_mbuf address; however, on the
second dequeue the call returns 0, which indicates success, but the rte_mbuf
result is set to NULL. Is this even possible? It is happening in my scenario,
or it could just be that there is something wrong with my code.
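
For reference, a minimal defensive sketch around that call (the helper name is
hypothetical; only rte_ring_sc_dequeue() and its 0 / -ENOENT return convention
come from the rte_ring API). A 0 return together with a NULL object would
normally mean a NULL pointer was enqueued upstream:

#include <rte_ring.h>
#include <rte_mbuf.h>
#include <rte_debug.h>
#include <errno.h>

/* Hypothetical helper: dequeue one mbuf from the xmit ring and check both
 * the return code and the dequeued pointer. rte_ring_sc_dequeue() returns 0
 * on success and -ENOENT when the ring is empty; on success the slot is
 * overwritten with whatever pointer was enqueued. */
static struct rte_mbuf *
xmit_dequeue(struct rte_ring *xmit_ring)
{
	void *obj = NULL;
	int rc = rte_ring_sc_dequeue(xmit_ring, &obj);

	if (rc == -ENOENT)
		return NULL;	/* ring empty, nothing to transmit */
	if (rc == 0 && obj == NULL)
		rte_panic("a NULL mbuf pointer was enqueued on xmit_ring\n");
	return (struct rte_mbuf *)obj;
}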

Cheers,
Pepe

-- 
To stop learning is like to stop loving.


[dpdk-dev] [PATCH 1/2] igb/ixgbe: ETH_MQ_RX_NONE should disable RSS

2013-11-19 Thread Maxime Leroy
As explained in rte_ethdev.h, ETH_MQ_RX_NONE means that none of RSS, DCB
or VMDQ mode is used to select the rx queues.

But the igb/ixgbe code always selects RSS mode with ETH_MQ_RX_NONE. This patch
fixes this incoherence between the API and the source code.

Signed-off-by: Maxime Leroy 
---
 lib/librte_pmd_e1000/igb_rxtx.c   |4 ++--
 lib/librte_pmd_ixgbe/ixgbe_rxtx.c |4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index f785d9f..641ceea 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -1745,8 +1745,6 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
*/
if (dev->data->nb_rx_queues > 1)
switch (dev->data->dev_conf.rxmode.mq_mode) {
-   case ETH_MQ_RX_NONE:
-   /* if mq_mode not assign, we use rss mode.*/
case ETH_MQ_RX_RSS:
igb_rss_configure(dev);
break;
@@ -1754,6 +1752,8 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
/*Configure general VMDQ only RX parameters*/
igb_vmdq_rx_hw_configure(dev); 
break;
+   case ETH_MQ_RX_NONE:
+   /* if mq_mode is none, disable rss mode.*/
default: 
igb_rss_disable(dev);
break;
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c 
b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 0f7be95..e1b90f9 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -3217,8 +3217,6 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 */
if (dev->data->nb_rx_queues > 1)
switch (dev->data->dev_conf.rxmode.mq_mode) {
-   case ETH_MQ_RX_NONE:
-   /* if mq_mode not assign, we use rss mode.*/
case ETH_MQ_RX_RSS:
ixgbe_rss_configure(dev);
break;
@@ -3231,6 +3229,8 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
ixgbe_vmdq_rx_hw_configure(dev);
break;

+   case ETH_MQ_RX_NONE:
+   /* if mq_mode is none, disable rss mode.*/
default: ixgbe_rss_disable(dev);
}
else
-- 
1.7.10.4
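
For context, a minimal sketch of the application-side expectation behind this
fix (the port id and queue counts are arbitrary; only rte_eth_dev_configure()
and ETH_MQ_RX_NONE come from the rte_ethdev API):

#include <rte_ethdev.h>

/* Request no multi-queue distribution: with this patch applied, igb/ixgbe
 * leave RSS disabled here even when several Rx queues are configured,
 * matching what rte_ethdev.h documents for ETH_MQ_RX_NONE. */
static int
configure_port_no_rss(uint8_t port_id)
{
	struct rte_eth_conf conf = {
		.rxmode = {
			.mq_mode = ETH_MQ_RX_NONE,
		},
	};

	return rte_eth_dev_configure(port_id, 2, 2, &conf);
}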



[dpdk-dev] [PATCH 2/2] igb/ixgbe: allow RSS with only one Rx queue

2013-11-19 Thread Maxime Leroy
We should be able to enable RSS with one Rx queue.
RSS hash can be useful independently of the number of queues.
Applications can use RSS hash to identify different flows.

Signed-off-by: Maxime Leroy 
---
 lib/librte_pmd_e1000/igb_rxtx.c   |7 ++-
 lib/librte_pmd_ixgbe/ixgbe_rxtx.c |7 ++-
 2 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index 641ceea..8c1e2cc 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -1743,8 +1743,7 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
/*
* SRIOV inactive scheme
*/
-   if (dev->data->nb_rx_queues > 1)
-   switch (dev->data->dev_conf.rxmode.mq_mode) {
+   switch (dev->data->dev_conf.rxmode.mq_mode) {
case ETH_MQ_RX_RSS:
igb_rss_configure(dev);
break;
@@ -1757,9 +1756,7 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
default: 
igb_rss_disable(dev);
break;
-   }
-   else
-   igb_rss_disable(dev);
+   }
}

return 0;
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c 
b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index e1b90f9..ae9eda8 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -3215,8 +3215,7 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 * SRIOV inactive scheme
 * any DCB/RSS w/o VMDq multi-queue setting
 */
-   if (dev->data->nb_rx_queues > 1)
-   switch (dev->data->dev_conf.rxmode.mq_mode) {
+   switch (dev->data->dev_conf.rxmode.mq_mode) {
case ETH_MQ_RX_RSS:
ixgbe_rss_configure(dev);
break;
@@ -3232,9 +3231,7 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
case ETH_MQ_RX_NONE:
/* if mq_mode is none, disable rss mode.*/
default: ixgbe_rss_disable(dev);
-   }
-   else
-   ixgbe_rss_disable(dev);
+   }
} else {
switch (RTE_ETH_DEV_SRIOV(dev).active) {
/*
-- 
1.7.10.4
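
As an illustration of the use case described in the commit message, a sketch
that requests RSS with a single Rx queue (the rss_hf value and port id are
arbitrary; the structures and flags are from the rte_ethdev API). The NIC then
delivers a per-packet RSS hash in each mbuf, which the application can use to
identify flows:

#include <stddef.h>
#include <rte_ethdev.h>

/* One Rx queue is enough: the point is the per-packet RSS hash written
 * back by the NIC, not spreading packets across queues. */
static int
configure_port_rss_single_queue(uint8_t port_id)
{
	struct rte_eth_conf conf = {
		.rxmode = {
			.mq_mode = ETH_MQ_RX_RSS,
		},
		.rx_adv_conf.rss_conf = {
			.rss_key = NULL,          /* use the driver's default key */
			.rss_hf  = ETH_RSS_IPV4,  /* hash on IPv4 headers */
		},
	};

	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}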



[dpdk-dev] ways to generate 40Gbps with two NICs x two ports?

2013-11-19 Thread jinho hwang
Hi All,

I have two NICs (82599) with two ports each that are used as packet generators.
I want to generate packets at full line rate (40Gbps), but Pktgen-DPDK does not
seem to be able to do it when the two ports on a NIC are used simultaneously.
Does anyone know how to generate 40Gbps without replicating packets in the
switch?

Thank you,

Jinho


[dpdk-dev] ways to generate 40Gbps with two NICs x two ports?

2013-11-19 Thread Wiles, Roger Keith
How do you have Pktgen configured in this case?

On my Westmere dual-socket 3.4GHz machine I can send 20G on a single 82599 NIC 
with two ports. My machine has a PCIe bug that does not allow me to send on more 
than 3 ports at wire rate. I get close to 40G with 64-byte packets, but the fourth 
port runs at only about 70% of wire rate because of the PCIe hardware bottleneck 
problem.

Keith Wiles, Principal Technologist for Networking member of the CTO office, 
Wind River
direct 972.434.4136  mobile 940.213.5533  fax 000.000.

On Nov 19, 2013, at 10:09 AM, jinho hwang <hwang.jinho at gmail.com> wrote:




[dpdk-dev] ways to generate 40Gbps with two NICs x two ports?

2013-11-19 Thread jinho hwang
On Tue, Nov 19, 2013 at 11:31 AM, Wiles, Roger Keith
 wrote:

Hi Keith,

Thank you for the e-mail. I am not sure how to figure out whether my
PCIe also has any problem that prevents me from sending at full line rate.
I use an Intel(R) Xeon(R) CPU E5649 @ 2.53GHz. It is hard for
me to figure out where the bottleneck is.

My configuration is:

sudo ./app/build/pktgen -c 1ff -n 3 $BLACK-LIST -- -p 0xf0 -P -m
"[1:2].0, [3:4].1, [5:6].2, [7:8].3" -f test/forward.lua


=== port to lcore mapping table (# lcores 9) ===

   lcore: 0 1 2 3 4 5 6 7 8

port   0:  D: T  1: 0  0: 1  0: 0  0: 0  0: 0  0: 0  0: 0  0: 0 =  1: 1

port   1:  D: T  0: 0  0: 0  1: 0  0: 1  0: 0  0: 0  0: 0  0: 0 =  1: 1

port   2:  D: T  0: 0  0: 0  0: 0  0: 0  1: 0  0: 1  0: 0  0: 0 =  1: 1

port   3:  D: T  0: 0  0: 0  0: 0  0: 0  0: 0  0: 0  1: 0  0: 1 =  1: 1

Total   :  0: 0  1: 0  0: 1  1: 0  0: 1  1: 0  0: 1  1: 0  0: 1

Display and Timer on lcore 0, rx:tx counts per port/lcore


Configuring 4 ports, MBUF Size 1984, MBUF Cache Size 128

Lcore:

1, type  RX , rx_cnt  1, tx_cnt  0 private (nil), RX (pid:qid): (
0: 0) , TX (pid:qid):

2, type  TX , rx_cnt  0, tx_cnt  1 private (nil), RX (pid:qid): ,
TX (pid:qid): ( 0: 0)

3, type  RX , rx_cnt  1, tx_cnt  0 private (nil), RX (pid:qid): (
1: 0) , TX (pid:qid):

4, type  TX , rx_cnt  0, tx_cnt  1 private (nil), RX (pid:qid): ,
TX (pid:qid): ( 1: 0)

5, type  RX , rx_cnt  1, tx_cnt  0 private (nil), RX (pid:qid): (
2: 0) , TX (pid:qid):

6, type  TX , rx_cnt  0, tx_cnt  1 private (nil), RX (pid:qid): ,
TX (pid:qid): ( 2: 0)

7, type  RX , rx_cnt  1, tx_cnt  0 private (nil), RX (pid:qid): (
3: 0) , TX (pid:qid):

8, type  TX , rx_cnt  0, tx_cnt  1 private (nil), RX (pid:qid): ,
TX (pid:qid): ( 3: 0)


Port :

0, nb_lcores  2, private 0x6fd5a0, lcores:  1  2

1, nb_lcores  2, private 0x700208, lcores:  3  4

2, nb_lcores  2, private 0x702e70, lcores:  5  6

3, nb_lcores  2, private 0x705ad8, lcores:  7  8



Initialize Port 0 -- TxQ 1, RxQ 1,  Src MAC 90:e2:ba:2f:f2:a4

Create: Default RX  0:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB


Create: Default TX  0:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB

Create: Range TX0:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB

Create: Sequence TX 0:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB

Create: Special TX  0:0  - Memory used (MBUFs   64 x (size 1984 +
Hdr 64)) + 395392 =515 KB



Port memory used =  10251 KB

Initialize Port 1 -- TxQ 1, RxQ 1,  Src MAC 90:e2:ba:2f:f2:a5

Create: Default RX  1:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB


Create: Default TX  1:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB

Create: Range TX1:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB

Create: Sequence TX 1:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB

Create: Special TX  1:0  - Memory used (MBUFs   64 x (size 1984 +
Hdr 64)) + 395392 =515 KB



Port memory used =  10251 KB

Initialize Port 2 -- TxQ 1, RxQ 1,  Src MAC 90:e2:ba:4a:e6:1c

Create: Default RX  2:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB


Create: Default TX  2:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB

Create: Range TX2:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB

Create: Sequence TX 2:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB

Create: Special TX  2:0  - Memory used (MBUFs   64 x (size 1984 +
Hdr 64)) + 395392 =515 KB



Port memory used =  10251 KB

Initialize Port 3 -- TxQ 1, RxQ 1,  Src MAC 90:e2:ba:4a:e6:1d

Create: Default RX  3:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB


Create: Default TX  3:0  - Memory u

[dpdk-dev] ways to generate 40Gbps with two NICs x two ports?

2013-11-19 Thread Wiles, Roger Keith
Sorry, I mistyped the speed of my machine: it is 2.4GHz, not 3.4GHz, but that 
should not change the problem here.

I am not sure how to determine whether your machine has a problem other than 
starting up one port at a time and seeing if the rate drops when you start up 
the fourth port.

Keith Wiles, Principal Technologist for Networking member of the CTO office, 
Wind River
mobile 940.213.5533

On Nov 19, 2013, at 10:42 AM, jinho hwang <hwang.jinho at gmail.com> wrote:


[dpdk-dev] ways to generate 40Gbps with two NICs x two ports?

2013-11-19 Thread Wiles, Roger Keith
BTW, the configuration looks fine, but you need to make sure the lcores are not 
split between two different CPU sockets. You can use dpdk/tools/cpu_layout.py 
to dump the system configuration.
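
A programmatic variant of the same check, as a sketch (rte_eth_dev_socket_id()
and rte_lcore_to_socket_id() are standard DPDK calls; the warning policy and
function name are illustrative):

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>

/* Warn if the lcore handling a port lives on a different NUMA socket than
 * the NIC itself, i.e. if traffic would have to cross the QPI. */
static void
warn_if_cross_socket(uint8_t port_id, unsigned lcore_id)
{
	int port_socket = rte_eth_dev_socket_id(port_id);
	int lcore_socket = (int)rte_lcore_to_socket_id(lcore_id);

	if (port_socket >= 0 && port_socket != lcore_socket)
		printf("warning: port %u (socket %d) handled by lcore %u (socket %d)\n",
		       (unsigned)port_id, port_socket, lcore_id, lcore_socket);
}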


Keith Wiles, Principal Technologist for Networking member of the CTO office, 
Wind River
mobile 940.213.5533

On Nov 19, 2013, at 10:42 AM, jinho hwang <hwang.jinho at gmail.com> wrote:


[dpdk-dev] ways to generate 40Gbps with two NICs x two ports?

2013-11-19 Thread jinho hwang
On Tue, Nov 19, 2013 at 11:54 AM, Wiles, Roger Keith
 wrote:

[dpdk-dev] ways to generate 40Gbps with two NICs x two ports?

2013-11-19 Thread Wiles, Roger Keith
Normally when I see this problem it means the lcores are not mapped 
correctly. What can happen is that you have an RX and a TX on the same physical 
core, or two RX/TX pairs on the same physical core.

Make sure only one RX or TX is running per physical core: look at the 
cpu_layout.py output and verify the configuration is correct. If you have 8 
physical cores in the machine, then you need to make sure only one of the lcores 
on each core is being used.

Let me know what happens.

Keith Wiles, Principal Technologist for Networking member of the CTO office, 
Wind River
mobile 940.213.5533

On Nov 19, 2013, at 11:04 AM, jinho hwang <hwang.jinho at gmail.com> wrote:


[dpdk-dev] ways to generate 40Gbps with two NICs x two ports?

2013-11-19 Thread jinho hwang
On Tue, Nov 19, 2013 at 12:24 PM, Wiles, Roger Keith
 wrote:

[dpdk-dev] ways to generate 40Gbps with two NICs x two ports?

2013-11-19 Thread Wiles, Roger Keith
Give this a try; if that does not work then something else is going on here. I 
am trying to make sure we do not cross the QPI for any reason by keeping the RX/TX 
queues related to a port on the same socket.

sudo ./app/build/pktgen -c 3ff -n 3 $BLACK-LIST -- -p 0xf0 -P -m
"[2:4].0, [6:8].1, [3:5].2, [7:9].3" -f test/forward.lua

sudo ./app/build/pktgen -c 1ff -n 3 $BLACK-LIST -- -p 0xf0 -P -m
"[1:2].0, [3:4].1, [5:6].2, [7:8].3" -f test/forward.lua

cores =  [0, 1, 2, 8, 9, 10]
sockets =  [1, 0]
          Socket 1        Socket 0
          --------        --------
Core 0    [0, 12]         [1, 13]
Core 1    [2, 14]         [3, 15]
Core 2    [4, 16]         [5, 17]
Core 8    [6, 18]         [7, 19]
Core 9    [8, 20]         [9, 21]
Core 10   [10, 22]        [11, 23]

Keith Wiles, Principal Technologist for Networking member of the CTO office, 
Wind River
mobile 940.213.5533

On Nov 19, 2013, at 11:35 AM, jinho hwang <hwang.jinho at gmail.com> wrote:


[dpdk-dev] ways to generate 40Gbps with two NICs x two ports?

2013-11-19 Thread jinho hwang
On Tue, Nov 19, 2013 at 4:18 PM, Wiles, Roger Keith
 wrote:

[dpdk-dev] ways to generate 40Gbps with two NICs x two ports?

2013-11-19 Thread Wiles, Roger Keith
I do not think a newer version will affect the performance, but you can try it.

git clone git://github.com/Pktgen/Pktgen-DPDK

This one is 2.2.5 and DPDK 1.5.0


Keith Wiles, Principal Technologist for Networking member of the CTO office, 
Wind River
mobile 940.213.5533

On Nov 19, 2013, at 3:33 PM, jinho hwang <hwang.jinho at gmail.com> wrote:


[dpdk-dev] 82546EB Copper issue

2013-11-19 Thread Ognjen Joldzic
Hi,

Recently I came across an 82546EB dual-port Gigabit Ethernet (Copper) NIC
and tried to include it in our current DPDK setup. However, the card
doesn't seem to be supported (I was unable to bind the igb_uio driver).
There was a post on the mailing list earlier this year stating that this
particular NIC is not supported as of r1.2.3.
Is the situation any different with the 1.5.0 release (or are there any
plans to support this model)?

Thanks,
Ognjen