[dpdk-dev] l2fwd-vf application

2013-08-13 Thread jinho hwang
On Wed, Jul 31, 2013 at 9:04 PM, Thomas Monjalon
 wrote:
> Hello,
>
> 31/07/2013 09:29, Jayakumar Satri :
>>Just started experimenting with the dpdk. Working with
>> dpdk-1.2.3r4. I was running the l2fwd-vf application using virtual box. I
>> got the following error.
>>
>> "Cause: No Ethernet port - bye"
>
> You are trying to use a VF driver. Is VirtualBox emulating a VF device?
> This example application initializes three drivers: igb, ixgbe, ixgbevf.
> Please check the device support of your hypervisor (VirtualBox here).
> You can also check the supported drivers of DPDK-1.3 in this page:
> http://dpdk.org/doc/nics
>
> About the form of your email:
> - Please do not use HTML when posting to this mailing-list.
> - Your email CANNOT be confidential so do not put this ugly signature (twice)
> Just look how your email is ugly when archived:
> http://dpdk.org/ml/archives/dev/2013-July/000367.html
>
> --
> Thomas

Thomas,

I have played around with the VF function as well. I got the VF to work in
the virtual machine, but I cannot receive any packets in the hypervisor. I
expected to receive packets in both the hypervisor and the VM, based on the
L2 switching in the NIC. My suspicion is that I may not have configured the
physical function correctly in the hypervisor (I do not see any difference
between having a VF and not having one, apart from passing the kernel
options for the IOMMU and PCI realloc).

In summary, I want to receive packets both in the VM (via the VF) and in the
hypervisor (via the PF).
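
(For reference, here is roughly the host-side setup I have in mind; a sketch
only, with option names taken from the stock kernel and ixgbe documentation
and the VF count given purely as an example:)

    # kernel boot line on the host
    intel_iommu=on iommu=pt pci=realloc
    # one common way to create the VFs: load the kernel PF driver with SR-IOV
    # enabled, leaving the PF usable by the host
    modprobe ixgbe max_vfs=2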

Can you guide me for this?

Thank you,

Jinho


[dpdk-dev] DMAR fault

2013-08-13 Thread jinho hwang
Hi All,

I am using the IOMMU to receive packets both in the hypervisor and in a VM;
KVM is used for the virtualization. However, after I pass the kernel options
(iommu and pci realloc), I can no longer receive packets in the hypervisor,
although the VF works fine in the VM. When I try to receive packets in the
hypervisor, dmesg shows the following:

ixgbe :03:00.1: complete
ixgbe :03:00.1: PCI INT A disabled
igb_uio :03:00.1: PCI INT A -> GSI 38 (level, low) -> IRQ 38
igb_uio :03:00.1: setting latency timer to 64
igb_uio :03:00.1: irq 87 for MSI/MSI-X
uio device registered with irq 57
DRHD: handling fault status reg 2
DMAR:[DMA Read] Request device [03:00.1] fault addr *b9d0f000*


DMAR:[fault reason 02] Present bit in context entry is clear

03:00.1 Ethernet controller: Intel Corporation 82599EB 10 Gigabit Dual Port
Backplane Connection (rev 01)
Subsystem: Intel Corporation Ethernet X520 10GbE Dual Port KX4-KR
Mezz
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr-
Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
SERR-  [disabled]
Capabilities: 
Kernel driver in use: igb_uio
Kernel modules: ixgbe

As you can see, the fault address does not match the device's memory regions,
so the kernel reports a DMAR fault. I am wondering why this happens.

One suspicion is the BIOS. I am currently using BIOS version 3.0, but
the latest is 6.3.0. Could this be a factor?

Any help appreciated!

Jinho



[dpdk-dev] DMAR fault

2013-08-12 Thread jinho hwang
On Mon, Aug 12, 2013 at 4:28 PM, Paul Barrette
wrote:

>
> On 08/12/2013 04:19 PM, jinho hwang wrote:
>
> [original message quoted in full; snipped]
>
> I have seen this happen when VT-d is enabled in the BIOS. If you are
> using DPDK 1.4, add "iommu=pt" to your boot line. Without it, no packets
> are received.
>
> Pb
>
Paul,

thanks. I tried your suggestion, but it behaves as if there were no iommu
option on the boot line at all: I passed intel_iommu=pt and I do receive
packets in the hypervisor, but when I start the VM with
"-device pci-assign,host=01:00.0", it shows the following message:

qemu-system-x86_64: -device pci-assign,host=03:10.0: No IOMMU found.
 Unable to assign device "(null)"
qemu-system-x86_64: -device pci-assign,host=03:10.0: Device initialization
failed.
qemu-system-x86_64: -device pci-assign,host=03:10.0: Device
'kvm-pci-assign' could not be initialized

The device is detached from the kernel driver and moved to pci-stub. dmesg
does not show any DMAR fault messages anymore.
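
(For reference, a sketch of the checks and the pci-stub binding that usually
go with kvm device assignment. The device ID 8086:10ed is the 82599 VF and
the BDF is taken from the QEMU error above; treat both as examples rather
than verified values from this setup:)

    # confirm the kernel actually enabled the IOMMU at boot (pci-assign needs it)
    dmesg | grep -i -e dmar -e iommu
    # hand the VF over to pci-stub before starting QEMU
    echo "8086 10ed" > /sys/bus/pci/drivers/pci-stub/new_id
    echo 0000:03:10.0 > /sys/bus/pci/devices/0000:03:10.0/driver/unbind
    echo 0000:03:10.0 > /sys/bus/pci/drivers/pci-stub/bind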

Any idea?

Jinho


[dpdk-dev] DMAR fault

2013-08-12 Thread jinho hwang
On Mon, Aug 12, 2013 at 6:22 PM, Paul Barrette
wrote:

>
> On 08/12/2013 06:07 PM, jinho hwang wrote:
>
>
> On Mon, Aug 12, 2013 at 4:28 PM, Paul Barrette <
> paul.barrette at windriver.com> wrote:
>
> [earlier messages quoted in full; snipped]
>
>
> Jinho,
>  you need to specify both
>
> " intel_iommu=on iommu=pt"
>
> Pb
>

I tried that as well, but it behaves as if I had only added intel_iommu=on,
which means I do not receive any packets in the hypervisor. I also have pci
realloc added. Could that be a factor?
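
(For completeness, the boot line under discussion, written out as it would
appear in /etc/default/grub; a sketch that simply combines the options
mentioned in this thread:)

    GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt pci=realloc"
    # after update-grub (or grub2-mkconfig) and a reboot, confirm what the kernel saw:
    cat /proc/cmdline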

Jinho


[dpdk-dev] I couldn't manage to configure RSS

2013-08-17 Thread jinho hwang
On Sat, Aug 17, 2013 at 11:57 AM, Hamed khanmirza  wrote:

> Hi,
>
> I successfully managed to run the l2fwd example and also l3fwd with only one
> queue, but I couldn't configure RSS in l3fwd or in the test-pmd application.
> I simply create two queues per port on two lcores and set RSS_ETH_IPV4
> (I also tried other options). I generate traffic with different
> destination addresses, but all of it goes to the first queue and the
> other queue is empty in both applications!
> I also fully read both application codes, the API reference and the
> programmer's guide, but unfortunately couldn't find any way to activate
> this feature. Do you have any advice for me? Is there any trick or other
> configuration I should enable?
> My Ethernet card is an X540, and I'm using an Intel E5-2650 CPU on Ubuntu
> 12.04 LTS.
>
>
Don't you also have to vary the source addresses? And how do packets with
different destination addresses all end up at your machine?
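
(For what it's worth, a minimal sketch of the port configuration the l3fwd
RSS path expects. The field and macro names are taken from the DPDK 1.x
headers of that era, where older releases spell the mq_mode ETH_RSS instead
of ETH_MQ_RX_RSS, so treat this as an assumption rather than a drop-in fix:)

    #include <rte_ethdev.h>

    static const struct rte_eth_conf port_conf = {
        .rxmode = {
            .mq_mode = ETH_MQ_RX_RSS,    /* enable RSS distribution on receive */
        },
        .rx_adv_conf = {
            .rss_conf = {
                .rss_key = NULL,          /* use the driver's default hash key */
                .rss_hf  = ETH_RSS_IPV4,  /* hash on the IPv4 source/destination addresses */
            },
        },
    };
    /* configure two RX queues per port so RSS has somewhere to spread the flows */
    int ret = rte_eth_dev_configure(port_id, 2, 2, &port_conf);

If the mq_mode is left at its default, or only one RX queue is configured,
everything lands on queue 0 no matter what rss_hf is set to, which would
match the symptom described above.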



[dpdk-dev] About a zero copy framework

2013-12-10 Thread jinho hwang
On Tue, Dec 10, 2013 at 4:09 PM, Thomas Monjalon
 wrote:
> Hello,
>
> 10/12/2013 17:40, GongJinrong :
>>   I am trying to develop an open-source guest/host zero-copy
>> communication channel framework for KVM. I found that DPDK has a similar
>> module, but it seems coupled to Intel NICs. How can I use DPDK to do zero
>> copy for KVM without Intel NICs (host to guest or guest to guest)? I only
>> need a data channel framework.
>
> Have you seen virtio-net-pmd ?
> http://dpdk.org/browse/virtio-net-pmd
> It could be close to what you want.
>
> --
> Thomas

You can also look into DPDK + Open vSwitch (OVS). Intel integrated DPDK
into OVS in order to achieve high-speed packet delivery with zero copy.


[dpdk-dev] 1 core / 1 port receiving performance discrepancy

2013-07-12 Thread jinho hwang
Hi Guys,

I am currently testing my code with two machines (X5650 @ 2.67GHz,
82599EB): one runs my code, and the other runs Pktgen. The first task
is to see how my code performs compared to test-pmd. Both use only
one port with one core to receive packets; test-pmd uses rxonly mode.
My code only receives packets and discards them right away, with no
other processing at all. Contrary to what I expected, test-pmd shows
33M packets (not per second, just the total number of packets sent
from Pktgen), whereas mine shows 18M packets with the same traffic.

I have tried to configure many things, such as increasing the RX queue
size and increasing the maximum packet receive buffer, but the numbers did
not change.
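
(The receive-and-discard loop in question is essentially the following; a
sketch rather than the actual code, with a burst size of 32 because the burst
size is one of the knobs worth matching against test-pmd:)

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define MAX_PKT_BURST 32

    uint8_t port_id = 0;                  /* assuming a single port */
    struct rte_mbuf *pkts[MAX_PKT_BURST];
    for (;;) {
        uint16_t i, nb_rx = rte_eth_rx_burst(port_id, 0, pkts, MAX_PKT_BURST);
        for (i = 0; i < nb_rx; i++)
            rte_pktmbuf_free(pkts[i]);    /* drop immediately, no other processing */
    }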

Can I get some help?

Thank you,

Jinho


[dpdk-dev] NIC Stops Transmitting

2013-07-26 Thread jinho hwang
On Fri, Jul 26, 2013 at 4:04 PM, Scott Talbert  wrote:
> On Fri, 26 Jul 2013, Stephen Hemminger wrote:
>
>>> I'm writing an application using DPDK that transmits a large number of
>>> packets (it doesn't receive any).  When I transmit at 2 Gb/sec,
>>> everything
>>> will run fine for several seconds (receiver is receiving at correct
>>> rate),
>>> but then the NIC appears to get 'stuck' and doesn't transmit any more
>>> packets.  In this state, rte_eth_tx_burst() is returning zero (suggesting
>>> that there are no available transmit descriptors), but even if I sleep()
>>> for a second and try again, rte_eth_tx_burst() still returns 0.  It
>>> almost
>>> appears as if a packet gets stuck in the transmit ring and keeps
>>> everything from flowing.  I'm using an Intel 82599EB NIC.
>>>
>> Make sure there is enough memory for mbufs.
>> Also what is your ring size and transmit free threshold?
>> It is easy to instrument the driver to see where it is saying "no space
>> left"
>> Also be careful with threshold values, many values of
>> pthresh/hthresh/wthresh
>> don't work. I would check the Intel reference manual for your hardware.
>
>
> Thanks for the tips.  I don't think I'm running out of mbufs, but I'll check
> that again.  I am using these values from one of the examples - which claim
> to be correct for the 82599EB.
>
> /*
>  * These default values are optimized for use with the Intel(R) 82599 10 GbE
>  * Controller and the DPDK ixgbe PMD. Consider using other values for other
>  * network controllers and/or network drivers.
>  */
> #define TX_PTHRESH 36 /**< Default values of TX prefetch threshold reg. */
> #define TX_HTHRESH 0  /**< Default values of TX host threshold reg. */
> #define TX_WTHRESH 0  /**< Default values of TX write-back threshold reg. */
>
> static const struct rte_eth_txconf tx_conf = {
> .tx_thresh = {
> .pthresh = TX_PTHRESH,
> .hthresh = TX_HTHRESH,
> .wthresh = TX_WTHRESH,
> },
> .tx_free_thresh = 0, /* Use PMD default values */
> .tx_rs_thresh = 0, /* Use PMD default values */
> };
>
> /*
>  * Configurable number of RX/TX ring descriptors
>  */
> #define RTE_TEST_TX_DESC_DEFAULT 512
> static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
>

Scott,

I am wondering whether you use multiple cores accessing the same
receive queue. I had this problem before, but after I made the number of
receive queues match the number of receiving cores, the problem
disappeared. I did not dig further, since the exact number of receive
queues did not matter to me.
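
(One more thing worth checking; this is a sketch of the usual guard rather
than anything from your code: packets that rte_eth_tx_burst() does not accept
remain the caller's responsibility, and if they are never freed or retried
the mbuf pool eventually drains and transmission stalls in exactly the way
you describe:)

    /* pkts[] holds nb_pkts mbufs ready to send on queue 0 of port_id */
    uint16_t sent = rte_eth_tx_burst(port_id, 0, pkts, nb_pkts);
    while (sent < nb_pkts)
        rte_pktmbuf_free(pkts[sent++]);   /* free (or retry) whatever the NIC did not take */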

Jinho


[dpdk-dev] CPU does not support x86-64 instruction set

2013-06-06 Thread jinho hwang
Hi,

I am having trouble to compile the dpdk in a virtual machine. I am using
KVM as a hypervisor, and guest os is ubuntu 12.10, kernel 3.5.0, x86_64. I
have been using dpdk for a week and run the examples in a native platform.
But when I try to move it to virtual machine, it gives me the following
error:

/home/guest/dpdk-1.2.3r1/lib/librte_eal/linuxapp/eal/eal.c:1:0: error: CPU
you selected does not support x86-64 instruction set

I am sure that I have installed the 64-bit version, as "uname -a" shows.
Also, /proc/cpuinfo shows "model name: QEMU Virtual CPU version
(cpu64-rhel6)".

Do you have any idea?

Thank you,

Jinho



[dpdk-dev] CPU does not support x86-64 instruction set

2013-06-06 Thread jinho hwang
On Thu, Jun 6, 2013 at 5:27 PM, jinho hwang  wrote:

> [original message quoted in full; snipped]

I just figured it out: it was because of the gcc version. I had been using
4.4, but in the guest virtual machine I used 4.7, which apparently performs
stricter checks. - Jinho
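
(A related workaround, stated as an assumption based on the DPDK build
system of that era rather than something verified here, is to build for a
generic machine type so the compiler does not depend on what the QEMU CPU
model advertises:)

    # build the generic x86_64 target instead of a native one
    make install T=x86_64-default-linuxapp-gcc
    # or set CONFIG_RTE_MACHINE="default" in the target's defconfig before building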


[dpdk-dev] impact on only 1 hugepage

2013-06-21 Thread jinho hwang
Hi All,

I want to ask the experts what impact having only one hugepage (of whatever
size, say 4GB) can have. The goal of using hugepages is to reduce the number
of pages so that TLB lookups get faster; also, since the hugepage is
allocated at the very beginning of system boot, it is physically contiguous.
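
(For context, the two usual ways such pages get reserved; the counts here
are only examples:)

    # on the kernel command line, at boot: a single physically contiguous 1GB page
    default_hugepagesz=1G hugepagesz=1G hugepages=1
    # or at runtime, for 2MB pages (contiguity then depends on existing fragmentation):
    echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages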

Hope to hear from experts!

Jinho



[dpdk-dev] only 6 packets received

2013-06-26 Thread jinho hwang
Hi,

I have DPDK running and one port is connected to a switch. Before DPDK is
started, this port has an IP address that is publicly reachable. However,
after DPDK is started with testpmd, some time later I can only see 6
packets (RX-packets) and lots of RX-errors.

I have 82599EB NIC, Xeon X5650, 64bit redhat 6.2.

Thank you,

Jinho

-- 
Jinho Hwang
PhD Candidate
Department of Computer Science
The George Washington University
Washington, DC 20052
hwang.jinho at gmail.com (email)
276.336.0971 (Cell)
202.994.4875 (fax)
070.8285.6546 (myLg070)


[dpdk-dev] ways to generate 40Gbps with two NICs x two ports?

2013-11-19 Thread jinho hwang
Hi All,

I have two NICs (82599) with two ports each that are used as packet
generators. I want to generate packets at full line rate (40Gbps), but
Pktgen-DPDK does not seem to be able to do it when the two ports on a NIC
are used simultaneously. Does anyone know how to generate 40Gbps without
replicating packets in the switch?

Thank you,

Jinho


[dpdk-dev] ways to generate 40Gbps with two NICs x two ports?

2013-11-19 Thread jinho hwang
On Tue, Nov 19, 2013 at 11:31 AM, Wiles, Roger Keith
 wrote:
> How do you have Pktgen configured in this case?
>
> On my Westmere dual-socket 3.4GHz machine I can send 20G on a single NIC
> (82599 x two ports). My machine has a PCIe bug that does not allow me to send
> on more than 3 ports at wire rate. I get close to 40G with 64-byte packets, but
> the fourth port runs at about 70% of wire rate because of the PCIe hardware
> bottleneck problem.
>
> Keith Wiles, Principal Technologist for Networking member of the CTO office,
> Wind River
> direct 972.434.4136  mobile 940.213.5533  fax 000.000.
>
> On Nov 19, 2013, at 10:09 AM, jinho hwang  wrote:
>
> Hi All,
>
> I have two NICs (82599) x two ports that are used as packet generators. I
> want to generate full line-rate packets (40Gbps), but Pktgen-DPDK does not
> seem to be able to do it when two port in a NIC are used simultaneously.
> Does anyone know how to generate 40Gbps without replicating packets in the
> switch?
>
> Thank you,
>
> Jinho
>
>

Hi Keith,

Thank you for the e-mail. I am not sure how to figure out whether my
PCIe also has a problem that prevents me from sending at full line rate.
I use an Intel(R) Xeon(R) CPU E5649 @ 2.53GHz. It is hard for me to
figure out where the bottleneck is.
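
(As a back-of-envelope check, assuming each 82599 sits in a PCIe Gen2 x8
slot: 8 lanes x 5 GT/s with 8b/10b encoding gives roughly 32 Gbit/s of raw
bandwidth per direction per NIC, while two 10GbE ports at 64-byte line rate
are 2 x 14.88 Mpps = 29.76 Mpps of descriptor fetches and write-backs on top
of the 20 Gbit/s of payload, so the slot and the DMA engine, rather than the
CPU, can easily be the limit.)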

My configuration is:

sudo ./app/build/pktgen -c 1ff -n 3 $BLACK-LIST -- -p 0xf0 -P -m
"[1:2].0, [3:4].1, [5:6].2, [7:8].3" -f test/forward.lua


=== port to lcore mapping table (# lcores 9) ===

   lcore: 0 1 2 3 4 5 6 7 8

port   0:  D: T  1: 0  0: 1  0: 0  0: 0  0: 0  0: 0  0: 0  0: 0 =  1: 1

port   1:  D: T  0: 0  0: 0  1: 0  0: 1  0: 0  0: 0  0: 0  0: 0 =  1: 1

port   2:  D: T  0: 0  0: 0  0: 0  0: 0  1: 0  0: 1  0: 0  0: 0 =  1: 1

port   3:  D: T  0: 0  0: 0  0: 0  0: 0  0: 0  0: 0  1: 0  0: 1 =  1: 1

Total   :  0: 0  1: 0  0: 1  1: 0  0: 1  1: 0  0: 1  1: 0  0: 1

Display and Timer on lcore 0, rx:tx counts per port/lcore


Configuring 4 ports, MBUF Size 1984, MBUF Cache Size 128

Lcore:

1, type  RX , rx_cnt  1, tx_cnt  0 private (nil), RX (pid:qid): (
0: 0) , TX (pid:qid):

2, type  TX , rx_cnt  0, tx_cnt  1 private (nil), RX (pid:qid): ,
TX (pid:qid): ( 0: 0)

3, type  RX , rx_cnt  1, tx_cnt  0 private (nil), RX (pid:qid): (
1: 0) , TX (pid:qid):

4, type  TX , rx_cnt  0, tx_cnt  1 private (nil), RX (pid:qid): ,
TX (pid:qid): ( 1: 0)

5, type  RX , rx_cnt  1, tx_cnt  0 private (nil), RX (pid:qid): (
2: 0) , TX (pid:qid):

6, type  TX , rx_cnt  0, tx_cnt  1 private (nil), RX (pid:qid): ,
TX (pid:qid): ( 2: 0)

7, type  RX , rx_cnt  1, tx_cnt  0 private (nil), RX (pid:qid): (
3: 0) , TX (pid:qid):

8, type  TX , rx_cnt  0, tx_cnt  1 private (nil), RX (pid:qid): ,
TX (pid:qid): ( 3: 0)


Port :

0, nb_lcores  2, private 0x6fd5a0, lcores:  1  2

1, nb_lcores  2, private 0x700208, lcores:  3  4

2, nb_lcores  2, private 0x702e70, lcores:  5  6

3, nb_lcores  2, private 0x705ad8, lcores:  7  8



Initialize Port 0 -- TxQ 1, RxQ 1,  Src MAC 90:e2:ba:2f:f2:a4

Create: Default RX  0:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB


Create: Default TX  0:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB

Create: Range TX0:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB

Create: Sequence TX 0:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB

Create: Special TX  0:0  - Memory used (MBUFs   64 x (size 1984 +
Hdr 64)) + 395392 =515 KB



Port memory used =  10251 KB

Initialize Port 1 -- TxQ 1, RxQ 1,  Src MAC 90:e2:ba:2f:f2:a5

Create: Default RX  1:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB


Create: Default TX  1:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB

Create: Range TX1:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB

Create: Sequence TX 1:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB

Create: Special TX  1:0  - Memory used (MBUFs   64 x (size 1984 +
Hdr 64)) + 395392 =515 KB



Port memory used =  10251 KB

Initialize Port 2 -- TxQ 1, RxQ 1,  Src MAC 90:e2:ba:4a:e6:1c

Create: Default RX  2:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB


Create: Default TX  2:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB

Create: Range TX2:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB

Create: Sequence TX 2:0  - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 =   2435 KB

Create: Special TX  2:0  - Memory used (MBUFs   64 x (size 1984 +
Hdr 64)) + 395392 =515 KB



Port memory used =  10251 KB

Initialize Port 3 -- TxQ 1, RxQ 1,  Src MAC 90:e2:ba:4a:e6:1d

Create: Default RX  3:0  - Memory used (MBUF

[dpdk-dev] ways to generate 40Gbps with two NICs x two ports?

2013-11-19 Thread jinho hwang
On Tue, Nov 19, 2013 at 11:54 AM, Wiles, Roger Keith
 wrote:
>
> BTW, the configuration looks fine, but you need to make sure the lcores are
> not split between two different CPU sockets. You can use
> dpdk/tools/cpu_layout.py to dump out the system configuration.
>
>
> Keith Wiles, Principal Technologist for Networking member of the CTO office, 
> Wind River
> mobile 940.213.5533
>
>
> [earlier messages and the full pktgen configuration output quoted again; snipped]

[dpdk-dev] ways to generate 40Gbps with two NICs x two ports?

2013-11-19 Thread jinho hwang
On Tue, Nov 19, 2013 at 12:24 PM, Wiles, Roger Keith
 wrote:
> Normally when I see this problem it means the lcores are not mapped
> correctly. What can happen is that you have an RX and a TX on the same
> physical core, or two RX/TX pairs on the same physical core.
>
> Make sure each RX or TX is running on its own core: look at the
> cpu_layout.py output and verify the configuration is correct. If you have 8
> physical cores, then you need to make sure only one of the lcores on each
> core is being used.
>
> Let me know what happens.
>
> Keith Wiles, Principal Technologist for Networking member of the CTO office,
> Wind River
> mobile 940.213.5533
>
> On Nov 19, 2013, at 11:04 AM, jinho hwang  wrote:
>
> [earlier messages in this thread quoted in full; snipped]

[dpdk-dev] ways to generate 40Gbps with two NICs x two ports?

2013-11-19 Thread jinho hwang
On Tue, Nov 19, 2013 at 4:18 PM, Wiles, Roger Keith
 wrote:
> Give this a try; if that does not work, then something else is going on here.
> I am trying to make sure we do not cross the QPI for any reason by putting the
> RX/TX queues related to a port on the same core.
>
> sudo ./app/build/pktgen -c 3ff -n 3 $BLACK-LIST -- -p 0xf0 -P -m
> "[2:4].0, [6:8].1, [3:5].2, [7:9].3" -f test/forward.lua
>
> sudo ./app/build/pktgen -c 1ff -n 3 $BLACK-LIST -- -p 0xf0 -P -m
> "[1:2].0, [3:4].1, [5:6].2, [7:8].3" -f test/forward.lua
>
>
> cores =  [0, 1, 2, 8, 9, 10]
> sockets =  [1, 0]
>            Socket 1        Socket 0
>            --------        --------
> Core 0     [0, 12]         [1, 13]
> Core 1     [2, 14]         [3, 15]
> Core 2     [4, 16]         [5, 17]
> Core 8     [6, 18]         [7, 19]
> Core 9     [8, 20]         [9, 21]
> Core 10    [10, 22]        [11, 23]
>
>
> Keith Wiles, Principal Technologist for Networking member of the CTO office,
> Wind River
> mobile 940.213.5533
>
> On Nov 19, 2013, at 11:35 AM, jinho hwang  wrote:
>
> [earlier messages in this thread quoted in full; snipped]

[dpdk-dev] ways to generate 40Gbps with two NICs x two ports?

2013-11-20 Thread jinho hwang
On Tue, Nov 19, 2013 at 4:38 PM, Wiles, Roger Keith
 wrote:
>
> I do not think a newer version will affect the performance, but you can try it.
>
> git clone git://github.com/Pktgen/Pktgen-DPDK
>
> This one is 2.2.5 and DPDK 1.5.0
>
>
> Keith Wiles, Principal Technologist for Networking member of the CTO office, 
> Wind River
> mobile 940.213.5533
>
> On Nov 19, 2013, at 3:33 PM, jinho hwang  wrote:
>
> [earlier messages in this thread quoted in full; snipped]