[dpdk-dev] DPDK fails to handle VF initialization messages

2013-11-12 Thread cheng.luo...@hitachi.com
Hi,

I am using DPDK for SR-IOV.
I have one Intel X520 with two PFs.
After binding the two PFs to igb_uio, I create two VFs and assign
them to a virtual machine.
I found that to wake up, a VF sends messages to the PF during
initialization, such as IXGBE_VF_SET_MACVLAN (0x06) and
IXGBE_VF_API_NEGOTIATE (0x08).

However, DPDK's PMD does not handle these messages.
In the source file lib/librte_pmd_ixgbe/ixgbe_pf.c, the function
ixgbe_rcv_msg_from_vf only handles the following messages:
IXGBE_VF_SET_MAC_ADDR   (0x02)
IXGBE_VF_SET_MULTICAST  (0x03)
IXGBE_VF_SET_LPE        (0x04)
IXGBE_VF_SET_VLAN       (0x05)
It treats any other message as an error and takes no action.
Therefore, if DPDK controls the PF, I cannot use a VF with the
ixgbevf driver in the VMs.
The PMD driver will endlessly print error messages such as:
-
PMD: Unhandled Msg 0006
PMD: Unhandled Msg 0008
---

And the VMs also print endless error messages:
-
[  447.339765] ixgbevf: eth1: ixgbevf_watchdog_task: NIC Link is Up, 10 Gbps
[  447.748556] ixgbevf: eth1: ixgbevf_watchdog_task: NIC Link is Up, 10 Gbps
[  447.844914] ixgbevf: eth1: ixgbevf_watchdog_task: NIC Link is Up, 10 Gbps


I also read the code in the ixgbe kernel driver and found that it does
handle messages 0x06 and 0x08.
Is this a bug in DPDK, or is some step in my configuration wrong?

I think it should be possible to run DPDK on the host while a VF in the VM
uses a normal driver such as ixgbevf.
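For illustration, here is a minimal, self-contained sketch of how the opcode
dispatch in ixgbe_rcv_msg_from_vf could acknowledge the two extra messages.
The opcode values are the ones quoted above; the function name
pf_handle_vf_msg, the return convention, and the NACK strategy are
assumptions for this sketch, not the actual DPDK code:

```c
#include <assert.h>
#include <stdint.h>

/* VF->PF mailbox opcodes mentioned in this thread. */
#define IXGBE_VF_SET_MAC_ADDR   0x02
#define IXGBE_VF_SET_MULTICAST  0x03
#define IXGBE_VF_SET_LPE        0x04
#define IXGBE_VF_SET_VLAN       0x05
#define IXGBE_VF_SET_MACVLAN    0x06
#define IXGBE_VF_API_NEGOTIATE  0x08

/* Hypothetical dispatcher: returns 0 if the PF answers the message
 * (even with a NACK), -1 if it would only log "Unhandled Msg". */
static int pf_handle_vf_msg(uint16_t opcode)
{
    switch (opcode) {
    case IXGBE_VF_SET_MAC_ADDR:
    case IXGBE_VF_SET_MULTICAST:
    case IXGBE_VF_SET_LPE:
    case IXGBE_VF_SET_VLAN:
        return 0;               /* already handled by ixgbe_pf.c */
    case IXGBE_VF_SET_MACVLAN:
    case IXGBE_VF_API_NEGOTIATE:
        /* Sending any reply (e.g. a NACK for the API negotiation)
         * lets ixgbevf fall back to its legacy path instead of
         * retrying the mailbox message forever. */
        return 0;
    default:
        return -1;
    }
}
```

The point is only that the VF blocks until it gets some reply, so even a
NACK is better than silently dropping the message.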







[dpdk-dev] raw frame to rte_mbuf

2013-11-12 Thread Jose Gavine Cueto
Hi,

In DPDK, how should a raw Ethernet frame be converted to an rte_mbuf *?  For
example, if I have an ARP packet:

void *arp_pkt;

how should this be converted to an rte_mbuf * for transmission? Does a
simple cast suffice?

Cheers,
Pepe

-- 
To stop learning is like to stop loving.


[dpdk-dev] i210 performance ?

2013-11-12 Thread Armin Steinhoff

Hi All,

after porting Intel's Data Plane Development Kit to QNX 6.5, I'm doing some
performance tests with "testpmd".

I'm using a single i210 board installed in a dual-core QNX 6.5 machine,
with "testpmd" as a packet generator (UDP packets).

The tests are done with:

start:   testpmd -c 3 -n  -> interactive mode by default
set burst 1
set fwd txonly
start
stop

Without a delay between packets I see a huge number of dropped packets.
After introducing a delay of 60 us after sending each "packet burst", the
drops no longer happen.

My questions are: why does the i210 drop packets? Is it because all of the
transmit descriptors are probably occupied?
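If that is the cause, a fixed delay is not needed: rte_eth_tx_burst()
returns how many packets the ring actually accepted, so the application can
simply retry the unsent tail. The sketch below models that idea with a toy
ring standing in for the NIC (all names here are illustrative, not DPDK
APIs):

```c
#include <assert.h>

/* Toy model of a TX queue with a fixed number of descriptors; stands in
 * for the NIC ring behind rte_eth_tx_burst(). */
#define TX_RING_SIZE 8

static unsigned int tx_free = TX_RING_SIZE;

/* Accepts at most 'n' packets, limited by free descriptors, and returns
 * how many were actually queued -- like rte_eth_tx_burst() does. */
static unsigned int toy_tx_burst(unsigned int n)
{
    unsigned int sent = (n < tx_free) ? n : tx_free;
    tx_free -= sent;
    return sent;
}

/* Drain completed descriptors (real PMDs reclaim them lazily). */
static void toy_tx_reclaim(void)
{
    tx_free = TX_RING_SIZE;
}

/* Send a burst without losing packets: retry the unsent tail once the
 * hardware has freed descriptors, instead of dropping it. */
static unsigned int send_all(unsigned int n)
{
    unsigned int total = 0;
    while (total < n) {
        unsigned int sent = toy_tx_burst(n - total);
        if (sent == 0)
            toy_tx_reclaim(); /* in a real app: poll/wait for completions */
        total += sent;
    }
    return total;
}
```

With this pattern the send rate adapts to the descriptor drain rate, which
is usually preferable to tuning a hard-coded inter-burst delay.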

Best Regards

Armin Steinhoff




[dpdk-dev] raw frame to rte_mbuf

2013-11-12 Thread Prashant Upadhyaya
Hi Pepe,

Of course a simple cast will not suffice.
Please look at the rte_mbuf structure in the header files and let me know if
you are still unsure.
There is a header and a payload. Your raw frame will go in the payload.
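To sketch the idea with a self-contained toy: you allocate an mbuf from a
pool and copy the raw frame into its data area (in real DPDK code that
would be rte_pktmbuf_alloc() plus rte_pktmbuf_append() plus a memcpy). The
struct below only mimics an mbuf's header/payload split, it is not the real
layout:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy stand-in for struct rte_mbuf: metadata fields plus a data room
 * with headroom, loosely mirroring the real header/payload split. */
#define TOY_HEADROOM 128
#define TOY_DATAROOM 2048

struct toy_mbuf {
    uint16_t data_off;   /* where the packet data starts in buf[] */
    uint16_t data_len;   /* length of the packet data */
    uint8_t  buf[TOY_DATAROOM];
};

/* Analogous to rte_pktmbuf_append(): reserve 'len' bytes of packet data
 * and return a pointer to copy into, or NULL if there is no room. */
static void *toy_append(struct toy_mbuf *m, uint16_t len)
{
    if ((uint32_t)m->data_off + m->data_len + len > TOY_DATAROOM)
        return NULL;
    void *p = &m->buf[m->data_off + m->data_len];
    m->data_len += (uint16_t)len;
    return p;
}

/* Copy a raw Ethernet frame into the mbuf's payload area. */
static int toy_from_raw(struct toy_mbuf *m, const void *frame, uint16_t len)
{
    m->data_off = TOY_HEADROOM;
    m->data_len = 0;
    void *dst = toy_append(m, len);
    if (dst == NULL)
        return -1;
    memcpy(dst, frame, len);   /* the raw frame goes in the payload */
    return 0;
}
```

So the answer to the original question is: no cast, but an allocation from
an mbuf pool followed by a copy of the frame bytes into the mbuf's data
area.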


Regards
-Prashant

-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Jose Gavine Cueto
Sent: Tuesday, November 12, 2013 1:49 PM
To: dev at dpdk.org
Subject: [dpdk-dev] raw frame to rte_mbuf

Hi,

In DPDK how should a raw ethernet frame converted to rte_mbuf * ?  For example 
if I have an ARP packet:

void * arp_pkt

how should this be converted to an rte_mbuf * for transmission, does a simple 
cast suffice ?

Cheers,
Pepe

--
To stop learning is like to stop loving.




===
Please refer to http://www.aricent.com/legal/email_disclaimer.html
for important disclosures regarding this electronic communication.
===


[dpdk-dev] raw frame to rte_mbuf

2013-11-12 Thread Etai Lev-Ran
Hi Pepe,

In addition, you may want to consider the frame's lifetime, to ensure memory
is used and released in a valid way.
When the packet is sent, DPDK may dereference it and then attempt to free
its memory.
Hence, it is important that the raw buffer used for the ARP packet is
allocated with a reference added (or, alternatively, just add a reference to
the packet to ensure it will not be freed by DPDK directly).
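A tiny model of that reference-count idea (the real call would be something
like rte_mbuf_refcnt_update(); everything below, including the names, is
just an illustration of the mechanism, not the DPDK API):

```c
#include <assert.h>
#include <stdint.h>

/* Toy packet buffer with a reference count, mimicking how an mbuf is
 * only returned to its pool when the count drops to zero. */
struct toy_pkt {
    uint16_t refcnt;
    int freed;       /* becomes 1 once the buffer goes back to the pool */
};

/* Analogous to rte_mbuf_refcnt_update(): adjust the reference count. */
static void toy_refcnt_update(struct toy_pkt *p, int delta)
{
    p->refcnt = (uint16_t)((int)p->refcnt + delta);
}

/* What the driver does after transmit: drop one reference and free the
 * buffer only if no one else still holds it. */
static void toy_tx_done_free(struct toy_pkt *p)
{
    if (--p->refcnt == 0)
        p->freed = 1;
}
```

Bumping the count before handing the packet to TX means the driver's
post-transmit free only drops your extra reference, so the buffer stays
valid for the application.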

Regards,
Etai

-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Prashant Upadhyaya
Sent: Tuesday, November 12, 2013 11:15 AM
To: Jose Gavine Cueto; dev at dpdk.org
Subject: Re: [dpdk-dev] raw frame to rte_mbuf

Hi Pepe,

Ofcourse a simple cast will not suffice.
Please look the rte_mbuf structure in the header files and let me know if
you still have the confusion.
There is a header and payload. Your raw frame will go in the payload.


Regards
-Prashant

-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Jose Gavine Cueto
Sent: Tuesday, November 12, 2013 1:49 PM
To: dev at dpdk.org
Subject: [dpdk-dev] raw frame to rte_mbuf

Hi,

In DPDK how should a raw ethernet frame converted to rte_mbuf * ?  For
example if I have an ARP packet:

void * arp_pkt

how should this be converted to an rte_mbuf * for transmission, does a
simple cast suffice ?

Cheers,
Pepe

--
To stop learning is like to stop loving.








[dpdk-dev] raw frame to rte_mbuf

2013-11-12 Thread Jose Gavine Cueto
I see, thanks for the tip.

Cheers,
Pepe


On Tue, Nov 12, 2013 at 6:08 PM, Etai Lev-Ran  wrote:

> Hi Pepe,
>
> In addition, you may want to consider the frame's lifetime, to ensure
> memory
> is used and released
> in a valid way.
> When sending, it may be de-referenced by DPDK and consequently a memory
> free
> may be tried.
> Hence, it is important that the raw buffer used for the ARP packet is
> allocated with a
> reference added (or, alternately, just add-ref to the packet and ensure
> it'll not be freed by DPDK
> directly).
>
> Regards,
> Etai
>
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Prashant Upadhyaya
> Sent: Tuesday, November 12, 2013 11:15 AM
> To: Jose Gavine Cueto; dev at dpdk.org
> Subject: Re: [dpdk-dev] raw frame to rte_mbuf
>
> Hi Pepe,
>
> Ofcourse a simple cast will not suffice.
> Please look the rte_mbuf structure in the header files and let me know if
> you still have the confusion.
> There is a header and payload. Your raw frame will go in the payload.
>
>
> Regards
> -Prashant
>
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Jose Gavine Cueto
> Sent: Tuesday, November 12, 2013 1:49 PM
> To: dev at dpdk.org
> Subject: [dpdk-dev] raw frame to rte_mbuf
>
> Hi,
>
> In DPDK how should a raw ethernet frame converted to rte_mbuf * ?  For
> example if I have an ARP packet:
>
> void * arp_pkt
>
> how should this be converted to an rte_mbuf * for transmission, does a
> simple cast suffice ?
>
> Cheers,
> Pepe
>
> --
> To stop learning is like to stop loving.
>
>
>
>
>
>
>


-- 
To stop learning is like to stop loving.


[dpdk-dev] Fwd: dpdk-iokit: Turn dpdk into the IO field.

2013-11-12 Thread March Jane



Hi, dpdk folks,
I am working on a project that offers a complete IO stack for DPDK,
particularly a SCSI target stack: dpdk-iokit. Once I finish my initial
code, I will upload it to GitHub. I would like to know whether anyone else
is working toward the same goal.


  - march




[dpdk-dev] olflags in SRIOV VF environment

2013-11-12 Thread Prashant Upadhyaya
Hi guys,

I am facing a peculiar issue with the ol_flags field of struct rte_mbuf
when I receive packets with the rte_eth_rx_burst function.
I use the ol_flags field to identify whether a packet is IPv4 or IPv6,
like this:

if ((pkts_burst->ol_flags & PKT_RX_IPV4_HDR) ||
    (pkts_burst->ol_flags & PKT_RX_IPV6_HDR))

[pkts_burst is my rte_mbuf pointer]

Now here are the observations -


1.   This works fine when my app runs on the native machine.

2.   This works when I run it in a VM and use one VF over SR-IOV from one
NIC port.

3.   This works when I run it in two VMs and use one VF from each of two
different NIC ports (VF1 from NIC port 1 in VM1 and VF2 from NIC port 2 in
VM2).

4.   However, ol_flags fails to classify the packets when I use two VMs and
two VFs from the 'same' NIC port, with one VF exposed to each VM.

There is no bug in my 'own' application: when I stopped inspecting ol_flags
to classify IPv4 and IPv6 packets and instead wrote a small check of my own
that inspects the EtherType of the packets (the packets themselves arrive
intact in all cases, thankfully), my entire use case passes (it is a fairly
significant use case, so it can't be luck).
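The EtherType fallback described above can be sketched as follows
(self-contained toy: in a real application the frame pointer would come
from rte_pktmbuf_mtod(), and VLAN-tagged frames would need a 4-byte offset
adjustment that is omitted here):

```c
#include <assert.h>
#include <stdint.h>

/* Standard EtherType values for IPv4 and IPv6. */
#define ETHERTYPE_IPV4 0x0800
#define ETHERTYPE_IPV6 0x86DD

/* Read the 16-bit EtherType at offset 12 of an untagged Ethernet frame
 * (the field is in network byte order). */
static uint16_t ether_type(const uint8_t *frame)
{
    return (uint16_t)((frame[12] << 8) | frame[13]);
}

/* Software classifier used instead of ol_flags: returns 4 for IPv4,
 * 6 for IPv6, 0 for anything else. */
static int classify_ip(const uint8_t *frame)
{
    switch (ether_type(frame)) {
    case ETHERTYPE_IPV4: return 4;
    case ETHERTYPE_IPV6: return 6;
    default:             return 0;
    }
}
```

Classifying from the frame bytes is slightly more work per packet than
checking ol_flags, but it does not depend on the hardware setting the flags
correctly.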

Any idea why it works in some setups but not in others?

Regards
-Prashant







[dpdk-dev] olflags in SRIOV VF environment

2013-11-12 Thread Vladimir Medvedkin
Hi Prashant,

Maybe it doesn't work due to the Known Issues and Limitations section (see
the Release Notes). Quoting:

6.1 In packets provided by the PMD, some flags are missing
In packets provided by the PMD, some flags are missing. The application
does not have access to information provided by the hardware (packet is
broadcast, packet is multicast, packet is IPv4 and so on).

Regards,
Vladimir



2013/11/12 Prashant Upadhyaya 

> Hi guys,
>
> I am facing a peculiar issue with the usage of struct rte_mbuf-> ol_flags
> field in the rte_mbuf when I receive the packets with the rte_eth_rx_burst
> function.
> I use the ol_flags field to identify whether is an IPv4 or IPv6 packet or
> not thus -
>
> if ((pkts_burst->ol_flags & PKT_RX_IPV4_HDR) ||
> (pkts_burst->ol_flags &
> PKT_RX_IPV6_HDR))
>
> [pkts_burst is my rte_mbuf pointer]
>
> Now here are the observations -
>
>
> 1.   This works mighty fine when my app is working on the native
> machine
>
> 2.   This works good when I run this in a VM and use one VF over SRIOV
> from one NIC port
>
> 3.   This works good when I run this in two VM's and use one VF from 2
> different NIC ports (one VF from each) and use these VF's in these 2 VM's
> (VF1 from NIC port1 in VM1 and VF2 from NIC port2 in VM2)
>
> 4.   However the ol_flags fails to classify the packets when I use 2
> VM's and use 2 VF's from the 'same' NIC port and expose one each to the 2
> VM's I have
>
> There is no bug in my 'own' application, because when I stopped inspecting
> the ol_flags for classification of IPv4 and V6 packets and wrote a mini
> logic of my own by inspecting the ether type of the packets (the packets
> themselves come proper in all the cases, thankfully), my entire usecase
> passes (it is a rather significant usecase, so it can't be luck)
>
> Any idea guys why it works and doesn't work ?
>
> Regards
> -Prashant
>
>
>
>
>
>
>


[dpdk-dev] [PATCH] ixgbe: Fix offloading bits when RX bulk alloc is used.

2013-11-12 Thread Ivan Boule
On 11/08/2013 08:47 PM, Bryan Benson wrote:
> This is a fix for the ixgbe hardware offload flags not being set when bulk 
> alloc RX is used. The issue was caused by masking off the bits that store the 
> hardware offload values in the status_error field to retrieve the done bit 
> for the descriptor.
>
> Commit 7431041062b9fd0555bac7ca4abebc99e3379fa5 in DPDK-1.3.0 introduced bulk 
> dequeue, which included the bug.
> ---
>   lib/librte_pmd_ixgbe/ixgbe_rxtx.c |   13 +++--
>   1 file changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c 
> b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> index 07830b7..a183c11 100755
> --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> @@ -1035,7 +1035,8 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
>   struct igb_rx_entry *rxep;
>   struct rte_mbuf *mb;
>   uint16_t pkt_len;
> - int s[LOOK_AHEAD], nb_dd;
> + uint32_t s[LOOK_AHEAD];
> + int nb_dd;
>   int i, j, nb_rx = 0;
>   
>   
> @@ -1058,12 +1059,12 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
>   for (j = LOOK_AHEAD-1; j >= 0; --j)
>   s[j] = rxdp[j].wb.upper.status_error;
>   
> - /* Clear everything but the status bits (LSB) */
> - for (j = 0; j < LOOK_AHEAD; ++j)
> - s[j] &= IXGBE_RXDADV_STAT_DD;
> + nb_dd = 0;
> + /* add to nd_dd when the status bit is set (LSB) */
> + for (j = 0; j < LOOK_AHEAD; ++j) {
> + nb_dd += s[j] & IXGBE_RXDADV_STAT_DD;
> + }
>   
> - /* Compute how many status bits were set */
> - nb_dd = s[0]+s[1]+s[2]+s[3]+s[4]+s[5]+s[6]+s[7];
>   nb_rx += nb_dd;
>   
>   /* Translate descriptor info to mbuf format */
Acked.

-- 
Ivan Boule
6WIND Development Engineer
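As a side note on why the patch works: the DD (descriptor done) bit is the
least significant bit of status_error, so each term s[j] & IXGBE_RXDADV_STAT_DD
is exactly 0 or 1, and summing the masked terms counts done descriptors
without clobbering the offload bits stored in s[j]. A small illustration
(the status values here are made up; only the LSB convention matters):

```c
#include <assert.h>
#include <stdint.h>

#define LOOK_AHEAD 8
#define STAT_DD    0x01u   /* descriptor-done bit, the LSB */

/* Count how many descriptors in the look-ahead window are done, the way
 * the patched ixgbe_rx_scan_hw_ring() does: mask out just the DD bit of
 * each status word and sum, leaving the status words themselves intact. */
static int count_done(const uint32_t *s)
{
    int nb_dd = 0;
    for (int j = 0; j < LOOK_AHEAD; ++j)
        nb_dd += (int)(s[j] & STAT_DD);   /* adds 0 or 1 per descriptor */
    return nb_dd;
}
```

The pre-patch code instead overwrote s[j] with the masked value, which is
why the offload flags were lost before they could be translated into the
mbuf.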



[dpdk-dev] [PATCH] ixgbe: Fix offloading bits when RX bulk alloc is used.

2013-11-12 Thread Thomas Monjalon
12/11/2013 16:49, Ivan Boule :
> On 11/08/2013 08:47 PM, Bryan Benson wrote:
> > This is a fix for the ixgbe hardware offload flags not being set when
> > bulk alloc RX is used. The issue was caused by masking off the bits that
> > store the hardware offload values in the status_error field to retrieve
> > the done bit for the descriptor.
> > 
> > Commit 7431041062b9fd0555bac7ca4abebc99e3379fa5 in DPDK-1.3.0 introduced
> > bulk dequeue, which included the bug. ---
> > 
> >   lib/librte_pmd_ixgbe/ixgbe_rxtx.c |   13 +++--
> >   1 file changed, 7 insertions(+), 6 deletions(-)
> 
> Acked.

Applied.

Good catch. Thanks
-- 
Thomas