[dpdk-dev] intel x540-at2

2014-01-06 Thread Jose Gavine Cueto
Hi Thomas,

On Sun, Jan 5, 2014 at 10:54 PM, Thomas Monjalon
wrote:

> 05/01/2014 22:31, Jose Gavine Cueto :
> > venky.venkatesan at intel.com> wrote:
> > > Was the DPDK library compiled on a different machine and then used in
> the
> > > VM? It looks like it has been compiled for native AVX (hence the
> > > vzeroupper). Could you dump cpuinfo in the VM and see what instruction
> set
> > > the VM supports?
> >
> > Yes, it was compiled in a different machine and it was used in my VM.
> [...]
> > model name  : Intel(R) Core(TM) i5-3340M CPU @ 2.70GHz
> > flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
> mca
> > cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx rdtscp lm
> constant_tsc
> > rep_good nopl pni monitor ssse3 lahf_lm
> [...]
> >  It seems that there is no avx here; does this mean this CPU doesn't
> > support AVX instructions?
>
> Yes, you have no avx on this machine.
> Tip to clearly check this type of flag:
> grep --color -m1 avx /proc/cpuinfo
>
> So, you have 2 solutions:
> 1) build DPDK on this machine
> 2) build DPDK for a default machine:
> CONFIG_RTE_MACHINE=default
> defconfig files are wrongly called "default" but have CONFIG_RTE_MACHINE
> set to
> native. So the compilation flags are guessed from /proc/cpuinfo. You can
> look
> for AUTO_CPUFLAGS to better understand it.
>
> --
> Thomas
>

Thanks, I will try your suggestion.  I will also post the result whenever
it is available; in the meantime I will be rebuilding the DPDK lib.

Cheers,
Pepe

-- 
To stop learning is like to stop loving.


[dpdk-dev] [virtio-net-pmd PATCH] pmd: Fix pci_id match

2014-01-06 Thread Thomas Monjalon
Hello Asias,

04/01/2014 07:07, Asias He :
> Hex numbers in /proc/ioports are lowercase. We should make them lowercase
> in pci_id as well. Otherwise devices like:
> 
>00:0a.0 Ethernet controller: Red Hat, Inc Virtio network device
> 
> would not be handled by virtio-net-pmd.
> 
> Signed-off-by: Asias He 

Good catch !

I have acked and applied it with this title:
pmd: fix PCI id match when probing

I think we should distinguish patches for an extension from other ones.
I suggest using --subject-prefix 'virtio-net-pmd PATCH'.

Thank you
-- 
Thomas


[dpdk-dev] Specific NIC for DPDK?

2014-01-06 Thread TSADOK, Shlomi (Shlomi)
Hello

A quick general question:

Is there a requirement for using a specific NIC/Chipset for DPDK or any NIC 
will do?

If there's such a requirement, why is that? Are there any hardware 
optimizations etc. in DPDK-enabled NICs?

Thanks

Shlomi





[dpdk-dev] Specific NIC for DPDK?

2014-01-06 Thread Daniel Kaminsky
Hi Shlomi,

Currently DPDK supports most of Intel 1Gb and 10Gb NICs. The exact list can
be found at lib/librte_eal/common/include/rte_pci_dev_ids.h

The reason is that DPDK includes a poll mode driver, and a different NIC
may need a different poll mode driver.

Regards,
Daniel Kaminsky


On Mon, Jan 6, 2014 at 2:05 PM, TSADOK, Shlomi (Shlomi) <
shlomi.tsadok at alcatel-lucent.com> wrote:

> Hello
>
> A quick general question:
>
> Is there a requirement for using a specific NIC/Chipset for DPDK or any
> NIC will do?
>
> If there's such a requirement, why is that? Are there any hardware
> optimizations etc. in DPDK-enabled NICs?
>
> Thanks
>
> Shlomi
>
>
>
>


[dpdk-dev] Issue when the kernel parameter intel_iommu=on is being used

2014-01-06 Thread Sridhar S
Hi,

Thanks for information.


If I use the kernel parameters intel_iommu=on and iommu=pt, the following
error is observed.

ERROR REPORT
dmar: DRHD: handling fault status reg 2
dmar: DMAR:[DMA Write] Request device [01:00.0] fault addr 4f883000
DMAR:[fault reason 02] Present bit in context entry is clear
##

Does this mean that no context entry has been created for the ConnectX-3
device?
But, as per my understanding, the Intel IOMMU code (intel_iommu.c) creates
root entries and context entries from the DMAR table that the BIOS provides
to the OS. It should therefore create an entry for the ConnectX-3 device as
well(?).

Or does the memory allocated via the DPDK API not belong to the VT-d domain
to which the ConnectX-3 device is assigned?
Or is this error code generic?

Can you share your knowledge on this issue?

Thanks
Sri


On Sun, Jan 5, 2014 at 10:58 PM, François-Frédéric Ozog  wrote:

> Hi,
>
> To understand the issue, you may have a look at:
>
> http://www.intel.com/content/www/us/en/intelligent-systems/intel-technology/
> vt-directed-io-spec.html
>
> When you have no IOMMU, "physical" address space is accessed directly by
> hardware, so your core works.
>
> When VT-d is active, there is a DMA/IRQ remapping hardware layer between
> the device and the memory/CPU. If you look at §3.4.3 of the spec, you see
> that for each device of each bus there is a context (enumerated at boot
> time, leveraging BIOS/ACPI). For each device, you may have address
> translation programmed so that DMA produced by hardware is actually mapped
> to a physical address.
>
> When you use the Linux kernel API for mapping DMA memory, Linux takes care
> of the "details".
>
> For DPDK, documentation §5.6 "Using Linux IOMMU Pass-Through to Run Intel®
> DPDK with Intel® VT-d" says that you should have the iommu=pt kernel
> parameter on. Do you have it?
>
> FF
>
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Sridhar S
> > Sent: Sunday, January 5, 2014 13:38
> > To: dev at dpdk.org
> > Subject: [dpdk-dev] Issue when the kernel parameter intel_iommu=on is
> being
> > used
> >
> > Hello,
> >
> >
> >
> > I am using DPDK 1.5 for development of a host PMD for the "ConnectX-3" device.
> >
> >
> >
> > I am observing an issue when the ConnectX-3 device DMAs to memory which is
> > allocated with the rte_memzone_reserve_aligned() API.
> >
> > The issue (please refer to the ERROR below) is observed if the system runs
> > with the kernel parameter "intel_iommu=on".
> >
> >
> >
> > ERROR :
> >
> > dmar: DRHD: handling fault status reg 302
> >
> > dmar: DMAR:[DMA Write] Request device [01:00.0] fault addr 4f883000
> >
> > DMAR:[fault reason 01] Present bit in root entry is clear
> >
> >
> > The reported "fault addr" is the physical address which was returned by
> > the above API.
> >
> >
> >
> > I don't see any issue with the same code when the system is up with the
> > kernel parameter intel_iommu=off.
> >
> >
> >
> >
> > Can you share your comments on this issue?
> >
> >
> > Thanks in advance
> >
> > Sri
>
>


[dpdk-dev] Redirection Table

2014-01-06 Thread Ivan Boule
On 12/31/2013 08:45 PM, Michael Quicquaro wrote:
> Has anyone used the "port config all reta (hash,queue)" command of testpmd
> with any success?
>
> I haven't found much documentation on it.
>
> Can someone provide an example on why and how it was used.
>
> Regards and Happy New Year,
> Michael Quicquaro
Hi Michael,

"RETA" stands for Redirection Table.
It is a per-port configurable table of 128 entries that is used by the
RSS filtering feature of Intel 1GbE and 10GbE controllers to select the
RX queue into which to store a received IP packet.
When receiving an IPv4/IPv6 packet, the controller computes a 32-bit
hash on:

   * the source address and the destination address of the IP header of
 the packet,
   * the source port and the destination port of the UDP/TCP header, if any.

Then, the controller takes the 7 lower bits of the RSS hash as an index
into the RETA table to get the RX queue number where to store the packet.

The API of the DPDK includes a function that is exported by Poll Mode
Drivers to configure RETA entries of a given port.

For test purposes, the testpmd application includes the following command

 "port config X rss reta (hash,queue)[,(hash,queue)]"

to configure the RETA entries of a port X, where each (hash,queue) couple
contains the index of a RETA entry (between 0 and 127 inclusive) and the
RX queue number (between 0 and 15) to be stored into that RETA entry.

Best regards
Ivan

-- 
Ivan Boule
6WIND Development Engineer



[dpdk-dev] question about PKT_RX_IPV4_HDR_EXT

2014-01-06 Thread Ivan Boule
On 12/26/2013 10:46 PM, Wang, Shawn wrote:
> Hi:
>
> Can anyone explain more details about the rte_mbuf ol_flag : 
> PKT_RX_IPV4_HDR_EXT?
> The documentation says "RX packet with extended IPv4 header."
> But what does an extended IPv4 header look like? What is the difference
> from a normal IPv4 header?
> Can anyone give me an example?
>
> Thanks a lot.
> Wang, Shawn
Hi,

An extended IPv4 header is an IPv4 header with additional options, whose
total header size is greater than 20 bytes. Intel 1GbE and 10GbE Ethernet
controllers are able to recognize such packets and, in this case, set a
dedicated flag in the RX descriptor where they store the packet.
Then, to supply this hardware-detected packet characteristic to the
upper-level application, the RX functions of the DPDK Poll Mode Drivers
for the 1GbE and 10GbE Ethernet controllers set the PKT_RX_IPV4_HDR_EXT
generic flag in the mbuf that contains the packet.

Best regards,
Ivan

-- 
Ivan Boule
6WIND Development Engineer



[dpdk-dev] Specific NIC for DPDK?

2014-01-06 Thread Thomas Monjalon
06/01/2014 14:31, Daniel Kaminsky :
> Currently DPDK supports most of Intel 1Gb and 10Gb NICs. The exact list can
> be found at lib/librte_eal/common/include/rte_pci_dev_ids.h

There are more supported NICs than in rte_pci_dev_ids.h.
Please have a look at the online documentation:
http://dpdk.org/doc/nics

-- 
Thomas


[dpdk-dev] Specific NIC for DPDK?

2014-01-06 Thread TSADOK, Shlomi (Shlomi)
Thanks guys

So basically "only" Intel* and mlx4 are supported? Do I get it right?


Regards
Shlomi


-Original Message-
From: Thomas Monjalon [mailto:thomas.monja...@6wind.com] 
Sent: Monday, January 06, 2014 5:39 PM
To: TSADOK, Shlomi (Shlomi)
Cc: dev at dpdk.org; Daniel Kaminsky
Subject: Re: [dpdk-dev] Specific NIC for DPDK?

06/01/2014 14:31, Daniel Kaminsky :
> Currently DPDK supports most of Intel 1Gb and 10Gb NICs. The exact 
> list can be found at lib/librte_eal/common/include/rte_pci_dev_ids.h

There are more supported NICs than in rte_pci_dev_ids.h.
Please have a look at the online documentation:
http://dpdk.org/doc/nics

--
Thomas


[dpdk-dev] Specific NIC for DPDK?

2014-01-06 Thread Thomas Monjalon
06/01/2014 16:41, TSADOK, Shlomi (Shlomi) :
> So basically "only" Intel* and mlx4 are supported? Do I get it right?

Yes, they are the poll mode drivers for hardware NICs.
Note that the pcap driver should allow running DPDK with other NICs, but
without any performance gain.

-- 
Thomas

> -Original Message-
> From: Thomas Monjalon [mailto:thomas.monjalon at 6wind.com]
> 06/01/2014 14:31, Daniel Kaminsky :
> > Currently DPDK supports most of Intel 1Gb and 10Gb NICs. The exact
> > list can be found at lib/librte_eal/common/include/rte_pci_dev_ids.h
> 
> There are more supported NICs than in rte_pci_dev_ids.h.
> Please have a look at the online documentation:
>   http://dpdk.org/doc/nics


[dpdk-dev] Specific NIC for DPDK?

2014-01-06 Thread St Leger, Jim
You could write a PMD for any NIC. The work has only been done for the NICs
listed here: http://dpdk.org/doc/nics
But there's nothing stopping anyone from using other NICs if they want to
write the PMD.

Regards,
Jim

-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of TSADOK, Shlomi (Shlomi)
Sent: Monday, January 06, 2014 8:42 AM
To: Thomas Monjalon
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] Specific NIC for DPDK?

Thanks guys

So basically "only" Intel* and mlx4 are supported? Do I get it right?


Regards
Shlomi


-Original Message-
From: Thomas Monjalon [mailto:thomas.monja...@6wind.com]
Sent: Monday, January 06, 2014 5:39 PM
To: TSADOK, Shlomi (Shlomi)
Cc: dev at dpdk.org; Daniel Kaminsky
Subject: Re: [dpdk-dev] Specific NIC for DPDK?

06/01/2014 14:31, Daniel Kaminsky :
> Currently DPDK supports most of Intel 1Gb and 10Gb NICs. The exact 
> list can be found at lib/librte_eal/common/include/rte_pci_dev_ids.h

There are more supported NICs than in rte_pci_dev_ids.h.
Please have a look at the online documentation:
http://dpdk.org/doc/nics

--
Thomas


[dpdk-dev] Specific NIC for DPDK?

2014-01-06 Thread TSADOK, Shlomi (Shlomi)
Got it.. Thank you guys!


Shlomi



-Original Message-
From: St Leger, Jim [mailto:jim.st.le...@intel.com] 
Sent: Monday, January 06, 2014 6:23 PM
To: TSADOK, Shlomi (Shlomi); Thomas Monjalon
Cc: dev at dpdk.org
Subject: RE: [dpdk-dev] Specific NIC for DPDK?

You could write a PMD for any NIC. The work has only been done for the NICs
listed here: http://dpdk.org/doc/nics
But there's nothing stopping anyone from using other NICs if they want to
write the PMD.

Regards,
Jim

-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of TSADOK, Shlomi (Shlomi)
Sent: Monday, January 06, 2014 8:42 AM
To: Thomas Monjalon
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] Specific NIC for DPDK?

Thanks guys

So basically "only" Intel* and mlx4 are supported? Do I get it right?


Regards
Shlomi


-Original Message-
From: Thomas Monjalon [mailto:thomas.monja...@6wind.com]
Sent: Monday, January 06, 2014 5:39 PM
To: TSADOK, Shlomi (Shlomi)
Cc: dev at dpdk.org; Daniel Kaminsky
Subject: Re: [dpdk-dev] Specific NIC for DPDK?

06/01/2014 14:31, Daniel Kaminsky :
> Currently DPDK supports most of Intel 1Gb and 10Gb NICs. The exact 
> list can be found at lib/librte_eal/common/include/rte_pci_dev_ids.h

There are more supported NICs than in rte_pci_dev_ids.h.
Please have a look at the online documentation:
http://dpdk.org/doc/nics

--
Thomas


[dpdk-dev] Redirection Table

2014-01-06 Thread Michael Quicquaro
Thanks for the details.  Can the hash function be modified so that I can
provide my own RSS function?  I.e., my ultimate goal is to provide RSS that
is not dependent on packet contents.

You may have seen my thread "generic load balancing".  At this point, I'm
realizing that the only way to accomplish this is to let the packets land
where they may (the queue where the NIC places the packet) and distribute
them (to other queues) by having some of the CPU processing devoted to this
task.  Can you verify this?

Regards,
- Michael.


On Mon, Jan 6, 2014 at 10:21 AM, Ivan Boule  wrote:

> On 12/31/2013 08:45 PM, Michael Quicquaro wrote:
>
>> Has anyone used the "port config all reta (hash,queue)" command of testpmd
>> with any success?
>>
>> I haven't found much documentation on it.
>>
>> Can someone provide an example on why and how it was used.
>>
>> Regards and Happy New Year,
>> Michael Quicquaro
>>
> Hi Michael,
>
> "RETA" stands for Redirection Table.
> It is a per-port configurable table of 128 entries that is used by the
> RSS filtering feature of Intel 1GbE and 10GbE controllers to select the
> RX queue into which to store a received IP packet.
> When receiving an IPv4/IPv6 packet, the controller computes a 32-bit
> hash on:
>
>   * the source address and the destination address of the IP header of
> the packet,
>   * the source port and the destination port of the UDP/TCP header, if any.
>
> Then, the controller takes the 7 lower bits of the RSS hash as an index
> into the RETA table to get the RX queue number where to store the packet.
>
> The API of the DPDK includes a function that is exported by Poll Mode
> Drivers to configure RETA entries of a given port.
>
> For test purposes, the testpmd application includes the following command
>
> "port config X rss reta (hash,queue)[,(hash,queue)]"
>
> to configure the RETA entries of a port X, where each (hash,queue) couple
> contains the index of a RETA entry (between 0 and 127 inclusive) and the
> RX queue number (between 0 and 15) to be stored into that RETA entry.
>
> Best regards
> Ivan
>
> --
> Ivan Boule
> 6WIND Development Engineer
>
>