Hi Thomas,
On Sun, Jan 5, 2014 at 10:54 PM, Thomas Monjalon wrote:
> 05/01/2014 22:31, Jose Gavine Cueto :
> > venky.venkatesan at intel.com> wrote:
> > > Was the DPDK library compiled on a different machine and then used in
> > > the VM? It looks like it has been compiled for native AVX (hence
Hello Asias,
04/01/2014 07:07, Asias He :
> Hex numbers in /proc/ioports are lowercase. We should make them lowercase
> in pci_id as well. Otherwise devices like:
>
> 00:0a.0 Ethernet controller: Red Hat, Inc Virtio network device
>
> would not be handled by virtio-net-pmd.
>
> Signed-off-by:
Hello
A quick general question:
Is there a requirement for using a specific NIC/Chipset for DPDK or any NIC
will do?
If there's such a requirement, why is that? Are there any hardware
optimizations etc. in DPDK-enabled NICs?
Thanks
Shlomi
Hi Shlomi,
Currently DPDK supports most of Intel 1Gb and 10Gb NICs. The exact list can
be found at lib/librte_eal/common/include/rte_pci_dev_ids.h
The reason is that DPDK includes a poll mode driver for each NIC, and a
different NIC may need a different poll mode driver.
Regards,
Daniel Kaminsky
Hi,
Thanks for the information.
When I use the kernel parameters intel_iommu=on and iommu=pt, I observe the
following error:
ERROR REPORT
dmar: DRHD: handling fault status reg 2
dmar: DMAR:[DMA Write] Request device [01:00.0] fault addr 4f883000
DMAR:[fault reason 02] Present bit
On 12/31/2013 08:45 PM, Michael Quicquaro wrote:
> Has anyone used the "port config all reta (hash,queue)" command of testpmd
> with any success?
>
> I haven't found much documentation on it.
>
> Can someone provide an example of why and how it is used?
>
> Regards and Happy New Year,
> Michael Qu
On 12/26/2013 10:46 PM, Wang, Shawn wrote:
> Hi:
>
> Can anyone explain more details about the rte_mbuf ol_flag :
> PKT_RX_IPV4_HDR_EXT?
> The documentation says "RX packet with extended IPv4 header."
> But what does the extended IPv4 header look like? How does it differ from
> a normal IPv4 header?
06/01/2014 14:31, Daniel Kaminsky :
> Currently DPDK supports most of Intel 1Gb and 10Gb NICs. The exact list can
> be found at lib/librte_eal/common/include/rte_pci_dev_ids.h
There are more supported NICs than in rte_pci_dev_ids.h.
Please have a look at the online documentation:
http://dp
Thanks guys
So basically "only" Intel* and mlx4 are supported? Do I get it right?
Regards
Shlomi
-Original Message-
From: Thomas Monjalon [mailto:thomas.monja...@6wind.com]
Sent: Monday, January 06, 2014 5:39 PM
To: TSADOK, Shlomi (Shlomi)
Cc: dev at dpdk.org; Daniel Kaminsky
Subject:
06/01/2014 16:41, TSADOK, Shlomi (Shlomi) :
> So basically "only" Intel* and mlx4 are supported? Do I get it right?
Yes, they are the poll mode drivers for hardware NICs.
Note that pcap should allow running DPDK with other NICs, but without any
performance gain.
--
Thomas
You could write a PMD for any NIC. The work has only been done for the NICs
listed here: http://dpdk.org/doc/nics
But nothing stops anyone from using other NICs if they are willing to write
the PMD.
Regards,
Jim
Got it. Thank you, guys!
Shlomi
-Original Message-
From: St Leger, Jim [mailto:jim.st.le...@intel.com]
Sent: Monday, January 06, 2014 6:23 PM
To: TSADOK, Shlomi (Shlomi); Thomas Monjalon
Cc: dev at dpdk.org
Subject: RE: [dpdk-dev] Specific NIC for DPDK?
Thanks for the details. Can the hash function be modified so that I can
provide my own RSS function? I.e., my ultimate goal is to provide RSS that
is not dependent on packet contents.
You may have seen my thread "generic load balancing". At this point, I'm
realizing that the only way to accompl