-----Original Message-----
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Shankari Vaidyalingam
Sent: Thursday, July 17, 2014 9:15 AM
To: dev at dpdk.org
Subject: Re: [dpdk-dev] Access to open flow table using DPDK libraries
Hi,
>
> I would like to know whether there is a way for an application using the
> DPDK libraries to access the OpenFlow table [...]
2.6.32 is the minimum, but I believe it still needs patches to fix hugetlbfs
issues.
I think the first kernel that has all the features we need, without requiring
patches, is 2.6.33.6.
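For what it's worth, a quick sanity check that the running kernel exposes
hugepages is to read /proc/meminfo. A minimal sketch (my addition, not from
the original thread):

    #include <stdio.h>
    #include <string.h>

    /* Print the hugepage counters from /proc/meminfo; if the fields
     * are missing, the kernel was built without hugetlbfs support. */
    int main(void)
    {
        char line[256];
        FILE *f = fopen("/proc/meminfo", "r");

        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        while (fgets(line, sizeof(line), f) != NULL) {
            if (strncmp(line, "HugePages_Total:", 16) == 0 ||
                strncmp(line, "Hugepagesize:", 13) == 0)
                fputs(line, stdout);
        }
        fclose(f);
        return 0;
    }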
Jeff
-----Original Message-----
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Thomas Monjalon
Sent: Wednesday [...]
Danny, can you specify multiple --vdev parameters?
"--vdev=eth_packet0,iface=eth0 --vdev=eth_packet1,iface=eth1"
-----Original Message-----
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Zhou, Danny
Sent: Friday, July 11, 2014 1:27 PM
To: John W. Linville
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] [...]
Do you know whether the host's hugepages are mapped into the container?
Since containers are meant to provide isolation, it would make sense for the
host not to share hugepages with a container automatically, but I'm not sure.
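One way to check from inside the container (my sketch, not from the thread)
is to test whether a hugetlbfs mount is visible, e.g. at the common default
/dev/hugepages:

    #include <stdio.h>
    #include <sys/vfs.h>

    #define HUGETLBFS_MAGIC 0x958458f6  /* value from linux/magic.h */

    /* Report whether the path is backed by hugetlbfs; the mount point
     * /dev/hugepages is a common default but not guaranteed. */
    int main(void)
    {
        struct statfs fs;

        if (statfs("/dev/hugepages", &fs) != 0) {
            perror("statfs");
            return 1;
        }
        printf("hugetlbfs %svisible\n",
               fs.f_type == HUGETLBFS_MAGIC ? "" : "not ");
        return 0;
    }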
Jeff
-----Original Message-----
From: dev [mailto:dev-boun...@
Hi Declan,
I'm worried about one thing in "bond_ethdev_tx_broadcast()": the freeing of
the broadcast packets.
> +static uint16_t
> +bond_ethdev_tx_broadcast(void *queue, struct rte_mbuf **bufs,
> +		uint16_t nb_pkts)
> +{
> +	struct bond_dev_private *internals;
> +	struct bond_tx_queue *bd_tx_q;
[...]
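For reference, the freeing question usually comes down to reference counts:
each mbuf is transmitted once per slave, so it needs one reference per slave,
and packets a slave refuses still hold a reference that must be released. A
sketch of that pattern (my illustration of the concern, not the patch itself;
slave_ports, num_slaves and queue_id are placeholders):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static void
    broadcast_burst(const uint8_t *slave_ports, uint8_t num_slaves,
                    uint16_t queue_id, struct rte_mbuf **bufs,
                    uint16_t nb_pkts)
    {
        uint16_t i, sent;
        uint8_t s;

        /* One extra reference per additional slave, so every slave's
         * TX completion can free the mbuf independently. */
        for (i = 0; i < nb_pkts; i++)
            rte_pktmbuf_refcnt_update(bufs[i],
                                      (int16_t)(num_slaves - 1));

        for (s = 0; s < num_slaves; s++) {
            sent = rte_eth_tx_burst(slave_ports[s], queue_id,
                                    bufs, nb_pkts);
            /* Packets this slave did not accept still hold one
             * reference each; release it here or the mbufs leak. */
            for (i = sent; i < nb_pkts; i++)
                rte_pktmbuf_free(bufs[i]);
        }
    }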
Hello,
> I measured round-trip latency (using a Spirent traffic generator) by sending
> 64B packets over a 10GbE link to DPDK, with DPDK doing nothing but forwarding
> them back to the incoming port (l3fwd without any lookup code, i.e., dstport =
> port_id).
> However, to my surprise, the average latency [...]
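For readers following along, that measurement setup amounts to a loop like
this (my sketch of "forward back to the incoming port", not the poster's
code; queue 0 is assumed):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* l3fwd with the lookup removed, i.e. dst_port = src_port. */
    static void
    echo_loop(uint8_t port_id)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx, nb_tx, i;

        for (;;) {
            nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
            nb_tx = rte_eth_tx_burst(port_id, 0, bufs, nb_rx);
            /* Drop whatever the TX ring did not accept. */
            for (i = nb_tx; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]);
        }
    }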
I agree. We should wait for comments, then test the performance once the
patches have settled.
-----Original Message-----
From: Olivier MATZ [mailto:olivier.m...@6wind.com]
Sent: Friday, May 09, 2014 9:06 AM
To: Shaw, Jeffrey B; dev at dpdk.org
Subject: Re: [dpdk-dev] [PATCH RFC 05/11] mbuf
Hello Olivier, have you tested this patch to see whether there is a negative
impact on performance?
Wouldn't the processor have to mask the high bytes of the physical address when
it is used, for example, to populate descriptors with buffer addresses? When
compute bound, this could steal CPU cycles [...]
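To make the concern concrete, here is the extra masking step I have in mind
(an assumption about the patch, not its actual code; the 48-bit width and the
names are illustrative):

    #include <stdint.h>
    #include <rte_byteorder.h>

    /* If the physical address shares its upper bytes with other data,
     * every descriptor fill pays for a mask. */
    #define PHYS_ADDR_MASK 0x0000ffffffffffffULL  /* low 48 bits */

    static inline uint64_t
    desc_addr(uint64_t buf_physaddr)
    {
        return rte_cpu_to_le_64(buf_physaddr & PHYS_ADDR_MASK);
    }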
Have you tried calling "rte_eal_init()" closer to the beginning of the program
in your secondary process (i.e., as the first thing in main())?
The same mmap address is required. The reason is simple: if process A thinks
the virtual address of an mbuf is 123, and process B thinks the virtual address
of the same mbuf is something else, then every pointer stored in the shared
memory is wrong for one of the two processes.
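A sketch of that ordering (mine, not from the thread; the pool name is
hypothetical):

    #include <rte_eal.h>
    #include <rte_mempool.h>

    int main(int argc, char **argv)
    {
        struct rte_mempool *mp;

        /* Run with --proc-type=secondary so the EAL attaches to the
         * primary's hugepage mappings at the same virtual addresses.
         * Doing this first, before any other allocation, gives the
         * mappings the best chance of landing at the right place. */
        if (rte_eal_init(argc, argv) < 0)
            return -1;

        /* Shared objects can only be looked up after init;
         * "MBUF_POOL" is a hypothetical pool created by the primary. */
        mp = rte_mempool_lookup("MBUF_POOL");
        if (mp == NULL)
            return -1;
        return 0;
    }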
From: [mailto:...@gmail.com] On Behalf Of HS
Sent: Monday, March 31, 2014 11:16 AM
To: Shaw, Jeffrey B
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] 82599ES NIC support
> Can you check if your PCI device ID is listed in
> "lib/librte_eal/common/include/rte_pci_dev_ids.h"?
82599ES is not listed in lib/librte_eal/common/include/rte_pci_dev_ids.h.
Can you check if your PCI device ID is listed in
"lib/librte_eal/common/include/rte_pci_dev_ids.h"?
Can you verify that you have bound your device to "igb_uio", perhaps using
"tools/pci_unbind.py" (maybe renamed to tools/igb_uio_bind.py)?
You might also try to edit the ".config" (in your build directory) [...]
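To find the ID to compare against those entries, you can read it straight
from sysfs; a small sketch (my addition; the PCI address is a placeholder for
your NIC's):

    #include <stdio.h>

    int main(void)
    {
        char id[16];
        FILE *f = fopen("/sys/bus/pci/devices/0000:01:00.0/device", "r");

        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        /* Prints e.g. "0x10fb"; look for this value in
         * rte_pci_dev_ids.h. */
        if (fgets(id, sizeof(id), f) != NULL)
            printf("PCI device ID: %s", id);
        fclose(f);
        return 0;
    }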
Hi Qing,
The idea is that we do not want to clean the descriptor ring until we have used
"enough" descriptors.
So (nb_tx_desc - nb_tx_free) tells us how many descriptors we've used. Once
we've used "enough" of them (i.e., tx_free_thresh descriptors), we try to clean
the descriptor ring.
If you look at the [...]
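In code form, the check described above looks roughly like this (field and
helper names are illustrative, not the exact PMD code):

    #include <stdint.h>

    /* Illustrative TX queue state mirroring the explanation above. */
    struct txq_state {
        uint16_t nb_tx_desc;     /* descriptors in the ring      */
        uint16_t nb_tx_free;     /* descriptors currently unused */
        uint16_t tx_free_thresh; /* cleanup threshold            */
    };

    static void tx_ring_cleanup(struct txq_state *txq); /* hypothetical */

    /* Clean only once "enough" descriptors are in use. */
    static void
    maybe_cleanup(struct txq_state *txq)
    {
        uint16_t nb_used = txq->nb_tx_desc - txq->nb_tx_free;

        if (nb_used >= txq->tx_free_thresh)
            tx_ring_cleanup(txq);
    }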