Hi Michael,
As far as I know, RSS is used to distribute packets between cores based on
hashing the packets' initial bytes, so round-robin distribution is not
possible in hardware. You can configure the hash seed and which fields to
use in the hash. If the input packets have the same or very similar by
Hi,
As far as I can tell, this is really hardware dependent. Some hash functions
allow uplink and downlink packets of the same "session" to go to the same
queue (I know Chelsio can do this).
For the Intel card, you may find what you want in:
http://www.intel.com/content/www/us/en/ethernet-control
You are welcome!
Even if you insert packets in batches into a fifo, the mutex is still
unpredictable. If one pthread_lock costs 1ms, you are going to lose packets,
regardless of the number of RSS queues and ring sizes.
Batching comes with
another issue: you need to flush a batch after a certain timeo
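A minimal sketch of that batching idea, assuming an rte_ring as the fifo and
an application-chosen flush interval (the bulk enqueue is the standard 1.x
ring API; everything else here is illustrative):

#include <rte_ring.h>
#include <rte_cycles.h>

#define BATCH_SIZE 32

struct batcher {
    struct rte_ring *fifo;       /* shared fifo toward the consumer thread */
    void *pkts[BATCH_SIZE];      /* packets accumulated on this core */
    unsigned count;
    uint64_t deadline;           /* TSC value at which a partial batch is flushed */
};

static void batcher_flush(struct batcher *b)
{
    if (b->count == 0)
        return;
    if (rte_ring_sp_enqueue_bulk(b->fifo, b->pkts, b->count) != 0) {
        /* fifo full: packets are lost, which is exactly the risk above */
    }
    b->count = 0;
}

static void batcher_add(struct batcher *b, void *pkt, uint64_t flush_cycles)
{
    if (b->count == 0)
        b->deadline = rte_rdtsc() + flush_cycles;
    b->pkts[b->count++] = pkt;
    if (b->count == BATCH_SIZE || rte_rdtsc() >= b->deadline)
        batcher_flush(b);
}

/* The polling loop must also call batcher_flush() when idle, otherwise a
 * partial batch can sit forever -- this is the timeout issue mentioned above. */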
Round robin would actually be awful for any protocol because it would
cause out-of-order packets.
That is why flow-based algorithms like Flow Director and RSS work much better.
On Wed, Dec 4, 2013 at 8:31 PM, Prashant Upadhyaya
wrote:
> Hi,
>
> It's a real pity that the Intel 82599 NIC (and possibly
Hi,
I'm trying to use the VMDq technology to pre-filter packets on the NIC;
unfortunately I found only two examples of this, and both express
conditions on the VLAN tag, while I need to select packets based on
their (source) MAC address.
After looking at the API, I found the function
*rte
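For reference, the ethdev call that associates a MAC address with a VMDq
pool is rte_eth_dev_mac_addr_add(); the snippet below is only a sketch (it
assumes the port was already configured with mq_mode = ETH_MQ_RX_VMDQ_ONLY
and a suitable number of pools, which is not shown). Note that these
hardware filters match on the destination MAC of received frames:

#include <rte_ethdev.h>
#include <rte_ether.h>

/* Sketch: steer frames with this destination MAC into VMDq pool 3. */
static int add_mac_filter(uint8_t port_id)
{
    struct ether_addr mac = {
        .addr_bytes = { 0x00, 0x1b, 0x21, 0xab, 0xcd, 0xef }  /* example MAC */
    };
    uint32_t pool = 3;

    return rte_eth_dev_mac_addr_add(port_id, &mac, pool);
}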
Hi,
The memory allocation is not an issue since that is contained entirely
within DPDK itself and does not leak outside, i.e. all DPDK data
structures are managed with DPDK memory management functions and
that's valid and OK.
The thread model integration issue is because EAL creates its own
threa
I played around with pcap to compare it against the "pure" DPDK user-space
driver. I realized that interrupt affinity suddenly becomes important,
since pcap uses interrupts, so I needed a way to set it.
The only place where I know which core will handle which pcap
interface is within my DPDK applicatio
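One way to handle that from inside the application is to write a CPU mask
into /proc/irq/<irq>/smp_affinity. The helper below is just a sketch (it
assumes you have already looked up the interface's IRQ number, e.g. from
/proc/interrupts, and it needs root):

#include <stdio.h>

/* Sketch: pin an interrupt to a single core by writing a hex CPU mask. */
static int set_irq_affinity(int irq, unsigned int core)
{
    char path[64];
    FILE *f;

    snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
    f = fopen(path, "w");
    if (f == NULL)
        return -1;
    fprintf(f, "%x\n", 1u << core);    /* single-core mask; core < 32 assumed */
    fclose(f);
    return 0;
}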
Hi
On a 10Gbps link, there is a new packet every 650ns on average in each
direction, so handling latency is extremely important.
Traditional "fast" userland mutexes involve system call and scheduling
costs (look at the kernel code: it is "hairy"). I measured the difference
between mutex-controlled f
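Not the measurement referred to above, but a rough sketch of how the
uncontended cost of a lock/unlock pair can be timed with the TSC (contended
locks, which go through futex and the scheduler, cost far more):

#include <pthread.h>
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>    /* __rdtsc() */

int main(void)
{
    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    const int iters = 1000000;
    uint64_t start = __rdtsc();

    for (int i = 0; i < iters; i++) {
        pthread_mutex_lock(&m);
        pthread_mutex_unlock(&m);
    }
    printf("avg cycles per lock/unlock: %lu\n",
           (unsigned long)((__rdtsc() - start) / iters));
    return 0;
}

(build with gcc -O2 -pthread)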
Hi,
I think I get the picture. DPDK is not really flexible at memory allocation
(nor is the Linux kernel, which requires boot parameters for 1GB huge pages)...
Let's assume that "static" memory configuration is acceptable.
Is the thread model integration issue related to the fact that we set affinity
ATF
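For reference, the boot parameters alluded to above are the usual kernel
command-line options for reserving 1GB pages; the page count here is just an
example:

default_hugepagesz=1G hugepagesz=1G hugepages=4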
Understood. Thanks for getting back.
Regards,
Sambath
On Wed, Dec 4, 2013 at 1:51 PM, Stephen Hemminger <
stephen at networkplumber.org> wrote:
> On Wed, 4 Dec 2013 13:47:10 -0800
> Sambath Kumar Balasubramanian wrote:
>
> > Thanks Stephen. I was going to prototype something similar (not do
On Wed, 4 Dec 2013 13:47:10 -0800
Sambath Kumar Balasubramanian wrote:
> Thanks Stephen. I was going to prototype something similar (not doing
> the wakeup inline but using a background thread).
> Is it a worthwhile effort to move this as a feature of the RTE ring, or
> is it best left at th
Thanks François-Frédéric. Trying to embark on a small prototype and see the
results. Thanks for the timing data. Really helpful.
Regards,
Sambath
On Wed, Dec 4, 2013 at 12:02 PM, François-Frédéric Ozog wrote:
> You are welcome!
>
>
>
> Even if you insert packets in batch into a fifo, the mute
Thanks Stephen. I was going to prototype something similar (not doing
the wakeup inline but using a background thread).
Is it a worthwhile effort to move this as a feature of the RTE ring, or is
it best left at the application level?
On Wed, Dec 4, 2013 at 1:25 PM, Stephen Hemminger <
stephe
On Wed, 4 Dec 2013 03:46:36 -0800
Sambath Kumar Balasubramanian wrote:
> Hi,
>
> The ring library seems to be an excellent IPC mechanism. But looking at one use
> case where the fast path code posts events to an event thread, for example, the
> event thread will spend some cycles polling the ring rather tha
Hi all,
I am writing a DPDK application that will receive packets from one
interface and process them. It does not forward packets in the traditional
sense. However, I do need to process them at full line rate and therefore
need more than one core. The packets can be somewhat generic in nature a
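A common shape for that kind of application (only a sketch, using the
standard ethdev burst API; queue setup, mbuf pools, and the RSS configuration
that spreads flows across queues are omitted) is one RX queue per worker
lcore:

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Sketch: each worker lcore polls its own RX queue of port 0. */
static int worker_main(void *arg)
{
    uint16_t queue_id = (uint16_t)(uintptr_t)arg;   /* one queue per lcore */
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {
        uint16_t i, n = rte_eth_rx_burst(0, queue_id, bufs, BURST_SIZE);
        for (i = 0; i < n; i++) {
            /* process_packet(bufs[i]);  -- application-specific work */
            rte_pktmbuf_free(bufs[i]);
        }
    }
    return 0;
}

/* launched per core with rte_eal_remote_launch(worker_main,
 * (void *)(uintptr_t)queue_id, lcore_id) */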
On Wed, Dec 4, 2013 at 11:44 AM, Richardson, Bruce
wrote:
> [BR] Hi. Just so you know, a fix for this will be present in the Intel DPDK
> 1.5.2 patch release from Intel, which should be publicly available very
> shortly. The fix we are releasing was also previously posted as a patch on
> this
Hey,
I guess the main hurdle is that we already have our own multi-threaded
architecture and ways to control thread startup/shutdown, priorities,
and affinities, and they are all balanced very delicately (our
application is latency-sensitive, runs on rt_preempt, boots with
isolcpus, etc.). In additio
>
> Hi Bruce,
>
> I made a dead simple patch that seems to fix my problem. Could you check
> and see whether I'm on the right track here?
>
> The patch is as follows:
>
> -- >8 --
[BR] Hi. Just so you know, a fix for this will be present in the Intel DPDK
1.5.2 patch release from Intel, which
Thanks, this solved my problem. But now, trying to compile qemu, I have
the following error:
LINK x86_64-softmmu/qemu-system-x86_64
/usr/bin/ld:
/home/xerifao/ovs_dpdk/dpdk-1.5.1r1/x86_64-default-linuxapp-gcc/lib/librte_eal.a(eal.o):
relocation R_X86_64_32 against `.rodata.str1.8' can not be used
This should fix it
diff --git a/openvswitch/Makefile.am b/openvswitch/Makefile.am
index fbee87b..b8da768 100644
--- a/openvswitch/Makefile.am
+++ b/openvswitch/Makefile.am
@@ -28,7 +28,9 @@ endif
@HAVE_DPDK_TRUE@$(dpdk_lib_dir)/librte_mbuf.a \
@HAVE_DPDK_TRUE@$(dpdk_lib_dir)/librte_r
I am using VirtualBox version 4.2.18 r88780, dpdk-1.5.1r1 and Fedora
19 with a 3.11.9 kernel. I have one e1000 and one virtio-net NIC, eth2
and eth3 respectively. eth2 is used for ssh access to the VM. I am
trying to get testpmd working on eth3 using the librte_pmd_virtio.so
driver. I have changed th
Hi,
I just completed such a consulting mission for a customer. They were using
libpcap as the network back end, and the most challenging hurdle was to
transform a single-threaded capture architecture into a multi-threaded one
with DPDK. The other key takeaway is that DPDK capture helps to get only
Thanks François-Frédéric. That puts the cost of the event in real good
perspective, assuming each packet in the fast path will result
in an event. If the event rate is orders of magnitude less than the packet
rate, then I guess we can still achieve 10G since the "extra cost" will be
in the event thread and n
Hi,
The ring library seems to be an excellent IPC mechanism. But looking at one use
case where the fast path code posts events to an event thread, for example, the
event thread will spend some cycles polling the ring rather than waiting
for the event. One approach could be that the fast path code basically posts the
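A minimal sketch of the scheme discussed in this thread: the fast path does
only a lock-free ring enqueue, a background thread (not the fast path) issues
the wakeup, and the event thread blocks instead of spinning. All the names
are illustrative and the DPDK 1.x ring API is assumed:

#include <pthread.h>
#include <unistd.h>
#include <rte_ring.h>

static struct rte_ring *event_ring;           /* fast path -> event thread */
static pthread_mutex_t wake_mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wake_cv  = PTHREAD_COND_INITIALIZER;

/* Fast path: lock-free enqueue only, no syscall on the packet path. */
static inline void post_event(void *ev)
{
    if (rte_ring_sp_enqueue(event_ring, ev) != 0) {
        /* ring full: drop or count the event */
    }
}

/* Background thread: cheaply watches the ring and wakes the event thread. */
static void *waker(void *arg)
{
    (void)arg;
    for (;;) {
        if (rte_ring_count(event_ring) > 0) {
            pthread_mutex_lock(&wake_mtx);
            pthread_cond_signal(&wake_cv);
            pthread_mutex_unlock(&wake_mtx);
        }
        usleep(100);            /* trade-off: wakeup latency vs. CPU burned */
    }
    return NULL;
}

/* Event thread: sleeps until signalled, then drains the ring. */
static void *event_loop(void *arg)
{
    void *ev;
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&wake_mtx);
        while (rte_ring_count(event_ring) == 0)
            pthread_cond_wait(&wake_cv, &wake_mtx);
        pthread_mutex_unlock(&wake_mtx);
        while (rte_ring_sc_dequeue(event_ring, &ev) == 0) {
            /* handle_event(ev);  -- application-specific */
        }
    }
    return NULL;
}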