Well done!
I guess that's the shortest question on the list, and probably the one
that's going to trigger the largest discussion.
A few months ago, I had to answer it for a customer. And here is my
understanding:
- DPDK is also a high-performance multi-core application framework. You take
out t
Hi,
I am bumping into a similar problem to the one explained here
(https://www.mail-archive.com/e1000-devel at lists.sourceforge.net/msg07684.html):
At some point in time, a receive queue gets "FULL", i.e. tail==head
(reading the NIC registers), and the thread associated with that queue cannot
retr
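A minimal sketch of how such a stall can be spotted from software, assuming the
classic rte_ethdev API; the port/queue ids and the use of ierrors/rx_nombuf as
drop indicators are my assumptions, not taken from the original mail:

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void poll_queue(uint8_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *bufs[BURST_SIZE];
    uint64_t idle_polls = 0;

    for (;;) {
        uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, bufs, BURST_SIZE);

        if (nb_rx == 0) {
            /* Nothing read: either the link is idle or the queue is wedged.
             * If the port counters keep growing while we read nothing,
             * the queue is no longer being drained/replenished. */
            if (++idle_polls % 1000000 == 0) {
                struct rte_eth_stats stats;
                rte_eth_stats_get(port_id, &stats);
                printf("port %u idle: ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
                       (unsigned)port_id, stats.ierrors, stats.rx_nombuf);
            }
            continue;
        }
        idle_polls = 0;

        for (uint16_t i = 0; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]); /* placeholder for real processing */
    }
}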
Hi Prashant,
Maybe you could monitor RAM, QPI and PCIe activity with
http://software.intel.com/en-us/articles/intel-performance-counter-monitor-a-better-way-to-measure-cpu-utilization
It may make it easier to investigate the issue.
François-Frédéric
> -----Original Message-----
> From: dev [mailto:dev-
> > First and easy answer: it is open source, so anyone can recompile. So,
> > what's the issue?
>
> I'm talking from a pure distribution perspective here: requiring all
> DPDK-based applications to be recompiled to distribute a bugfix or to add
> support for a new PMD is not ideal.
>
> So ideally O
Hi,
Most of the time rdtsc is used for timestamping, and an error of a few cycles
is usually not an issue (a precision of 0.1 µs for session start is generally
enough).
Sometimes you need to serialize because the interval you want to measure is
very short, on the order of a few nanoseconds.
If the
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Michael Quicquaro
> Sent: Friday, January 24, 2014 00:23
> To: Robert Sanford
> Cc: dev at dpdk.org; mayhan at mayhan.org
> Subject: Re: [dpdk-dev] Rx-errors with testpmd (only 75% line rate)
>
> Thank you,
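Regarding the rdtsc serialization point above, a minimal sketch of serialized
TSC reads, following the CPUID/RDTSC ... RDTSCP/CPUID pattern from Intel's
benchmarking guidance; it assumes x86-64 with GCC/Clang inline asm and is only
an illustration, not code from the thread:

#include <stdint.h>
#include <stdio.h>

static inline uint64_t tsc_begin(void)
{
    uint32_t lo, hi;
    /* CPUID serializes: nothing before it can leak into the timed region. */
    __asm__ __volatile__("cpuid\n\t"
                         "rdtsc\n\t"
                         "mov %%edx, %0\n\t"
                         "mov %%eax, %1\n\t"
                         : "=r"(hi), "=r"(lo)
                         :
                         : "%rax", "%rbx", "%rcx", "%rdx");
    return ((uint64_t)hi << 32) | lo;
}

static inline uint64_t tsc_end(void)
{
    uint32_t lo, hi;
    /* RDTSCP waits for earlier instructions to retire; the trailing CPUID
     * keeps later instructions from being hoisted into the timed region. */
    __asm__ __volatile__("rdtscp\n\t"
                         "mov %%edx, %0\n\t"
                         "mov %%eax, %1\n\t"
                         "cpuid\n\t"
                         : "=r"(hi), "=r"(lo)
                         :
                         : "%rax", "%rbx", "%rcx", "%rdx");
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    uint64_t t0 = tsc_begin();
    /* ... short code path under test ... */
    uint64_t t1 = tsc_end();
    printf("%llu cycles\n", (unsigned long long)(t1 - t0));
    return 0;
}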
Hi Thomas,
I am afraid I introduced unnecessary complexity into the discussion, as the
spinlock issues I mentioned are connected to a work in progress on my side
(implementing a Chelsio cxgb5 PMD) but *not* to DPDK in general.
I'll explain some aspects of the context and how critical sections have to
Hi,
Can you check that the threads you use for handling the queues are on the
same socket as the card?
cat /sys/class/net/<interface>/device/numa_node
will give you the node.
François-Frédéric
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Michael Quicquaro
> Env
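The same locality check can also be done from inside a DPDK application; a
small sketch assuming the classic rte_ethdev/rte_lcore API (my illustration,
not from the mail):

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>

static void check_locality(uint8_t port_id)
{
    int dev_socket = rte_eth_dev_socket_id(port_id); /* -1 if unknown */
    unsigned int my_socket = rte_socket_id();        /* socket of this lcore */

    if (dev_socket >= 0 && (unsigned int)dev_socket != my_socket)
        printf("warning: port %u sits on socket %d but lcore %u runs on socket %u\n",
               (unsigned)port_id, dev_socket, rte_lcore_id(), my_socket);
}

Mbuf pools should likewise be created on the device's socket (the pool-creation
calls take a socket_id argument) so that descriptors and buffers stay local.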
Hi,
To understand the issue, you may have a look at:
http://www.intel.com/content/www/us/en/intelligent-systems/intel-technology/vt-directed-io-spec.html
When you have no IOMMU, "physical" address space is accessed directly by
hardware, so your core works.
When VT-d is active, there is DMA/IRQ
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon at 6wind.com]
> Sent: Friday, December 20, 2013 16:39
> To: François-Frédéric Ozog
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] Bit spinlocks in DPDK
>
> Hello,
>
> 07/12/2013 18:54, François-Frédéric Ozog:
> >
Hi,
It depends on the kernel version. For the latest ones you can use:
cat /sys/class/net/<interface>/device/numa_node
In all other cases, you can fall back to lspci (which works even when no
driver is loaded yet):
lspci | grep Ethernet
09:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Thomas Monjalon
> Sent: Friday, December 6, 2013 23:24
> To: Pashupati Kumar
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] Bit spinlocks in DPDK
>
> 06/12/2013 14:12, Pashupati Kumar:
> > From: Thomas Mo
Can we (as a community) lead the way for the NIC vendors?
I mean, a few years ago I had a discussion with Chelsio about solving MPLS and
GTP load balancing.
They were happy to integrate the "requirements" into their roadmap.
So could we build a list of such "requirements" and publish it? NIC v
Hi,
If the traffic you manage sits above MPLS or GTP encapsulations, then you can
use cards that provide flexible hash functions. Chelsio cxgb5 provides a
combination of "offset", length and tuple that may help.
The only reason I would have loved to get a pure round-robin feature was to
pass certain
Hi,
As far as I can tell, this is really hardware-dependent. Some hash functions
allow uplink and downlink packets of the same "session" to go to the same
queue (I know Chelsio can do this).
For the Intel card, you may find what you want in:
http://www.intel.com/content/www/us/en/ethernet-control
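One common way to get both directions of a flow onto the same queue on NICs
using the Toeplitz hash is a symmetric RSS key (the repeating 0x6d5a byte
pattern). A sketch of plugging such a key into the port configuration; the
struct and flag names follow the classic rte_ethdev API and may differ between
DPDK versions, so treat this as an assumption-laden illustration, not code from
the thread:

#include <rte_ethdev.h>

/* Symmetric Toeplitz key: with the repeating 0x6d5a pattern, hash(src,dst)
 * equals hash(dst,src), so both directions land on the same queue. */
static uint8_t sym_rss_key[40] = {
    0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
    0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
    0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
    0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
    0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
};

static struct rte_eth_conf port_conf = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_RSS,               /* distribute packets with RSS */
    },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_key     = sym_rss_key,
            .rss_key_len = sizeof(sym_rss_key), /* field absent in very old DPDK */
            .rss_hf      = ETH_RSS_IP,          /* hash on IPv4/IPv6 headers */
        },
    },
};

/* ...then: rte_eth_dev_configure(port_id, nb_rx_queues, nb_tx_queues, &port_conf); */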
You are welcome!
Even if you insert packets in batches into a FIFO, the mutex is still
unpredictable. If one pthread_mutex_lock() call costs 1 ms, you are going to
lose packets, regardless of the number of RSS queues and ring sizes.
Batching comes with another issue: you need to flush a batch after a certain
timeo
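To illustrate the flush-after-timeout point, a small sketch (mine, not from the
thread): packets accumulate in a local batch and are flushed either when the
batch fills or when a TSC deadline expires. flush_batch() is a hypothetical
hand-off to whatever FIFO or ring the consumer reads, and the 100 µs timeout
shown in the comment is an arbitrary example:

#include <rte_cycles.h>
#include <rte_mbuf.h>

#define BATCH_MAX 32

struct batcher {
    struct rte_mbuf *pkts[BATCH_MAX];
    unsigned int count;
    uint64_t deadline;        /* TSC value at which a partial batch is flushed */
    uint64_t timeout_cycles;  /* e.g. rte_get_tsc_hz() / 10000 for ~100 us */
};

void flush_batch(struct rte_mbuf **pkts, unsigned int count); /* hypothetical hand-off */

static void batcher_add(struct batcher *b, struct rte_mbuf *m)
{
    if (b->count == 0)
        b->deadline = rte_rdtsc() + b->timeout_cycles; /* arm the timeout */
    b->pkts[b->count++] = m;
    if (b->count == BATCH_MAX) {            /* full batch: flush immediately */
        flush_batch(b->pkts, b->count);
        b->count = 0;
    }
}

static void batcher_poll(struct batcher *b)
{
    /* Call from the main loop even when no packet arrived, otherwise a
     * partial batch can sit in the buffer indefinitely. */
    if (b->count > 0 && rte_rdtsc() >= b->deadline) {
        flush_batch(b->pkts, b->count);
        b->count = 0;
    }
}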
Hi,
On a 10 Gbps link, there is a new packet every 650 ns on average in each
direction, so handling latency is extremely important.
Traditional "fast" userland mutexes involve system call and scheduling
costs (look at the kernel code: it is "hairy"). I measured the difference
between mutex-controlled f
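(A back-of-the-envelope check of that figure, mine rather than the original
author's: 10 Gb/s is 1.25 GB/s, so one packet every 650 ns corresponds to about
1.25 GB/s x 650 ns ≈ 810 bytes per packet on the wire; with minimum-size
64-byte frames, which occupy 84 bytes on the wire including preamble and
inter-frame gap, the per-packet budget shrinks to roughly 67 ns.)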
Hi,
I think I get the picture. DPDK is not really flexible about memory
allocation (nor is the Linux kernel, which requires boot parameters for 1 GB
huge pages)...
Let's assume that a "static" memory configuration is acceptable.
Is the thread-model integration issue related to the fact that we set affinity
ATF
Hi,
I just completed such a consulting mission for a customer. They were using
libpcap as the network back end, and the most challenging hurdle was
transforming a single-threaded capture architecture into a multi-threaded one
with DPDK. The other key takeaway is that DPDK capture helps to get only