Howdy!
I am having trouble building Pktgen-DPDK from Github on Ubuntu 14.04. Is
this supported? If so, would anybody tell me how to get the build working?
I have tried to be faithful to the instructions in README.md and I have
tested with both master and the latest release tag (pktgen-2.7.1).
Th
Hi Pablo,
On 24 July 2014 12:33, De Lara Guarch, Pablo wrote:
> I think you are seeing the same error as other people are seeing for
> DPDK-1.7 on Ubuntu 14.04.
> Are you using kernel 3.13.0-24 or 3.13.0-30/32?
>
Thanks for the quick response. I'm currently using kernel 3.13.0-24.
Hi Michael,
I'm writing to follow up the previous discussion about memory barriers in
virtio-net device implementations, and Cc'ing the DPDK list because I
believe this is relevant to them too.
First, thanks again for getting in touch and reviewing our code.
I have now found a missed case where
On 7 April 2015 at 17:30, Michael S. Tsirkin wrote:
> Just guessing from the available info:
>
> I think you refer to this:
> The driver MUST handle spurious interrupts from the device.
>
> The intent is to be able to handle some spurious interrupts once in a
> while. AFAIK linux trigger
Howdy,
On 8 April 2015 at 17:15, Xie, Huawei wrote:
> luke:
> 1. host reads the flag. 2. guest toggles the flag. 3. guest checks the used.
> 4. host updates used.
> 4. host update used.
> Is this your case?
>
Yep, that is exactly the case I mean.
Cheers,
-Luke
On 23 April 2015 at 10:11, Bruce Richardson wrote:
> Also, if I read your quoted performance numbers in your earlier mail
> correctly, we are only looking at a 1-4% performance increase. Is the
> additional code to maintain worth the benefit?
>
... and if so, how would one decide whether it
On 22 April 2015 at 18:33, Huawei Xie wrote:
> update of used->idx and read of avail->flags could be reordered.
> memory fence should be used to ensure the order, otherwise guest could see
> a stale used->idx value after it toggles the interrupt suppression flag.
>
This patch looks right to me.
On 24 April 2015 at 03:01, Linhaifeng wrote:
> If not add memory fence what would happen? Packets loss or interrupt
> loss? How to test it?
>
You should be able to test it like this:
1. Boot two Linux kernel (e.g. 3.13) guests.
2. Connect them via vhost switch.
3. Run continuous traffic between
Hi Tim,
On 16 April 2015 at 12:38, O'Driscoll, Tim wrote:
> Following the launch of DPDK by Intel as an internal development project,
> the launch of dpdk.org by 6WIND in 2013, and the first DPDK RPM packages
> for Fedora in 2014, 6WIND, Red Hat and Intel would like to prepare for
> future relea
Hi Neil,
Thanks for taking the time to reflect on my ideas.
On 24 April 2015 at 19:00, Neil Horman wrote:
> DPDK will always be
> something of a niche market for user to whoom every last ounce of
> performance is
> the primary requirement
This does seem like an excellent position. It is succi
Howdy,
Just noticed a line of code that struck me as odd and so I am writing just
in case it is a bug:
http://dpdk.org/browse/dpdk/tree/drivers/net/mlx5/mlx5_rxtx.c#n1014
Specifically the check "(mpw.length != length)" in mlx5_tx_burst_mpw() looks
like a descriptor-format optimization for the spe
Hi Adrien,
On 14 September 2016 at 16:30, Adrien Mazarguil wrote:
> Your interpretation is correct (this is intentional and not a bug).
>
Thanks very much for clarifying.
This is interesting to me because I am also working on a ConnectX-4 (Lx)
driver based on the newly released driver interfac
Hi Adrien,
Thanks for taking the time to write a detailed reply. This indeed sounds
reasonable to me. Users will need to take these special-cases into account
when predicting performance on their own anticipated workloads, which is a
bit tricky, but then that is life when dealing with complex new
On 22 September 2016 at 11:01, Jianbo Liu wrote:
> Tested with testpmd, host: txonly, guest: rxonly
> size (bytes) improvement (%)
> 64 4.12
> 128 6
> 256 2.65
> 512 -1.12
> 1024 -7.02
>
Have you conside
Howdy!
This memcpy discussion is absolutely fascinating. Glad to be a fly on the
wall!
On 21 January 2015 at 22:25, Jim Thompson wrote:
>
> The differences with DPDK are that a) entire cores (including the AVX/SSE
> units and even AES-NI (FPU) are dedicated to DPDK, and b) DPDK is a library,
>
On 22 January 2015 at 14:29, Jay Rolette wrote:
> Microseconds matter. Scaling up to 100GbE, nanoseconds matter.
>
True. Is there a cut-off point though? Does one nanosecond matter?
AVX512 will fit a 64-byte packet in one register and move that to or from
memory with one instruction. L1/L2 cach
Hi John,
On 19 January 2015 at 02:53, wrote:
> This patch set optimizes memcpy for DPDK for both SSE and AVX platforms.
> It also extends memcpy test coverage with unaligned cases and more test
> points.
>
I am really interested in this work you are doing on memory copies
optimized for packet d
On 26 January 2015 at 02:30, Wang, Zhihong wrote:
> Hi Luke,
>
>
>
> I'm very glad that you're interested in this work. :)
>
Great :).
I never published any performance data, and haven't run cachebench.
>
> We use test_memcpy_perf.c in DPDK to do the test mainly, because it's the
> environmen
Hi again John,
Thank you for the patient answers :-)
Thank you for pointing this out: I was mistakenly testing your Sandy Bridge
code on Haswell (lacking -DRTE_MACHINE_CPUFLAG_AVX2).
Correcting that, your code is both the fastest and the smallest in my
humble micro benchmarking tests.
Looks lik
Howdy,
I am writing to share some SIMD (SSE2 and AVX2) IP checksum routines. The
commit log for rte_ip.h said that this was an area of future interest for
DPDK.
Code:
https://github.com/lukego/snabbswitch/blob/ipchecksum-simd/src/lib/checksum.c
Feedback welcome. We are currently reviewing and in
On 7 May 2015 at 16:02, Avi Kivity wrote:
> One problem we've seen with dpdk is that it is a framework, not a library:
> it wants to create threads, manage memory, and generally take over. This
> is a problem for us, as we are writing a framework (seastar, [1]) and need
> to create threads, mana
On 8 May 2015 at 06:16, Wiles, Keith wrote:
> The PMDs or drivers would not be useful without DPDK MBUFS IMO
>
Surprisingly perhaps, I would find them very useful.
To me there are two parts to a driver: the hardware setup and the
transmit/receive.
The hardware setup is complex and generic. You
Hi Bruce,
On 8 May 2015 at 11:06, Bruce Richardson wrote:
> For the Intel NIC drivers, the hardware setup part used in DPDK is based off
> the other Intel drivers for other OS's. The code you are interested in should
> therefore be contained within the subfolders off each individual PMD. As
On 8 May 2015 at 11:42, Bruce Richardson wrote:
> The code in those directories is "common" code that is maintained by Intel -
> which is why you see repeated comments about not modifying it for DPDK. It is
> just contained in it's own subfolder in each DPDK driver for easier updating
> off
Hi Paul,
On 11 May 2015 at 02:14, Paul Emmerich wrote:
> Another possible solution would be a more dynamic approach to mbufs:
Let me suggest a slightly more extreme idea for your consideration. This
method can easily do > 100 Mpps with one very lightly loaded core. I don't
know if it works for
On 5 November 2014 at 14:00, Thomas Monjalon wrote:
> It seems to be close to the bifurcated driver needs.
> Not sure if it can solve the security issues if there is no dedicated MMU
> in the NIC.
>
> I feel we should sum up pros and cons of
> - igb_uio
> - uio_pci_generic
>
Hi Tim,
On 22 October 2014 15:48, O'Driscoll, Tim wrote:
> 2.0 (Q1 2015) DPDK Features:
> Bifurcated Driver: With the Bifurcated Driver, the kernel will retain
> direct control of the NIC, and will assign specific queue pairs to DPDK.
> Configuration of the NIC is controlled by the kernel via et
On 22 May 2015 at 10:05, Maciej Grochowski wrote:
> What I'm going to do today is to compile newest kernel for vhost and guest
> and debug where packet flow stuck, I will report the result
>
Compiling the guest virtio-net driver with debug printouts enabled can be
really helpful in these situati
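One way to get those printouts without rebuilding the driver, assuming the guest kernel was built with CONFIG_DYNAMIC_DEBUG and you have root in the guest, is the dynamic debug interface — a sketch:

```shell
# Make sure debugfs is mounted (usually already is).
mount -t debugfs none /sys/kernel/debug 2>/dev/null

# Turn on all pr_debug/dev_dbg sites in the virtio_net module.
echo 'module virtio_net +p' > /sys/kernel/debug/dynamic_debug/control

# Follow the messages while traffic is flowing.
dmesg -w | grep -i virtio
```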
On 9 June 2015 at 09:04, Linhaifeng wrote:
> On 2015/4/24 15:27, Luke Gorrie wrote:
> > You should be able to test it like this:
> >
> > 1. Boot two Linux kernel (e.g. 3.13) guests.
> > 2. Connect them via vhost switch.
> > 3. Run continuous traffic between t
On 9 June 2015 at 10:46, Michael S. Tsirkin wrote:
> By the way, similarly, host side must re-check avail idx after writing
> used flags. I don't see where snabbswitch does it - is that a bug
> in snabbswitch?
Good question.
Snabb Switch does not use interrupts from the guest. We always set
VR