Here are some notes from the DPDK Network Stack discussion that I can remember;
please help me fill in anything I missed.
Items I remember we talked about:
* The only reason for a DPDK TCP/IP stack is performance and possibly
lower latency
* Meaning the developer is willing to re-write
On 10/9/15 11:40 AM, Panu Matilainen wrote:
> On 10/09/2015 01:03 PM, Montorsi, Francesco wrote:
>> Hi Panu,
>>
>>
>>
>>> -Original Message-
>>> From: Panu Matilainen [mailto:pmatilai at redhat.com]
>>> Sent: Friday, 9 October 2015 10:26
>>> To: Montorsi, Francesco ; Thomas Monjalon
>>>
On 10/09/2015 08:45 AM, Yuanhan Liu wrote:
> This patch set enables vhost-user multiple queues.
>
> Overview
>
>
> It depends on some QEMU patches that have already been merged upstream.
> Those qemu patches introduce some new vhost-user messages for vhost-user
> mq enabling negotiation
If DPDK is used on a VF while the host is using the Linux kernel driver
as the PF driver on an FVL NIC, then VF Rx is reported only in batches of
4 packets. This is because the kernel driver assumes the VF driver is working
in interrupt mode, while the DPDK VF works in polling mode.
This patch fixes this issue by using
Hi all,
I'm using rte_eth_rx_burst() to successfully retrieve packets from a
DPDK-enabled port. I can process the packets and everything works fine. My only
issue is that I cannot find any means to retrieve a timestamp for each
packet. As a dirty workaround I'm using gettimeofday() to time
Hi Olga
Thanks for the pointer towards the use of "accelerated verbs".
Yes, SR-IOV is enabled; DPDK runs on the hypervisor on the probed VFs. That said,
it also fails on the underlying PF as far as I can see (e.g. below, the log
shows (VF: false) for device mlx4_0 and the code fails in RD creation on
this a
On 10/09/2015 01:13 PM, Montorsi, Francesco wrote:
>>> It seems the patch missed the boat :)
>>
>> Correct, sorry. I'm attaching it now.
> Ok, for some reason the email client is removing the attachment... I'm
> copying and pasting it:
> (the points marked as TODO are functions that still contain
Signed-off-by: Yuanhan Liu
---
doc/guides/rel_notes/release_2_2.rst | 5 +
1 file changed, 5 insertions(+)
diff --git a/doc/guides/rel_notes/release_2_2.rst
b/doc/guides/rel_notes/release_2_2.rst
index 5687676..34c910f 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_n
From: Changchun Ouyang
Signed-off-by: Changchun Ouyang
Signed-off-by: Yuanhan Liu
---
examples/vhost/main.c | 97 +--
1 file changed, 56 insertions(+), 41 deletions(-)
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 23b7aa7..06a
From: Changchun Ouyang
This patch demonstrates the usage of the vhost mq feature by leveraging
the VMDq+RSS HW feature to receive packets and distribute them into
different queues in the pool according to the 5-tuple.
The queue number is specified by the --rxq option.
HW queue numbers in pool is exactly s
From: Changchun Ouyang
In a non-SRIOV environment, VMDq RSS can be enabled via the MRQC register.
In theory, the queue number per pool could be 2 or 4, but only 2 queues
are available due to a HW limitation; the same limit also exists in the Linux
ixgbe driver.
Signed-off-by: Changchun Ouyang
Signed-off-b
From: Changchun Ouyang
The new API rte_vhost_core_id_set() binds a virtq to a specific
core, while the other API rte_vhost_core_id_get() gets
the core bound to a virtq.
The usage, which will be introduced soon, can be found at examples/vhost/main.c.
Signed-off-by: Changchun Ouy
From: Changchun Ouyang
This message is used to enable/disable a specific vring queue pair.
The first queue pair is enabled by default.
Signed-off-by: Changchun Ouyang
Signed-off-by: Yuanhan Liu
---
v6: add a vring state changed callback, for informing the application
that a specific vring
By setting the VHOST_USER_PROTOCOL_F_MQ protocol feature bit and the
VIRTIO_NET_F_MQ feature bit.
Signed-off-by: Yuanhan Liu
---
lib/librte_vhost/vhost_user/virtio-net-user.h | 4 +++-
lib/librte_vhost/virtio-net.c | 1 +
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/lib
From: Changchun Ouyang
The old code adjusts the config bytes we want to read depending on
what kind of features we have, but we later cast the entire buffer we
read to "struct virtio_net_config", which is obviously wrong.
The right way is to read the related config bytes when the corresponding
feat
Destroy the corresponding device when a VHOST_USER_RESET_OWNER message is
received; otherwise the vhost-switch would still try to access the vq
of that device, which results in a SIGSEGV and makes vhost-switch
crash in the end.
Signed-off-by: Changchun Ouyang
Signed-off-by: Yuanhan Liu
---
lib/librt
From: Changchun Ouyang
Do not use VIRTIO_RXQ or VIRTIO_TXQ anymore; use the queue_id,
instead, which will be set to a proper value for a specific queue
when we have multiple queue support enabled.
For now, queue_id is still set with VIRTIO_RXQ or VIRTIO_TXQ,
so it should not break anything.
Sig
All queue pairs, including the default (the first) queue pair,
are allocated dynamically, when a vring_call message is received
for the first time for a specific queue pair.
This is a refactor work for enabling vhost-user multiple queue;
it should not break anything as it does no functional changes:
we do
To tell the frontend (qemu) how many queue pairs we support.
It is initialized to VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX.
Signed-off-by: Yuanhan Liu
---
lib/librte_vhost/vhost_user/vhost-net-user.c | 7 +++
lib/librte_vhost/vhost_user/vhost-net-user.h | 1 +
2 files changed, 8 insertions(+)
dif
The two protocol feature messages were introduced by the qemu vhost
maintainer (Michael) for extending the vhost-user interface. Here is
an excerpt from the vhost-user spec:
Any protocol extensions are gated by protocol feature bits,
which allows full backwards compatibility on both master
an
This patch set enables vhost-user multiple queues.
Overview
It depends on some QEMU patches that have already been merged upstream.
Those qemu patches introduce some new vhost-user messages for vhost-user
mq enabling negotiation. Here are the main negotiation steps (Qemu
as master, and
On 10/09/2015 01:03 PM, Montorsi, Francesco wrote:
> Hi Panu,
>
>
>
>> -Original Message-
>> From: Panu Matilainen [mailto:pmatilai at redhat.com]
>> Sent: Friday, 9 October 2015 10:26
>> To: Montorsi, Francesco ; Thomas Monjalon
>>
>> Cc: dev at dpdk.org
>> Subject: Re: [dpdk-dev] rte_eal
2015-09-08 12:57, Dumitrescu, Cristian:
> From: Singh, Jasvinder
> > This patchset links to ABI change announced for librte_table. For lpm table,
> > name parameter has been included in LPM table parameters structure.
> > It will eventually allow applications to create more than one instances
> > o
On 10/08/2015 05:58 PM, Montorsi, Francesco wrote:
> Hi,
>
>> -Original Message-
>> From: Thomas Monjalon [mailto:thomas.monjalon at 6wind.com]
>> Sent: Wednesday, 2 September 2015 15:10
>> To: Montorsi, Francesco
>> Cc: dev at dpdk.org; Bruce Richardson
>> Subject: Re: [dpdk-dev] rte_eal_
Hi all,
I am running Fedora Rawhide with the latest Linux kernel (4.3.0-rc4),
and the latest DPDK no longer compiles, with the error message "struct
pci_dev has no member msi_list".
This is due to kernel commit
4a7cc831670550e6b48ef5760e7213f89935ff0d,
which is now in v4.3-rc1, v4.3-rc2 and v4.
> > It seems the patch missed the boat :)
>
> Correct, sorry. I'm attaching it now.
Ok, for some reason the email client is removing the attachment... I'm copying
and pasting it:
(the points marked as TODO are functions that still contain rte_panic()
calls...)
dpdk-2.1.0/lib/librte_eal/
Hi,
> I just recognized that this dead loop is the same one that I have
> experienced (see
> http://dpdk.org/ml/archives/dev/2015-October/024737.html for reference).
> Just applying the changes in this patch (only 07/12) will not fix the
> dead loop at least in my setup.
Yes, exactly. I observ
Hi Panu,
> -Original Message-
> From: Panu Matilainen [mailto:pmatilai at redhat.com]
> Sent: Friday, 9 October 2015 10:26
> To: Montorsi, Francesco ; Thomas Monjalon
>
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] rte_eal_init() alternative?
>
> > Something like the attached patch.