This single patch is for people to get familiar with the optimization and
for collecting feedback.
It isn't split up because it is straightforward.
The cleanups aren't finished yet.
The description and illustration of the idea are in a previous mail titled
"virtio optimization idea".
---
config
Hello everyone.
Investigating the IXGBE driver, I found the mspdc counter (MAC Short Packet Discard),
and I am wondering why this counter is not used in the calculation of total RX
errors (the ierrors field in the rte_eth_stats structure). Is it already part of
another counter, for example, rlec (Receive L
On 9/17/2015 1:25 AM, Kyle Larose wrote:
> Hi Huawei,
>
>> Kyle:
>> Could you tell us how you produced this issue: a very small pool size,
>> or are you using the pipeline model?
> If I understand correctly, by pipeline model you mean a model whereby
> multiple threads handle a given packet, with some
On 9/14/2015 5:44 AM, Thomas Monjalon wrote:
> Hi,
>
> 2015-09-11 12:32, Kyle Larose:
>> Looking through the version tree for virtio_rxtx.c, I saw the following
>> commit:
>>
>> http://dpdk.org/browse/dpdk/commit/lib/librte_pmd_virtio?id=8c09c20fb4cde76e53d87bd50acf2b441ecf6eb8
>>
>> Does anybody k
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Igor Ryzhov
> Hello everyone.
Hi Igor,
> Investigating IXGBE driver I found an mspdc counter (MAC Short Packet
> Discard). And I am wondering why this counter is not used in the calculation
> of total RX errors (ierrors field in rte_eth_st
This is a virtual PMD which communicates with COMBO-80G and COMBO-100G
cards through the sze2 layer. Communication with the COMBO card is managed
through the interface provided by the libsze2 library and kernel modules
(combov3, szedata2_cv3).
To compile and use the PMD, it is necessary to have the libsze2 library installe
Add virtual PMD which communicates with COMBO cards through sze2
layer using libsze2 library.
Since link_speed is a uint16_t, it cannot hold the value for 100G
speed; therefore link_speed is set to ETH_LINK_SPEED_10G until the
type of link_speed is resolved.
v2:
Code cleanup.
Fix error handling b
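As a quick arithmetic check of the uint16_t limitation above, a minimal
standalone C illustration (the speed-in-Mbps convention is assumed for
illustration; nothing here comes from the szedata2 sources):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* link_speed is a uint16_t holding the speed in Mbps. */
        uint64_t speed_100g_mbps = 100000; /* 100 Gbps in Mbps */

        /* UINT16_MAX is 65535, so the 100G value cannot be stored;
         * hence the ETH_LINK_SPEED_10G fallback mentioned above. */
        printf("fits in uint16_t? %s\n",
               speed_100g_mbps <= UINT16_MAX ? "yes" : "no"); /* no */
        return 0;
}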
Add a new RX function for handling scattered packets.
Signed-off-by: Matej Vido
Reviewed-by: Jan Viktorin
---
drivers/net/szedata2/rte_eth_szedata2.c | 356 +++-
1 file changed, 354 insertions(+), 2 deletions(-)
diff --git a/drivers/net/szedata2/rte_eth_szedata2.c
b/
TX function modified to handle chained mbufs.
Signed-off-by: Matej Vido
Reviewed-by: Jan Viktorin
---
drivers/net/szedata2/rte_eth_szedata2.c | 108 +++-
1 file changed, 91 insertions(+), 17 deletions(-)
diff --git a/drivers/net/szedata2/rte_eth_szedata2.c
b/driver
Signed-off-by: Matej Vido
Reviewed-by: Jan Viktorin
---
doc/guides/nics/index.rst| 1 +
doc/guides/nics/szedata2.rst | 105 +++
doc/guides/prog_guide/source_org.rst | 1 +
3 files changed, 107 insertions(+)
create mode 100644 doc/guides/ni
Add szedata2 PMD to 2.2 release notes.
Signed-off-by: Matej Vido
Reviewed-by: Jan Viktorin
---
doc/guides/rel_notes/release_2_2.rst | 4
1 file changed, 4 insertions(+)
diff --git a/doc/guides/rel_notes/release_2_2.rst
b/doc/guides/rel_notes/release_2_2.rst
index 682f468..c78f94d 100644
Hi Folks,
While doing some implementation with dpdk-1.7.0 on the 82599 NIC, I am facing
an issue. If I am sending packets to the NIC at line rate and the
application which I developed using DPDK is not processing the packets
fast enough, then the NIC starts to drop packets (of course, thi
I have seen that the API definition says nothing about accuracy, but some PMD
implementations sacrifice accuracy for the sake of performance. If I'm not
misreading the code, i40e and ixgbe check the DD bit just for the
first descriptor in a group of 4, and they take all of them as used if the
firs
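If I read the description right, the inaccuracy boils down to something
like the sketch below; this is a simplified scalar stand-in (the
descriptor layout and DD flag are hypothetical), not the actual SIMD RX
path of i40e/ixgbe:

#include <stdint.h>

#define DD_BIT     (1u << 0)  /* "descriptor done" flag, simplified */
#define GROUP_SIZE 4

struct rx_desc {              /* hypothetical stand-in for the HW layout */
        uint32_t status;
};

/* Step through the ring four descriptors at a time, testing the DD
 * bit of the first descriptor in each group only, and counting the
 * whole group as used when it is set.  Fast, but up to three
 * not-yet-done descriptors can be counted as completed. */
static int count_done_inaccurate(const struct rx_desc *ring, int n)
{
        int done = 0;
        for (int i = 0; i + GROUP_SIZE <= n; i += GROUP_SIZE) {
                if (!(ring[i].status & DD_BIT))
                        break;
                done += GROUP_SIZE;
        }
        return done;
}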
Hi Thomas,
On Wed, Sep 02, 2015 at 04:18:33PM +0200, Thomas Monjalon wrote:
> > First, it would be easier for us to ship a single binary package that
> > ships a single shared library to cover all of DPDK that library
> > consumers might need, rather than having it split up as you do. I
> > unders
I'm a newbie, testing DPDK KNI with a 1G Intel NIC.
According to my understanding of the DPDK documents,
KNI should not raise interrupts when sending/receiving packets.
But when I transmit a bunch of packets to my KNI ports,
the 'top' command shows ksoftirqd at 50% CPU load.
Would you give me some comm
Hello, Harry.
Thank you, I'll wait for the result of the mspdc testing.
About rte_eth_stats - I found that the non-generic fields of the structure
are all deprecated already. I will research the xstats API, thank you.
Best regards,
Igor
> On 18 Sep 2015, at 11:04, Van Haaren, Harry wrote:
>
>> From
This patch set enables vhost-user multiple queues.
Overview
It depends on some QEMU patches that, hopefully, will be merged soon.
Those QEMU patches introduce some new vhost-user messages for vhost-user
mq enabling negotiation. Here are the main negotiation steps (QEMU
as master, and DPD
The two protocol-features messages are introduced by the QEMU vhost
maintainer (Michael) for extending the vhost-user interface. Here is
an excerpt from the vhost-user spec:
Any protocol extensions are gated by protocol feature bits,
which allows full backwards compatibility on both master
an
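The compatibility argument in that excerpt is simple to state in code;
here is a minimal sketch (VHOST_USER_PROTOCOL_F_MQ is bit 0 per the
spec, while the helper names are just for illustration):

#include <stdint.h>

#define VHOST_USER_PROTOCOL_F_MQ 0  /* protocol feature bit position */

/* Each side advertises a bitmask of the protocol features it knows;
 * only bits present in BOTH masks may be used.  An old peer that
 * never sets a bit never negotiates the extension, which is what
 * gives full backwards compatibility. */
static inline uint64_t negotiate_protocol_features(uint64_t ours,
                                                   uint64_t theirs)
{
        return ours & theirs;
}

static inline int mq_negotiated(uint64_t features)
{
        return (features >> VHOST_USER_PROTOCOL_F_MQ) & 1;
}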
To tell the frontend (QEMU) how many queue pairs we support.
It is initialized to VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX.
Signed-off-by: Yuanhan Liu
---
lib/librte_vhost/vhost_user/vhost-net-user.c | 7 +++
lib/librte_vhost/vhost_user/vhost-net-user.h | 1 +
2 files changed, 8 insertions(+)
dif
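A rough sketch of what answering that query looks like on the vhost
side (the message struct here is a heavily simplified stand-in, not the
vhost-net-user.c definitions; VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX is 0x8000
per the virtio spec):

#include <stdint.h>

#define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX 0x8000

struct vhost_user_msg {  /* simplified stand-in */
        uint32_t request;
        uint64_t u64;    /* payload for integer replies */
};

/* Reply to the frontend's "how many queue pairs?" query with the
 * maximum we support. */
static void handle_get_queue_num(struct vhost_user_msg *msg)
{
        msg->u64 = VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX;
}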
All queue pairs, including the default (the first) queue pair,
are allocated dynamically, when a vring_call message is received
for the first time for a specific queue pair.
This is refactoring work for enabling vhost-user multiple queues;
it should not break anything, as it makes no functional changes:
we do
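The allocate-on-first-vring_call idea could be sketched as follows; the
types and field names are illustrative, not the patch's:

#include <stdlib.h>

struct vhost_virtqueue { int kickfd; /* ... simplified ... */ };

struct virtio_dev {
        struct vhost_virtqueue **virtqueue;  /* slots preallocated */
        unsigned int allocated_pairs;
};

/* Allocate a queue pair lazily, the first time a vring_call arrives
 * for it; pairs that already exist are left untouched. */
static int alloc_queue_pair(struct virtio_dev *dev, unsigned int qp)
{
        if (qp < dev->allocated_pairs)
                return 0;                       /* already allocated */

        struct vhost_virtqueue *vq = calloc(2, sizeof(*vq));
        if (vq == NULL)
                return -1;
        dev->virtqueue[qp * 2] = &vq[0];        /* RX */
        dev->virtqueue[qp * 2 + 1] = &vq[1];    /* TX */
        dev->allocated_pairs = qp + 1;
        return 0;
}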
From: Changchun Ouyang
Do not use VIRTIO_RXQ or VIRTIO_TXQ anymore; use the queue_id
instead, which will be set to a proper value for a specific queue
when we have multiple queue support enabled.
For now, queue_id is still set with VIRTIO_RXQ or VIRTIO_TXQ,
so it should not break anything.
Sig
From: Changchun Ouyang
This message is used to enable/disable a specific vring queue pair.
The first queue pair is enabled by default.
Signed-off-by: Changchun Ouyang
Signed-off-by: Yuanhan Liu
---
lib/librte_vhost/rte_virtio_net.h | 1 +
lib/librte_vhost/vhost_rxtx.c
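From the datapath's point of view the new message amounts to a
per-vring flag; a sketch under that assumption (the enabled field and
handler name are illustrative):

#include <stdint.h>

struct vring_state {        /* illustrative per-vring bookkeeping */
        int enabled;
};

/* The first queue pair (vrings 0 and 1) is enabled by default;
 * the rest wait for an explicit enable message from the master. */
static void init_vrings(struct vring_state *vr, int n)
{
        for (int i = 0; i < n; i++)
                vr[i].enabled = (i < 2);
}

static void handle_set_vring_enable(struct vring_state *vr,
                                    uint32_t index, uint32_t state)
{
        vr[index].enabled = (state != 0);
}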
Destroy the corresponding device when a VHOST_USER_RESET_OWNER message is
received; otherwise, the vhost-switch would still try to access the vq
of that device, which results in a SIGSEGV fault and crashes vhost-switch
in the end.
Signed-off-by: Changchun Ouyang
Signed-off-by: Yuanhan Liu
---
lib/librt
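The fix boils down to tearing the device down in the message handler
instead of leaving stale vq pointers behind; a hedged pseudo-handler
(all names illustrative):

struct virtio_dev;                          /* opaque, illustrative */

void notify_destroy(struct virtio_dev *d);  /* app callback (assumed) */
void cleanup_device(struct virtio_dev *d);  /* frees vqs/fds (assumed) */

/* Before the fix, VHOST_USER_RESET_OWNER left the device registered,
 * so vhost-switch kept dereferencing its now-stale virtqueues and
 * eventually hit SIGSEGV.  Destroying the device here closes that
 * window. */
void user_reset_owner(struct virtio_dev *dev)
{
        notify_destroy(dev);
        cleanup_device(dev);
}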
From: Changchun Ouyang
Fix the max virtio queue pair read issue.
The control queue can't work in vhost-user multiple queue mode,
so introduce a counter to avoid the dead loop when polling
the control queue.
Signed-off-by: Changchun Ouyang
Signed-off-by: Yuanhan Liu
---
drivers/net/virtio/virtio_
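The dead-loop guard described above might look roughly like this; a
sketch with invented names, standing in for the PMD's control-queue
polling path:

#define CQ_POLL_LIMIT 100   /* arbitrary bound for the sketch */

struct ctrl_queue { int used_idx, expected_idx; };

static int cq_response_ready(const struct ctrl_queue *cq)
{
        return cq->used_idx == cq->expected_idx;
}

/* With vhost-user mq the control queue is not serviced, so an
 * unbounded busy-wait would spin forever; a retry counter turns the
 * dead loop into a bounded poll that can fail gracefully. */
static int poll_ctrl_queue(struct ctrl_queue *cq)
{
        for (int retries = 0; retries < CQ_POLL_LIMIT; retries++) {
                if (cq_response_ready(cq))
                        return 0;
        }
        return -1;  /* gave up: control queue is not being processed */
}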
By setting the VHOST_USER_PROTOCOL_F_MQ protocol feature bit and the
VIRTIO_NET_F_MQ feature bit.
Signed-off-by: Yuanhan Liu
---
lib/librte_vhost/vhost_user/virtio-net-user.h | 4 +++-
lib/librte_vhost/virtio-net.c | 1 +
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/lib
From: Changchun Ouyang
The new API rte_vhost_core_id_set() binds a virtq to a specific
core, while the other API, rte_vhost_core_id_get(), gets the
bound core for a virtq.
The usage, which will be introduced soon, can be found at examples/vhost/main.c.
Signed-off-by: Changchun Ouy
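Going only by the names above, usage could look something like the
following; the signatures are guesses for illustration (the cover
letter points to examples/vhost/main.c for the real usage):

#include <stdint.h>

/* Assumed shapes of the two new APIs named above. */
int rte_vhost_core_id_set(int dev_id, uint16_t virtq_idx, uint32_t core_id);
uint32_t rte_vhost_core_id_get(int dev_id, uint16_t virtq_idx);

/* Bind each virtq to a core round-robin, so a fixed lcore drains a
 * fixed queue. */
static void pin_virtqs(int dev_id, uint16_t nb_virtq,
                       const uint32_t *cores, int nb_cores)
{
        for (uint16_t q = 0; q < nb_virtq; q++)
                rte_vhost_core_id_set(dev_id, q, cores[q % nb_cores]);
}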
From: Changchun Ouyang
In a non-SRIOV environment, VMDq RSS can be enabled via the MRQC register.
In theory, the queue number per pool could be 2 or 4, but only 2 queues
are available due to a HW limitation; the same limit also exists in the
Linux ixgbe driver.
Signed-off-by: Changchun Ouyang
Signed-off-b
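On the application side, asking for this mode is a matter of the port's
mq_mode; a hedged sketch using the standard rte_eth_conf fields (the
2-queues-per-pool cap itself is enforced inside the PMD, per the commit
message):

#include <rte_ethdev.h>

/* Request VMDq + RSS receive mode: packets are first sorted into
 * pools by the VMDq filters, then RSS-hashed onto the queues inside
 * each pool -- 2 per pool on this HW, as noted above. */
static const struct rte_eth_conf port_conf = {
        .rxmode = {
                .mq_mode = ETH_MQ_RX_VMDQ_RSS,
        },
        .rx_adv_conf = {
                .rss_conf = {
                        .rss_hf = ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP,
                },
        },
};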
From: Changchun Ouyang
This patch demonstrates the usage of the vhost mq feature by leveraging
the VMDq+RSS HW feature to receive packets and distribute them into
different queues in the pool according to the 5-tuple.
The queue number is specified by the --rxq option.
HW queue numbers in pool is exactly s
From: Changchun Ouyang
Signed-off-by: Changchun Ouyang
Signed-off-by: Yuanhan Liu
---
examples/vhost/main.c | 97 +--
1 file changed, 56 insertions(+), 41 deletions(-)
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 23b7aa7..06a
Sorry, I typed the wrong email address for Changchun; I will resend them.
Sorry for the noise.
--yliu
On Fri, Sep 18, 2015 at 11:01:01PM +0800, Yuanhan Liu wrote:
> This patch set enables vhost-user multiple queues.
>
> Overview
>
>
> It depends on some QEMU patches that, hopefu
On Thu, Sep 17, 2015 at 09:28:31PM +0100, Zoltan Kiss wrote:
> Hi,
>
> The recv function does a prefetch on cacheline1, however it seems to me that
> rx_pkts[pos] should be an uninitialized pointer at that time:
>
> http://dpdk.org/browse/dpdk/tree/drivers/net/ixgbe/ixgbe_rxtx_vec.c#n287
>
> So I g
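For readers without the tree at hand, the suspicious shape being
described is roughly this; a contrived reduction, not the actual
ixgbe_rxtx_vec.c code:

#include <rte_mbuf.h>
#include <rte_prefetch.h>

/* rx_pkts[pos] is only assigned later in the loop body, so issuing
 * the prefetch here reads an uninitialized mbuf pointer. */
static void questionable_prefetch(struct rte_mbuf **rx_pkts, int pos)
{
        rte_prefetch0(&rx_pkts[pos]->cacheline1); /* rx_pkts[pos] unset! */
        /* ... descriptors are parsed and rx_pkts[pos] written below ... */
}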
Also fixed a bug in many of them: if the rte_malloc of
the TAILQ fails, we return a pointer to some arbitrary
existing struct.
---
lib/librte_acl/rte_acl.c | 53 +--
lib/librte_hash/rte_cuckoo_hash.c | 6 +++--
lib/librte_hash/rte_fbk_hash.c
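The bug class reads like the following shape, with the fix being an
explicit NULL return on allocation failure; an illustrative reduction,
not the exact DPDK code:

#include <rte_malloc.h>
#include <sys/queue.h>

struct thing { TAILQ_ENTRY(thing) next; int id; };
TAILQ_HEAD(thing_list, thing);
static struct thing_list head = TAILQ_HEAD_INITIALIZER(head);

struct thing *thing_create(int id)
{
        struct thing *t = rte_zmalloc("thing", sizeof(*t), 0);

        /* Buggy variants fell through after a failed allocation and
         * returned whatever entry a preceding lookup loop had left
         * in its cursor variable -- a pointer to some arbitrary
         * existing struct.  Returning NULL is the fix. */
        if (t == NULL)
                return NULL;

        t->id = id;
        TAILQ_INSERT_TAIL(&head, t, next);
        return t;
}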
The DPDK package lacks a mechanism to install libraries, headers,
applications and kernel modules to a file system tree.
This patch set allows installing files according to the following
proposal:
http://www.freedesktop.org/software/systemd/man/file-hierarchy.html
By adding a parameter H=1 (hierarchy-fi
Add hierarchy-file support to the DPDK scripts, tools, examples,
makefiles and config files when invoking "make install H=1"
(hierarchy-file)
This hierarchy is based on:
http://www.freedesktop.org/software/systemd/man/file-hierarchy.html
and dpdk spec file.
scripts, tools, examples, makefiles and
Add hierarchy-file support to the DPDK bind scripts,
when invoking "make install H=1" (hierarchy-file)
This hierarchy is based on:
http://www.freedesktop.org/software/systemd/man/file-hierarchy.html
and dpdk spec file
bind scripts will be installed in:
$(DESTDIR)/usr/sbin/dpdk_nic_bind
Signed-of
Add hierarchy-file support to the DPDK documentation,
when invoking "make install H=1" (hierarchy-file)
This hierarchy is based on:
http://www.freedesktop.org/software/systemd/man/file-hierarchy.html
and dpdk spec file
documentation will be installed in:
$(DESTDIR)/usr/share/doc/dpdk
Signed-off-
Add hierarchy-file support to the DPDK app files,
when invoking "make install H=1" (hierarchy-file)
This hierarchy is based on:
http://www.freedesktop.org/software/systemd/man/file-hierarchy.html
app files will be installed in: $(DESTDIR)/usr/bin
Signed-off-by: Mario Carrillo
---
mk/rte.app.mk
Add hierarchy-file support to the DPDK headers,
when invoking "make install H=1" (hierarchy-file)
This hierarchy is based on:
http://www.freedesktop.org/software/systemd/man/file-hierarchy.html
headers will be installed in: $(DESTDIR)/usr/include
Signed-off-by: Mario Carrillo
---
mk/internal/r
Add hierarchy-file support to the DPDK libs,
when invoking "make install H=1" (hierarchy-file)
This hierarchy is based on:
http://www.freedesktop.org/software/systemd/man/file-hierarchy.html
For this case, if the architecture is 64-bit, libs will be
installed in: $(DESTDIR)/usr/lib64, else it will
Add hierarchy-file support to the DPDK modules for linux,
when invoking "make install H=1" (hierarchy-file)
This hierarchy is based on:
http://www.freedesktop.org/software/systemd/man/file-hierarchy.html
modules will be installed in: $(DESTDIR)/lib/modules
Signed-off-by: Mario Carrillo
---
mk/