timestamp but the
data offset of the follow-up mbufs was not adjusted accordingly.
This caused 16 bytes of packet data to be missing between
the segments.
Signed-off-by: Martin Weiser
---
v2:
* Added comments for clarification.
drivers/net/igc/igc_txrx.c | 26 ++
1 file
Hi Bruce,
thank you very much for your feedback.
Please see my answers inline below.
I will send a v2 of the patch.
Best regards,
Martin
On 29.10.24 at 18:42, Bruce Richardson wrote:
> On Mon, Oct 28, 2024 at 03:17:07PM +0100, Martin Weiser wrote:
>>
>> The issue only appeare
timestamp but the
data offset of the follow-up mbufs was not adjusted accordingly.
This caused 16 bytes of packet data to be missing between
the segments.
Signed-off-by: Martin Weiser
---
drivers/net/igc/igc_txrx.c | 9 +
1 file changed, 9 insertions(+)
diff --git a/drivers/net/igc/igc_txrx.c
:39, Martin Weiser wrote:
Previously, the rx timestamp was written to the last segment of the mbuf
chain, which was unexpected.
Signed-off-by: Martin Weiser
---
drivers/net/ice/ice_rxtx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers
n our
systems with kernel 5.8.
Are you aware of any such kernel-side limitation? This basically makes
af_xdp unusable with mempools larger than 2GB.
Best regards,
Martin
On 29.10.20 at 12:25, Martin Weiser wrote:
> The multiplication of two u32 integers may cause an overflow with large
> me
The multiplication of two u32 integers may cause an overflow with large
mempool sizes.
Fixes: 74b46340e2d4 ("net/af_xdp: support shared UMEM")
Cc: ciara.lof...@intel.com
Signed-off-by: Martin Weiser
---
drivers/net/af_xdp/rte_eth_af_xdp.c | 3 ++-
1 file changed, 2 insertions(+),
Sorry, please ignore my previous statement about this having been
reworked in master. I was comparing to the wrong checkout.
This issue seems to be still present in the current master.
On 09.04.20 at 14:06, Martin Weiser wrote:
> Hi,
>
> I should have mentioned that our findings appl
Hi,
I should have mentioned that our findings apply to DPDK 20.02. I can see
in master that this has since been reworked to use rte_eal_alarm_set()
instead of using a thread.
But maybe this should be addressed in stable?
Best regards,
Martin Weiser
On 09.04.20 at 12:30, Martin Weiser wrote
calls e.g. rte_eth_link_get_nowait() on
an ixgbe interface with no link, this causes a lot of pthreads that are
never cleaned up.
Since each thread holds an mmap for its stack, this can quite quickly
exhaust the allowed number of memory mappings for the process.
Best regards,
Martin Weiser
Hi,
just bumping this since there has been no reply at all for a long time.
Would it be better if I opened a bug for this?
Best,
Martin
On 22.01.19 at 16:07, Martin Weiser wrote:
> Hi,
>
> We are using a Xeon D with an integrated X722 NIC that provides two
> ports of 8086:37d2 a
Hi,
We are using a Xeon D with an integrated X722 NIC that provides two
ports of 8086:37d2 and two ports of 8086:37d0. All four ports show the
same behavior: they return a link speed value of 2 for a 10Gbps link.
This only seems to happen when internally the update_link_reg() function
in i40e
Hi,
is there a specific reason that the rx offload capability
DEV_RX_OFFLOAD_SCATTER is not available in the i40e and cxgbe drivers in
DPDK 18.08?
We previously used this feature with DPDK 17.11 to handle jumbo frames
while using 2k mbufs and it worked without a problem.
It also seems that simply
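For reference, enabling scattered Rx in the 18.08-era offloads API looked roughly like the sketch below (flag and field names as in the 18.x ethdev headers; actual support should be checked against `dev_info.rx_offload_capa` before setting the bits):

```c
struct rte_eth_conf port_conf = { 0 };

/* Let a jumbo frame span multiple 2k mbufs instead of requiring one
 * buffer large enough to hold the whole frame. */
port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER |
                             DEV_RX_OFFLOAD_JUMBO_FRAME;
port_conf.rxmode.max_rx_pkt_len = 9000;
```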
This patch adds support for explicitly selecting 2.5G and 5G speeds on
X550.
Signed-off-by: Martin Weiser
---
drivers/net/ixgbe/ixgbe_ethdev.c | 21 +++--
1 file changed, 19 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe
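On the application side, explicitly requesting one of those speeds would then look roughly like the following sketch (a hedged illustration using the `ETH_LINK_SPEED_*` names from the ethdev headers of that era, not the patch itself):

```c
struct rte_eth_conf port_conf = { 0 };

/* Force the link to exactly 2.5G instead of auto-negotiating. */
port_conf.link_speeds = ETH_LINK_SPEED_FIXED | ETH_LINK_SPEED_2_5G;
```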
Hi Yongseok,
I can confirm that this patch fixes the crashes and freezing in my tests
so far.
We still see an issue: once the mbufs run low, and reference counts are
used together with freeing of mbufs on processing lcores, we suddenly
lose a large number of mbufs that will never return to
y applicable to v17.08 as I rebased it on top of
> Nelio's flow cleanup patch. But as this is a simple patch, you can easily
> apply
> it manually.
>
> Thanks,
> Yongseok
>
> [1] http://dpdk.org/dev/patchwork/patch/29781
>
>> On Sep 26, 2017, at 2:23 AM, Mart
Hi,
we are currently testing the Mellanox ConnectX-5 100G NIC with DPDK
17.08 as well as dpdk-net-next and are
experiencing mbuf leaks as well as crashes (and in some instances even
kernel panics in a mlx5 module) under
certain load conditions.
We initially saw these issues only in our own DPDK-b
These adapters support 100G link speed but the speed_capa bitmask in the
device info did not reflect that.
Signed-off-by: Martin Weiser
---
drivers/net/cxgbe/cxgbe_ethdev.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
>> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Martin Weiser
>> Sent: Thursday, June 22, 2017 9:58 AM
>> To: rahul.lakkire...@chelsio.com
>> Cc: dev@dpdk.org; Martin Weiser
>> Subject: [dpdk-dev] [PATCH v2] cxgbe: report 100G link speed capability
These adapters support 100G link speed but the speed_capa bitmask in the
device info did not reflect that.
Signed-off-by: Martin Weiser
---
drivers/net/cxgbe/cxgbe_ethdev.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net
working as expected with PCIe x8 v3.
Best regards,
Martin
On 04.01.17 13:33, Martin Weiser wrote:
> Hello,
>
> I have performed some more thorough testing on 3 different machines to
> illustrate the strange results with XL710.
> Please note that all 3 systems were able to forward
hugepages=1
isolcpus=1-5,7-11
### Test 1
No packets lost.
### Test 2
No packets lost.
### Test 3
No packets lost.
Best regards,
Martin
On 03.01.17 13:18, Martin Weiser wrote:
> Hello,
>
> we are also seeing this issue on one of our test systems while it does
> not occur o
Hello,
we are also seeing this issue on one of our test systems while it does
not occur on other test systems with the same DPDK version (we tested
16.11 and current master).
The system that we can reproduce this issue on also has a X552 ixgbe NIC
which can forward the exact same traffic using th
We will try to reproduce what we have seen in our lab, and then debug.
>
> Regards,
> Helin
>
>> -----Original Message-----
>> From: Martin Weiser [mailto:martin.weiser at allegro-packets.com]
>> Sent: Wednesday, November 4, 2015 6:17 PM
>> To: Zhang, Helin; dev at dpdk.or
On 04.11.15 16:54, Van Haaren, Harry wrote:
>> From: Martin Weiser [mailto:martin.weiser at allegro-packets.com]
>> Subject: Re: [dpdk-dev] ixgbe: ierrors counter spuriously increasing in DPDK
>> 2.1
>> The
>> rx-error which showed up immediately after starting the i
Hi Helin,
I have been doing some tests with the current DPDK master to see if the
issues we had with performance and statistics have improved.
In our own applications we usually disable flow control using the
following code:
struct rte_eth_fc_conf fc_conf = { .mode = RTE_FC_NONE };
int ret = rte_
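The truncated call above presumably continues with the standard ethdev flow-control setter; a hedged reconstruction of the usual pattern (`port_id` is assumed to be an initialized port, and this is a sketch rather than the exact original code):

```c
struct rte_eth_fc_conf fc_conf = { .mode = RTE_FC_NONE };

/* Disable both Rx and Tx flow control on the port. */
int ret = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
if (ret != 0)
    RTE_LOG(WARNING, USER1, "disabling flow control failed: %d\n", ret);
```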
regardless of
the actual NIC. What do you think?
Regards,
Martin
On 02.11.15 18:32, Van Haaren, Harry wrote:
>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Martin Weiser
>> Sent: Wednesday, October 21, 2015 9:38 AM
>> To: dev at dpdk.org
>> Subject: [dpdk-dev
On 14.09.15 11:50, Tahhan, Maryam wrote:
>> From: Kyle Larose [mailto:eomereadig at gmail.com]
>> Sent: Wednesday, September 9, 2015 6:43 PM
>> To: Tahhan, Maryam
>> Cc: Olivier MATZ; Andriy Berestovskyy; dev at dpdk.org
>> Subject: Re: [dpdk-dev] ixgbe: account more Rx errors Issue
>>
>>
>> On Mo
on this fix.
> But what do you mean the performance problem? Did you mean the performance
> number is not good as expected, or else?
>
> Regards,
> Helin
>
>> -----Original Message-----
>> From: Martin Weiser [mailto:martin.weiser at allegro-packets.com]
>> Sent:
ded counters from the ierrors.
>
> Here is an example:
> https://github.com/Juniper/contrail-vrouter/commit/72f6ca05ac81d0ca5e7eb93c6ffe7a93648c2b00#diff-99c1f65a00658c7d38b3d1b64cb5fd93R1306
>
> Regards,
> Andriy
>
> On Wed, Oct 21, 2015 at 10:38 AM, Martin Weiser
> wro
Hi Martin
>
> Yes, the statistics issue has been reported several times recently.
> We will check the issue and try to fix it or get a workaround soon. Thank you
> very much!
>
> Regards,
> Helin
>
>> -----Original Message-----
>> From: Martin Weiser [mailto:martin
Hi,
with DPDK 2.1 we are seeing the ierrors counter increasing for 82599ES
ports without reason. Even directly after starting testpmd, the error
counter is already 1 without a single packet having been sent to the
device:
./testpmd -c 0xfe -n 4 -- --portmask 0x3 --interactive
...
testpmd> show
Hi Helin,
in one of our test setups involving i40e adapters we are experiencing
packet drops which are not reflected in the interfaces statistics.
The call to rte_eth_stats_get suggests that all packets were properly
received but the total number of packets received through
rte_eth_rx_burst is les
Hi Helin, Hi Olivier,
we are seeing an issue with the ixgbe and i40e drivers which we could
track down to our setting of the private area size of the mbufs.
The issue can be easily reproduced with the l2fwd example application
when a small modification is done: just set the priv_size parameter in
regards,
Martin
On 23.01.15 12:52, Bruce Richardson wrote:
> On Fri, Jan 23, 2015 at 12:37:09PM +0100, Martin Weiser wrote:
>> Hi Bruce,
>>
>> I now had the chance to reproduce the issue we are seeing with a DPDK
>> example app.
>> I started out with a vanilla DPDK
t anything else just let me know.
Best regards,
Martin
On 21.01.15 14:49, Bruce Richardson wrote:
> On Tue, Jan 20, 2015 at 11:39:03AM +0100, Martin Weiser wrote:
>> Hi again,
>>
>> I did some further testing and it seems like this issue is linked to
>> jumbo frames.
e anything special to consider regarding jumbo frames when moving
from DPDK 1.7 to 1.8 that we might have missed?
Martin
On 19.01.15 11:26, Martin Weiser wrote:
> Hi everybody,
>
> we quite recently updated one of our applications to DPDK 1.8.0 and are
> now seeing a segmentation fau
Hi everybody,
we quite recently updated one of our applications to DPDK 1.8.0 and are
now seeing a segmentation fault in ixgbe_rxtx_vec.c:444 after a few minutes.
I just did some quick debugging and I only have a very limited
understanding of the code in question but it seems that the 'continue'
i