[PATCH v2] igc: fix invalid length and corrupted multi-segment mbufs

2024-11-01 Thread Martin Weiser
timestamp but the data offset of the follow-up mbufs was not adjusted accordingly. This caused 16 bytes of packet data to be missing between the segments. Signed-off-by: Martin Weiser --- v2: * Added comments for clarification. drivers/net/igc/igc_txrx.c | 26 ++ 1 file

Re: [PATCH] igc: fix invalid length and corrupted multi-segment mbufs

2024-11-01 Thread Martin Weiser
Hi Bruce, thank you very much for your feedback. Please see my answers inline below. I will send a v2 of the patch. Best regards, Martin Am 29.10.24 um 18:42 schrieb Bruce Richardson: > On Mon, Oct 28, 2024 at 03:17:07PM +0100, Martin Weiser wrote: >> >> The issue only appeare

[PATCH] igc: fix invalid length and corrupted multi-segment mbufs

2024-10-28 Thread Martin Weiser
timestamp but the data offset of the follow-up mbufs was not adjusted accordingly. This caused 16 bytes of packet data to be missing between the segments. Signed-off-by: Martin Weiser --- drivers/net/igc/igc_txrx.c | 9 + 1 file changed, 9 insertions(+) diff --git a/drivers/net/igc/igc_txrx.c

Re: [PATCH v2] net/ice: write rx timestamp to the first mbuf segment in scattered rx

2023-08-08 Thread Martin Weiser
:39 schrieb Martin Weiser: Previously, the rx timestamp was written to the last segment of the mbuf chain, which was unexpected. Signed-off-by: Martin Weiser --- drivers/net/ice/ice_rxtx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ice/ice_rxtx.c b/drivers

[PATCH v2] net/ice: write rx timestamp to the first mbuf segment in scattered rx

2023-08-08 Thread Martin Weiser
Previously, the rx timestamp was written to the last segment of the mbuf chain, which was unexpected. Signed-off-by: Martin Weiser --- drivers/net/ice/ice_rxtx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index

[PATCH] net/ice: write rx timestamp to the first mbuf segment in scattered rx

2023-08-08 Thread Martin Weiser
Previously, the rx timestamp was written to the last segment of the mbuf chain, which was unexpected. Signed-off-by: Martin Weiser ---  drivers/net/ice/ice_rxtx.c | 2 +-  1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index

Re: [dpdk-dev] [PATCH] net/af_xdp: fix integer overflow in umem size calculation

2020-10-29 Thread Martin Weiser
n our systems with kernel 5.8. Are you aware of any such kernel-side limitation? This basically makes af_xdp unusable with mempools larger than 2GB. Best regards, Martin Am 29.10.20 um 12:25 schrieb Martin Weiser: > The multiplication of two u32 integers may cause an overflow with large > me

[dpdk-dev] [PATCH] net/af_xdp: fix integer overflow in umem size calculation

2020-10-29 Thread Martin Weiser
The multiplication of two u32 integers may cause an overflow with large mempool sizes. Fixes: 74b46340e2d4 ("net/af_xdp: support shared UMEM") Cc: ciara.lof...@intel.com Signed-off-by: Martin Weiser --- drivers/net/af_xdp/rte_eth_af_xdp.c | 3 ++- 1 file changed, 2 insertions(+),

Re: [dpdk-dev] net/ixgbe: ixgbe_dev_link_update_share() leaks memory and memory mappings due to not cleaning up pthreads

2020-04-09 Thread Martin Weiser
Sorry, please ignore my previous statement about this having been reworked in master. I was comparing to the wrong checkout. This issue seems to be still present in the current master. Am 09.04.20 um 14:06 schrieb Martin Weiser: > Hi, > > I should have mentioned that our findings appl

Re: [dpdk-dev] net/ixgbe: ixgbe_dev_link_update_share() leaks memory and memory mappings due to not cleaning up pthreads

2020-04-09 Thread Martin Weiser
Hi, I should have mentioned that our findings apply to DPDK 20.02. I can see in master that this since has been reworked to use rte_eal_alarm_set() instead of using a thread. But maybe this should be addressed in stable? Best regards, Martin Weiser Am 09.04.20 um 12:30 schrieb Martin Weiser

[dpdk-dev] net/ixgbe: ixgbe_dev_link_update_share() leaks memory and memory mappings due to not cleaning up pthreads

2020-04-09 Thread Martin Weiser
calls e.g. rte_eth_link_get_nowait() on an ixgbe interface with no link this causes a lot of pthreads never to be cleaned up. Since each thread holds a mmap to the stack this can quite quickly exhaust the allowed number of memory mappings for the process. Best regards, Martin Weiser

Re: [dpdk-dev] i40e rte_eth_link_get_nowait() on X722 returns wrong link_speed value 20000 instead of 10000

2019-04-09 Thread Martin Weiser
Hi, just bumping this since there has been no reply at all for a long time. Would it be better if I opened a bug for this? Best, Martin Am 22.01.19 um 16:07 schrieb Martin Weiser: > Hi, > > We are using a Xeon D with an integrated X722 NIC that provides two > ports of 8086:37d2 a

[dpdk-dev] i40e rte_eth_link_get_nowait() on X722 returns wrong link_speed value 20000 instead of 10000

2019-01-22 Thread Martin Weiser
Hi, We are using a Xeon D with an integrated X722 NIC that provides two ports of 8086:37d2 and two ports of 8086:37d0. All four ports show the same behavior: they return a link speed value of 20000 for a 10Gbps link. This only seems to happen when internally the update_link_reg() function in i40e

[dpdk-dev] DEV_RX_OFFLOAD_SCATTER not available in i40e and cxgbe

2018-09-18 Thread Martin Weiser
Hi, is there a specific reason that the rx offload capability DEV_RX_OFFLOAD_SCATTER is not available in the i40e and cxgbe drivers in DPDK 18.08? We previously used this feature with DPDK 17.11 to handle jumbo frames while using 2k mbufs and it worked without a problem. It also seems that simply

[dpdk-dev] [PATCH] net/ixgbe: allow for setting 2.5G and 5G speeds on X550

2018-01-26 Thread Martin Weiser
This patch adds support for explicitly selecting 2.5G and 5G speeds on X550. Signed-off-by: Martin Weiser --- drivers/net/ixgbe/ixgbe_ethdev.c | 21 +++-- 1 file changed, 19 insertions(+), 2 deletions(-) diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe

Re: [dpdk-dev] Mellanox ConnectX-5 crashes and mbuf leak

2017-10-10 Thread Martin Weiser
Hi Yongseok, I can confirm that this patch fixes the crashes and freezing in my tests so far. We still see an issue that once the mbufs run low and reference counts are used as well as freeing of mbufs in processing lcores happens we suddenly lose a large amount of mbufs that will never return to

Re: [dpdk-dev] Mellanox ConnectX-5 crashes and mbuf leak

2017-10-06 Thread Martin Weiser
y applicable to v17.08 as I rebased it on top of > Nelio's flow cleanup patch. But as this is a simple patch, you can easily > apply > it manually. > > Thanks, > Yongseok > > [1] http://dpdk.org/dev/patchwork/patch/29781 > >> On Sep 26, 2017, at 2:23 AM, Mart

[dpdk-dev] Mellanox ConnectX-5 crashes and mbuf leak

2017-09-26 Thread Martin Weiser
Hi, we are currently testing the Mellanox ConnectX-5 100G NIC with DPDK 17.08 as well as dpdk-net-next and are experiencing mbuf leaks as well as crashes (and in some instances even kernel panics in a mlx5 module) under certain load conditions. We initially saw these issues only in our own DPDK-b

[dpdk-dev] [PATCH v3] cxgbe: report 100G link speed capability for Chelsio T6 adapters

2017-06-22 Thread Martin Weiser
These adapters support 100G link speed but the speed_capa bitmask in the device info did not reflect that. Signed-off-by: Martin Weiser --- drivers/net/cxgbe/cxgbe_ethdev.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c

Re: [dpdk-dev] [PATCH v2] cxgbe: report 100G link speed capability for Chelsio T6 adapters

2017-06-22 Thread Martin Weiser
>> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Martin Weiser >> Sent: Thursday, June 22, 2017 9:58 AM >> To: rahul.lakkire...@chelsio.com >> Cc: dev@dpdk.org; Martin Weiser >> Subject: [dpdk-dev] [PATCH v2] cxgbe: report 100G link speed capability

[dpdk-dev] [PATCH v2] cxgbe: report 100G link speed capability for Chelsio T6 adapters

2017-06-22 Thread Martin Weiser
These adapters support 100G link speed but the speed_capa bitmask in the device info did not reflect that. Signed-off-by: Martin Weiser --- drivers/net/cxgbe/cxgbe_ethdev.c | 6 +- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net

[dpdk-dev] [PATCH] cxgbe: report 100G link speed capability for Chelsio T6 adapters

2017-06-22 Thread Martin Weiser
These adapters support 100G link speed but the speed_capa bitmask in the device info did not reflect that. Signed-off-by: Martin Weiser --- drivers/net/cxgbe/cxgbe_ethdev.c | 6 +- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net

Re: [dpdk-dev] XL710 with i40e driver drops packets on RX even on a small rates.

2017-01-06 Thread Martin Weiser
working as expected with PCIe x8 v3. Best regards, Martin On 04.01.17 13:33, Martin Weiser wrote: > Hello, > > I have performed some more thorough testing on 3 different machines to > illustrate the strange results with XL710. > Please note that all 3 systems were able to forward

Re: [dpdk-dev] XL710 with i40e driver drops packets on RX even on a small rates.

2017-01-04 Thread Martin Weiser
hugepages=1 isolcpus=1-5,7-11 ### Test 1 No packets lost. ### Test 2 No packets lost. ### Test 3 No packets lost. Best regards, Martin On 03.01.17 13:18, Martin Weiser wrote: > Hello, > > we are also seeing this issue on one of our test systems while it does > not occur o

Re: [dpdk-dev] XL710 with i40e driver drops packets on RX even on a small rates.

2017-01-03 Thread Martin Weiser
Hello, we are also seeing this issue on one of our test systems while it does not occur on other test systems with the same DPDK version (we tested 16.11 and current master). The system that we can reproduce this issue on also has a X552 ixgbe NIC which can forward the exact same traffic using th

[dpdk-dev] i40e: disabling flow control makes XL710 NIC discard all packets

2015-11-05 Thread Martin Weiser
We will try to reproduce what have seen in our lab, and then debug. > > Regards, > Helin > >> -Original Message- >> From: Martin Weiser [mailto:martin.weiser at allegro-packets.com] >> Sent: Wednesday, November 4, 2015 6:17 PM >> To: Zhang, Helin; dev at dpdk.or

[dpdk-dev] ixgbe: ierrors counter spuriously increasing in DPDK 2.1

2015-11-04 Thread Martin Weiser
On 04.11.15 16:54, Van Haaren, Harry wrote: >> From: Martin Weiser [mailto:martin.weiser at allegro-packets.com] >> Subject: Re: [dpdk-dev] ixgbe: ierrors counter spuriously increasing in DPDK >> 2.1 >> The >> rx-error which showed up immediately after starting the i

[dpdk-dev] i40e: disabling flow control makes XL710 NIC discard all packets

2015-11-04 Thread Martin Weiser
Hi Helin, I have been doing some tests with the current DPDK master to see if the issues we had with performance and statistics have improved. In our own applications we usually disable flow control using the following code: struct rte_eth_fc_conf fc_conf = { .mode = RTE_FC_NONE }; int ret = rte_

[dpdk-dev] ixgbe: ierrors counter spuriously increasing in DPDK 2.1

2015-11-04 Thread Martin Weiser
regardless of the actual NIC. What do you think? Regards, Martin On 02.11.15 18:32, Van Haaren, Harry wrote: >> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Martin Weiser >> Sent: Wednesday, October 21, 2015 9:38 AM >> To: dev at dpdk.org >> Subject: [dpdk-dev

[dpdk-dev] ixgbe: account more Rx errors Issue

2015-10-22 Thread Martin Weiser
On 14.09.15 11:50, Tahhan, Maryam wrote: >> From: Kyle Larose [mailto:eomereadig at gmail.com] >> Sent: Wednesday, September 9, 2015 6:43 PM >> To: Tahhan, Maryam >> Cc: Olivier MATZ; Andriy Berestovskyy; dev at dpdk.org >> Subject: Re: [dpdk-dev] ixgbe: account more Rx errors Issue >> >> >> On Mo

[dpdk-dev] i40e: problem with rx packet drops not accounted in statistics

2015-10-22 Thread Martin Weiser
on this fix. > But what do you mean the performance problem? Did you mean the performance > number is not good as expected, or else? > > Regards, > Helin > >> -Original Message- >> From: Martin Weiser [mailto:martin.weiser at allegro-packets.com] >> Sent:

[dpdk-dev] ixgbe: ierrors counter spuriously increasing in DPDK 2.1

2015-10-22 Thread Martin Weiser
ded counters from the ierrors. > > Here is an example: > https://github.com/Juniper/contrail-vrouter/commit/72f6ca05ac81d0ca5e7eb93c6ffe7a93648c2b00#diff-99c1f65a00658c7d38b3d1b64cb5fd93R1306 > > Regards, > Andriy > > On Wed, Oct 21, 2015 at 10:38 AM, Martin Weiser > wro

[dpdk-dev] i40e: problem with rx packet drops not accounted in statistics

2015-10-21 Thread Martin Weiser
Hi Martin > > Yes, the statistics issue has been reported several times recently. > We will check the issue and try to fix it or get a workaround soon. Thank you > very much! > > Regards, > Helin > >> -----Original Message- >> From: Martin Weiser [mailto:martin

[dpdk-dev] ixgbe: ierrors counter spuriously increasing in DPDK 2.1

2015-10-21 Thread Martin Weiser
Hi, with DPDK 2.1 we are seeing the ierrors counter increasing for 82599ES ports without reason. Even directly after starting test-pmd the error counter immediately is 1 without even a single packet being sent to the device: ./testpmd -c 0xfe -n 4 -- --portmask 0x3 --interactive ... testpmd> show

[dpdk-dev] i40e: problem with rx packet drops not accounted in statistics

2015-09-09 Thread Martin Weiser
Hi Helin, in one of our test setups involving i40e adapters we are experiencing packet drops which are not reflected in the interfaces statistics. The call to rte_eth_stats_get suggests that all packets were properly received but the total number of packets received through rte_eth_rx_burst is les

[dpdk-dev] Issue with non-scattered rx in ixgbe and i40e when mbuf private area size is odd

2015-07-29 Thread Martin Weiser
Hi Helin, Hi Olivier, we are seeing an issue with the ixgbe and i40e drivers which we could track down to our setting of the private area size of the mbufs. The issue can be easily reproduced with the l2fwd example application when a small modification is done: just set the priv_size parameter in

[dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0

2015-01-23 Thread Martin Weiser
regards, Martin On 23.01.15 12:52, Bruce Richardson wrote: > On Fri, Jan 23, 2015 at 12:37:09PM +0100, Martin Weiser wrote: >> Hi Bruce, >> >> I now had the chance to reproduce the issue we are seeing with a DPDK >> example app. >> I started out with a vanilla DPDK

[dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0

2015-01-23 Thread Martin Weiser
t anything else just let me know. Best regards, Martin On 21.01.15 14:49, Bruce Richardson wrote: > On Tue, Jan 20, 2015 at 11:39:03AM +0100, Martin Weiser wrote: >> Hi again, >> >> I did some further testing and it seems like this issue is linked to >> jumbo frames.

[dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0

2015-01-20 Thread Martin Weiser
e anything special to consider regarding jumbo frames when moving from DPDK 1.7 to 1.8 that we might have missed? Martin On 19.01.15 11:26, Martin Weiser wrote: > Hi everybody, > > we quite recently updated one of our applications to DPDK 1.8.0 and are > now seeing a segmentation fau

[dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0

2015-01-19 Thread Martin Weiser
Hi everybody, we quite recently updated one of our applications to DPDK 1.8.0 and are now seeing a segmentation fault in ixgbe_rxtx_vec.c:444 after a few minutes. I just did some quick debugging and I only have a very limited understanding of the code in question but it seems that the 'continue' i