Re: [EXTERNAL] [PATCH v2] app/graph: fix destination buffer too small

2024-07-06 Thread Mahmoud Matook
On 07/04, David Marchand wrote:

> On Thu, Jun 27, 2024 at 2:44 AM Kiran Kumar Kokkilagadda wrote:
> > > On 06/25, Kiran Kumar Kokkilagadda wrote:
> > > > From: Mahmoud Maatuq 
> > > > Sent: Tuesday, June 25, 2024 1:31 AM
> > > > To: Sunil Kumar Kori ; Rakesh Kudurumalla ;
> > > > Subject: [EXTERNAL] [PATCH v2] app/graph: fix destination buffer too
> > > > small
> > > >
> > > > as sizeof(config.rx.mempool_name) is < sizeof(res->mempool) we should
> > > >
> > > > copy at most sizeof(config.rx.mempool_name) and replace memcpy with
> > > >
> > > > strlcpy as mempool name is a null terminated string
> > > >
> > > >
> > > >
> > > > Coverity issue: 415430
> > > >
> > > > Fixes: 3850cb06ab9c ("app/graph: add ethdev commands")
> > > >
> > > > Cc: sk...@marvell.com
> > > >
> > > >
> > > >
> > > > Signed-off-by: Mahmoud Maatuq 
> > > >
> > > > ---
> >
> > Acked-by: Kiran Kumar Kokkilagadda 
> 
> This patch does not pass the CI, please check.

The warning in CI is clear (a truncation warning): the size of the
destination buffer, config.rx.mempool_name, is around 26 bytes, but the
size of the source buffer, res->mempool, is 128 bytes.

Some possible solutions:
  * increase the destination buffer size: tracking it back, we would
  need to increase RTE_MEMZONE_NAMESIZE (more memory consumption).

  * if the mempool name is usually around the same size as the
  destination, we could decrease the size of the source buffer, but
  res->mempool is of type cmdline_fixed_string_t and may be used
  somewhere else, where the smaller size could also produce truncation.

  * change the type of the source buffer and use the same size as the
  destination.

What do you think?
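For reference, a minimal sketch of the strlcpy-based fix (field names as
in the patch context; the surrounding code is assumed):

    /* Copy at most sizeof(config.rx.mempool_name) bytes and guarantee
     * NUL-termination; the old memcpy used the larger source size and
     * overran the destination. */
    strlcpy(config.rx.mempool_name, res->mempool,
            sizeof(config.rx.mempool_name));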

> 
> 
> -- 
> David Marchand
> 


Re: [PATCH 0/5] support queue information get operations

2024-07-06 Thread Ferruh Yigit
On 6/19/2024 3:49 AM, Chaoyong He wrote:
> This patch series adds support for queue information get operations
> for both Rx and Tx.
> 
> Long Wu (5):
>   net/nfp: use offload flag to control RSS configuration
>   net/nfp: use offload flag to control VXLAN configuration
>   net/nfp: use offload flag to control IPsec configuration
>   net/nfp: support getting Rx queue information
>   net/nfp: support getting Tx queue information
>

Series applied to dpdk-next-net/main, thanks.


Re: [PATCH] net/*: replace intrinsic header include with rte_vect

2024-07-06 Thread Ferruh Yigit
On 6/21/2024 10:06 PM, Tyler Retzlaff wrote:
> On Thu, Jun 20, 2024 at 01:32:18PM +0100, Bruce Richardson wrote:
>> Rather than having the SSE code in each driver include tmmintrin.h,
>> which often does not contain all needed intrinsics, e.g.
>> _mm_cvtsi128_si64() for 32-bit x86 builds, we can just replace the
>> include of ?mmintrin.h with rte_vect.h for all network drivers.
>>
>> Signed-off-by: Bruce Richardson 
>> ---
> Acked-by: Tyler Retzlaff 
> 

Applied to dpdk-next-net/main, thanks.
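For illustration, the change in each driver amounts to a header swap
along these lines (a representative sketch, not an actual hunk from the
patch):

    /* Before: a raw intrinsic header that may lack needed intrinsics,
     * e.g. _mm_cvtsi128_si64() on 32-bit x86 builds. */
    #include <tmmintrin.h>

    /* After: rte_vect.h pulls in the appropriate vector intrinsics for
     * the build target. */
    #include <rte_vect.h>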


Re: [PATCH 0/4] support AVX2 instruction Rx/Tx function

2024-07-06 Thread Ferruh Yigit
On 6/19/2024 3:59 AM, Chaoyong He wrote:
> This patch series adds support for Rx/Tx functions using
> AVX2 instructions.
> 
> Long Wu (4):
>   net/nfp: export more interfaces of NFDk
>   net/nfp: support AVX2 Tx function
>   net/nfp: support AVX2 Rx function
>   net/nfp: vector Rx function supports parsing ptype
>

I got some build errors [1], and I can see CI did not run because of an
apply error.
Can you please try locally with GitHub Actions and send a new version?

[1]
- ubuntu-22.04-gcc-static-i386:
Run-time dependency netcope-common found: NO (tried pkgconfig)

drivers/net/nfp/meson.build:56:8: ERROR: Unknown variable
"static_rte_common_nfp".

- ubuntu-22.04-gcc-static-mingw:
drivers/net/nfp/meson.build:56:8: ERROR: Unknown variable
"static_rte_common_nfp".

A full log can be found at
/home/runner/work/dpdk/dpdk/build/meson-logs/meson-log.txt


Re: [PATCH] net/vmxnet3: fix init logs

2024-07-06 Thread Ferruh Yigit
On 6/25/2024 1:22 PM, David Marchand wrote:
> All logs for this driver are emitted under pmd.net.vmxnet3.driver while
> two logtypes exist.
> This issue comes from the conversion to dynamic logtypes.
> Redirect PMD_INIT_LOG to pmd.net.vmxnet3.init.
> 
> Fixes: 79daffdcb6ac ("net/vmxnet3: implement dynamic logging")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: David Marchand 
> 

Reviewed-by: Ferruh Yigit 

Applied to dpdk-next-net/main, thanks.
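For context, a sketch of the two-logtype pattern involved (names and log
levels are illustrative of the usual DPDK dynamic-logging idiom, not
copied from the driver):

    RTE_LOG_REGISTER_SUFFIX(vmxnet3_logtype_init, init, NOTICE);
    RTE_LOG_REGISTER_SUFFIX(vmxnet3_logtype_driver, driver, NOTICE);

    /* The fix points PMD_INIT_LOG at the init logtype instead of the
     * driver logtype. */
    #define PMD_INIT_LOG(level, fmt, args...) \
            rte_log(RTE_LOG_ ## level, vmxnet3_logtype_init, \
                    "%s(): " fmt "\n", __func__, ## args)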



Re: [PATCH 0/6] refactor the service module

2024-07-06 Thread Ferruh Yigit
On 6/19/2024 4:06 AM, Chaoyong He wrote:
> This patch series refactors the service module, removing some specific
> logic and using the service framework logic as much as possible.
> It also adds a device argument to control enabling the CPP service.
> 
> Long Wu (6):
>   net/nfp: fix check logic for device arguments
>   net/nfp: remove redundant NFP service code
>   net/nfp: remove the flower service dead loop
>   net/nfp: fix disable CPP service
>   net/nfp: add CPP service enable option
>   net/nfp: add CPP service abnormal exit logic
>

Series applied to dpdk-next-net/main, thanks.


Re: [PATCH] net/gve: fix RSS hash endianness in DQO format

2024-07-06 Thread Joshua Washington
Thank you for your contribution, Shreesh! Let's get this applied as a
stable fix so that it can be applied to 23.11 as well.
https://doc.dpdk.org/guides/contributing/patches.html, section "Patch
for Stable Releases" should have more detail for how this can be done.

> Bugzilla ID: 1441
Please add a "Fixes" tag as well. For reference, this is the commit
being fixed: net/gve: support basic Rx data path for DQO.
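Concretely, the trailer block in the commit message would look something
like this (the commit id is the first 12 characters of the fixed commit's
hash; shown as a placeholder here, please look it up with git log):

    Fixes: <12-char commit id> ("net/gve: support basic Rx data path for DQO")
    Cc: sta...@dpdk.org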

Thanks!


Re: [PATCH] net/tap/bpf: fix meson warning

2024-07-06 Thread Ferruh Yigit
On 6/25/2024 6:17 PM, Bruce Richardson wrote:
> On Tue, Jun 25, 2024 at 10:06:52AM -0700, Stephen Hemminger wrote:
>> Meson was warning that run_command was used without check and
>> the result of that will change in a future version.
>>
>> In this case, the command (uname -m) should be checked.
>>
>> Signed-off-by: Stephen Hemminger 
>> ---
> Acked-by: Bruce Richardson 
>

Applied to dpdk-next-net/main, thanks.


Re: [PATCH 00/21] support modify field flow action

2024-07-06 Thread Ferruh Yigit
On 6/19/2024 10:13 AM, Chaoyong He wrote:
> This patch series tries to support the modify field flow action
> as much as possible.
> 
> Chaoyong He (21):
>   net/nfp: fix IPv6 TTL and DSCP flow action
>   net/nfp: pack parameters of flow item function
>   net/nfp: pack various flags of flow action
>   net/nfp: refactor flow action calculate function
>   net/nfp: refactor flow action compile function
>   net/nfp: pack various flags of flow item
>   net/nfp: refactor flow item calculate function
>   net/nfp: support modify IPv4 source address
>   net/nfp: support modify IPv4 dest address
>   net/nfp: support modify IPv6 source address
>   net/nfp: support modify IPv6 dest address
>   net/nfp: support modify TCP source port
>   net/nfp: support modify TCP dest port
>   net/nfp: support modify UDP source port
>   net/nfp: support modify UDP dest port
>   net/nfp: support modify IPv4 TTL
>   net/nfp: support modify IPv6 hop limit
>   net/nfp: support modify MAC source address
>   net/nfp: support modify MAC dest address
>   net/nfp: support modify IPv4 DSCP
>   net/nfp: support modify IPv6 DSCP
>

Series applied to dpdk-next-net/main, thanks.


Re: [PATCH] net/netvsc: fix mtu_set in netvsc devices

2024-07-06 Thread Ferruh Yigit
On 6/28/2024 5:35 PM, Alexander Skorichenko wrote:
> Prevent segfault in hn_reinit() caused by changing the MTU for
> an incompletely initialized device.
> 
> Signed-off-by: Alexander Skorichenko 
>

Acked-by: Long Li 
Reviewed-by: Sam Andrew 

  Fixes: 45c83603087e ("net/netvsc: support MTU set")
  Cc: sta...@dpdk.org


Applied to dpdk-next-net/main, thanks.


Re: [PATCH 0/7] refactor flow validate and create interface

2024-07-06 Thread Ferruh Yigit
On 6/19/2024 10:19 AM, Chaoyong He wrote:
> This patch series refactors the flow validate and create interface,
> making the logic more readable and easier to extend.
> 
> Chaoyong He (7):
>   net/nfp: remove the unused parameter
>   net/nfp: exit as soon as possible
>   net/nfp: remove the duplicate logic of output action
>   net/nfp: split out the flow item check logic
>   net/nfp: simplify the flow item calculate logic
>   net/nfp: split out the flow action check logic
>   net/nfp: simplify the flow action calculate logic
>

Series applied to dpdk-next-net/main, thanks.


Re: [PATCH v2] net/memif: fix buffer overflow in zero copy Rx

2024-07-06 Thread Ferruh Yigit
On 6/28/2024 10:01 PM, Mihai Brodschi wrote:
> rte_pktmbuf_alloc_bulk is called by the zero-copy receiver to allocate
> new mbufs to be provided to the sender. The allocated mbuf pointers
> are stored in a ring, but the alloc function doesn't implement index
> wrap-around, so it writes past the end of the array. This results in
> memory corruption and duplicate mbufs being received.
> 

Hi Mihai,

I am not sure writing past the ring actually occurs.

As far as I can see, the intention is to keep the ring as full as
possible: when 'head' and 'tail' are initially 0, the refill fills the
whole ring. Later the tail moves and the emptied space is filled again,
so head (in modulo) is always just behind tail after a refill. On the
next run, the refill only covers the part the tail has moved, and this
is calculated as 'n_slots'. As this is only the size of the gap,
starting from 'head' (with modulo) shouldn't pass the ring length.

Do you observe this issue in practice? If so, can you please provide a
backtrace and numbers showing how to reproduce the issue?


> Allocate 2x the space for the mbuf ring, so that the alloc function
> has a contiguous array to write to, then copy the excess entries
> to the start of the array.
> 

Even if the issue is valid, I am not sure about the solution of doubling
the buffer memory, but let's confirm the issue first before discussing
the solution.

> Fixes: 43b815d88188 ("net/memif: support zero-copy slave")
> Cc: sta...@dpdk.org
> Signed-off-by: Mihai Brodschi 
> ---
> v2:
>  - fix email formatting
> 
> ---
>  drivers/net/memif/rte_eth_memif.c | 10 +-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
> index 16da22b5c6..3491c53cf1 100644
> --- a/drivers/net/memif/rte_eth_memif.c
> +++ b/drivers/net/memif/rte_eth_memif.c
> @@ -600,6 +600,10 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>   ret = rte_pktmbuf_alloc_bulk(mq->mempool, &mq->buffers[head & mask], n_slots);
>   if (unlikely(ret < 0))
>   goto no_free_mbufs;
> + if (unlikely(n_slots > ring_size - (head & mask))) {
> + rte_memcpy(mq->buffers, &mq->buffers[ring_size],
> + (n_slots + (head & mask) - ring_size) * sizeof(struct rte_mbuf *));
> + }
>  
>   while (n_slots--) {
>   s0 = head++ & mask;
> @@ -1245,8 +1249,12 @@ memif_init_queues(struct rte_eth_dev *dev)
>   }
>   mq->buffers = NULL;
>   if (pmd->flags & ETH_MEMIF_FLAG_ZERO_COPY) {
> + /*
> +  * Allocate 2x ring_size to reserve a contiguous array for
> +  * rte_pktmbuf_alloc_bulk (to store allocated mbufs).
> +  */
>   mq->buffers = rte_zmalloc("bufs", sizeof(struct rte_mbuf *) *
> -   (1 << mq->log2_ring_size), 0);
> +   (1 << (mq->log2_ring_size + 1)), 0);
>   if (mq->buffers == NULL)
>   return -ENOMEM;
>   }



Re: [PATCH] net/gve: fix RSS hash endianness in DQO format

2024-07-06 Thread Shreesh Adiga
Thank you Joshua for your review and suggestions.
Appreciate the help with dpdk process.
I've sent a new patch with the "Fixes" tag to sta...@dpdk.org.
I'm not sure if the same needs to be sent here to dev@dpdk.org as well.
Please let me know if I've missed anything.

Thanks,
Shreesh


Re: [PATCH v2] net/memif: fix buffer overflow in zero copy Rx

2024-07-06 Thread Mihai Brodschi
Hi Ferruh,

On 07/07/2024 05:12, Ferruh Yigit wrote:
> On 6/28/2024 10:01 PM, Mihai Brodschi wrote:
>> rte_pktmbuf_alloc_bulk is called by the zero-copy receiver to allocate
>> new mbufs to be provided to the sender. The allocated mbuf pointers
>> are stored in a ring, but the alloc function doesn't implement index
>> wrap-around, so it writes past the end of the array. This results in
>> memory corruption and duplicate mbufs being received.
>>
>
> Hi Mihai,
>
> I am not sure writing past the ring actually occurs.
>
> As far as I can see, the intention is to keep the ring as full as
> possible: when 'head' and 'tail' are initially 0, the refill fills the
> whole ring. Later the tail moves and the emptied space is filled again,
> so head (in modulo) is always just behind tail after a refill. On the
> next run, the refill only covers the part the tail has moved, and this
> is calculated as 'n_slots'. As this is only the size of the gap,
> starting from 'head' (with modulo) shouldn't pass the ring length.
>
> Do you observe this issue in practice? If so, can you please provide a
> backtrace and numbers showing how to reproduce the issue?

The alloc function writes starting from the ring's head, but the ring's
head can be located at the end of the ring's memory buffer (ring_size - 1).
The correct behavior would be to wrap around to the start of the buffer (0),
but the alloc function has no awareness of the fact that it's writing to a
ring, so it writes to ring_size, ring_size + 1, etc.

Let's look at the existing code:
We assume the ring size is 256 and we just received 32 packets.
The previous tail was at index 255, now it's at index 31.
The head is initially at index 255.

head = __atomic_load_n(&ring->head, __ATOMIC_RELAXED);  // head = 255
n_slots = ring_size - head + mq->last_tail; // n_slots = 32

if (n_slots < 32)   // not taken
goto no_free_mbufs;

ret = rte_pktmbuf_alloc_bulk(mq->mempool, &mq->buffers[head & mask], n_slots);
// This will write 32 mbuf pointers starting at index (head & mask) = 255.
// The ring size is 256, so apart from the first one all pointers will be
// written out of bounds (index 256 .. 286, when it should be 0 .. 30).

I can reproduce a crash 100% of the time with my application, but the output
is not very helpful, since it crashes elsewhere because of mempool corruption.
Applying this patch fixes the crashes completely.

>> Allocate 2x the space for the mbuf ring, so that the alloc function
>> has a contiguous array to write to, then copy the excess entries
>> to the start of the array.
>>
>
> Even if the issue is valid, I am not sure about the solution of doubling
> the buffer memory, but let's confirm the issue first before discussing
> the solution.

Initially, I thought about splitting the call to rte_pktmbuf_alloc_bulk in two,
but I thought that might be bad for performance if the mempool is being used
concurrently from multiple threads.
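For concreteness, that split-call alternative would look something like
this (a hypothetical sketch, reusing the driver's head/mask/n_slots
variables):

	uint16_t first = RTE_MIN(n_slots,
			(uint16_t)(ring_size - (head & mask)));
	ret = rte_pktmbuf_alloc_bulk(mq->mempool,
			&mq->buffers[head & mask], first);
	if (ret == 0 && n_slots > first)
		/* Wrap around: fill the rest from the array start. */
		ret = rte_pktmbuf_alloc_bulk(mq->mempool, mq->buffers,
				n_slots - first);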

If we want to use only one call to rte_pktmbuf_alloc_bulk, we need an array
to store the allocated mbuf pointers. This array must be of length ring_size,
since that's the maximum number of mbufs which may be allocated in one go.
We need to copy the pointers from this array to the ring.

If we instead allocate twice the space for the ring, we can skip copying
the pointers which were written to the ring, and only copy those that were
written outside of its bounds.

>> Fixes: 43b815d88188 ("net/memif: support zero-copy slave")
>> Cc: sta...@dpdk.org
>> Signed-off-by: Mihai Brodschi 
>> ---
>> v2:
>>  - fix email formatting
>>
>> ---
>>  drivers/net/memif/rte_eth_memif.c | 10 +-
>>  1 file changed, 9 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
>> index 16da22b5c6..3491c53cf1 100644
>> --- a/drivers/net/memif/rte_eth_memif.c
>> +++ b/drivers/net/memif/rte_eth_memif.c
>> @@ -600,6 +600,10 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>>  ret = rte_pktmbuf_alloc_bulk(mq->mempool, &mq->buffers[head & mask], n_slots);
>>  if (unlikely(ret < 0))
>>  goto no_free_mbufs;
>> +if (unlikely(n_slots > ring_size - (head & mask))) {
>> +rte_memcpy(mq->buffers, &mq->buffers[ring_size],
>> +(n_slots + (head & mask) - ring_size) * sizeof(struct rte_mbuf *));
>> +}
>>  
>>  while (n_slots--) {
>>  s0 = head++ & mask;
>> @@ -1245,8 +1249,12 @@ memif_init_queues(struct rte_eth_dev *dev)
>>  }
>>  mq->buffers = NULL;
>>  if (pmd->flags & ETH_MEMIF_FLAG_ZERO_COPY) {
>> +/*
>> + * Allocate 2x ring_size to reserve a contiguous array for
>> + * rte_pktmbuf_alloc_bulk (to store allocated mbufs).
>> + */
>>  mq->buffers = rte_zmalloc("bufs", sizeof(struct rte_mbuf *) *
>> -  (1 << mq->log2_ring_size), 0);
>> +  (1 << (mq->log2_ring_size + 1)), 0);
>>  if (mq->buffers == NULL)
>>  return -ENOMEM;
>>  }
