[PATCH] net/nfp: fix one memory access problem

2023-10-08 Thread Chaoyong He
The initial logic does not assign a value to the out parameter on the
abnormal path, which causes an uninitialized local variable problem in
the caller.

Fixes: 3d6811281392 ("net/nfp: add infrastructure for conntrack flow merge")
Cc: chaoyong...@corigine.com

Signed-off-by: Chaoyong He 
---
 drivers/net/nfp/flower/nfp_conntrack.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
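
A minimal sketch of the failure mode being fixed (illustrative only; the
real functions are nfp_flow_item_conf_size_get() and
nfp_ct_merge_item_real() in the diff below):

    /* Illustrative callee/caller pair, not the actual driver code. */
    static bool
    conf_size_get(enum rte_flow_item_type type, size_t *size)
    {
        switch (type) {
        case RTE_FLOW_ITEM_TYPE_ETH:
            *size = sizeof(struct rte_flow_item_eth);
            return true;
        default:
            PMD_DRV_LOG(ERR, "Unsupported item type: %d", type);
            *size = 0;   /* without this the caller reads an undefined value */
            return false;
        }
    }

    static void
    merge_item(enum rte_flow_item_type type)
    {
        size_t size = 0;   /* also initialized, matching the second hunk */

        if (!conf_size_get(type, &size))
            return;
        /* ... use size ... */
    }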

diff --git a/drivers/net/nfp/flower/nfp_conntrack.c b/drivers/net/nfp/flower/nfp_conntrack.c
index aacd4d7dd3..7b84b12546 100644
--- a/drivers/net/nfp/flower/nfp_conntrack.c
+++ b/drivers/net/nfp/flower/nfp_conntrack.c
@@ -332,6 +332,7 @@ nfp_flow_item_conf_size_get(enum rte_flow_item_type type,
break;
default:
PMD_DRV_LOG(ERR, "Unsupported item type: %d", type);
+   *size = 0;
return false;
}
 
@@ -1265,9 +1266,9 @@ nfp_ct_merge_item_real(const struct rte_flow_item *item_src,
struct rte_flow_item *item_dst)
 {
uint32_t i;
-   size_t size;
char *key_dst;
char *mask_dst;
+   size_t size = 0;
const char *key_src;
const char *mask_src;
 
-- 
2.39.1



RE: [PATCH v2 0/5] support item NSH matching

2023-10-08 Thread Raslan Darawsheh
Hi,

> -Original Message-
> From: Haifei Luo 
> Sent: Monday, September 25, 2023 5:09 AM
> To: dev@dpdk.org
> Cc: Ori Kam ; Slava Ovsiienko ;
> Raslan Darawsheh ; Xueming(Steven) Li
> ; Haifei Luo 
> Subject: [PATCH v2 0/5] support item NSH matching
> 
> NSH can be matched using the existing item: RTE_FLOW_ITEM_TYPE_NSH.
> Matching on NSH fields is not supported.
> 
> Add support for configuring VXLAN-GPE's next protocol.
> The CLI is: vxlan-gpe protocol is .
> 
> Add support for matching item NSH. The CLI is: nsh
> Add support for HCA attribute query of NSH.
> 
> Enhance the validation for the case where matching item NSH is supported.
> Add NSH support in net/mlx5.
> 
> V2: Add Ack info in commit message.
> 
> Haifei Luo (5):
>   app/testpmd: support for VXLAN-GPE's next protocol
>   common/mlx5: extend HCA attribute query for NSH
>   net/mlx5: enhance the validation for item VXLAN-GPE
>   app/testpmd: support for NSH flow item
>   net/mlx5: add support for item NSH
> 
>  app/test-pmd/cmdline_flow.c  | 26 ++
>  drivers/common/mlx5/mlx5_devx_cmds.c |  3 ++
> drivers/common/mlx5/mlx5_devx_cmds.h |  1 +
>  drivers/common/mlx5/mlx5_prm.h   |  4 ++-
>  drivers/net/mlx5/mlx5_flow.c | 52 
>  drivers/net/mlx5/mlx5_flow.h |  6 
>  drivers/net/mlx5/mlx5_flow_dv.c  | 13 ++-
>  7 files changed, 96 insertions(+), 9 deletions(-)
> 
> --
> 2.27.0
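
A hedged illustration of the resulting testpmd syntax (the exact token
grammar is defined by the patches; 0x4 is assumed here to be the
VXLAN-GPE next-protocol value for NSH):

    testpmd> flow create 0 ingress pattern eth / ipv4 / udp / vxlan-gpe protocol is 0x4 / nsh / end actions queue index 0 / end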

Series applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh


RE: [PATCH v4] bus/pci: fix legacy device IO port map in secondary process

2023-10-08 Thread Ma, WenwuX
Hi,

> -Original Message-
> From: David Marchand 
> Sent: October 3, 2023 18:21
> To: Ma, WenwuX ; nipun.gu...@amd.com;
> chenbo@outlook.com
> Cc: dev@dpdk.org; maxime.coque...@redhat.com; Li, Miao
> ; Ling, WeiX ; sta...@dpdk.org
> Subject: Re: [PATCH v4] bus/pci: fix legacy device IO port map in secondary
> process
> 
> On Wed, Aug 30, 2023 at 7:07 AM Wenwu Ma wrote:
> >
> > When doing IO port mapping for a legacy device in the secondary process,
> > the region information is missing, so we need to refill it.
> >
> > Fixes: 4b741542ecde ("bus/pci: avoid depending on private kernel
> > value")
> > Cc: sta...@dpdk.org
> >
> > Signed-off-by: Wenwu Ma 
> >
> > ---
> >  drivers/bus/pci/linux/pci_vfio.c | 43
> > ++--
> >  1 file changed, 41 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
> > index e634de8322..5ef26c98d1 100644
> > --- a/drivers/bus/pci/linux/pci_vfio.c
> > +++ b/drivers/bus/pci/linux/pci_vfio.c
> > @@ -1314,6 +1314,27 @@ pci_vfio_ioport_map(struct rte_pci_device
> *dev, int bar,
> > return -1;
> > }
> >
> > +   if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
> > +   struct vfio_device_info device_info = { .argsz = sizeof(device_info) };
> > +   char pci_addr[PATH_MAX];
> > +   int vfio_dev_fd;
> > +   struct rte_pci_addr *loc = &dev->addr;
> > +   int ret;
> > +   /* store PCI address string */
> > +   snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
> > +   loc->domain, loc->bus, loc->devid, loc->function);
> > +
> > +   ret = rte_vfio_setup_device(rte_pci_get_sysfs_path(), pci_addr,
> > +   &vfio_dev_fd, &device_info);
> 
> From a pci bus API pov, nothing prevents a driver from mixing memory
> mapped with vfio and ioport resources (iow, calls to
> rte_pci_map_resource() and rte_pci_ioport_map()).
> IOW, it may not be the case with the net/virtio driver but, in theory,
> rte_pci_ioport_map()/pci_vfio_ioport_map() may be called after a
> rte_pci_map_resource() call.
> 
> In a similar manner, from the API pov,
> rte_pci_ioport_map()/pci_vfio_ioport_map() may be called for multiple bars.
> 
> In summary, nothing in this patch checks that vfio has been configured already
> and I think we need a refcount to handle those situations.
> 
We call rte_vfio_setup_device() just to get the device info, so we can
call rte_vfio_release_device() as soon as pci_vfio_fill_regions() is done.
This avoids the need for reference counting; do you think that works?
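
A minimal sketch of that flow, reusing only the calls already present in
this patch (illustrative, not the final code):

    if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
        struct vfio_device_info device_info = { .argsz = sizeof(device_info) };
        struct rte_pci_addr *loc = &dev->addr;
        char pci_addr[PATH_MAX];
        int vfio_dev_fd;
        int ret;

        snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
                loc->domain, loc->bus, loc->devid, loc->function);

        if (rte_vfio_setup_device(rte_pci_get_sysfs_path(), pci_addr,
                &vfio_dev_fd, &device_info) != 0)
            return -1;

        ret = pci_vfio_fill_regions(dev, vfio_dev_fd, &device_info);

        /* the device was only needed for its region info,
         * release it immediately instead of keeping a reference */
        rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr, vfio_dev_fd);

        if (ret != 0)
            return -1;
    }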

> Nipun, Chenbo, WDYT?
> 
> 
> > +   if (ret)
> > +   return -1;
> 
> ret value is not used, so there is no need for this variable.
> 
> if (rte_vfio_setup_device(rte_pci_get_sysfs_path(), pci_addr,
> &vfio_dev_fd, &device_info) != 0)
> return -1;
> 
> > +
> > +   ret = pci_vfio_fill_regions(dev, vfio_dev_fd, &device_info);
> > +   if (ret)
> > +   return -1;
> 
> Same here, ret is not needed.
> 
> 
> > +
> > +   }
> > +
> > if (pci_vfio_get_region(dev, bar, &size, &offset) != 0) {
> > RTE_LOG(ERR, EAL, "Cannot get offset of region %d.\n", bar);
> > return -1;
> > @@ -1361,8 +1382,26 @@ pci_vfio_ioport_write(struct rte_pci_ioport *p,
> > int  pci_vfio_ioport_unmap(struct rte_pci_ioport *p)  {
> > -   RTE_SET_USED(p);
> > -   return -1;
> > +   char pci_addr[PATH_MAX] = {0};
> > +   struct rte_pci_addr *loc = &p->dev->addr;
> > +   int ret, vfio_dev_fd;
> > +
> > +   /* store PCI address string */
> > +   snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
> > +   loc->domain, loc->bus, loc->devid, loc->function);
> > +
> > +   vfio_dev_fd = rte_intr_dev_fd_get(p->dev->intr_handle);
> > +   if (vfio_dev_fd < 0)
> > +   return -1;
> 
> This check is odd and does not seem related.
> 
> 
> > +
> > +   ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
> > + vfio_dev_fd);
> > +   if (ret < 0) {
> > +   RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
> > +   return ret;
> > +   }
> 
> 
> --
> David Marchand



Re: [PATCH v1 1/3] dmadev: add inter-domain operations

2023-10-08 Thread Jerin Jacob
On Sun, Oct 8, 2023 at 8:03 AM fengchengwen  wrote:
>
> Hi Anatoly,
>
> On 2023/8/12 0:14, Anatoly Burakov wrote:
> > Add a flag to indicate that a specific device supports inter-domain
> > operations, and add an API for inter-domain copy and fill.
> >
> > An inter-domain operation is very similar to a regular DMA operation,
> > except that the source or destination address can be in a different
> > process's address space, indicated by source and destination handle
> > values. These values are currently meant to be provided by private
> > driver APIs.
> >
> > This commit also adds a controller ID field into the DMA device API.
> > This is an arbitrary value that may not be implemented by hardware, but
> > it is meant to represent some kind of device hierarchy.
> >
> > Signed-off-by: Vladimir Medvedkin 
> > Signed-off-by: Anatoly Burakov 
> > ---
>
> ...
>
> > +__rte_experimental
> > +static inline int
> > +rte_dma_copy_inter_dom(int16_t dev_id, uint16_t vchan, rte_iova_t src,
> > + rte_iova_t dst, uint32_t length, uint16_t src_handle,
> > + uint16_t dst_handle, uint64_t flags)
>
> I would suggest adding a more general extension:
> rte_dma_copy*(int16_t dev_id, uint16_t vchan, rte_iova_t src, rte_iova_t dst,
>   uint32_t length, uint64_t flags, void *param)
> The param is only valid under some flags bits.
> As for this inter-domain extension: we could define an inter-domain param struct.
>
>
> Whether to add this to the current rte_dma_copy() API or to add a new API
> mainly depends, I think, on the performance impact of the parameter transfer.
> I suggest more discussion covering different platforms and the call
> specification.

Or move src_handle/dst_handle to the vchan config to enable better performance.
The application creates N vchans based on its requirements.
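
A hypothetical sketch of that direction (the two handle fields are purely
illustrative; they do not exist in struct rte_dma_vchan_conf today):

    #include <rte_common.h>
    #include <rte_dmadev.h>

    static int
    setup_inter_dom_vchan(int16_t dev_id, uint16_t vchan,
            uint16_t src_handle, uint16_t dst_handle)
    {
        struct rte_dma_vchan_conf conf = {
            .direction = RTE_DMA_DIR_MEM_TO_MEM,
            .nb_desc = 1024,
            /* hypothetical fields carrying the handles:
             * .src_handle = src_handle,
             * .dst_handle = dst_handle,
             */
        };

        /* only needed while the fields above remain hypothetical */
        RTE_SET_USED(src_handle);
        RTE_SET_USED(dst_handle);

        return rte_dma_vchan_setup(dev_id, vchan, &conf);
    }

The enqueue path would then stay the existing
rte_dma_copy(dev_id, vchan, src, dst, length, flags), with no extra
per-operation parameters.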

>
>
> And last, could you describe the application scenarios for this feature?

Looks like VM to VM or container to container copy.

>
>
> Thanks.
>