>-----Original Message-----
>From: Gregory Etelson <getel...@nvidia.com>
>Sent: Tuesday, December 8, 2020 4:17 PM
>To: dev@dpdk.org
>Cc: Gregory Etelson <getel...@nvidia.com>; Matan Azrad
><ma...@nvidia.com>; Raslan Darawsheh <rasl...@nvidia.com>; Slava
>Ovsiienko <viachesl...@nvidia.com>; Shahaf Shuler <shah...@nvidia.com>;
>Bing Zhao <bi...@mellanox.com>; Xueming(Steven) Li
><xuemi...@nvidia.com>
>Subject: [PATCH v2] net/mlx5: fix flow descriptor allocation in Direct Verbs
>mode.
>
>Initialize flow descriptor tunnel member during flow creation.
>Prevent access to stale data and pointers when flow descriptor is reallocated
>after release.
>Fix flow index validation.
>
>Fixes: e7bfa3596a0a ("net/mlx5: separate the flow handle resource")
>Fixes: 8bb81f2649b1 ("net/mlx5: use thread specific flow workspace")
>
>Signed-off-by: Gregory Etelson <getel...@nvidia.com>
>Acked-by: Viacheslav Ovsiienko <viachesl...@nvidia.com>
>---
> drivers/net/mlx5/mlx5_flow_dv.c | 9 ++-------
> 1 file changed, 2 insertions(+), 7 deletions(-)
>
>diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
>index aa21ff9613..8f7085c951 100644
>--- a/drivers/net/mlx5/mlx5_flow_dv.c
>+++ b/drivers/net/mlx5/mlx5_flow_dv.c
>@@ -6232,8 +6232,9 @@ flow_dv_prepare(struct rte_eth_dev *dev,
>                                  "not enough memory to create flow handle");
>               return NULL;
>       }
>-      MLX5_ASSERT(wks->flow_idx + 1 < RTE_DIM(wks->flows));
>+      MLX5_ASSERT(wks->flow_idx < RTE_DIM(wks->flows));
>       dev_flow = &wks->flows[wks->flow_idx++];
>+      memset(dev_flow, 0, sizeof(*dev_flow));
>       dev_flow->handle = dev_handle;
>       dev_flow->handle_idx = handle_idx;
>       /*
>@@ -6245,12 +6246,6 @@ flow_dv_prepare(struct rte_eth_dev *dev,
>        */
>       dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param) -
>                                 MLX5_ST_SZ_BYTES(fte_match_set_misc4);
>-      /*
>-       * The matching value needs to be cleared to 0 before using. In the
>-       * past, it will be automatically cleared when using rte_*alloc
>-       * API. The time consumption will be almost the same as before.
>-       */
>-      memset(dev_flow->dv.value.buf, 0, MLX5_ST_SZ_BYTES(fte_match_param));
>       dev_flow->ingress = attr->ingress;
>       dev_flow->dv.transfer = attr->transfer;
>       return dev_flow;
>--
>2.29.2
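
The fix above addresses two issues in `flow_dv_prepare()`: the index assertion was off by one (`flow_idx + 1 < RTE_DIM(...)` wrongly rejected the last valid slot of the per-workspace array), and a reused descriptor slot could carry stale fields (such as the tunnel pointer) from a previously released flow, so the whole descriptor is now zeroed instead of only the match-value buffer. A minimal standalone sketch of the pattern, using simplified stand-in structs (`struct dev_flow`, `struct workspace`, and `FLOWS_PER_WS` are hypothetical, not the mlx5 definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define FLOWS_PER_WS 4 /* hypothetical pool size; mlx5 uses its own dimension */

/* Simplified stand-in for the flow descriptor: slots are drawn from a
 * per-workspace static array, not freshly allocated, so any field not
 * cleared here survives from the previous flow that used the slot. */
struct dev_flow {
	void *tunnel;  /* stale pointer hazard if the slot is not cleared */
	int ingress;
};

struct workspace {
	struct dev_flow flows[FLOWS_PER_WS];
	unsigned int flow_idx;
};

/* Mirrors the fixed logic: validate the index BEFORE the post-increment
 * (indices 0 .. FLOWS_PER_WS - 1 are all valid), then zero the whole
 * slot so no stale data or pointers leak into the new flow. */
static struct dev_flow *
flow_prepare(struct workspace *wks)
{
	assert(wks->flow_idx < FLOWS_PER_WS); /* was: flow_idx + 1 < DIM */
	struct dev_flow *df = &wks->flows[wks->flow_idx++];
	memset(df, 0, sizeof(*df));           /* clears tunnel et al. */
	return df;
}
```

Zeroing the full descriptor also subsumes the old `memset()` of `dv.value.buf` that the patch removes, since the buffer is embedded in the descriptor.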
Reviewed-by: Xueming(Steven) Li <xuemi...@nvidia.com>
