Wednesday, July 25, 2018 9:59 AM, Nélio Laranjeiro:
> Subject: Re: [PATCH] net/mlx5: fix possible endless loop when clearing
> flow flags
>
> On Tue, Jul 24, 2018 at 09:47:19PM +0000, Yongseok Koh wrote:
> >
> > > On Jul 23, 2018, at 11:57 PM, Nélio Laranjeiro
> > > <nelio.laranje...@6wind.com> wrote:
> > >
> > > On Mon, Jul 23, 2018 at 11:27:44AM -0700, Yongseok Koh wrote:
> > >> If one of (*priv->rxqs)[] is null, the for loop can iterate
> > >> infinitely as idx can't be increased.
> > >>
> > >> Fixes: cd24d526395e ("net/mlx5: add mark/flag flow action")
> > >> Cc: Nelio Laranjeiro <nelio.laranje...@6wind.com>
> > >>
> > >> Signed-off-by: Yongseok Koh <ys...@mellanox.com>
> > >> ---
> > >>  drivers/net/mlx5/mlx5_flow.c | 8 +++-----
> > >>  1 file changed, 3 insertions(+), 5 deletions(-)
> > >>
> > >> diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
> > >> index 32854198b..c156f01eb 100644
> > >> --- a/drivers/net/mlx5/mlx5_flow.c
> > >> +++ b/drivers/net/mlx5/mlx5_flow.c
> > >> @@ -2762,22 +2762,20 @@ mlx5_flow_rxq_flags_clear(struct rte_eth_dev *dev)
> > >>  {
> > >>  	struct priv *priv = dev->data->dev_private;
> > >>  	unsigned int i;
> > >> -	unsigned int idx;
> > >>
> > >> -	for (idx = 0, i = 0; idx != priv->rxqs_n; ++i) {
> > >> +	for (i = 0; i != priv->rxqs_n; ++i) {
> > >>  		struct mlx5_rxq_ctrl *rxq_ctrl;
> > >>  		unsigned int j;
> > >>
> > >> -		if (!(*priv->rxqs)[idx])
> > >> +		if (!(*priv->rxqs)[i])
> > >>  			continue;
> > >> -		rxq_ctrl = container_of((*priv->rxqs)[idx],
> > >> +		rxq_ctrl = container_of((*priv->rxqs)[i],
> > >>  					struct mlx5_rxq_ctrl, rxq);
> > >>  		rxq_ctrl->flow_mark_n = 0;
> > >>  		rxq_ctrl->rxq.mark = 0;
> > >>  		for (j = 0; j != MLX5_FLOW_TUNNEL; ++j)
> > >>  			rxq_ctrl->flow_tunnels_n[j] = 0;
> > >>  		rxq_ctrl->rxq.tunnel = 0;
> > >> -		++idx;
> > >>  	}
> > >>  }
> > >>
> > >> --
> > >> 2.11.0
> > >
> > > This patch is wrong: (*priv->rxqs)[i] may be uninitialised by the
> > > application. The number of queues says how many are in use, it does
> > > not mean they are contiguous in the rxqs array. This is due to the
> > > DPDK API, which configures the number of queues with
> > > rte_eth_dev_configure() whereas queues are instantiated with
> > > rte_eth_rx_queue_setup(), which takes a position in the array as a
> > > parameter.
> > >
> > > Indeed this code is wrong: idx should always increase, whereas i
> > > should only increase if (*priv->rxqs)[idx] is non-null.
> >
> > I don't understand what you mean. In rte_eth_rx_queue_setup(),
> > rx_queue_id is checked against dev->data->nb_rx_queues:
> >
> > 	if (rx_queue_id >= dev->data->nb_rx_queues) {
> > 		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", rx_queue_id);
> > 		return -EINVAL;
> > 	}
> >
> > This means the index should be in [0, priv->rxqs_n) anyway. There is
> > the same check in mlx5_rx_queue_setup(). If a user mistakenly doesn't
> > configure some of the queues, the corresponding slots could be null,
> > but the indexes are still within the range.
> >
> > Then, what's your point of having both i and idx?
>
> I remember I faced some issues while rewriting the PMD to work on top
> of the flow API. That's why I introduced such logic, but it seems
> unnecessary as this complies with the documentation of the function
> and the code itself.
>
> Acked-by: Nelio Laranjeiro <nelio.laranje...@6wind.com>
Applied to next-net-mlx, thanks.

>
> --
> Nélio Laranjeiro
> 6WIND
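
For context, here is a tiny standalone sketch of the iteration problem
discussed above. It is not the mlx5 driver code: "rxqs", "rxqs_n" and
"struct fake_rxq" are simplified stand-ins invented for illustration,
with slot 1 deliberately left NULL to mimic a queue the application
never set up.

	/*
	 * Minimal model of the loop fixed by the patch above; all names
	 * here are placeholders, not the PMD's real data structures.
	 */
	#include <stdio.h>

	struct fake_rxq {
		unsigned int mark;
	};

	int
	main(void)
	{
		struct fake_rxq q0 = { .mark = 1 };
		struct fake_rxq q2 = { .mark = 1 };
		struct fake_rxq *rxqs[] = { &q0, NULL, &q2 }; /* hole at index 1 */
		const unsigned int rxqs_n = 3;
		unsigned int i;

		/*
		 * Old pattern (simplified): two indexes, and "continue" on a
		 * NULL slot skips the "++idx" at the bottom of the body, so
		 * the condition "idx != rxqs_n" never becomes false:
		 *
		 *	for (idx = 0, i = 0; idx != rxqs_n; ++i) {
		 *		if (!rxqs[idx])
		 *			continue;	<- idx stays stuck on the hole
		 *		...
		 *		++idx;
		 *	}
		 *
		 * Fixed pattern: a single index that always advances; NULL
		 * slots are simply skipped.
		 */
		for (i = 0; i != rxqs_n; ++i) {
			if (!rxqs[i])
				continue;
			rxqs[i]->mark = 0;
			printf("cleared queue %u\n", i);
		}
		return 0;
	}

Built with a plain C compiler, this prints "cleared queue 0" and
"cleared queue 2" and terminates; with the old double-index pattern the
same NULL hole would make the loop spin forever.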