Hi Ferruh,

Thanks for your review and comments, PSB.
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yi...@intel.com>
> Sent: Friday, November 6, 2020 7:20 PM
> To: Bing Zhao <bi...@nvidia.com>; Slava Ovsiienko
> <viachesl...@nvidia.com>; Matan Azrad <ma...@nvidia.com>
> Cc: dev@dpdk.org; Ori Kam <or...@nvidia.com>; Raslan Darawsheh
> <rasl...@nvidia.com>; sta...@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] net/mlx5: fix the eCPRI common
> header endianness
>
> External email: Use caution opening links or attachments
>
>
> On 11/3/2020 5:41 AM, Bing Zhao wrote:
> > The input header of a RTE flow item is with network byte order. In the
> > host with little endian, the bit field order are the same as the byte
> > order.
> > When checking the an eCPRI message type, the wrong field will be
> > selected. Right now, since the whole u32 is being checked and for all
> > types, the following implementation is unique. There is no functional
> > risk but it is still an error to fix.
> >
>
> Isn't the 'ecpri_v' filled by application (CPU), why there is an
> assumption that it is big endian?
>

Yes, this is filled by the application SW. I checked the current code of the
other headers and the current implementation:
1. In the testpmd flow example, all the network-stack headers, such as IPv4,
   are translated into BE via "ARGS_ENTRY_HTON".
2. All fields are of "rte_be*_t" type; even though it is only a typedef, they
   should be considered to be in BE.
3. RTE flow just passes the flow item pointers to the PMD drivers, so in the
   PMD, the headers should be treated as BE.
So, to my understanding, this is not an assumption but a constraint. Correct
me if my understanding is wrong.

> And even if it is big endian, it should be broken previously right?
> Since it was checking wrong field as 'type' as you said, why there
> were no functional risk?

In the PMD driver, the first u32 of the *value*, which contains this type, is
already used for matching, and a check is used for the matching of the
following "sub-headers".
Indeed, the first u32 of the *mask* is used to confirm whether the following
sub-header needs to be matched. Since different types lead to different
variants of the packets, the switch-on-type is used. But all 3 types supported
in the PMD now have almost the same result (part of the second u32, with the
remaining bits all 0s). So even if the type read from the value is always "0"
by mistake, the cases are the same and it still works by coincidence. If more
types are supported in the future, this will be problematic.

>
> > Fixes: daa38a8924a0 ("net/mlx5: add flow translation of eCPRI header")
> > Cc: sta...@dpdk.org
> >
> > Signed-off-by: Bing Zhao <bi...@nvidia.com>
> > Acked-by: Viacheslav Ovsiienko <viachesl...@nvidia.com>
> > ---
> >  drivers/net/mlx5/mlx5_flow_dv.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/mlx5/mlx5_flow_dv.c
> > b/drivers/net/mlx5/mlx5_flow_dv.c
> > index 01b6e7c..7af01e9 100644
> > --- a/drivers/net/mlx5/mlx5_flow_dv.c
> > +++ b/drivers/net/mlx5/mlx5_flow_dv.c
> > @@ -7798,6 +7798,7 @@ struct mlx5_hlist_entry *
> >  	struct mlx5_priv *priv = dev->data->dev_private;
> >  	const struct rte_flow_item_ecpri *ecpri_m = item->mask;
> >  	const struct rte_flow_item_ecpri *ecpri_v = item->spec;
> > +	struct rte_ecpri_common_hdr common;
> >  	void *misc4_m = MLX5_ADDR_OF(fte_match_param, matcher,
> >  				     misc_parameters_4);
> >  	void *misc4_v = MLX5_ADDR_OF(fte_match_param, key,
> >  				     misc_parameters_4);
> > @@ -7838,7 +7839,8 @@ struct mlx5_hlist_entry *
> >  	 * Some wildcard rules only matching type field should be supported.
> >  	 */
> >  	if (ecpri_m->hdr.dummy[0]) {
> > -		switch (ecpri_v->hdr.common.type) {
> > +		common.u32 = rte_be_to_cpu_32(ecpri_v->hdr.common.u32);
> > +		switch (common.type) {
> >  		case RTE_ECPRI_MSG_TYPE_IQ_DATA:
> >  		case RTE_ECPRI_MSG_TYPE_RTC_CTRL:
> >  		case RTE_ECPRI_MSG_TYPE_DLY_MSR:
> >