On Thu, Apr 4, 2019 at 4:46 AM Gal Pressman <galpr...@amazon.com> wrote:
>
> On 03-Apr-19 02:03, Saeed Mahameed wrote:
> > From: Tariq Toukan <tar...@mellanox.com>
> >
> > Soften the memory barrier call of mb() by a sufficient wmb() in the
> > consumer index update of the event queues.
> >
> > Signed-off-by: Tariq Toukan <tar...@mellanox.com>
> > Signed-off-by: Saeed Mahameed <sae...@mellanox.com>
> > ---
> >  drivers/net/ethernet/mellanox/mlx5/core/eq.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > index 46a747f7c162..e9837aeb7088 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > @@ -707,7 +707,7 @@ void mlx5_eq_update_ci(struct mlx5_eq *eq, u32 cc, bool arm)
> >
> >       __raw_writel((__force u32)cpu_to_be32(val), addr);
> >       /* We still want ordering, just not swabbing, so add a barrier */
> > -     mb();
> > +     wmb();
>
> Shouldn't this barrier be placed prior to __raw_writel()?

Same effect in both cases; we just want a fence between every two
consecutive writes.
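
To make the point concrete, here is a minimal sketch (not the actual
mlx5 code; the helper names, db_addr and ci_value are illustrative) of
the two possible barrier placements being discussed:

#include <linux/io.h>
#include <asm/barrier.h>
#include <asm/byteorder.h>

/* Placement used by the patch: barrier after the MMIO write. */
static inline void update_ci_barrier_after(void __iomem *db_addr, u32 ci_value)
{
	__raw_writel((__force u32)cpu_to_be32(ci_value), db_addr);
	wmb();	/* order this write before any later write */
}

/* Placement suggested in the review: barrier before the MMIO write. */
static inline void update_ci_barrier_before(void __iomem *db_addr, u32 ci_value)
{
	wmb();	/* order any earlier write before this one */
	__raw_writel((__force u32)cpu_to_be32(ci_value), db_addr);
}

When either helper is called repeatedly, the resulting sequence is the
same alternation of writes and write barriers (write, wmb, write, wmb,
...), so every pair of consecutive consumer index updates is separated
by a fence regardless of which side of the write the barrier sits on.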
