On Tue, 2019-04-23 at 20:24 +0200, Jesper Dangaard Brouer wrote:
> On Tue, 23 Apr 2019 10:27:32 -0700
> Alexander Duyck <alexander.du...@gmail.com> wrote:
> 
> > On Tue, Apr 23, 2019 at 9:42 AM Saeed Mahameed
> > <sae...@mellanox.com> wrote:
> > > On Tue, 2019-04-23 at 08:21 -0700, Alexander Duyck wrote:  
> > > > On Tue, Apr 23, 2019 at 6:23 AM Jesper Dangaard Brouer
> > > > <bro...@redhat.com> wrote:  
> > > > > On Mon, 22 Apr 2019 19:46:47 -0700
> > > > > Jakub Kicinski <jakub.kicin...@netronome.com> wrote:
> > > > >  
> > > > > > On Mon, 22 Apr 2019 15:32:53 -0700, Saeed Mahameed wrote:  
> > > > > > > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h
> > > > > > > b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> > > > > > > index 51e109fdeec1..6147be23a9b9 100644
> > > > > > > --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
> > > > > > > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> > > > > > > @@ -50,6 +50,7 @@
> > > > > > >  #include <net/xdp.h>
> > > > > > >  #include <linux/net_dim.h>
> > > > > > >  #include <linux/bits.h>
> > > > > > > +#include <linux/prefetch.h>
> > > > > > >  #include "wq.h"
> > > > > > >  #include "mlx5_core.h"
> > > > > > >  #include "en_stats.h"
> > > > > > > @@ -986,6 +987,22 @@ static inline void mlx5e_cq_arm(struct mlx5e_cq *cq)
> > > > > > >     mlx5_cq_arm(mcq, MLX5_CQ_DB_REQ_NOT, mcq->uar->map, cq->wq.cc);
> > > > > > >  }
> > > > > > > 
> > > > > > > +static inline void mlx5e_prefetch(void *p)
> > > > > > > +{
> > > > > > > +   prefetch(p);
> > > > > > > +#if L1_CACHE_BYTES < 128
> > > > > > > +   prefetch(p + L1_CACHE_BYTES);
> > > > > > > +#endif
> > > > > > > +}
> > > > > > > +
> > > > > > > +static inline void mlx5e_prefetchw(void *p)
> > > > > > > +{
> > > > > > > +   prefetchw(p);
> > > > > > > +#if L1_CACHE_BYTES < 128
> > > > > > > +   prefetchw(p + L1_CACHE_BYTES);
> > > > > > > +#endif
> > > > > > > +}  
> > > > > > 
> > > > > > All Intel drivers do the exact same thing, perhaps it's time
> > > > > > to add a helper for this?
> > > > > > 
> > > > > > net_prefetch_headers()
> > > > > > 
> > > > > > or some such?  
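For reference, a common helper along those lines might look like the
sketch below (the name follows Jakub's suggestion; placing it in a
shared header such as include/linux/netdevice.h, together with the
<linux/prefetch.h> include it needs, is just my assumption, not an
agreed API):

	/* Prefetch enough of the packet to cover typical protocol
	 * headers.  On systems with cache lines smaller than 128 bytes
	 * the headers can span two lines, so pull in the second line as
	 * well, mirroring the mlx5/Intel driver pattern quoted above. */
	static inline void net_prefetch_headers(void *p)
	{
		prefetch(p);
	#if L1_CACHE_BYTES < 128
		prefetch(p + L1_CACHE_BYTES);
	#endif
	}

Drivers would then call net_prefetch_headers() in their rx paths
instead of carrying local copies like mlx5e_prefetch().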
> > > > > 
> > > > > I wonder if Tariq measured any effect from doing this?
> > > > > 
> > > > > Because Intel CPUs will usually already prefetch the next
> > > > > cache line, as described in [1]; you can even read (and
> > > > > modify) this MSR 0x1A4, e.g. via tools in [2].  Maybe the
> > > > > Intel guys added it before this was done in HW, and never
> > > > > cleaned it up?
> > > > > 
> > > > > [1]
> > > > > https://software.intel.com/en-us/articles/disclosure-of-hw-prefetcher-control-on-some-intel-processors
> > > > >   
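(Side note: if anyone wants to check their own box, below is a minimal
userspace sketch that reads that MSR through the msr driver; it assumes
x86, root, and "modprobe msr", and the bit layout in the comment is my
reading of the table in [1].)

	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <unistd.h>

	/* MSR 0x1a4, per the table in [1]:
	 *   bit 0: L2 HW prefetcher disable
	 *   bit 1: L2 adjacent cache line prefetcher disable
	 *   bit 2: DCU (L1-D next line) prefetcher disable
	 *   bit 3: DCU IP prefetcher disable
	 */
	int main(void)
	{
		uint64_t val;
		int fd = open("/dev/cpu/0/msr", O_RDONLY);

		if (fd < 0 || pread(fd, &val, sizeof(val), 0x1a4) != sizeof(val)) {
			perror("msr");
			return 1;
		}
		printf("MSR 0x1a4 = 0x%llx (adjacent line prefetch %s)\n",
		       (unsigned long long)val,
		       (val & 2) ? "disabled" : "enabled");
		return 0;
	}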
> > > > 
> > > > The issue is that the adjacent cache line prefetcher can be on
> > > > or off, and a network driver shouldn't really be going through
> > > > and twiddling those sorts of bits. In some cases having it on
> > > > can result in more memory being consumed than is needed. The
> > > > reason I enabled the additional cacheline prefetch for the Intel
> > > > NICs is that most TCP packets are at a minimum 68 bytes for just
> > > > the headers, which spans two 64-byte cache lines, so there was
> > > > an advantage for TCP traffic in making certain we prefetched at
> > > > least enough for us to process the headers.
> > > >  
> > > 
> > > So if the L2 adjacent cache line prefetcher bit is enabled, this
> 
> Nitpick: isn't it the DCU prefetcher bit that "Fetches the next cache
> line into L1-D cache", per the link [1]?
> 
> > > additional prefetch step is redundant? What is the performance
> > > cost in this case?  
> > 
> > I don't recall. I don't think it would be anything too significant
> > though.
> 
> I tried to measure this (approx 1 year ago) for a prefetch that is
> not needed, and AFAICR the overhead was below 1 nanosec, approx
> 0.333 ns (but anyone claiming to be able to measure variations below
> 2 ns accurately should be questioned...)
> 
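A rough way to reproduce that kind of number (my sketch, not Jesper's
methodology; assumes x86-64, GCC builtins, and a pinned, warmed-up
core) is to time a tight loop touching an already-hot line, once with
and once without the redundant prefetch, and compare:

	#include <stdint.h>
	#include <stdio.h>
	#include <x86intrin.h>

	#define ITERS 100000000ULL

	static char buf[64] __attribute__((aligned(64)));

	int main(void)
	{
		volatile char sink;
		uint64_t t0, t1;

		t0 = __rdtsc();
		for (uint64_t i = 0; i < ITERS; i++) {
			/* Redundant on purpose: the line stays hot in L1. */
			_mm_prefetch(buf, _MM_HINT_T0);
			sink = buf[0];
		}
		t1 = __rdtsc();
		(void)sink;
		printf("%.3f cycles/iter incl. loop overhead\n",
		       (double)(t1 - t0) / ITERS);
		return 0;
	}

Run it as-is and again with the _mm_prefetch() line removed; the delta,
converted from TSC cycles to ns, is the per-prefetch cost (with all the
sub-2ns accuracy caveats above).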
> > > > As far as Jakub's comment about combining the functions goes, I
> > > > would be okay with that. We just need to make it a static inline
> > > > function available to all the network drivers.
> > > >  
> > > 
> > > Agreed, we will drop this patch for now and Tariq will address it
> > > in the next version.
> 
> I don't mind the patch, and Alex provided a good argument why it
> still makes sense.
> 

Sure, but it is better to have one static inline helper function that
is used across all drivers, as Jakub and Alex suggested. One day it
might become arch/cacheline dependent, and all drivers would benefit
from any change to it.
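To illustrate the direction (the config symbol below is hypothetical,
it does not exist today): once the helper is shared, an arch that is
known to prefetch the adjacent line in hardware could opt out in one
place, and every driver would pick the change up for free:

	/* Hypothetical evolution of the shared helper; the
	 * CONFIG_ARCH_HAS_ADJACENT_LINE_PREFETCH symbol is only here
	 * to illustrate the point. */
	static inline void net_prefetch_headers(void *p)
	{
		prefetch(p);
	#if L1_CACHE_BYTES < 128 && !defined(CONFIG_ARCH_HAS_ADJACENT_LINE_PREFETCH)
		prefetch(p + L1_CACHE_BYTES);
	#endif
	}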
