On Wed, 23 May 2018 11:34:22 +0200 Daniel Borkmann <dan...@iogearbox.net> wrote:
> > +int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp)
> > +{
> > +	struct net_device *dev = dst->dev;
> > +	struct xdp_frame *xdpf;
> > +	int err;
> > +
> > +	if (!dev->netdev_ops->ndo_xdp_xmit)
> > +		return -EOPNOTSUPP;
> > +
> > +	xdpf = convert_to_xdp_frame(xdp);
> > +	if (unlikely(!xdpf))
> > +		return -EOVERFLOW;
> > +
> > +	/* TODO: implement a bulking/enqueue step later */
> > +	err = dev->netdev_ops->ndo_xdp_xmit(dev, xdpf);
> > +	if (err)
> > +		return err;
> > +
> > +	return 0;
>
> The 'err' is just unnecessary, let's just do:
>
> 	return dev->netdev_ops->ndo_xdp_xmit(dev, xdpf);
>
> Later after the other patches this becomes:
>
> 	return bq_enqueue(dst, xdpf, dev_rx);

I agree, I'll fix this up in V5.

After this patchset gets applied, there are also other opportunities to
do similar micro-optimizations. I have a branch (on top of this
patchset) with such micro-optimizations (including this one), and I've
looked at the resulting asm-code layout. But my benchmarks only show a
2 nanosecond improvement for all these micro-optimizations combined
(the focus being to reduce the I-cache footprint of the asm code for
xdp_do_redirect). A sketch of the simplified function is included below
for reference.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
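
For reference, a sketch of dev_map_enqueue() with the suggested
simplification folded in. This assumes the surrounding logic stays as in
the quoted hunk; the actual V5 code may differ:

	int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp)
	{
		struct net_device *dev = dst->dev;
		struct xdp_frame *xdpf;

		/* Bail out early if the target device cannot transmit XDP frames */
		if (!dev->netdev_ops->ndo_xdp_xmit)
			return -EOPNOTSUPP;

		/* Convert the xdp_buff into a frame that can outlive this call */
		xdpf = convert_to_xdp_frame(xdp);
		if (unlikely(!xdpf))
			return -EOVERFLOW;

		/* TODO: implement a bulking/enqueue step later.
		 * Return the driver's result directly; the local 'err'
		 * variable is no longer needed.
		 */
		return dev->netdev_ops->ndo_xdp_xmit(dev, xdpf);
	}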