On Tue, 10 Oct 2017 11:29:33 -0400
Willem de Bruijn <willemdebruijn.ker...@gmail.com> wrote:

> On Mon, Oct 9, 2017 at 11:52 PM, David Miller <da...@davemloft.net> wrote:
> > From: Willem de Bruijn <willemdebruijn.ker...@gmail.com>
> > Date: Fri,  6 Oct 2017 18:25:13 -0400
> >  
> >> From: Willem de Bruijn <will...@google.com>
> >>
> >> Add zerocopy transfer statistics to the vhost_net/tun zerocopy path.
> >>
> >> I've been using this to verify recent changes to zerocopy tuning [1].
> >> Sharing more widely, as it may be useful in similar future work.
> >>
> > >> Use ethtool stats as the interface, as these are defined per device
> > >> driver and can easily be extended.
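
For reference, the driver-side plumbing behind `ethtool -S` looks
roughly like the sketch below (placeholder names, not the actual
patch):

#include <linux/ethtool.h>
#include <linux/netdevice.h>

/* illustrative counters; a real driver keeps these in its priv struct */
static u64 tun_tx_zerocopy, tun_tx_zerocopy_err;

static const char tun_stat_strings[][ETH_GSTRING_LEN] = {
        "tx_zerocopy",          /* completed zerocopy transmits */
        "tx_zerocopy_err",      /* attempts that fell back to copying */
};

static int tun_get_sset_count(struct net_device *dev, int sset)
{
        return sset == ETH_SS_STATS ? ARRAY_SIZE(tun_stat_strings)
                                    : -EOPNOTSUPP;
}

static void tun_get_strings(struct net_device *dev, u32 sset, u8 *data)
{
        if (sset == ETH_SS_STATS)
                memcpy(data, tun_stat_strings, sizeof(tun_stat_strings));
}

static void tun_get_ethtool_stats(struct net_device *dev,
                                  struct ethtool_stats *stats, u64 *data)
{
        data[0] = READ_ONCE(tun_tx_zerocopy);
        data[1] = READ_ONCE(tun_tx_zerocopy_err);
}

A new counter then only needs an entry in tun_stat_strings and the
matching slot in tun_get_ethtool_stats, which is what makes this
interface easy to extend.
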
> >>
> >> Make the zerocopy release callback take an extra hop through the tun
> >> driver to allow the driver to increment its counters.
> >>
> >> Care must be taken to avoid adding an alloc/free to this hot path.
> >> Since the caller already must allocate a ubuf_info, make it allocate
> >> two at a time and grant one to the tun device.
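
The pair trick would look roughly like the sketch below (made-up
names; the actual patch differs in detail): vhost allocates two
ubuf_info back to back and hands the first to tun, whose completion
callback bumps the counters and then takes the extra hop back to
vhost through the second:

#include <linux/skbuff.h>       /* struct ubuf_info */

struct tun_zc_stats {           /* illustrative stats holder */
        u64 tx_zerocopy;
        u64 tx_zerocopy_err;
};

static void tun_zerocopy_callback(struct ubuf_info *ubuf, bool success)
{
        struct tun_zc_stats *stats = ubuf->ctx; /* assumed: stashed in ctx */
        struct ubuf_info *orig = ubuf + 1;      /* vhost's ubuf_info follows */

        /* plain increments shown for brevity; a production version
         * would want per-queue or per-cpu counters */
        if (success)
                stats->tx_zerocopy++;
        else
                stats->tx_zerocopy_err++;

        /* the extra hop: chain to vhost's release callback */
        orig->callback(orig, success);
}

Since both ubuf_info come from a single allocation, the hot path sees
no extra alloc/free, only the indirect call.
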
> >>
> >>  1/3: introduce ethtool stats (`ethtool -S $DEV`) for tun devices
> >>  2/3: add zerocopy tx and tx_err counters
> >>  3/3: convert vhost_net to pass a pair of ubuf_info to tun
> >>
> >> [1] http://patchwork.ozlabs.org/patch/822613/  
> >
> > This looks mostly fine to me, but I don't know enough about how vhost
> > and tap interact to tell whether this makes sense to upstream.  
> 
> Thanks for taking a look. The need for monitoring these stats has
> come up in a couple of patch evaluation discussions, so I wanted
> to share at least one implementation to get the data.
> 
> Because the choice to use zerocopy is based on heuristics and
> misprediction carries a cost, I think we may even want to be able
> to monitor this continuously in production.
> 
> The implementation is probably not ready for that as is.

Another alternative is to use tracepoints for this.
If you need statistics in production, then per-cpu (or per-queue) stats
would have less impact. Tracepoints have no visible impact unless enabled.
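
For the per-cpu variant, the usual pattern would be something along
these lines (illustrative names; a tracepoint would instead use the
standard TRACE_EVENT boilerplate and cost nothing until enabled):

#include <linux/percpu.h>
#include <linux/u64_stats_sync.h>

struct tun_pcpu_zc_stats {
        u64 tx_zerocopy;
        u64 tx_zerocopy_err;
        struct u64_stats_sync syncp;
};

/* hot path: lockless update of this CPU's copy */
static void tun_zc_inc(struct tun_pcpu_zc_stats __percpu *stats,
                       bool success)
{
        struct tun_pcpu_zc_stats *s = this_cpu_ptr(stats);

        u64_stats_update_begin(&s->syncp);
        if (success)
                s->tx_zerocopy++;
        else
                s->tx_zerocopy_err++;
        u64_stats_update_end(&s->syncp);
}

/* read side (e.g. ethtool): fold all CPUs' counters */
static void tun_zc_read(struct tun_pcpu_zc_stats __percpu *stats,
                        u64 *zc, u64 *zc_err)
{
        int cpu;

        *zc = *zc_err = 0;
        for_each_possible_cpu(cpu) {
                const struct tun_pcpu_zc_stats *s = per_cpu_ptr(stats, cpu);
                unsigned int start;
                u64 a, b;

                do {
                        start = u64_stats_fetch_begin(&s->syncp);
                        a = s->tx_zerocopy;
                        b = s->tx_zerocopy_err;
                } while (u64_stats_fetch_retry(&s->syncp, start));

                *zc += a;
                *zc_err += b;
        }
}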
