On Fri, Dec 01, 2017 at 03:30:01PM -0800, Stephen Hemminger wrote:
> On Fri,  1 Dec 2017 12:11:56 -0800
> Stephen Hemminger <step...@networkplumber.org> wrote:
> 
> > This is another way of addressing the GSO maximum performance issues for
> > containers on Azure. What happens is that the underlying infrastructure uses
> > an overlay network such that GSO packets over 64K - vlan header end up causing
> > either the guest or the host to do expensive software copy and fragmentation.
> > 
> > The netvsc driver reports GSO maximum settings correctly; the issue
> > is that containers on veth devices still have the larger settings.
> > One solution that was examined was propagating the values back
> > through the bridge device, but this does not work for cases where
> > the virtual container network is done at L3.
> > 
> > This patch set punts the problem to the orchestration layer that sets
> > up the container network. It also enables other virtual devices
> > to have configurable settings for GSO maximum.
> > 
> > Stephen Hemminger (2):
> >   rtnetlink: allow GSO maximums to be passed to device
> >   veth: allow configuring GSO maximums
> > 
> >  drivers/net/veth.c   | 20 ++++++++++++++++++++
> >  net/core/rtnetlink.c |  2 ++
> >  2 files changed, 22 insertions(+)
> > 
> 
> I would like confirmation from Intel, who are doing the Docker testing,
> that this works for them before merging.

This change and its iproute2 counterpart allow creating veth pairs with
specific gso_max_{size,segs} values. Thanks.
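
For illustration, assuming the iproute2 counterpart exposes these as
generic link options at creation time (the exact keywords depend on that
patch), an orchestrator creating the pair itself could do something along
these lines:

    # placeholder names and values, not what netvsc actually reports
    ip link add name veth-host gso_max_size 60000 gso_max_segs 64 \
        type veth peer name veth-cont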

However, the Docker code that sets up veth pairs is compiled into their
Go-based libnetwork, so end users won't be able to tweak GSO settings at
veth creation. In that case, we would need to add ioctl support
(ip/iplink.c:do_set) to allow changes after the veth is created.
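
If such post-creation support were added, adjusting an existing pair after
libnetwork has created it might look something like this (hypothetical
syntax at the time of writing, mirroring the creation-time keywords):

    # hypothetical; not supported by ip link set when this was written
    ip link set dev veth-cont gso_max_size 60000 gso_max_segs 64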
