On Fri,  1 Dec 2017 12:11:56 -0800
Stephen Hemminger <step...@networkplumber.org> wrote:

> This is another way of addressing the GSO maximum performance issues for
> containers on Azure. The underlying infrastructure uses an overlay
> network, so GSO packets larger than 64K minus the vlan header cause
> either the guest or the host to do expensive software copy and fragmentation.
> 
> The netvsc driver reports the GSO maximum settings correctly; the issue
> is that containers on veth devices still have the larger settings.
> One solution that was examined was propagating the values back
> through the bridge device, but this does not work for cases where
> the virtual container network is done at L3.
> 
> This patch set punts the problem to the orchestration layer that sets
> up the container network. It also enables other virtual devices
> to have configurable settings for GSO maximum.
> 
> Stephen Hemminger (2):
>   rtnetlink: allow GSO maximums to be passed to device
>   veth: allow configuring GSO maximums
> 
>  drivers/net/veth.c   | 20 ++++++++++++++++++++
>  net/core/rtnetlink.c |  2 ++
>  2 files changed, 22 insertions(+)
> 

I would like confirmation from Intel, who are doing the Docker testing,
that this works for them before merging.
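For reference, with this series the orchestration layer would clamp the
veth pair at creation time via rtnetlink. A rough sketch of the intended
usage with iproute2 (assuming an iproute2 build that supports the
gso_max_size/gso_max_segs link attributes; the exact size value here is
illustrative and depends on the underlay's encapsulation overhead):

```shell
# Create a veth pair for the container with GSO maximums clamped
# below 64K to leave room for the overlay/vlan header.
ip link add veth-host gso_max_size 62780 gso_max_segs 32 \
    type veth peer name veth-cont

# Verify the settings took effect (values are shown in ip -d output).
ip -d link show veth-host
```

The orchestrator would typically copy the values reported by the
underlying netvsc device rather than hard-code them.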
