From: "K. Y. Srinivasan" <k...@microsoft.com>
Date: Sat,  2 Aug 2014 10:42:02 -0700

> Intel did some benchmarking of our network throughput when Linux on Hyper-V
> is used as a gateway. This fix gave us almost 1 Gbps of additional throughput
> on top of the roughly 5 Gbps base throughput we had prior to increasing the
> sendbuf size. The sendbuf mechanism is a copy-based transport which is clearly
> more efficient than the copy-free page-flipping mechanism for small packets.
> In the forwarding scenario we deal only with MTU-sized packets, and
> increasing the size of the sendbuf area gave us the additional performance.
> For what it is worth, I am told that Windows guests on Hyper-V use a similar
> sendbuf size as well.
> 
> The exact value of sendbuf is, I think, less important than the fact that it
> needs to be larger than what Linux can allocate as physically contiguous
> memory. Hence the switch to allocating via vmalloc().
> 
> We currently allocate a 16MB receive buffer, and we use vmalloc() there as
> well. The low-level channel code has also already been modified to deal with
> physically discontiguous memory in the ring-buffer setup.
> 
> Based on Intel's experimentation, throughput improved as the sendbuf size was
> increased up to 16MB, with no further improvement beyond 16MB. I have
> therefore chosen 16MB here.
> 
> Increasing the sendbuf value makes a material difference in small-packet
> handling.
> 
> In this version of the patch, based on David's feedback, I have added
> additional details in the commit log.
> 
> 
> Signed-off-by: K. Y. Srinivasan <k...@microsoft.com>
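
For reference, the change described in the commit log amounts to allocating the
send buffer with a vmalloc-backed allocator instead of a physically contiguous
one. A minimal kernel-context sketch follows (not standalone-runnable; the
NETVSC_SEND_BUFFER_SIZE name, the helper function, and the error handling are
illustrative assumptions, not the exact patch):

```c
#include <linux/vmalloc.h>

/* 16MB send buffer: larger than what kzalloc() can reliably provide as
 * physically contiguous memory, so use a vmalloc-backed allocation. */
#define NETVSC_SEND_BUFFER_SIZE (1024 * 1024 * 16)

static int netvsc_alloc_send_buf(struct netvsc_device *net_device)
{
	/* vzalloc() returns zeroed, virtually contiguous (but possibly
	 * physically discontiguous) memory; the low-level channel code
	 * already handles discontiguous pages in the ring-buffer setup. */
	net_device->send_buf = vzalloc(NETVSC_SEND_BUFFER_SIZE);
	if (!net_device->send_buf)
		return -ENOMEM;

	net_device->send_buf_size = NETVSC_SEND_BUFFER_SIZE;
	return 0;
}
```

The same pattern (vzalloc() plus discontiguity-aware channel setup) is what the
log describes for the existing 16MB receive buffer.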

Applied.
_______________________________________________
devel mailing list
de...@linuxdriverproject.org
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel