On Sun, Apr 27, 2025 at 07:50:29PM +0300, Nir Soffer wrote:
> Like macOS, we have a similar issue on Linux. For a TCP socket the
> send buffer size is 2626560 bytes (~2.5 MiB) and we get good
> performance. However, for a unix socket the default and maximum
> buffer size is 212992 bytes (208 KiB), and we see poor performance
> when using one NBD connection, up to 4 times slower than macOS on
> the same machine.
> 
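As a quick cross-check (a minimal standalone sketch of mine, not part
of the patch), something like this prints the default SO_SNDBUF of a
fresh Unix socket; on a stock Linux box it should print the 212992
bytes quoted above:

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    int sndbuf;
    socklen_t len = sizeof(sndbuf);

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) == 0) {
        /* Linux reports sk_sndbuf, which starts out at
         * net.core.wmem_default. */
        printf("default SO_SNDBUF: %d bytes\n", sndbuf);
    }
    close(fd);
    return 0;
}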

> +++ b/io/channel-socket.c
> @@ -39,12 +39,13 @@
>  #define SOCKET_MAX_FDS 16
>  
>  /*
> - * Testing shows that 2m send buffer gives best throuput and lowest cpu usage.
> - * Changing the receive buffer size has no effect on performance.
> + * Testing shows that 2m send buffer is optimal. Changing the receive buffer
> + * size has no effect on performance.
> + * On Linux we need to increase net.core.wmem_max to make this effective.

How can we reliably inform the user of the need to tweak this setting?
Is it worth filing a bug report asking the Linux kernel folks to
reconsider the default cap on this setting? Modern systems tend to
have more memory than when the cap was first introduced, and we now
have demonstrable numbers showing why a larger value is beneficial,
especially for parity with TCP.
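One possible way to surface it in QEMU itself (a rough sketch; the
helper name and the warning text are made up for illustration, and
real code would go through QEMU's error reporting rather than
stderr): set the buffer, read the value back with getsockopt(), and
warn when the kernel clamped the request:

#include <stdio.h>
#include <sys/socket.h>

static void try_set_sndbuf(int fd, int want)
{
    int got = 0;
    socklen_t len = sizeof(got);

    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &want, sizeof(want)) < 0) {
        perror("setsockopt(SO_SNDBUF)");
        return;
    }
    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &got, &len) < 0) {
        perror("getsockopt(SO_SNDBUF)");
        return;
    }
    /* Linux doubles the configured value to account for bookkeeping
     * overhead, so the effective buffer is got / 2. */
    if (got < 2 * want) {
        fprintf(stderr, "warning: SO_SNDBUF clamped to %d of %d "
                "requested bytes; consider raising net.core.wmem_max\n",
                got / 2, want);
    }
}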

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.
Virtualization:  qemu.org | libguestfs.org

