> On 29 Apr 2025, at 0:37, Eric Blake <ebl...@redhat.com> wrote:
> 
> On Sun, Apr 27, 2025 at 07:50:29PM +0300, Nir Soffer wrote:
>> As on macOS, we have a similar issue on Linux. For a TCP socket the
>> send buffer size is 2626560 bytes (~2.5 MiB) and we get good
>> performance. However, for a unix socket the default and maximum buffer
>> size is 212992 bytes (208 KiB), and we see poor performance when using
>> one NBD connection, up to 4 times slower than macOS on the same machine.
>> 
> 
>> +++ b/io/channel-socket.c
>> @@ -39,12 +39,13 @@
>> #define SOCKET_MAX_FDS 16
>> 
>> /*
>> - * Testing shows that 2m send buffer gives best throuput and lowest cpu usage.
>> - * Changing the receive buffer size has no effect on performance.
>> + * Testing shows that 2m send buffer is optimal. Changing the receive buffer
>> + * size has no effect on performance.
>> + * On Linux we need to increase net.core.wmem_max to make this effective.
> 
> How can we reliably inform the user of the need to tweak this setting?

Maybe log a warning (or debug message) if net.core.wmem_max is too small?

For example libkrun does this:
https://github.com/containers/libkrun/blob/main/src/devices/src/virtio/net/gvproxy.rs#L70
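
For QEMU, a minimal sketch of such a check might look like this
(hypothetical and untested; the helper name, message wording, and
placement are my assumptions, not actual QEMU code):

#include <stdio.h>

/* 2 MiB, matching the send buffer size set by this patch */
#define WANT_SNDBUF (2 * 1024 * 1024)

/* Read the Linux cap on SO_SNDBUF and warn if it is smaller than the
 * buffer we are about to request.  setsockopt(SO_SNDBUF) does not fail
 * for an oversized value; the kernel silently clamps it to
 * net.core.wmem_max, so the only way to notice is to check.
 */
static void check_wmem_max(void)
{
    FILE *f = fopen("/proc/sys/net/core/wmem_max", "r");
    long wmem_max;

    if (!f) {
        return;  /* not Linux, or /proc is unavailable */
    }
    if (fscanf(f, "%ld", &wmem_max) == 1 && wmem_max < WANT_SNDBUF) {
        fprintf(stderr,
                "warning: net.core.wmem_max (%ld) is below %d; the unix "
                "socket send buffer will be capped, consider raising it "
                "with 'sysctl -w net.core.wmem_max=%d'\n",
                wmem_max, WANT_SNDBUF, WANT_SNDBUF);
    }
    fclose(f);
}

An alternative that avoids /proc is to call getsockopt(SO_SNDBUF) after
setsockopt() and compare the result with what was requested, keeping in
mind that Linux reports back twice the value that was set.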

If we document this, users who read the docs can tune their systems
accordingly.
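For example, the docs could suggest raising the cap to match the 2 MiB
send buffer with 'sysctl -w net.core.wmem_max=2097152', and persisting
it via a drop-in under /etc/sysctl.d/ (the exact mechanism is
distro-specific).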

What is the best place to document this?

> Is it worth a bug report to the Linux kernel folks asking them to
> reconsider the default cap on this setting, now that modern systems
> tend to have more memory than when the cap was first introduced, and
> given that we have demonstrable numbers showing why it is beneficial,
> especially for parity with TCP?

Makes sense.

What is the best place to discuss this or file a bug?

