On 4/08/2016 11:55 p.m., brendan kearney wrote:
> At what point does buffer bloat set in? I have a linux router with the
> below sysctl tweaks load balancing with haproxy to 2 squid instances. I
> have 4 x 1Gb interfaces bonded and have bumped the ring buffers on RX and
> TX to 1024 on all interfaces.
> The squid servers run with almost the same hardware ...
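For reference, RX/TX ring sizes on Linux are normally inspected and raised with ethtool; a minimal sketch, assuming placeholder interface names rather than anything from the thread:

  # show the current and maximum ring sizes for one slave of the bond
  ethtool -g eth0
  # raise both rings to 1024 descriptors (repeat for each bonded NIC)
  ethtool -G eth0 rx 1024 tx 1024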
On 08/04/2016 10:08 AM, Heiler Bemerguy wrote:
Sorry Amos, but I've tested with modifying JUST these two sysctl
parameters, and the difference is huge.
Without the maximum TCP buffers set to 8MB I got a 110KB/s download
speed, and with an 8MB kernel buffer I got a 9.5MB/s download speed
(via squid, of course).
I think it has to do with the ...
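A jump like that is consistent with the TCP window being capped by the buffer maximum on a long-RTT path: a single connection's throughput is roughly window / RTT, so (using an assumed 100 ms RTT, not a figure from the thread) a 64 KB window tops out near 640 KB/s while an 8 MB window allows on the order of 80 MB/s. The limits in effect can be checked with:

  sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
  sysctl net.core.rmem_max net.core.wmem_max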
On 08/03/2016 10:27 AM, Amos Jeffries wrote:
> On 3/08/2016 9:45 p.m., Marcus Kool wrote:
>> On 08/03/2016 12:30 AM, Amos Jeffries wrote:
>>> If that's not fast enough, you may also wish to patch in a larger value
>>> for HTTP_REQBUF_SZ in src/defines.h to 64KB with a matching increase to
>>> read_ahead_gap

I think it doesn't really matter how much squid sets its default buffer.
The linux kernel will upscale to the maximum set by the third option
(and the TCP Window Size will follow that).

net.ipv4.tcp_wmem = 1024 32768 8388608
net.ipv4.tcp_rmem = 1024 32768 8388608

--
Best Regards,
Heiler Bemerguy
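For context, the three numbers in each tcp_rmem/tcp_wmem line are the minimum, default and maximum socket buffer sizes in bytes; with autotuning the kernel grows a connection's buffer from the default towards the maximum as demand rises. A minimal sketch of applying the values above at runtime:

  # receive-side autotuning should be enabled (it is by default on modern kernels)
  sysctl net.ipv4.tcp_moderate_rcvbuf
  sysctl -w net.ipv4.tcp_rmem="1024 32768 8388608"
  sysctl -w net.ipv4.tcp_wmem="1024 32768 8388608"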
On 08/03/2016 12:30 AM, Amos Jeffries wrote:
If that's not fast enough, you may also wish to patch in a larger value
for HTTP_REQBUF_SZ in src/defines.h to 64KB with a matching increase to
read_ahead_gap in squid.conf. That has had some mixed results though,
faster traffic, but also some assert...
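A rough sketch of that change, assuming the stock 4 KB buffer (the exact default and surrounding code differ between Squid versions, so treat it as illustrative rather than a tested patch):

In src/defines.h, replace the existing definition with:
  #define HTTP_REQBUF_SZ 65536

In squid.conf, raise the read-ahead window to match:
  read_ahead_gap 64 KB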
On 3/08/2016 2:42 p.m., Heiler Bemerguy wrote:
in /etc/sysctl.conf, add:
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.wmem_default = 32768
net.core.rmem_default = 32768
net.ipv4.tcp_wmem = 1024 32768 8388608
net.ipv4.tcp_rmem = 1024 32768 8388608
--
Best Regards,
Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-48
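Those lines can be loaded without a reboot and then verified; note that net.core.rmem_max / wmem_max also cap what an application may request for itself via setsockopt(). A minimal sketch:

  sysctl -p /etc/sysctl.conf
  sysctl net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem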
Hi All,
We've been running Squid for many years. Recently we upgraded our
internet link to a 1Gbps link, but we are finding that squid is not able
to drive this link to its full potential (previous links have been
30Mbps or 100Mbps).
Currently running squid 3.5.1, but have tried 3.4, 3.3, 3.2 ...
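One quick way to see whether the proxy itself is the bottleneck is to time the same large download directly and through Squid; a minimal sketch, where the proxy address and test URL are placeholders:

  # direct
  curl -o /dev/null -w 'direct: %{speed_download} bytes/s\n' http://example.com/bigfile
  # via squid
  curl -x http://127.0.0.1:3128 -o /dev/null -w 'proxy: %{speed_download} bytes/s\n' http://example.com/bigfile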