Bryan,

Thanks for the response. Good reminder that the transmission buffer limits the
TCP transmission window, which needs to be sized for the bandwidth-delay
product. I am an L2/L3 guy, so not a TCP expert :-). However, I don't know
whether the default (I believe 1 MB in our RHEL release) causes any problems
in our situation.
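For reference, here is the arithmetic in a quick sketch (the 1 Gbit/s and
50 ms figures below are purely illustrative assumptions, not measurements from
our setup):

```python
# Bandwidth-delay product: the amount of data "in flight" that the
# send buffer must be able to hold to keep the pipe full.
def bdp_bytes(bandwidth_bits_per_sec, rtt_seconds):
    return bandwidth_bits_per_sec / 8 * rtt_seconds

# Illustrative numbers only: a 1 Gbit/s link with a 50 ms RTT.
buf = bdp_bytes(1e9, 0.050)
print(f"required buffer: {buf / 1e6:.2f} MB")  # 6.25 MB -- well above a 1 MB default
```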

We were more focused on whether a smaller TCP transmission buffer size required
more frequent servicing by ATS, and whether ATS had problems keeping the buffer
from emptying while data still needed to be sent. We did some straces of ATS
behavior and found that the delay between successive writev() calls following
an EAGAIN was sometimes fairly long (long enough to jeopardize the time
constraints for delivery of streaming data). On my development box, I saw the
delay between EAGAIN and retry vary from 8 ms to 1300 ms. I believe Jeremy saw
a case in the lab where it was 3 seconds!

My setup was just a single VM running ATS 7.1.4, with curl (rate-limit option
set to 1 MB) fetching a previously cached 10 MB data file. I took a look at the
code, and on Linux ATS should be using the epoll_wait mechanism (10 ms
timeout), driven by a polling continuation. I did not see anything there that
should cause retry delays of 1+ seconds. Any thoughts?
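For what it's worth, the behavior I'd expect from that code path looks roughly
like the following (a minimal Linux sketch using a socketpair, not ATS code:
a non-blocking write that would block raises EAGAIN, and the writer then waits
for EPOLLOUT before retrying):

```python
import select
import socket

# Non-blocking writer/reader pair standing in for the client connection.
w, r = socket.socketpair()
w.setblocking(False)

# Fill the send buffer until the kernel returns EAGAIN.
sent = 0
try:
    while True:
        sent += w.send(b"x" * 65536)
except BlockingIOError:          # EAGAIN / EWOULDBLOCK
    pass

# Register for EPOLLOUT and wait with a 10 ms timeout, analogous to
# the polling continuation's epoll_wait loop.
ep = select.epoll()
ep.register(w.fileno(), select.EPOLLOUT)
assert ep.poll(0.010) == []      # buffer still full: no writable event yet

r.recv(sent)                     # peer drains the buffer...
events = ep.poll(0.010)          # ...so the writable event fires promptly
ep.close(); w.close(); r.close()
```

Given level-triggered epoll, the writable event should arrive on the very next
10 ms poll once buffer space frees up, which is why the multi-second gaps we
observed seem anomalous.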

Thanks,
Peter

-----Original Message-----
From: Bryan Call <bc...@apache.org> 
Sent: Thursday, September 12, 2019 9:24 AM
To: dev <dev@trafficserver.apache.org>
Subject: Re: TCP socket buffer size.

I have seen issues where you can’t reach the max throughput of the network
connection without increasing the TCP buffers, because it affects the max TCP
window size (bandwidth-delay product).  Here is a calculator I have used before
to figure out what your buffer size should be:
https://www.switch.ch/network/tools/tcp_throughput/

Theoretically there should be some latency difference between having a small 
buffer size vs a larger one (up to some limit), but my guess is it would be 
hard to measure because it would be so small.
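For anyone experimenting, raising the send buffer per-socket is just a
setsockopt; a quick sketch (the 4 MB figure is an arbitrary example, and the
kernel caps the grant at net.core.wmem_max):

```python
import socket

DESIRED_SNDBUF = 4 * 1024 * 1024  # example value, not a recommendation

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, DESIRED_SNDBUF)

# Linux doubles the requested value to account for bookkeeping overhead,
# and silently caps the result at net.core.wmem_max.
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(f"requested {DESIRED_SNDBUF}, kernel granted {granted}")
s.close()
```

If memory serves, ATS exposes the same knob through its sock_send_buffer_size
settings in records.config, but check the docs for your release.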

-Bryan


> On Sep 11, 2019, at 11:50 AM, Chou, Peter <pbc...@labs.att.com> wrote:
> 
> Hi all,
> 
> Sometimes we see lots of EAGAIN result codes from ATS trying to write to the 
> TCP socket file descriptor. I presume this is typically due to congestion or a 
> rate mismatch between the client and ATS. Is there any benefit to increasing the 
> TCP socket buffer size, which would reduce the number of these write 
> operations? Specifically, should we expect any kind of latency difference? There 
> is some concern about how long it takes ATS to re-schedule that particular VC 
> for another write attempt.
> 
> Thanks,
> Peter
