Cool, thanks for clarification!

--a

> On 24 Jul 2019, at 18:37, Florin Coras <fcoras.li...@gmail.com> wrote:
> 
> Pretty much. We advertise whatever space we have left in the fifo as opposed 
> to 0, and as a result linux backs off.
> 
> TCP_NODELAY can force what looks like the problematic packet out sooner. 
> However, because of the small rx fifo, the next rcv window will be zero and 
> the whole transfer will stall there until the other side of the proxy 
> consumes the data. 
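> 
> To illustrate the idea, a minimal sketch (not the actual vpp code; the names 
> rx_state_t, bytes_pending and advertise_window are made up for illustration):
> 
>     #include <stdint.h>
> 
>     /* Hypothetical receiver state: a fixed-size rx fifo. */
>     typedef struct
>     {
>       uint32_t rx_fifo_size;  /* e.g. 8192 bytes */
>       uint32_t bytes_pending; /* buffered data not yet read by the app */
>     } rx_state_t;
> 
>     /* Advertise whatever space is left in the fifo instead of clamping to 0.
>      * With a small fifo this quickly drops below one MSS, and a Linux sender
>      * backs off rather than sending a tiny segment. */
>     static uint32_t
>     advertise_window (const rx_state_t * rx)
>     {
>       return rx->rx_fifo_size - rx->bytes_pending;
>     }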
> 
> Florin
> 
>> On Jul 24, 2019, at 9:12 AM, Andrew 👽 Yourtchenko <ayour...@gmail.com> wrote:
>> 
>> Just reading the description and not having peeked into the sniffer
>> trace, I wondered whether this behavior is a side effect of the
>> mitigation of [1]; consequently, are the linux-side sockets marked as
>> no_delay? [2]
>> 
>> [1]: https://en.wikipedia.org/wiki/Silly_window_syndrome
>> 
>> [2]: 
>> https://stackoverflow.com/questions/17842406/how-would-one-disable-nagles-algorithm-in-linux
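>> 
>> For reference, [2] boils down to a single setsockopt call; a minimal sketch
>> (fd is assumed to be an already-created TCP socket):
>> 
>>     #include <netinet/in.h>
>>     #include <netinet/tcp.h>
>>     #include <sys/socket.h>
>> 
>>     /* Disable Nagle so sub-MSS segments are pushed out immediately. */
>>     static int
>>     set_no_delay (int fd)
>>     {
>>       int one = 1;
>>       return setsockopt (fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof (one));
>>     }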
>> 
>> --a
>> 
>>> On 7/24/19, Florin Coras <fcoras.li...@gmail.com> wrote:
>>> Hi,
>>> 
>>> It seems that linux is reluctant to send a segment smaller than the mss, so
>>> it probably delays sending it. Since there’s little fifo space, that’s
>>> pretty much unavoidable.
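>>> 
>>> Roughly, the sender-side decision looks like the following sketch (a
>>> simplified Nagle-style rule, not the actual Linux code; names are
>>> illustrative):
>>> 
>>>     #include <stdbool.h>
>>>     #include <stdint.h>
>>> 
>>>     /* Send immediately only if a full MSS fits in the peer's advertised
>>>      * window, or if nothing is in flight. With an 8 KB rx fifo the
>>>      * advertised window is often sub-MSS, so the small tail segment gets
>>>      * held back. */
>>>     static bool
>>>     should_send_now (uint32_t queued, uint32_t mss,
>>>                      uint32_t peer_wnd, uint32_t in_flight)
>>>     {
>>>       if (queued >= mss && peer_wnd >= mss)
>>>         return true;   /* a full-sized segment fits */
>>>       if (in_flight == 0)
>>>         return true;   /* nothing unacked: push the small tail */
>>>       return false;    /* otherwise wait for more data or an ACK */
>>>     }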
>>> 
>>> Still, note that as you increase the number of sessions, if all send traffic
>>> at the same rate, then their fair share will be considerably lower than the
>>> maximum you can achieve on your interfaces. If you expect some sessions to
>>> be “elephant flows”, you could solve the issue by growing their fifos (see
>>> segment_manager_grow_fifo) from the app. The builtin tcp proxy does not
>>> support this at this time, so you’ll have to do it yourself.
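>>> 
>>> A rough sketch of what the app side could do (the signature of
>>> segment_manager_grow_fifo shown below is an assumption, and
>>> is_elephant_flow / ELEPHANT_RX_FIFO_SIZE are placeholders; check
>>> segment_manager.h for the real API):
>>> 
>>>     /* Hypothetical helper in the proxy app: grow the rx fifo of a session
>>>      * identified as an elephant flow so its advertised window stays well
>>>      * above one MSS. */
>>>     #define ELEPHANT_RX_FIFO_SIZE (1 << 20)  /* e.g. 1 MB, placeholder */
>>> 
>>>     static void
>>>     maybe_grow_rx_fifo (segment_manager_t * sm, svm_fifo_t * rx_fifo,
>>>                         int is_elephant_flow)
>>>     {
>>>       if (is_elephant_flow)
>>>         segment_manager_grow_fifo (sm, rx_fifo, ELEPHANT_RX_FIFO_SIZE);
>>>     }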
>>> 
>>> Florin
>>> 
>>>> On Jul 24, 2019, at 1:34 AM, max1976 via Lists.Fd.Io
>>>> <max1976=mail...@lists.fd.io> wrote:
>>>> 
>>>> Hello,
>>>> 
>>>> Experimenting with the fifo size, I ran into a problem: the smaller the
>>>> fifo, the more often tcp window overflow errors occur ("Segment not in
>>>> receive window" in vpp terminology). The dump [1] shows the data exchange
>>>> between the vpp tcp proxy (192.168.0.1) and an nginx server running on
>>>> Linux (192.168.0.200), with the rx fifo size in vpp set to 8192 bytes. The
>>>> red arrow indicates that vpp is waiting for the last piece of data to fill
>>>> the buffer. The green arrow indicates that the Linux host stack is sending
>>>> that data with a significant delay.
>>>> This behavior significantly reduces throughput. I plan to use a large
>>>> number of simultaneous sessions, so I cannot make the fifos too large. How
>>>> can I solve this problem?
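>>>> 
>>>> To illustrate why the fifos cannot simply be made large (a back-of-the-
>>>> envelope sketch; the numbers are illustrative, not my actual deployment):
>>>> 
>>>>     #include <stdint.h>
>>>>     #include <stdio.h>
>>>> 
>>>>     int
>>>>     main (void)
>>>>     {
>>>>       uint64_t n_sessions = 1000000; /* illustrative session count */
>>>>       uint64_t fifo_size = 8192;     /* bytes, for each of rx and tx */
>>>>       uint64_t total = n_sessions * 2 * fifo_size;
>>>>       /* 1M sessions * 2 fifos * 8 KB is already ~15 GiB of fifo memory,
>>>>        * so simply making every fifo large does not scale. */
>>>>       printf ("fifo memory: %llu GiB\n", (unsigned long long) (total >> 30));
>>>>       return 0;
>>>>     }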
>>>> 
>>>> Thanks.
>>>> [1] https://monosnap.com/file/XfDjcqvpofIR7fJ6lEXgoyCB17LdfY
>>> 
>>> 
> 