Re: [vpp-dev] TCP host stack & small size fifo

2019-07-28 Thread Florin Coras
Hi Max,

Inline.

> On Jul 28, 2019, at 10:47 AM, Max A. wrote:
>
> Hi Florin,
>
> I simplified the application. It sends the request and reads all the data
> from the server using an 8 KB buffer. The fifo size is set to 8 KB. In the
> attached dump [1] you can see that in packet number 14 the TCP window size
> is exceeded. …

Re: [vpp-dev] TCP host stack & small size fifo

2019-07-28 Thread Max A. via Lists.Fd.Io
Hi Florin,

I simplified the application. It sends the request and reads all the data
from the server using an 8 KB buffer. The fifo size is set to 8 KB. In the
attached dump [1] you can see that in packet number 14 the TCP window size
is exceeded. My application reports …
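As a point of reference, a minimal sketch of the kind of drain loop described
above, assuming a plain POSIX socket (e.g. run under the VCL LD_PRELOAD shim,
or with the analogous vppcom calls). The 8 KB buffer size is the only detail
taken from the thread; everything else is illustrative:

  #include <sys/types.h>
  #include <sys/socket.h>

  /* Read the whole response, 8 KB at a time; each recv() drains the rx fifo
   * and so reopens the advertised receive window. */
  static ssize_t
  drain_response (int fd)
  {
    char buf[8192];
    ssize_t n, total = 0;

    while ((n = recv (fd, buf, sizeof (buf), 0)) > 0)
      total += n;
    return n < 0 ? -1 : total;
  }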

Re: [vpp-dev] TCP host stack & small size fifo

2019-07-26 Thread Florin Coras
Hi Max,

By the looks of it, the result is different, although not perfect. That is,
you can now see multiple packets (more than the 14k) exchanged before the
window goes to zero. How are you reading the data in vcl, i.e., how large is
your read buffer? I hope it’s at least around 8-14 kB. Also, …

Re: [vpp-dev] TCP host stack & small size fifo

2019-07-26 Thread Max A. via Lists.Fd.Io
Hi Florin,

> That’s an important difference because in case of the proxy, you cannot
> dequeue the data from the fifo before you send it to the actual destination
> and it gets acknowledged. That means, you need to wait at least one rtt (to
> the final destination) before you can make space in …

Re: [vpp-dev] TCP host stack & small size fifo

2019-07-25 Thread Florin Coras
Hi Max,

> On Jul 25, 2019, at 6:09 AM, Max A. wrote:
>
> Hi Florin,
>
> I tried to increase the buffer size to 128k. The problem still arises, only
> less often [1]. The smaller the buffer, the more often the problem occurs.

Yup, for maximum throughput, fifo size needs to be large enough to …
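The reply is cut off above; the usual rule of thumb (not stated verbatim in
the preview) is that the rx fifo has to cover the path's bandwidth-delay
product. A rough illustration with assumed numbers (1 Gbit/s path, 1 ms RTT,
neither figure is from the thread):

  BDP = bandwidth x RTT = (10^9 / 8) bytes/s x 0.001 s = 125,000 bytes ≈ 122 KB

On such a path an 8 KB or 16 KB fifo is exhausted after a handful of
MSS-sized segments, while 128 KB sits roughly at the break-even point, which
would be consistent with the zero-window events becoming rarer but not
disappearing.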

Re: [vpp-dev] TCP host stack & small size fifo

2019-07-25 Thread Florin Coras
Hi Max,

> On Jul 25, 2019, at 5:51 AM, Max A. wrote:
>
> Hi Florin,
>
>> As explained above, as long as the sender is faster, this will happen.
>> Still, out of curiosity, can you try this [1] to see if it changes linux’s
>> behavior in any way? Although, I suspect the linux’s window probe t…

Re: [vpp-dev] TCP host stack & small size fifo

2019-07-25 Thread Max A. via Lists.Fd.Io
Hi Florin,

I tried to increase the buffer size to 128k. The problem still arises, only
less often [1]. The smaller the buffer, the more often the problem occurs.

Thanks.

[1] https://drive.google.com/open?id=1KVSzHhPscpSNkdLN0k2gddPJwccpguoo

--
Max A.

Re: [vpp-dev] TCP host stack & small size fifo

2019-07-25 Thread Max A. via Lists.Fd.Io
Hi Florin,

> As explained above, as long as the sender is faster, this will happen.
> Still, out of curiosity, can you try this [1] to see if it changes linux’s
> behavior in any way? Although, I suspect the linux’s window probe timer,
> after a zero window, is not smaller than min rto (which is …

Re: [vpp-dev] TCP host stack & small size fifo

2019-07-24 Thread Florin Coras
Hi Max,

Note how whenever acks go from 192.168.0.1 to 192.168.0.200, the window is
constantly 8k; it never drops, i.e., that buffer never fills. If you pick a
segment, say seq 18825 -> 20273, it’s sent at time 0.000545 and it’s acked at
0.000570. So, the ack went out after 25us, and by that …

Re: [vpp-dev] TCP host stack & small size fifo

2019-07-24 Thread Max A. via Lists.Fd.Io
Hi Florin,

> Well, the question there is how large are the rx buffers. If you never see
> a zero rcv window advertised to the sender, I suspect the rx buffer is
> large enough to sustain the throughput.

Using the reference [1], you can view a dump of downloading the same file
from the same …

Re: [vpp-dev] TCP host stack & small size fifo

2019-07-24 Thread Florin Coras
Hi Max,

Inline.

> On Jul 24, 2019, at 10:09 AM, Max A. wrote:
>
> Hi Florin,
>
> I made a simple epoll tcp proxy (using vcl) and saw the same behavior.
>
> I increased the fifo size to 16k, but I got exactly the same effect. A full
> dump for a session with a buffer size of 16k can be obtained by
> reference [1] …

Re: [vpp-dev] TCP host stack & small size fifo

2019-07-24 Thread Max A. via Lists.Fd.Io
Hi Florin,

I made a simple epoll tcp proxy (using vcl) and saw the same behavior.

I increased the fifo size to 16k, but I got exactly the same effect. A full
dump for a session with a buffer size of 16k can be obtained by reference [1]
(192.168.0.1 is the interface on vpp, 192.168.0.200 is …
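The proxy code itself is not posted in the thread; purely as an illustration,
the data-path half of such an epoll proxy might look roughly like the sketch
below, written against the standard epoll/socket API (under VCL this would
typically run via the LD_PRELOAD shim or the equivalent vppcom_* calls). The
16 KB buffer is the only value taken from the thread; peer_of() is a
hypothetical helper that maps a connection to its paired connection:

  #include <sys/types.h>
  #include <sys/socket.h>
  #include <sys/epoll.h>

  extern int peer_of (int fd);   /* hypothetical: returns the paired connection */

  /* Move whatever arrived on 'from' towards 'to'. Anything that cannot be
   * forwarded straight away has to sit in a buffer/fifo, which is exactly
   * where the rx fifo sizing discussed in this thread comes in. */
  static void
  forward (int from, int to)
  {
    char buf[16384];
    ssize_t n = recv (from, buf, sizeof (buf), 0);
    if (n > 0)
      send (to, buf, n, 0);      /* short-write and error handling omitted */
  }

  static void
  proxy_loop (int epfd)
  {
    struct epoll_event evts[64];
    for (;;)
      {
        int i, n = epoll_wait (epfd, evts, 64, -1 /* block */);
        for (i = 0; i < n; i++)
          forward (evts[i].data.fd, peer_of (evts[i].data.fd));
      }
  }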

Re: [vpp-dev] TCP host stack & small size fifo

2019-07-24 Thread Andrew Yourtchenko
Cool, thanks for clarification!

--a

> On 24 Jul 2019, at 18:37, Florin Coras wrote:
>
> Pretty much. We advertise whatever space we have left in the fifo as
> opposed to 0 and as a result linux backs off.
>
> TCP_NODELAY can force what looks like the problematic packet out sooner.
> However, because of the small rx fifo, the next rcv window will be zero and
> the whole transfer will stall there …

Re: [vpp-dev] TCP host stack & small size fifo

2019-07-24 Thread Florin Coras
Pretty much. We advertise whatever space we have left in the fifo as opposed
to 0 and as a result linux backs off.

TCP_NODELAY can force what looks like the problematic packet out sooner.
However, because of the small rx fifo, the next rcv window will be zero and
the whole transfer will stall there until …
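For completeness, marking the linux-side socket no_delay is a one-line
setsockopt on the connected socket fd; whether the peer in this test actually
sets it is not confirmed anywhere in the thread:

  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <sys/socket.h>

  /* Disable Nagle so sub-MSS segments are pushed out immediately
   * instead of waiting for more data or an ack. */
  int one = 1;
  setsockopt (fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof (one));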

Re: [vpp-dev] TCP host stack & small size fifo

2019-07-24 Thread Andrew Yourtchenko
Just reading the description and not having peeked into the sniffer trace, I
wondered whether this behavior is a side effect of the mitigation of [1] and,
consequently, whether the linux-side sockets are marked as no_delay? [2]

[1]: https://en.wikipedia.org/wiki/Silly_window_syndrome
[2]: https://stackoverflow.c…

Re: [vpp-dev] TCP host stack & small size fifo

2019-07-24 Thread Florin Coras
Hi,

It seems that linux is reluctant to send a segment smaller than the mss, so
it probably delays sending it. Since there’s little fifo space, that’s pretty
much unavoidable. Still, note that as you increase the number of sessions, if
all send traffic at the same rate, then their fair share …
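The preview is cut off, but the arithmetic behind the fair-share point is
straightforward. With assumed numbers (1 Gbit/s link, 1 ms RTT, 100 concurrent
sessions, all purely illustrative):

  per-session rate ≈ 1 Gbit/s / 100 = 10 Mbit/s
  per-session BDP  ≈ 10 Mbit/s x 1 ms = 1,250 bytes

so presumably the point is that once enough sessions share the link, even a
small per-session fifo covers each session's share of the bandwidth.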

[vpp-dev] TCP host stack & small size fifo

2019-07-24 Thread max1976 via Lists.Fd.Io
Hello,

Experimenting with the fifo size, I saw a problem: the smaller the fifo, the
more often tcp window overflow errors occur ("Segment not in receive window"
in vpp terminology). The dump [1] shows the data exchange between the vpp tcp
proxy (192.168.0.1) and the nginx server …
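For context, the fifo sizes being varied here are the ones the application
requests when it attaches to the session layer; with VCL that is usually set
in the VCL config, roughly along these lines (parameter names and values are
illustrative and should be checked against the running VPP release):

  vcl {
    rx-fifo-size 8192
    tx-fifo-size 8192
  }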