As mentioned previously, that is not supported for connects currently.
Regards,
Florin
> On Mar 31, 2022, at 9:44 PM, weizhen9...@163.com wrote:
Hi,
Let me describe our scenario in detail. We use nginx with the VPP host stack as a
proxy, and we have added some features to nginx. For example, nginx actively
closes upstream TCP connections, and this causes a lot of TIME_WAIT states on the
nginx proxy when we test the performance of nginx using the VPP host stack. So we config
Hi,
Given that 20k connections are being actively opened (those on the main thread)
and 40k are established (those on the workers), it looks like TCP is running out
of ports for connects. If possible, either increase the number of destination IPs
for nginx or try “tcp src-address ip1-ip2” and pass in a ran
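To make the port-exhaustion arithmetic above concrete, here is a minimal Python sketch. The ephemeral-port range is an illustrative assumption (not taken from this thread's configs); the point is that each additional source or destination IP multiplies the available 4-tuple space for active opens.

```python
# A TCP connection is identified by the 4-tuple
# (src_ip, src_port, dst_ip, dst_port). For a fixed source IP,
# destination IP, and destination port, only the ephemeral port
# range is available for active opens.
EPHEMERAL_PORTS = 64512 - 1024  # typical usable range; an assumption

def max_active_connects(n_src_ips: int, n_dst_ips: int) -> int:
    """Upper bound on simultaneous outgoing connections to one dst port."""
    return n_src_ips * n_dst_ips * EPHEMERAL_PORTS

# With one source and one destination IP the bound is ~63k ports.
# 20k half-open plus 40k established connections already sits close
# to that, so adding source or destination IPs multiplies the space.
print(max_active_connects(1, 1))  # one (src, dst) pair
print(max_active_connects(2, 1))  # one extra source IP doubles it
```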
Hi,
From these test results, can you see any errors? Why is the performance of
nginx using VPP low?
Thanks.
View/Reply Online (#21174): https://lists.fd.io/g/vpp-dev/message/21174
TCP accepts connections in time-wait (see here [1]) but won’t reuse the ports of
connections in time-wait for connects. If you expect lots of active opens from
nginx to only one destination IP and you have multiple source IPs, you could
try the “tcp src-address” CLI.
Regards,
Florin
[1] https:/
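For reference, the CLI mentioned above accepts either a single address or a range. A sketch of how it might be invoked (the addresses below are placeholders, not from this thread; check your VPP version's help output for the exact syntax):

```shell
# Placeholder addresses for illustration only.
# Allow the TCP stack to use a range of source addresses for
# active opens, spreading connects across more 4-tuples:
vppctl tcp src-address 192.168.1.10 - 192.168.1.20
```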
I didn’t mean you should switch to envoy, just that throughput is pretty low,
probably because of some configuration issue. What that configuration issue is,
unfortunately, is not obvious.
Regarding the kernel parameters, we have time-wait reuse enabled (the equivalent
of tcp_tw_reuse), but that should not matter.
Hi,
The version of vpp is 22.06. The configs are in the attachment.
The performance of nginx using the VPP host stack is low. The most important
test indicator is RPS (requests per second). How can we increase the performance
of nginx using the VPP host stack?
nginx.conf
Description: Binary data
startup.conf
Description: Binary data
Hi,
Could you provide a bit more info about the numbers you are seeing and the type
of test you are doing? Also some details about your configs and vpp version?
As for tcp_tw_recycle, I believe that was removed in Linux 4.12 because it was
causing issues for NAT-ed connections. Did you mean t