I didn’t mean you should switch to envoy, only that throughput is quite low, probably because of some configuration. Unfortunately, it’s not obvious what that configuration is.
Regarding the kernel parameters, we have time-wait reuse enabled (the equivalent of tcp_tw_reuse), but that should not matter unless nginx establishes a new connection for each request. Does it, as part of the features you’ve added?

Could you run the following in vpp:
- clear run; show run — while under load
- show session — to see the number of sessions in vpp

Also, could you reduce the number of workers, to maybe 1-2 in nginx and 1 in vpp, to get some baseline performance readings?

Regards,
Florin

[1] https://wiki.fd.io/images/0/08/Using_vpp_as_envoys_network_stack.pdf

> On Mar 31, 2022, at 8:56 AM, weizhen9...@163.com wrote:
>
> Hi,
>
> I'm doing proxying with nginx, and we have developed some new functions in
> nginx. The performance of nginx on the kernel host stack is higher than nginx
> using the vpp host stack. When testing nginx on the kernel host stack, we
> modify the kernel parameters. When using the vpp host stack, how can we
> modify similar parameters?
>
> Besides, we don't use envoy. We developed some functions in nginx.
>>
>> Hi,
>>
>> Spoke too soon. Noticed you’re doing proxying with nginx.
>>
>> What does clear run; show run report in vpp when under load?
>>
>> Side note, with envoy I’m seeing much better numbers. See for instance slide
>> 12 here [1]. So I suspect this is a configuration issue but can’t tell
>> upfront what it is.
>>
>> Regards,
>> Florin
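[For readers following along, a sketch of the diagnostics and worker reduction Florin asks for above. This is a config/command fragment, not something the thread itself provides: the file paths and core numbers are assumptions, so adjust them to your deployment. The vppctl commands and the startup.conf/nginx.conf directives are standard.]

```shell
# Reset and then read vpp's per-node runtime counters while the
# benchmark is generating load (leave a few seconds between the two
# so the vectors/call and clocks columns reflect steady state):
vppctl clear run
vppctl show run

# Count the sessions the vpp host stack is currently carrying:
vppctl show session

# --- baseline worker settings (assumed file locations) ---

# /etc/vpp/startup.conf — run vpp with a single worker thread:
#   cpu {
#     main-core 0
#     workers 1
#   }

# /etc/nginx/nginx.conf — reduce nginx to 1-2 workers:
#   worker_processes 2;
```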