Hi Jie, 

Thanks for the info! And yes, it is possible that beyond a certain threshold 
(and 30k CPS seems a much more plausible one than 500) other main-thread 
processes are affected by the proxy's connects. This is definitely an area 
that needs to be optimized going forward. Nonetheless, I should also note 
that the CPU you're using is almost 10 years old at this point (and the 
product page says 8 cores / 16 threads, so I wonder if there wasn't a typo in 
your core count). 

A couple of additional comments:
- Can CPS go even higher, in spite of the CLI issues? 
- Could you try changing max_connects in session_mq_handle_connects_rpc to 
something like 64? Does it change anything? (See the toy model below for why 
I expect that knob to matter.) 
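
To spell out the reasoning: connects queued to main are serviced in batches 
of at most max_connects per dispatch, so batch size times dispatch rate caps 
the achievable connect throughput. A toy model of that arithmetic (all 
numbers made up for illustration; only max_connects and 
session_mq_handle_connects_rpc are real names, from 
src/vnet/session/session_node.c):

  #include <stdio.h>

  int
  main (void)
  {
    /* assumed main-thread dispatch rate, purely illustrative */
    double dispatches_per_sec = 1000.0;
    unsigned caps[] = { 32, 64, 128 }; /* candidate max_connects values */

    for (unsigned i = 0; i < 3; i++)
      printf ("max_connects=%u -> ceiling ~%.0f connects/s\n", caps[i],
              caps[i] * dispatches_per_sec);
    return 0;
  }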

Regards, 
Florin

> On Dec 23, 2021, at 11:29 PM, Jie Deng <deng...@webray.com.cn> wrote:
> 
> Hi Florin,
> 
> Thanks for your timely reply.
> > I’m doing testing with things like envoy and at 200 CPS the CLI is not 
> > affected. Could be an issue related to configuration or maybe hardware. 
> That was indeed caused by my environment: I ran that test in a local 
> virtual machine.
> 
> When I test in a real physical environment with a Spirent tester, low-rate 
> CPS, e.g. 1K req/s, does not affect the CLI. But when the CPS reaches 30000 
> req/s, the VPP CLI becomes very sluggish and cannot execute commands 
> normally.
> 
> My test machine has an Intel(R) Xeon(R) E5-2670 CPU; there are 24 cores.
> top - 14:44:52 up  3:06,  4 users,  load average: 5.65, 4.96, 3.95
> Tasks: 311 total,   6 running, 305 sleeping,   0 stopped,   0 zombie
> %Cpu0  :  0.0 us,  0.0 sy,  0.0 ni, 96.2 id,  0.0 wa,  0.0 hi,  3.8 si,  0.0 st
> %Cpu1  : 99.7 us,  0.3 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu2  : 99.7 us,  0.3 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu3  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu4  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu5  :  0.3 us,  0.0 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu6  : 79.1 us, 20.9 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu7  : 75.8 us, 24.2 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu8  : 75.7 us, 24.3 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu9  : 77.2 us, 22.8 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu10 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu11 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu12 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu13 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu14 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu15 :  0.7 us,  0.0 sy,  0.0 ni, 99.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu16 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu17 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu18 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu19 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu20 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu21 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu22 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu23 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> cpu1 is the VPP main core, cpu2 is the VPP worker, and cpu6-9 are the nginx 
> workers.
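> 
> For reference, in startup.conf terms that pinning would correspond to 
> something like the stanza below (a sketch of a matching config, not a copy 
> of my actual file; the nginx workers are pinned separately, e.g. with 
> nginx's worker_cpu_affinity):
> 
>   cpu {
>     main-core 1
>     corelist-workers 2
>   }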
> 
> 
> 
> Pressed send a bit too soon. I also tried a simple proxy scenario locally 
> with -c 500 and envoy and got about 490-500 CPS. Then did the same with -c 
> 1000 and got close to 1k req/s. 
>  
> I’m guessing performance is capped by how fast we manage to establish 
> connections: the proxy introduces a decent amount of latency, and ab waits 
> until the requests in flight are answered before opening new ones.
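> 
> That is just Little's law: with c connections in flight and an end-to-end 
> latency of L seconds each, ab completes at most c / L requests per second. 
> A quick check against the numbers above (the ~1 s latency is inferred from 
> the results, not measured):
> 
>   #include <stdio.h>
> 
>   int
>   main (void)
>   {
>     double latency = 1.0;       /* inferred seconds per proxied exchange */
>     int conc[] = { 500, 1000 }; /* the ab -c values tried above */
> 
>     for (int i = 0; i < 2; i++)
>       printf ("-c %d -> at most %.0f req/s\n", conc[i], conc[i] / latency);
>     return 0;
>   }
> 
> which lines up with the ~500 and ~1k CPS observed.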
>  
> Regards, 
> Florin
> 
> On Dec 23, 2021, at 8:57 PM, Florin Coras <fcoras.li...@gmail.com> wrote:
> Hi Jie, 
>  
> Inline.
> 
> 
> On Dec 23, 2021, at 6:34 PM, deng...@webray.com.cn wrote:
> > Hi Florin,
> > Thanks for your reply! I meant to ask this on vpp-dev, but since I am not 
> > familiar with the list, my reply in this conversation 
> > <https://lists.fd.io/g/vpp-dev/topic/81698902?p=%2C%2C%2C20%2C0%2C0%2C0%3A%3Arelevance%2C%2C%23hoststack%2C20%2C2%2C0%2C81698902%2Cct%3D1&ct=1> 
> > went out as a private email.
>  
> Have you registered with the mailing list? Maybe that’s why you’re hitting 
> issues. 
> 
> > The VPP version I use is 21.10. When the HTTP RPS goes above 200, the 
> > utilization of the main core becomes very high. I think active opens 
> > happen on the main thread, so every version should have this issue.
>  
> I’m doing testing with things like envoy and at 200 CPS the CLI is not 
> affected. Could be an issue related to configuration or maybe hardware. 
> 
> > In our usage scenario (using VPP's host stack to accelerate an nginx 
> > reverse proxy), since TCP active opens are generated by the main thread, 
> > problems appear under a large number of HTTP requests. The CPU 
> > utilization of the main core is close to 100%, which makes the execution 
> > of CLI commands very slow.
> It's 100% because the main thread switches to polling mode. It's not that 
> all the CPU is consumed, just that the main thread is not sleeping. 
> > Because active opens happen on the main thread and use barrier sync, the 
> > scalability of multi-core performance is not very good.
> Connects do indeed rely on barrier syncs, but they are paced so they should 
> not starve the workers. 
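> 
> Roughly the pattern (a paraphrase of what the connects RPC does while 
> holding the barrier, not verbatim VPP code; pending_connects and 
> handle_one_connect are placeholder names):
> 
>   vlib_worker_thread_barrier_sync (vm);    /* workers stop */
>   for (n = 0; n < max_connects && pending_connects (); n++)
>     handle_one_connect ();                 /* bounded batch */
>   vlib_worker_thread_barrier_release (vm); /* workers resume */
> 
> so the barrier is held only for a bounded batch at a time and workers run 
> freely between batches.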
> 
> > Regards,
> > Jie