Hi,
I tried adding poll-main to the session stanza in startup.conf: session { poll-main }. But the
performance issue still exists.
Thanks.
Hi Florin,
Thanks for the clear explanation! I do remember that I consulted with you about this procedure, and it was improved before. But I failed to find the patch two days ago.
Regards,
yacan
Hi,
That’s also a source for slowdowns. Try configuring one worker.
Regards,
Florin
> On May 5, 2022, at 6:40 PM, weizhen9...@163.com wrote:
>
> Hi,
> We have only one numa node, just as the following picture shows.
>
> vpp# sh hardware-interfaces verbose
> Name
Hi,
We have only one numa node, just as the following picture shows.
vpp# sh hardware-interfaces verbose
              Name                Idx   Link  Hardware
ens1f0                             1     up   ens1f0
  Link speed: 10 Gbps
  RX Queues:
    queue thread         mode
    0     main (0)       polling
    1     main (0)
Hi Yacan,
Currently rpcs from first worker to main are done through session layer and are
processed by main in batches. Session queue node runs on main in interrupt mode
so first worker will set an interrupt when the list of pending connects goes
non-empty and main will switch to polling in th
Hi,
Make sure that nginx and vpp are on the same numa, in case you run on a
multi-socket host. Check that with lscpu and “show hardware verbose”. Also make
sure that nginx and vpp's cpus don’t overlap, i.e., run nginx with taskset.
Regarding the details of your change, normally we recommend n
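A quick sketch of those checks, in case it helps (the core list given to taskset is only an example, not a recommendation):

# check NUMA topology and which cores/node the NIC sits on
lscpu
vppctl show hardware-interfaces verbose

# run nginx on cores that do not overlap with vpp's main/worker cores
taskset -c 4-7 nginx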
Hi Florin, weizhen9612: I'm not sure whether the rpc for connects will be executed immediately by the main thread in the current implementation, or whether it will wait for the epoll_pwait in linux_epoll_input_inline() to time out.
Regards,
yacan
On 5/5/2022 16:19, wro
Hi,
Now I configure main-core to 2 and corelist-workers to 0, and find that the
performance has improved significantly.
When I execute the following command, I find that vpp has only the main thread.
#show threads
What does this situation show?
Thanks.
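For reference, a sketch of the cpu stanza described above, next to the single-worker variant Florin suggests (core numbers are placeholders, not a recommendation):

cpu {
  main-core 2
  corelist-workers 0
}

# alternative: just ask for one worker and let vpp pick its core
cpu {
  main-core 2
  workers 1
}

If "show threads" lists only the main thread, then no worker thread is actually running, which would also explain the RX queues landing on main (0) in the show hardware-interfaces output earlier in the thread.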
Hi,
As you can see, I checked the patch.
Thanks.
Can you check if patch [1] has been merged?
[1] https://gerrit.fd.io/r/c/vpp/+/33496
Regards,
Hanlin
汪翰林
hanlin_w...@163.com
On 5/5/2022 10:51, wrote:
Hi,
When I set a long connection, the performance of vpp proxy is higher than
before. But we need to set a short connection between vpp
Hi,
When I use long connections, the performance of the vpp proxy is higher than
before. But we need short connections between vpp and the upstream server.
Thanks.
You can try configuring long downstream and upstream connections in the nginx
configuration like this:
http {
    ...
    keepalive_timeout 65;
    keepalive_requests 100;
    upstream backend {
        ...
        keepalive 3;
    }
}
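One note on that sketch: for the upstream keepalive pool to actually be used, the proxied requests also have to be HTTP/1.1 with the Connection header cleared, roughly like this (the location block is just an illustration):

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }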
Regards,
Hanlin
汪翰林
hanlin_w...@163.com
Next step then. What’s segment-size and add-segment-size in vcl.conf? Could you
set them to something large like 40? Also event-queue-size 100,
just to make sure mq and fifo segments are not a limiting factor. In vpp under
session stanza, set event-queue-length 20.
Try also to
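For orientation, the knobs mentioned above live in two different files; the exact values Florin suggests are cut off, so the numbers below are placeholders only:

# vcl.conf
vcl {
  segment-size 4000000000
  add-segment-size 4000000000
  event-queue-size 100000
}

# startup.conf
session {
  event-queue-length 100000
}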
Hi,
I tested the performance of the upstream server.
As you can see, the performance of the upstream server is much higher than that of
the vpp proxy. In addition, I don't see any drops.
Thanks.
As mentioned previously, is the upstream server handling the load? Do you see
drops between vpp and the upstream server?
Regards,
Florin
> On May 4, 2022, at 9:10 AM, weizhen9...@163.com wrote:
>
> Hi,
> According to your suggestion, I configured the src-address.
>
>
> But the performance is
What’s the result prior to multiple addresses? Also, can you give it the whole
/24? No need to configure the ips, just tcp src-address
192.168.6.6-192.168.6.250
Forgot to ask before but is the server that’s being proxied for handling the
load? It will also need to accept a lot of connections.
According to your suggestion, I tested with multiple source IPs. But the
performance is still low.
The IPs are as follows.
vpp# tcp src-address 192.168.6.6-192.168.6.9
Thanks.
Hi,
That shouldn’t be the issue. Half-opens are on main because connection
establishment needs locks before it sends out a syn packet. Handshakes are not
completed on main but on workers. VPP with one worker + main should be able to
handle 100k+ CPS with warmed up pools.
Long term we’ll swit
Hi,
Is this the reason for the low performance? In general, the main thread
handles management functions (debug CLI, API, stats collection) and one or more
worker threads handle packet processing from input to output. Why
does the main core handle the sessions? Does the condition in
Hi,
Those are half-open connects. So yes, they’re expected if nginx opens a new
connection for each request.
Regards,
Florin
> On May 4, 2022, at 6:48 AM, weizhen9...@163.com wrote:
>
> Hi,
> When I use wrk to test the performance of nginx proxy using vpp host
> stack, I execute the com
Hi,
When I use wrk to test the performance of the nginx proxy using the vpp host stack, I
execute the command "show session" via vppctl. The result is as follows.
The main core has most of the sessions. Is this normal? If not, what should I do?
Thanks.
Hi,
Unfortunately, I have not, partly because I didn’t expect too much out of the
test due to the issues you’re hitting. What’s the difference between linux and
vpp with and without tcp_max_tw_bucket?
Regards,
Florin
> On May 3, 2022, at 3:28 AM, weizhen9...@163.com wrote:
>
> Hi,
>
>
Hi,
I wanted to ask if you have tested the performance of the nginx proxy using the vpp
host stack with short connections, i.e., after vpp sends a GET request to the upstream
server, vpp closes the connection. If yes, please tell me the result.
Thank you for your suggestion about adding multiple source IPs. But w
Hi,
That indeed looks like an issue due to vpp not being able to recycle
connections fast enough. There are only 64k connections available between vpp
and the upstream server, so recycling them as fast as possible, i.e., with 0
timeout as the kernel does after tcp_max_tw_buckets threshold is h
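To put a rough number on that (the rates are illustrative, not measured): with one source address and one upstream ip:port there are at most ~64k usable source ports, and roughly

    ports held in TIME-WAIT ≈ connects per second × timewait seconds

so 30,000 CPS with a 1 s timewait ties up ~30,000 ports, while the same rate with a 3 s timewait would need ~90,000, more than the pool has. At that point either the timewait has to shrink (recycling faster, as the kernel effectively does past tcp_max_tw_buckets) or the pool has to grow (more source addresses via tcp src-address).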
Hi,
By a short link we mean that after the client sends a GET request, the client sends a
TCP FIN packet. By a long link we mean that after the client sends a GET
request, the client sends the next HTTP GET request over the same link and
doesn't need to send a SYN packet.
We found that when vpp and the up
Hi,
As per this [1], after the tcp_max_tw_buckets threshold is hit the timewait time is 0,
and this [2] explains what will go wrong. Assuming you're hitting the
threshold, 1s timewait-time in vpp will probably not be enough to match
performance.
Not sure what you mean by “short link”. If you can’t us
Hi,
I set timewait-time to 1s in tcp's configuration. But the
performance of the nginx proxy using the vpp host stack is still lower than the nginx proxy
using the kernel host stack.
Now I want to know what I can do to improve the performance. And does the nginx
proxy using the vpp host stack support
Hi,
Understood. See the comments in my previous reply regarding timewait-time
(tcp_max_tw_bucket practically sets time-wait to 0 once threshold is passed)
and tcp-src address.
Regards,
Florin
> On Apr 30, 2022, at 10:08 AM, weizhen9...@163.com wrote:
>
> Hi,
> I test nginx proxy using RPS.
Hi,
I test the nginx proxy using RPS, and the nginx proxy only proxies towards one IP.
Now I test the performance of the nginx proxy using the vpp host stack; with the
nginx configuration, there is a short connection between the nginx reverse proxy and
the upstream server. The results of the test show that the performance of nginx
Hi,
What is performance in this case, CPS? If yes, does nginx proxy only towards
one IP, hence the need for tcp_max_tw_bucket?
You have the option to reduce time wait time in tcp by setting timewait-time in
tcp’s startup.conf stanza. I would not recommend reducing it too much as it can
lead
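For reference, that stanza would look something like this; the 1 s value is just the one tried elsewhere in the thread, and I'm assuming the parameter is in seconds:

tcp {
  timewait-time 1
}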
Hi,
Now I use nginx, which uses the vpp host stack, as a proxy to test the performance.
But I find that the performance of nginx using the vpp host stack is lower than nginx
using the kernel host stack. The reason is that I configured tcp_max_tw_buckets in the
kernel host stack. So does the vpp stack support the sett
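For comparison, the kernel-side setting mentioned above is a sysctl; the value here is only an example:

# cap TIME-WAIT sockets; once the cap is hit the kernel drops new
# TIME-WAIT sockets immediately instead of holding them for the full timeout
sysctl -w net.ipv4.tcp_max_tw_buckets=65536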