Hi Team,
We are trying to build VPP on Ubuntu 20.04 with kernel version 5.4.x.
We cloned the Gerrit code: https://gerrit.fd.io/r/vpp
When we try to build with build-root/vagrant/build.sh, we see the issues below:
ubuntu@vpp:~/vpp/build-root$ cat /home/ubuntu/vpp/build-root/build-vpp-nativ
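For reference, a minimal sketch of the usual in-tree build flow (assuming a standard checkout; these are the stock targets, not taken from the truncated log above):

  git clone https://gerrit.fd.io/r/vpp
  cd vpp
  make install-dep       # install distro build dependencies
  make build             # debug build; use 'make build-release' for an optimized build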
Hello Expert,
Our user application (User_App) communicates with VPP via memif. The user
application is configured in slave mode.
User_App sends 100 msg/sec to VPP, and VPP is coded to respond to each
packet. We see some weird drops at memif at the ring boundary. When the rx packet
count on the memif interface r
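Not part of the original report: a hedged sketch of the debug CLI commands typically used to look at memif ring state and drop counters (the interface name memif0/0 is only an assumption):

  vppctl show memif                 # per-memif socket, role and ring configuration
  vppctl show interface memif0/0    # rx/tx/drop counters for the interface
  vppctl show errors                # per-node error counters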
Can you check if patch [1] has been merged?
[1] https://gerrit.fd.io/r/c/vpp/+/33496
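Not from the original mail: one hedged way to check a change's merge status from the command line is the public Gerrit REST API (the first line of the response is Gerrit's ")]}'" prefix and is stripped here):

  curl -s https://gerrit.fd.io/r/changes/33496 | tail -n +2 | grep -o '"status":"[^"]*"'
  # prints "status":"MERGED" once the change has been merged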
Regards,
Hanlin
Hi,
When I set a long connection, the performance of the vpp proxy is higher than
before. But we need to use a short connection between vpp and the upstream server.
Thanks.
You can try to configure long-lived downstream and upstream connections in the
nginx configuration like this:
http {
    ...
    keepalive_timeout 65;
    keepalive_requests 100;
    upstream backend {
        ...
        keepalive 3;
    }
}
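Not in the original mail, but for the upstream keepalive to actually take effect with proxy_pass, nginx also needs HTTP/1.1 and a cleared Connection header on the proxied location, roughly:

  location / {
      proxy_pass http://backend;
      proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
      proxy_set_header Connection "";  # avoid forwarding "Connection: close"
  }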
Regards,
Hanlin
Next step then. What are segment-size and add-segment-size set to in vcl.conf? Could you
set them to something large, like 40? Also event-queue-size 100,
just to make sure the mq and fifo segments are not a limiting factor. In vpp, under the
session stanza, set event-queue-length 20.
Try also to
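For reference, a layout-only sketch of where those knobs live (the option names are the ones mentioned above; the sizes are placeholders, since the values in the archived mail look truncated):

  # vcl.conf
  vcl {
    segment-size <bytes>
    add-segment-size <bytes>
    event-queue-size <entries>
  }

  # vpp startup.conf
  session {
    event-queue-length <entries>
  }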
Hi,
I tested the performance of the upstream server.
As you can see, the performance of the upstream server is much higher than that of the
vpp proxy. In addition, I don't see any drops.
Thanks.
Hi all,
Just a small reminder that we are 3 weeks away from the RC1 milestone for VPP 22.06.
--a // your friendly 22.06 release manager
As mentioned previously, is the upstream server handling the load? Do you see
drops between vpp and the upstream server?
Regards,
Florin
> On May 4, 2022, at 9:10 AM, weizhen9...@163.com wrote:
>
> Hi,
> According to your suggestion, I configured the src-address.
>
>
> But the performance is
What was the result prior to using multiple addresses? Also, can you give it the whole
/24? No need to configure the IPs, just tcp src-address
192.168.6.6-192.168.6.250
Forgot to ask before but is the server that’s being proxied for handling the
load? It will also need to accept a lot of connections.
According to your suggestion, I tested with multiple source IPs, but the
performance is still low.
The configured range is as follows.
vpp# tcp src-address 192.168.6.6-192.168.6.9
Thanks.
Hi,
That shouldn’t be the issue. Half-opens are on main because connection
establishment needs locks before it sends out a SYN packet. Handshakes are not
completed on main but on workers. VPP with one worker + main should be able to
handle 100k+ CPS with warmed-up pools.
Long term we’ll swit
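Not from the original mail: a hedged way to sanity-check how sessions are spread across main and the workers is the debug CLI, e.g.:

  vppctl show threads            # list main + worker threads
  vppctl show session verbose    # sessions grouped per thread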
Hi,
Is this the reason for the low performance? In general, the main thread
handles management functions (debug CLI, API, stats collection) and one or more
worker threads handle packet processing from input to output. Why
does the main core handle the sessions? Does the condition in
Hi,
Those are half-open connects. So yes, they’re expected if nginx opens a new
connection for each request.
Regards,
Florin
> On May 4, 2022, at 6:48 AM, weizhen9...@163.com wrote:
>
> Hi,
> When I use wrk to test the performance of nginx proxy using vpp host
> stack, I execute the com
Hi,
When I use wrk to test the performance of the nginx proxy using the VPP host stack, I
execute the command "show session" via vppctl. The result is as follows.
The main core has most of the sessions. Is this normal? If not, what should I do?
Thanks.