Hi List,
We have been using policer_add_del and classify_add_del_session in single-
threaded VPP (i.e. one main thread only) and both APIs were giving decent
performance, but after switching to multi-threaded VPP the performance seems
to be drastically lower.
To test this out a small test program was wr
y - could you try an image with
> https://gerrit.fd.io/r/c/vpp/+/31368 in it and see if you still have a
> similarly large difference or does it make things faster for you?
>
> --a
>
>> On 25 Feb 2021, at 16:20, Xuo Guoto via lists.fd.io
>> wrote:
Hi,
Just checking if something is happening in this direction!
X.
‐‐‐ Original Message ‐‐‐
On Tuesday, February 23, 2021 5:50 PM, Klement Sekera via lists.fd.io
wrote:
> Hey,
>
> just a heads up - there is a similar request to yours which came from a
> different direction. I’ll be ma
> https://github.com/FDio/vpp/commit/dc243ee2bcc4926ec23e71a687bb62b5c52c2fbb#diff-f4fd03f6bc31f1be823a391dcdfbb0024f0ee820a1128ef9d56091cb73e77b57
>
> Thanks once more.
>
> ---
>
> From: vpp-dev@lists.fd.io on behalf of Xuo Guoto
Hello all,
While running VPP 19.01 (on Ubuntu 18.04), I have observed that occasionally
nat-det-expire-walk, which is supposed to run every 10 sec, does not run. This
causes expired sessions to pile up.
I am checking it by running the following command:
vppctl sh run | grep -E 'Name|nat-det-
Hello,
While going through the NAT configuration of the latest VPP, I find that max
translations per user is missing and has in effect been replaced by "nat44 enable
sessions 40 endpoint-dependent", which limits max translations per thread.
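If I read that correctly, the per-thread limit means total capacity scales with
the number of NAT worker threads; for example, with 4 workers, "nat44 enable
sessions 40 endpoint-dependent" would allow roughly 4 x 40 = 160 sessions in
total, yet still with no cap per user.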
Is there any equivalent config of max translations per user in l
at44 ed commands will also be prefixed with ed in the near future.
>
> Best regards,
>
> Filip Varga
>
> From: vpp-dev@lists.fd.io On Behalf Of Xuo Guoto via
> lists.fd.io
> Sent: Monday, March 29, 2021 5:48 PM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] nat-ed and
4-expire-walk event wait 0 0 1 8.47e3 0.00
I am checking to understand why this happens and whether something could be done
to restart the threads.
X.
‐‐‐ Original Message ‐‐‐
On Monday, March 29, 2021 10:47 AM, Xuo Guoto via lists.fd.io
wrote:
> Hello all,
>
> While running vpp 19.01
Hello list,
Can I get VPP uptime from the stats segment? I am using vpp-prometheus to fetch
data from the stats segment and found some promising entries like:
_sys_last_update
_sys_heartbeat
but I am not sure of their precise meaning.
This is for a Grafana-based dashboard for VPP. I am also open to suggestions
for
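(In case it helps: to see what these counters actually hold, they can be dumped
with the vpp_get_stats helper that ships with VPP, assuming the default stats
socket and that the leading underscores above are just the Prometheus-mangled
forms of the /sys/... stat segment names:

$ vpp_get_stats dump /sys/last_update /sys/heartbeat
$ vpp_get_stats dump /sys/

I have not found an explicitly documented uptime counter, though.)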
Hello list,
I have a query related to the stat_client, specifically client/stat_client.c,
which is part of the shared library vppapiclient.
In src/vpp-api/client/stat_client.c (master branch):
stat_segment_data_t *res = 0;   ==> LINE 1
...
...
for (i = 0; i < vec_len (stats); i++)
  {
    /* Collect coun
On Tuesday, July 20th, 2021 at 9:09 PM, wrote:
> Yes. Thanks for spotting this.
> Would you mind submitting a patch?
Will be happy to offer a patch. I can think of 3 ways to fix this:
1. Call `stat_segment_data_free()` to free the memory in the vppapiclient
library code itself in the case of e
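For context on where that call sits, below is a minimal, self-contained
consumer-side sketch (not the library-internal fix itself). It assumes the
default stats socket path /run/vpp/stats.sock, the field and enum names as I
read them in stat_client.h, and linking against libvppapiclient plus vppinfra
for the vec_* macros (include and library paths per your install):

/* gcc stat_dump.c -o stat_dump -lvppapiclient -lvppinfra */
#include <stdio.h>
#include <vppinfra/mem.h>
#include <vppinfra/vec.h>
#include <vpp-api/client/stat_client.h>

int
main (void)
{
  clib_mem_init (0, 64 << 20);  /* heap for the vec_* macros */

  if (stat_segment_connect ("/run/vpp/stats.sock") != 0)
    return 1;

  u32 *dir = stat_segment_ls (0);  /* 0 = list all counters */
  stat_segment_data_t *res = stat_segment_dump (dir);

  for (int i = 0; i < vec_len (res); i++)
    if (res[i].type == STAT_DIR_TYPE_SCALAR_INDEX)
      printf ("%s = %.2f\n", res[i].name, res[i].scalar_value);

  stat_segment_data_free (res);  /* every dump must be paired with this */
  stat_segment_disconnect ();
  return 0;
}

Inside stat_client.c, option 1 would amount to calling stat_segment_data_free (res)
on the early-return/error path as well, so the partially collected vector is not
leaked.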
Hello all,
I am reading the news at fd.io about the 1Tb/s VPP benchmark at
https://fd.io/latest/singles/terabit_ipsec/
I saw the news item, a video and slides, but not much about the hardware specs
(CPU/RAM and NICs used) or the testing methods (details of TG and DUT etc.). Was
stock VPP being used or
Trying to understand more about the benchmarks, a couple of questions I have are:
1. Slide 6 of the 1Tb/s presentation [1] says Intel® Xeon® 6338N, 32 cores, 2.2
GHz, 42 MB cache, but Intel ARK [2] shows a cache of 48 MB. I presume this is a
typo in the presentation slide.
[1] https://fd.io/present
Hello vpp-dev,
I am trying to pin down an elusive memory leak, and from previous messages it is
understood that memory-trace is the best tool for the job. From the docs, it
involves two commands:
$ vppctl memory-trace on main-heap
$ vppctl show memory-trace on main-heap
Thread 0 vpp_main
base 0x7ff
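The workflow I am following (please correct me if this is not the intended use)
is to take two snapshots and look for entries whose outstanding bytes keep
growing between them:

$ vppctl memory-trace on main-heap
  ... let the suspected workload run for a while ...
$ vppctl show memory-trace on main-heap      (snapshot 1)
  ... run more of the same traffic ...
$ vppctl show memory-trace on main-heap      (snapshot 2)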
Pim,
Finally got some time to watch this excellent presentation. Thanks for the
presentation and for sharing it here!
If I may, I have a tangential question. How was the graph shown in the "Demo 1 -
L2XC Performance" slide, which shows cycles/packet by node, generated? That looks
really cool and
Thanks for your reply, just trying to understand a bit more.
My understanding is that show runtime shows the total clock cycles a node has
executed by the time the command is issued; to get cycles/packet, we need to
divide that by the number of packets the node processed. How that information is
derive
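To make my reading concrete: per node, I would expect something like

  cycles/packet = (total cycles spent in the node) / (packets the node processed)

taken as a delta between two snapshots if traffic is ongoing. Though, if I
remember right, the Clocks column of show runtime is already such a per-packet
average, so perhaps the graph simply plots that column directly?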