Thanks for bringing up the discussion
> -----Original Message-----
> From: vpp-dev@lists.fd.io On Behalf Of Thomas
> Monjalon via Lists.Fd.Io
> Sent: Monday, December 2, 2019 4:35 PM
> To: vpp-dev@lists.fd.io
> Cc: vpp-dev@lists.fd.io
> Subject: [vpp-dev] efficient use of DPDK
>
> Hi all,
>
> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
Hi Thomas!
Inline...
> On 2 Dec 2019, at 23:35, Thomas Monjalon wrote:
>
> Hi all,
>
> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> Are there some benchmarks about the cost of converting, from one format
> to the other one, during Rx/Tx operations?
We are benchmarking bo
Hi all,
VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
Are there some benchmarks about the cost of converting, from one format
to the other one, during Rx/Tx operations?
I'm sure there would be some benefits of switching VPP to natively use
the DPDK mbuf allocated in mempools.
Wh
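For anyone skimming the thread who has not looked at the two buffer layouts,
below is a minimal sketch of the kind of per-packet metadata translation the
question is about. The structs are deliberately simplified stand-ins, not the
real vlib_buffer_t / rte_mbuf definitions, and the field mapping is only
illustrative; the actual cost in VPP's dpdk plugin depends on how the two
structures are laid out in buffer memory.

/* Illustrative sketch only: simplified stand-ins for the real structures.
 * The real vlib_buffer_t and rte_mbuf carry far more state; the point is
 * that an Rx-side conversion is a handful of loads and stores per packet. */
#include <stdint.h>

typedef struct {                /* toy subset of rte_mbuf */
  uint16_t data_off;
  uint16_t data_len;
  uint32_t pkt_len;
  uint16_t nb_segs;
} toy_mbuf_t;

typedef struct {                /* toy subset of vlib_buffer_t */
  int16_t current_data;
  uint16_t current_length;
  uint32_t flags;
  uint32_t total_length_not_including_first_buffer;
} toy_vlib_buffer_t;

/* Copy Rx metadata for one packet; a driver input node would do something
 * like this for every packet in the vector. */
static inline void
toy_mbuf_to_vlib (const toy_mbuf_t *mb, toy_vlib_buffer_t *b,
                  uint16_t headroom)
{
  b->current_data = (int16_t) (mb->data_off - headroom);
  b->current_length = mb->data_len;
  b->total_length_not_including_first_buffer = mb->pkt_len - mb->data_len;
  b->flags = (mb->nb_segs > 1) ? 1u /* "next buffer present" */ : 0;
}

Even a handful of loads and stores per packet is measurable at tens of
millions of packets per second, which is presumably why benchmarks of the
conversion cost are being asked for.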
Hi Yang.L,
I just tried out nginx + debug vcl and ldp + debug vpp. Everything seems to be
working fine.
Once you start nginx, do you get any errors in /var/log/syslog? What does “show
sessions verbose” return? There might be some issues with your config.
Thanks,
Florin
> On Dec 2, 2019, at
Hi Paul,
Are you thinking about using TCP to generate packets? If so, you probably want
to take a look at the iperf VCL tests (test/test_vcl.py).
Regards,
Florin
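As an aside for anyone who just wants to push TCP traffic through the
hoststack: the appeal of LDP is that an ordinary POSIX sockets program can be
run over VPP without code changes by preloading the LDP library. A minimal
sender could look like the sketch below; the LD_PRELOAD library name in the
comment is an assumption about a typical build, not a reference.

/* Minimal POSIX TCP sender. Under LDP the same binary can be pointed at the
 * VPP hoststack via LD_PRELOAD, e.g. (library name is an assumption):
 * LD_PRELOAD=<path to libvcl_ldpreload.so> ./tcp_send 10.0.0.1 5000 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int
main (int argc, char **argv)
{
  if (argc < 3)
    {
      fprintf (stderr, "usage: %s <server-ip> <port>\n", argv[0]);
      return 1;
    }

  int fd = socket (AF_INET, SOCK_STREAM, 0);
  if (fd < 0)
    {
      perror ("socket");
      return 1;
    }

  struct sockaddr_in sa = { 0 };
  sa.sin_family = AF_INET;
  sa.sin_port = htons ((uint16_t) atoi (argv[2]));
  inet_pton (AF_INET, argv[1], &sa.sin_addr);

  if (connect (fd, (struct sockaddr *) &sa, sizeof (sa)) < 0)
    {
      perror ("connect");
      return 1;
    }

  char buf[1400];
  memset (buf, 'x', sizeof (buf));
  for (int i = 0; i < 10000; i++)	/* ~14 MB of test data */
    if (send (fd, buf, sizeof (buf), 0) < 0)
      {
	perror ("send");
	break;
      }

  close (fd);
  return 0;
}

The iperf-based tests Florin mentions exercise the same path from the make
test framework, so they are probably the better starting point if you want
something already wired into CI.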
> On Dec 2, 2019, at 9:57 AM, Paul Vinciguerra
> wrote:
>
> There was a brief discussion on the community call about using hoststack as
> an alternative to running tests as a privileged user for a tap interface.
There was a brief discussion on the community call about using hoststack as
an alternative to running tests as a privileged user for a tap interface.
Is there any documentation on this, or is there a representative test case
I can reference?
Hi Guys,
Don’t know if anyone has already experienced this, but it seems that something
goes wrong with deploying VPP and networking-vpp in a clean devstack setup:
2019-12-02 17:25:34.194 | +functions-common:service_check:1622   for service in ${ENABLED_SERVICES//,/ }
2019-12-02 17:25:34.198
Paul,
The VPP continuous integration system utilizes code primarily from the
following three repos:
https://git.fd.io/ci-management/ (Note: this uses git submodule for
global-jjb)
https://git.fd.io/csit/
https://git.fd.io/vpp/
Changes to any of these can cause "the wheels to fall off the b
How can we make this process more transparent? I saw this issue on
Friday. I searched all the repos for recently merged changes, but found
nothing. Is there somewhere else to look that I'm not aware of?
On Mon, Dec 2, 2019 at 8:25 AM Jerome Tollet via Lists.Fd.Io wrote:
> Hi Dave,
>
> I ju
You are right, clib_mem_init calls mmap to allocate virtual memory, but
I got random SIGSEGV in the client program when the running machine has
low memory (about 2 GB). So I changed client.c and memory_client.c to
allocate less virtual memory, and to do so only when I call connect_to_vlib.
The changes
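For context, the sketch below is not VPP's clib_mem_init, just an illustration
of the general mmap technique: reserving a large virtual region is cheap, and
physical memory is only committed when pages are actually touched, which is
also where a low-memory machine starts to hurt.

/* Sketch of the general technique, NOT VPP's clib_mem_init. MAP_NORESERVE
 * asks the kernel not to reserve swap for the mapping, so the reservation
 * itself costs essentially no physical memory; pages are only backed by RAM
 * once they are touched. Linux-only. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define REGION_SIZE (1ULL << 30)	/* 1 GB of virtual address space */

int
main (void)
{
  void *base = mmap (NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
		     MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
  if (base == MAP_FAILED)
    {
      perror ("mmap");
      return 1;
    }
  printf ("reserved 1 GB of virtual memory at %p\n", base);

  /* Physical memory is committed here, one page at a time. On a box with
   * very little free memory this is where failures show up, not at mmap. */
  memset (base, 0, 64 << 20);	/* touch the first 64 MB */

  munmap (base, REGION_SIZE);
  return 0;
}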
Hi Dave,
I just sent a private email to Dave, Andrew and Ed 😉 (see enclosed).
Thanks for your help.
Jerome
From: on behalf of "Dave Barach via Lists.Fd.Io"
Reply-To: "Dave Barach (dbarach)"
Date: Monday, December 2, 2019 at 14:24
To: "Ed Kern (ejk)", "Andrew Yourtchenko (ayourtch)"
Cc: "vpp-dev
Please have a look... Thanks... Dave
+++ export PYTHONPATH=/w/workspace/vpp-csit-verify-api-crc-master/csit
+++ PYTHONPATH=/w/workspace/vpp-csit-verify-api-crc-master/csit
+++ make json-api-files
/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py
Traceback (most recen
Hi Filip
Simple NAT.
Regards Yurii
From: Filip Varga -X (fivarga - PANTHEON TECH SRO at Cisco)
Sent: December 2, 2019, 13:21:26
To: Юрий Иванов; vpp-dev@lists.fd.io
Cc: Ole Troan (otroan)
Subject: RE: [vpp-dev] NAT stops processing for big amount of use
Hi Florin,
When the nginx configuration item worker_processes = 1, everything works
normally; when worker_processes > 1, the situation described above occurs.
Can you explain this behavior?
thanks,
Yang.L
Emma,
> The function vac_client_constructor allocates 1 GB of memory in every binary
> that is linked to the vlibmemoryclient library.
> I have limited memory on my test machine. Is there any way to resolve this
> issue?
Firstly, this is virtual memory.
If I recall correctly the API client uses t
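A quick way to convince yourself of that is to compare VmSize (virtual)
against VmRSS (resident) in /proc/self/status before and after the client
connects; something like the sketch below, which is generic Linux code rather
than anything tied to the VPP API client.

/* Print VmSize (virtual) and VmRSS (resident) for the current process.
 * Generic Linux helper, not part of the VPP API client: run it before and
 * after connecting to see virtual size jump while resident size barely
 * moves. */
#include <stdio.h>
#include <string.h>

static void
print_mem_usage (const char *tag)
{
  FILE *f = fopen ("/proc/self/status", "r");
  char line[256];
  if (!f)
    return;
  while (fgets (line, sizeof (line), f))
    if (!strncmp (line, "VmSize:", 7) || !strncmp (line, "VmRSS:", 6))
      printf ("[%s] %s", tag, line);
  fclose (f);
}

int
main (void)
{
  print_mem_usage ("before");
  /* ... connect the API client / create the large mapping here ... */
  print_mem_usage ("after");
  return 0;
}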