Hi,
We have a use case where we receive packets in a tunnel, and the inner
packets may be fragments.
If we want to reassemble the inner fragments and get one single packet,
does VPP already have a framework that provides this functionality?
If it's already there, we can make use of it.
I saw MAP pl
Hi Murthy,
Yes, it does. The code is in ip4_sv_reass.c or ip4_full_reass.c. The first
one is shallow reassembly (as in, knowing the 5-tuple for all fragments
without having to actually reassemble them); the second one is full
reassembly.
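For later readers, a minimal sketch of switching full reassembly on per
interface through the binary API (assumptions on my part: the Python
vpp_papi bindings are installed, VPP is running with its API files in the
default location, and sw_if_index 1 is just an illustrative interface):

    #!/usr/bin/env python3
    # Hedged sketch: enable IPv4 full reassembly on one interface via
    # the ip_reassembly_enable_disable binary API message.
    from vpp_papi import VPPApiClient

    vpp = VPPApiClient()       # locates the installed .api.json definitions
    vpp.connect("reass-demo")  # arbitrary client name

    # sw_if_index=1 is hypothetical; enable_ip4=1 turns on IPv4
    # reassembly for fragments arriving on that interface.
    rv = vpp.api.ip_reassembly_enable_disable(sw_if_index=1, enable_ip4=1)
    print(rv)

    vpp.disconnect()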
Regards,
Klement
> On 1 Jul 2020, at 14:41, Satya Murthy wrote:
>
> Hi,
>
>
Hi Team,
I am trying to explore TCP user space in my VM using the VPP hoststack.
I installed and brought up VPP with the attached startup1 conf and vpp1 conf.
1. Executed VPP with the startup config:
sudo /auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp -c /etc/vpp/startup.conf
Thanks a lot, Klement, for this quick info.
This will serve our purpose.
--
Thanks & Regards,
Murthy
Greetings,
This is my first post to the forum, so if this is not the right place for
this post, please let me know.
I had a question about VPP performance. We are running two test cases,
limited to a single thread and just one core in order to reduce as many
variables as we can. In the tw
In order for the statistics to be accurate, please be sure to do the following:
Start traffic... “clear run”... wait a while to accumulate data... “show run”
Otherwise, the statistics will probably include a huge amount of dead airtime,
data from previous runs, etc.
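Concretely, the sequence at the CLI looks like this ("clear run" and
"show run" abbreviate "clear runtime" and "show runtime"; the 30-second
wait is just an example):

    vpp# clear runtime
    ... start traffic and let it run for, say, 30 seconds ...
    vpp# show runtime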
HTH... Dave
From: vpp-dev@l
Hi,
You're seeing this issue because vpp_echo is trying to use the old
connection mechanism. The configuration I typically run with is the
following [1].
And launching vpp_echo with:
./vpp_echo socket-name /var/run/vpp.sock server uri tcp://10.0.1.1/1234
./vpp_echo socket-name /var/run/vpp.sock cl
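Since the [1] reference is cut off in this digest, here is a hedged sketch
of the kind of startup.conf fragment such a setup usually relies on (the
option names below are my assumptions, not necessarily the exact config
Florin linked):

    ## assumption: app socket API exposed where vpp_echo expects it
    socksvr { socket-name /var/run/vpp.sock }
    ## assumption: memfd-backed session event queues for the host stack
    session { evt_qs_memfd_seg }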
Thank you very much. It looks like a version issue - what version are you
using? I built using the master branch.
I tried that, but I am getting this error:
-bash-4.2$ sudo /auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp -c /etc/vpp/startup1.conf
vnet_feature_arc_init
Hi,
It seems that clib_mem_create_fd is not working in your VM. What kernel
are you running?
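If it helps to narrow it down: as far as I know, clib_mem_create_fd relies
on the memfd_create(2) syscall, which only exists on Linux 3.17 and later,
so "uname -r" plus a quick probe like this sketch (assuming Python 3.8+,
where os.memfd_create wraps the syscall) can confirm the kernel side:

    # Probe whether the running kernel supports memfd_create(2).
    import os

    try:
        fd = os.memfd_create("vpp-probe")  # hypothetical segment name
        print("memfd_create works, fd =", fd)
        os.close(fd)
    except (AttributeError, OSError) as exc:
        print("memfd_create unavailable:", exc)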
Regards,
Florin
> On Jul 1, 2020, at 8:33 AM, sadhanakesa...@gmail.com wrote:
>
> Thank you very much. It looks like a version issue - what version are you using?
> I built using the master branch.
> i
Also, I tried use-svm-api in the existing kernel as a workaround to
establish the TCP connection. Any fix suggestions?
-bash-4.2$ sudo /auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp_echo server uri tcp://10.0.0.1/24 use-svm-api
vl_map_shmem:582: region init fail
connect_t
As replied on the private thread, vpp_echo needs to know about the API
prefix you've configured (vpp1). Try adding “chroot prefix vpp1” to the
list of options.
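For reference, the prefix comes from the api-segment stanza of the vpp1
instance's startup config, so the two sides would be expected to look
roughly like this (a sketch; the URI is taken from your earlier command):

    ## in the vpp1 instance's startup config
    api-segment {
      prefix vpp1
    }

and then:

    sudo ./vpp_echo chroot prefix vpp1 server uri tcp://10.0.0.1/24 use-svm-api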
Regards,
Florin
> On Jul 1, 2020, at 10:46 AM, sadhanakesa...@gmail.com wrote:
>
> Also, I tried use-svm-api in the existing kernel
The LF is reviewing our INFRA inventory. We'd like to know whether the
community actively uses OpenGrok.
Thank you,
Vanessa
Hi Team,
I realized that memif interfaces and their corresponding sockets are
tightly coupled.
That means, for example: memif1 --> /run/vpp/memif1.sock --> application.
We cannot have multiple sockets for memif1; is my understanding correct?
If memif1 --> /run/vpp/memif1.sock is already mapped, then we cannot create
m
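Though the message is cut off here, the mapping in question is usually
expressed through the memif plugin's CLI; a sketch (socket id, path, and
interface id below are illustrative):

    vpp# create memif socket id 1 filename /run/vpp/memif1.sock
    vpp# create interface memif id 0 socket-id 1 master

Each memif interface references exactly one socket via socket-id, although,
as far as I know, one socket can serve several interfaces distinguished by
their id.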
Hi,
I am using VPP stable/2005 code. I want to enable UDP checksum offload for TX.
I changed the VPP startup.cfg file -
## Disables UDP / TCP TX checksum offload. Typically needed for use
## faster vector PMDs (together with no-multi-seg)
# no-tx-checksum-offload
## Enable UDP / TCP TX checksum
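The quoted config is cut off above; for what it's worth, a hedged sketch of
the dpdk stanza after the change (assuming the stock startup.conf template,
where enable-tcp-udp-checksum is the option that turns the offload on):

    dpdk {
      ## leave no-tx-checksum-offload commented out, and uncomment:
      enable-tcp-udp-checksum
    }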
It seems this commit causes the issue.
https://gerrit.fd.io/r/c/vpp/+/27675
Hi,
I am unable to test if my server is up, could I be missing any configs in
interfaces/ vpp hoststack setup :
I configured vpp1 server with the following commands :
1./auto/home.nas04/skesavan/ *vpp* /build-root/install- *vpp* _debug-native/
*vpp* /bin/ *vpp* -c /etc/ *vpp* /startup.conf
2. -b