Hello all,
I am getting a core dump when adding MACIP ACLs via the API (using
Honeycomb). I can reproduce this core dump reliably if I add about
300 MACIP ACLs. I am on v18.10-27~ga0005702c.
I did some debugging, and my observations are:
In the function:
void
vl_msg_api_handler_
Sounds like a memory corruption.
I am out of office for another week, so in the meantime, if you could collect a
few postmortem dumps with reproductions, I will look at it when I return.
(I aim to try to reproduce by adding the necessary api calls to
https://github.com/vpp-dev/apidump2py - because
Hi,
I’m checking the implementation of the GTPU performance enhancement with
bypass ip-lookup after gtpu_encap.
I started VPP with only the hardware interface configuration and applied the
following configuration:
ip route add 0.0.0.0/0 via 18.1.0.1
create gtpu tunnel src 1.1.1.1 dst 1.1.1.4 teid-in
Hi Lolita,
What GTPU code are you running? Your test case does not work for me on master:
DBGvpp# create gtpu tunnel src 1.1.1.1 dst 1.1.1.4 teid-in 3 teid-out 4
create gtpu tunnel: parse error: 'teid-in 3 teid-out 4'
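For comparison, the form that master does parse takes a single teid rather than
teid-in/teid-out; a sketch using the addresses from your example (check the gtpu
CLI help on your tree for the exact options):

create gtpu tunnel src 1.1.1.1 dst 1.1.1.4 teid 3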
to answer your questions:
1) You should be able to delete the default
On Wed, Mar 6, 2019 at 2:52 PM Andrew 👽 Yourtchenko wrote:
>
> Sounds like a memory corruption.
> I am out of office for another week, so in the meantime, if you could collect a
> few postmortem dumps with reproductions, I will look at it when I return.
Sure, I will get some dumps with problem repr
Hi all,
I am trying to get the stats from the VPP SHM. If I understand correctly,
I need to open the VPP stats Unix domain socket and read from the
corresponding memory-mapped segment.
My problem is that the stats socket file is missing. I added the
following line to startup.conf, but to no avail.
stats
Folks,
I am creating the 19.01.1 Maintenance Release today. Last call for
patches for 19.01.1 expires at 1800 UTC.
VPP Committers, Please do not merge any patches until the maintenance
release is complete.
Thanks,
-daw-
Hi Raj,
statseg { … }
There is a C API in vpp-api/client/stat_client.h you can use.
Or a higher level Go, Python or C++ binding too.
Cheers,
Ole
> On 6 Mar 2019, at 14:27, Raj wrote:
>
> Hi all,
>
> I am trying to get the stats from VPP SHM. If I understand correctly,
> I need to open VPP s
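For reference, the stats socket is exported via a statseg section in startup.conf
rather than a bare stats line; a minimal sketch, assuming the usual default socket
path:

statseg {
  socket-name /run/vpp/stats.sock
}

And a rough, unverified sketch of reading counter names through the C API in
vpp-api/client/stat_client.h (signatures can differ between releases, so check the
header on your branch):

#include <stdio.h>
#include <vppinfra/vec.h>
#include <vpp-api/client/stat_client.h>

int
main (void)
{
  /* connect to the stats segment over its unix socket */
  if (stat_segment_connect ("/run/vpp/stats.sock") != 0)
    {
      fprintf (stderr, "could not connect to the stats socket\n");
      return 1;
    }

  /* 0 = no pattern filter: list and dump every counter in the directory */
  u32 *dir = stat_segment_ls (0);
  stat_segment_data_t *res = stat_segment_dump (dir);

  for (int i = 0; i < vec_len (res); i++)
    printf ("%s\n", res[i].name);

  stat_segment_data_free (res);
  vec_free (dir);
  stat_segment_disconnect ();
  return 0;
}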
Hi,
I've added support to the NAT plugin for Paired-Address-Pooling (PAP) and
wanted to see if there is interest for me to submit it as a patch for review?
The changes modify the behaviour of user creation, address allocation, and
address management. Fundamentally it pairs a NAT user with an ex
> On 6 Mar 2019, at 09:20, Raj wrote:
>
>> On Wed, Mar 6, 2019 at 2:52 PM Andrew 👽 Yourtchenko
>> wrote:
>>
>> Sounds like a memory corruption.
>> I am out of office for another week, so in the meantime, if you could collect a
>> few postmortem dumps with reproductions, I will look at it when
Hi John,
> I've added support to the NAT plugin for Paired-Address-Pooling (PAP) and
> wanted to see if there is interest for me to submit it as a patch for review?
>
> The changes modify the behaviour of user creation, address allocation, and
> address management. Fundamentally it pairs a NAT
Hi Neale,
Sorry, I had run the test on my own branch yesterday.
I have retried the test on master (“* master f940f8a
[origin/master] session: use transport custom tx for app transports”)
and tried it with both vxlan and gtp; the issue is reproduced. Please check the backtrace.
Dear vpp devs:
I hit a bug while adding an 802.1q sub-interface to a bridge domain via the
Python API. The BVI interface must be the first member of the bridge domain,
but after I added a sub-interface to the BD, the sub-interface took the first
position in the BD and L2 flooding no longer worked correctly.
Here is the information:
Hi Joe,
Thank you very much for catching this bug. I took a look at your patch, which
looks to be the right fix for this problem. Without this fix, I suppose the
workaround is to always add the BVI interface to a BD last, after all other
interfaces have been added to the BD.
Can you push your patch to
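For anyone hitting this before a fix is merged, the workaround expressed as CLI
would look roughly like this (the interface names and bridge-domain id 13 are
placeholders): add the other members first, e.g.

set interface l2 bridge GigabitEthernet0/8/0.100 13

and only then add the BVI:

set interface l2 bridge loop0 13 bvi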
Hi,
I see that there is a node called 'interface-output',
and there is also a feature arc called 'interface-output'.
My understanding is that if I send a packet to the
interface-output node, it will then send the packet on to the
device-specific node to accomplish the actual output.
If I make a n
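As I understand the two, a short sketch may help keep them apart: the
'interface-output' feature arc is a per-interface list of feature nodes that
packets traverse on the way out, enabled with vnet_feature_enable_disable(),
while the 'interface-output' node is the graph node that reads
vnet_buffer(b)->sw_if_index[VLIB_TX] and hands the packet to the device-specific
output node. A minimal sketch of enabling a (hypothetical) feature node on the arc:

#include <vnet/feature/feature.h>

/* A sketch: attach a feature node to the 'interface-output' arc for one
 * interface, so packets destined to that interface pass through it before
 * reaching the device output/tx nodes. "my-output-feature" is hypothetical. */
static int
enable_my_output_feature (u32 sw_if_index, int enable)
{
  return vnet_feature_enable_disable ("interface-output",
				      "my-output-feature",
				      sw_if_index, enable,
				      0 /* feature config */ ,
				      0 /* config length */ );
}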
Hello everyone,
I am getting an invalid address while doing "pool_alloc_aligned" of 1M
sessions for 10 workers on a release build.
But when I do the same with a debug build, the process crashes at the
ASSERT below:
always_inline mheap_elt_t *
mheap_elt_at_uoffset (void *v, uword uo)
{
ASSERT (
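For context, a rough sketch of the allocation pattern being described (the session
struct, worker count, and alignment below are assumptions, not the poster's actual
code):

#include <vppinfra/pool.h>
#include <vppinfra/cache.h>

typedef struct
{
  u64 key;
  u32 state;
} session_entry_t;		/* hypothetical per-session state */

#define N_WORKERS 10

session_entry_t *session_pools[N_WORKERS];	/* one pool per worker thread */

static void
preallocate_session_pools (void)
{
  int i;
  for (i = 0; i < N_WORKERS; i++)
    /* reserve 1M pool elements up front, cache-line aligned */
    pool_alloc_aligned (session_pools[i], 1 << 20, CLIB_CACHE_LINE_BYTES);
}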
Hi,
I want to send some events from VPP to a client over shared memory. The
triggers for these events are detected on my worker threads.
Can I send them directly from the worker threads, or do I need to send them
to the main thread first, from where they will be forwarded over shared
memory?
R
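One pattern commonly seen in the VPP source (sketched here with hypothetical names
and from-memory include paths, so treat it as an illustration rather than the
definitive answer) is to hand the event off to the main thread with
vl_api_rpc_call_main_thread(), and let the main thread build and send the
shared-memory message:

#include <vlib/vlib.h>
#include <vlibmemory/api.h>

typedef struct
{
  u32 session_index;		/* example payload detected on a worker */
} my_event_args_t;

/* runs on the main thread, where it is safe to build and send API messages */
static void
my_event_rpc_cb (my_event_args_t * a)
{
  /* build the real event message here and send it to the registered client,
   * e.g. with vl_msg_api_alloc () and vl_msg_api_send_shmem () */
  clib_warning ("event for session %u reached the main thread",
		a->session_index);
}

/* called on a worker thread when the trigger is detected */
static void
notify_event_from_worker (u32 session_index)
{
  my_event_args_t a = { .session_index = session_index };
  /* the argument buffer is copied, so a stack variable is fine here */
  vl_api_rpc_call_main_thread (my_event_rpc_cb, (u8 *) & a, sizeof (a));
}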