Hi Sam,
Please upgrade to at least 19.04. A lot of work went into IPSec for this
release.
/neale
From: on behalf of "Caffeine Coder via Lists.Fd.Io"
Reply-To: "caffeine.co...@yahoo.com"
Date: Wednesday, August 14, 2019, at 00:16
To: Vpp-dev
Cc: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] ipsec pac
Your VPP configs look fine. I can only guess at general network issues.
My first guess would be that the DHCP process has not completed yet.
For my second guess, this:
vpp# ping 8.8.8.8 source wan1
means take the source address from wan1, but this:
ip route add 0.0.0.0/0 via 172.78.10.1
Hi all,
I am using VPP version 19.04.
I recently migrated to this VPP version from an older version and I am
facing some issues during VPP startup. I have a plugin which I initialize
using VLIB_INIT_FUNCTION. During the init, I use
'vlib_thread_main->n_threads' to fetch the number of workers and
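As a sketch of what such an init hook might look like (the plugin name and the warning message are made up; `vlib_get_thread_main()` and `VLIB_INIT_FUNCTION()` are the standard VPP APIs), note that depending on init ordering, worker threads may not have been created yet when the init function runs:

```c
#include <vlib/vlib.h>
#include <vlib/threads.h>

static clib_error_t *
my_plugin_init (vlib_main_t * vm)
{
  vlib_thread_main_t *tm = vlib_get_thread_main ();

  /* n_threads may still be 0 here if this init function runs
   * before the worker threads are brought up. */
  clib_warning ("init sees n_threads = %u", tm->n_threads);
  return 0;
}

VLIB_INIT_FUNCTION (my_plugin_init);
```

If the worker count is only needed after the threads exist, reading it from a main-loop or worker context instead of the init function sidesteps the ordering issue.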
Hi,
We have 2 worker threads, each pinned to one core, so 2 cores total.
All are in one NUMA node, on socket 0. We also have the config below.
1.
socket-mem 2048
num-rx-queues 1
num-tx-queues 3
num-mbufs 61000
2.
With IPsec, GCM-128 algorithm
Reassembly buffer 8192.
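Assuming these are startup.conf settings, the corresponding fragment might look like the following; the section placement is my assumption, and `num-rx-queues`/`num-tx-queues` usually sit under a `dev` stanza in the `dpdk` section:

```
cpu {
  workers 2
}

dpdk {
  socket-mem 2048
  num-mbufs 61000
  dev default {
    num-rx-queues 1
    num-tx-queues 3
  }
}
```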
Traffic profile whic
I am assuming wan1 is also connected to the same network as wan0; is that correct?
Curious, what is your use case for wanting two interfaces connected to the
same network?
Also, check to see if you got an address from DHCP and try to ping the next hop
first.
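From the VPP CLI, those two checks could look like this (the gateway and interface names are the ones from this thread; output omitted):

```
vpp# show dhcp client verbose
vpp# show ip fib 0.0.0.0/0
vpp# ping 172.78.10.1 source wan1
```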
Hi,
I'm working on using libmemif in OVS.
https://patchwork.ozlabs.org/patch/1140858/
While using the API, one question I have is, when I call
err = memif_rx_burst(dev->handle, qid, dev->rx_bufs, NETDEV_MAX_BURST, &recv);
There are 'recv' packets at dev->rx_bufs. Do I have to copy them out?
Hi all,
I have created the v19.08-rc2 tag on stable/1908 and verified that
19.08-rc2 build artifacts are on packagecloud.
VPP 19.08 Release Milestone RC2 is complete!
As a reminder, the VPP 19.08 Release is next Wednesday August 21, 2019.
https://wiki.fd.io/view/Projects/vpp/Release_Plans/Relea
Do you have a preference as to dates?
Ed
On Tue, Aug 13, 2019 at 7:44 PM Ni, Hongjun wrote:
> Hi Ed,
>
> Thank you for scheduling the proposal review.
>
> I think a TSC meeting after Aug 21 is better, to follow the FD.io TSC
> policy.
>
> Thanks,
> Hongjun
>
> *From:* Ed Warni
Sorry for not being clear.
Each of the interfaces is connected to a different network (ISP). The
scenario is of dual WAN.
One ISP provides a static address and the other provides DHCP.
wan1 is receiving DHCP.
If I ONLY have:
ip route add 0.0.0.0/0 via 172.78.10.158 wan0
then I am able to ping
Hi all,
I am trying to use a DNS server, and on "ping google.com" VPP is crashing:
Aug 13 21:31:10 test1-vpp vnet[853]: unknown input `add_del 8.8.8.8
Aug 13 21:31:28 test1-vpp vnet[853]: dns cache: add / del / clear required..
Aug 13 21:31:36 test1-vpp vnet[853]:
vl_api_dns_resolve_name_reply_t_hand
VPP is not crashing anymore. I didn't change anything.
VPP is caching DNS queries:
[P] DNS query: id 18
no-recur recur-des no-trunc non-auth
2 queries, 0 answers, 0 name-servers, 0 add'l recs
Queries:
Name: www.apple.com: type A
Name: www.apple.com: type
But LAN device is not a
Did a packet trace and I noticed two things:
dns4-request: DNS pkts pending upstream name resolution
nat44-out2in: no translation
Packet 8
00:28:11:659028: dpdk-input
lan1 rx queue 0
buffer 0x8aeef: current data 0, length 89, buffer-pool 0, ref-count
1, totlen-nifb 0, trace 0x5
The setup is a machine running VPP 19.01, configured using NAT44, with two
physical interfaces. One interface is inside; the other is outside.
The unit was translating and passing traffic just fine, and then all of a
sudden stopped passing anything through the outside interface. Any packet which
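When it happens again, a few standard VPP NAT44 CLI checks may help narrow down where packets stop (session or port exhaustion on the outside address is one common cause of a sudden "no translation"):

```
vpp# show nat44 addresses
vpp# show nat44 interfaces
vpp# show nat44 sessions detail
vpp# show errors
```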
Hi William,
You do not need to copy the packets out of memif. Once you finish
processing all these packets, call the memif_refill_queue() function to free
the buffers. Say you receive 32 packets from memif_rx_burst; after
processing, call memif_refill_queue with count=32 to free them.
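A minimal sketch of that pattern, assuming an already-established connection handle and queue 0 (the `process_packet` helper and burst size are made up; `memif_rx_burst` and `memif_refill_queue` are the real libmemif calls):

```c
#include <libmemif.h>

#define MAX_BURST 32

/* Hypothetical packet handler supplied by the application. */
extern void process_packet (void *data, uint32_t len);

/* After memif_rx_burst, these descriptors point into the shared-memory
 * ring; no packet data is copied out. */
static memif_buffer_t rx_bufs[MAX_BURST];

void
rx_poll (memif_conn_handle_t conn)
{
  uint16_t recv = 0;
  int err = memif_rx_burst (conn, /* qid */ 0, rx_bufs, MAX_BURST, &recv);
  if (err != MEMIF_ERR_SUCCESS && err != MEMIF_ERR_NOBUF)
    return;

  for (uint16_t i = 0; i < recv; i++)
    process_packet (rx_bufs[i].data, rx_bufs[i].len);

  /* Hand the consumed slots back to the ring so the peer can reuse them. */
  memif_refill_queue (conn, /* qid */ 0, recv, /* headroom */ 0);
}
```

The key point, matching the advice above: refill with exactly the count you received, and only after you are done touching the buffer contents, since the peer may overwrite them once the slots are returned.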