Hey Stanislav,

Thanks for the help! Adding explicit endpoints does not seem to help.
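As an aside, since the configuration below passes explicit keys instead of 'generate-key': a matching keypair can be generated outside of VPP with standard wireguard-tools, something along the lines of

wg genkey | tee wg-private.key | wg pubkey > wg-public.key

where the resulting base64 strings are what go into 'wireguard create ... private-key' on one side and the far end's 'wireguard peer add ... public-key' (the file names here are just illustrative).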
For completeness, here's the current configuration:

vpp1# comment { public-key V7PzYlAh+CkOrfnoJfuRGQS/D/4VgQcDVX4LeXE5V1A= }
vpp1# wireguard create listen-port 50869 src 192.168.10.0 private-key ML1APdl/AAAAAAAAAAAAAEi+QD3ZfwAA2Adootl/AAA=
vpp1# set int state wg0 up
vpp1# set int mtu packet 1420 wg0
vpp1# set interface ip address wg0 10.0.123.1/24
vpp1# set interface ip address wg0 2001:db8::1/64
vpp1# wireguard peer add wg0 public-key qZz6XPwtrrEJw2rnzFHXYCm5KGm7+Cc9clpoP+B6kQc= allowed-ip 10.0.123.0/24 port 50869 endpoint 192.168.10.3
vpp1# show wireguard peer
[0] endpoint:[192.168.10.0:50869->192.168.10.3:50869] wg0 keep-alive:0 flags: 0, api-clients count: 0
  adj:
  key:qZz6XPwtrrEJw2rnzFHXYCm5KGm7+Cc9clpoP+B6kQc= a99cfa5cfc2daeb109c36ae7cc51d76029b92869bbf8273d725a683fe07a9107
  allowed-ips: 10.0.123.0/24
vpp1# ping 10.0.123.2
Statistics: 5 sent, 0 received, 100% packet loss

And VPP2:

vpp2# comment { public-key qZz6XPwtrrEJw2rnzFHXYCm5KGm7+Cc9clpoP+B6kQc= }
vpp2# wireguard create listen-port 50869 src 192.168.10.3 private-key ALDP4ZEScaLRjU4We8vwyu1UGPFmz7XjMHSFZ7D7GG0=
vpp2# set int state wg0 up
vpp2# set int mtu packet 1420 wg0
vpp2# set interface ip address wg0 10.0.123.2/24
vpp2# set interface ip address wg0 2001:db8::2/64
vpp2# wireguard peer add wg0 public-key V7PzYlAh+CkOrfnoJfuRGQS/D/4VgQcDVX4LeXE5V1A= allowed-ip 10.0.123.0/24 port 50869 endpoint 192.168.10.0
vpp2# show wireguard peer
[0] endpoint:[192.168.10.3:50869->192.168.10.0:50869] wg0 keep-alive:0 flags: 0, api-clients count: 0
  adj:
  key:V7PzYlAh+CkOrfnoJfuRGQS/D/4VgQcDVX4LeXE5V1A= 57b3f3625021f8290eadf9e825fb911904bf0ffe15810703557e0b7971395750
  allowed-ips: 10.0.123.0/24
vpp2# ping 10.0.123.1
Statistics: 5 sent, 0 received, 100% packet loss

As expected, I'm seeing UDP handshake packets in both directions:

pim@hvn0:~$ sudo tcpdump -evni vpp1-vpp2 port 50869
tcpdump: listening on vpp1-vpp2, link-type EN10MB (Ethernet), snapshot length 262144 bytes
18:40:09.727385 52:54:00:01:10:02 > 52:54:00:02:10:01, ethertype IPv4 (0x0800), length 190: (tos 0x0, ttl 62, id 0, offset 0, flags [none], proto UDP (17), length 176)
    *192.168.10.0.50869 > 192.168.10.3.50869*: UDP, length 148
18:40:14.873725 52:54:00:01:10:02 > 52:54:00:02:10:01, ethertype IPv4 (0x0800), length 190: (tos 0x0, ttl 62, id 0, offset 0, flags [none], proto UDP (17), length 176)
    192.168.10.0.50869 > 192.168.10.3.50869: UDP, length 148
18:40:16.162158 52:54:00:02:10:01 > 52:54:00:01:10:02, ethertype IPv4 (0x0800), length 190: (tos 0x0, ttl 62, id 0, offset 0, flags [none], proto UDP (17), length 176)
    *192.168.10.3.50869 > 192.168.10.0.50869*: UDP, length 148
18:40:20.183312 52:54:00:01:10:02 > 52:54:00:02:10:01, ethertype IPv4 (0x0800), length 190: (tos 0x0, ttl 62, id 0, offset 0, flags [none], proto UDP (17), length 176)
    192.168.10.0.50869 > 192.168.10.3.50869: UDP, length 148
18:40:21.445446 52:54:00:02:10:01 > 52:54:00:01:10:02, ethertype IPv4 (0x0800), length 190: (tos 0x0, ttl 62, id 0, offset 0, flags [none], proto UDP (17), length 176)
    192.168.10.3.50869 > 192.168.10.0.50869: UDP, length 148

VPP is routing the UDP packets into wg4-input, which looks good:

pim@vpp0-0:~$ vppctl show ru | grep wg
wg-timer-manager    any wait    0    0    15988    3.13e4    0.00
wg4-input           active     20   20        0    1.48e4    1.00
pim@vpp0-3:~$ vppctl show run | grep wg
wg-timer-manager    any wait    0    0    13327    3.98e4    0.00
wg4-input           active      9    9        0    9.90e5    1.00
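For what it's worth, a packet trace is another way to confirm that the handshakes end up in wg4-input. A sketch of what I'd run, where 'virtio-input' is my assumption because these VMs use VPP's native virtio driver (it would be 'dpdk-input' if the NICs were bound to DPDK):

vpp1# clear trace
vpp1# trace add virtio-input 50
vpp1# comment { wait for a handshake or two }
vpp1# show trace

The trace should show the UDP packets to port 50869 being handed off to wg4-input.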
And after a few seconds, flags becomes WG_PEER_ESTABLISHED = 0x2, which is encouraging:

vpp1# show wireguard peer
[0] endpoint:[192.168.10.0:50869->192.168.10.3:50869] wg0 keep-alive:0 *flags: 2,* api-clients count: 0
  adj:
  key:qZz6XPwtrrEJw2rnzFHXYCm5KGm7+Cc9clpoP+B6kQc= a99cfa5cfc2daeb109c36ae7cc51d76029b92869bbf8273d725a683fe07a9107
  allowed-ips: 10.0.123.0/24

But I can't ping the neighbor (or, as a matter of fact, myself):

vpp1# ping 10.0.123.1
Statistics: 5 sent, 0 received, 100% packet loss
vpp1# ping 10.0.123.2
Statistics: 5 sent, 0 received, 100% packet loss

vpp1# show ip fib 10.0.123.1
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ] epoch:0 flags:none locks:[adjacency:1, default-route:1, lcp-rt:1, ]
10.0.123.1/32 fib:0 index:122 locks:2
  interface refs:1 entry-flags:connected,local, src-flags:added,contributing,active, cover:119
    path-list:[58] locks:2 flags:local, uPRF-list:48 len:0 itfs:[]
      path:[84] pl-index:58 ip4 weight=1 pref=0 receive: oper-flags:resolved, cfg-flags:local,
        [@0]: dpo-receive: 10.0.123.1 on wg0
 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:124 buckets:1 uRPF:48 to:[0:0]]
    [0] [@13]: dpo-receive: 10.0.123.1 on wg0

vpp1# show ip fib 10.0.123.2
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ] epoch:0 flags:none locks:[adjacency:1, default-route:1, lcp-rt:1, ]
10.0.123.0/24 fib:0 index:119 locks:2
  interface refs:1 entry-flags:connected,attached, src-flags:added,contributing,active, cover:-1
    path-list:[55] locks:2 uPRF-list:43 len:1 itfs:[11, ]
      path:[81] pl-index:55 ip4 weight=1 pref=0 attached: oper-flags:resolved, cfg-flags:glean, wg0
 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:121 buckets:1 uRPF:43 to:[5:480]]
    [0] [@0]: dpo-drop ip4

I was not sure how neighbor discovery works on the wireguard interface, so I set a static route to the peer on both machines:

*vpp2# ip route add 10.0.123.1/32 via wg0*
*vpp1# ip route add 10.0.123.2/32 via wg0*

vpp1# ping 10.0.123.2
116 bytes from 10.0.123.2: icmp_seq=1 ttl=64 time=4.5987 ms
116 bytes from 10.0.123.2: icmp_seq=2 ttl=64 time=5.6347 ms
116 bytes from 10.0.123.2: icmp_seq=3 ttl=64 time=5.6518 ms
116 bytes from 10.0.123.2: icmp_seq=4 ttl=64 time=5.6929 ms
116 bytes from 10.0.123.2: icmp_seq=5 ttl=64 time=5.6257 ms

Statistics: 5 sent, 5 received, 0% packet loss
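For completeness, I expect the IPv6 addresses on wg0 need the same treatment (plus having 2001:db8::/64 covered by the peer's allowed-ips), though I have not verified that part yet:

vpp1# ip route add 2001:db8::2/128 via wg0
vpp2# ip route add 2001:db8::1/128 via wg0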
Stanislav, if you have the dynamic endpoint behavior, I'd be very interested. As the remote might be behind NAT or DHCP, and the central wireguard concentrator is the only machine with a known IPv4/IPv6 address, it'd be very useful to be able to omit the 'endpoint' argument to the peer.

groet,
Pim

On Tue, Jan 11, 2022 at 5:02 PM Stanislav Zaikin <zsta...@gmail.com> wrote:

> Hi Pim,
>
> IIRC you should specify the endpoint on vpp1. At least it was true about a year ago.
> I even prepared a patch to enable this dynamic endpoint updating, but I don't remember what went wrong (there should be a reason why I didn't send it upstream).
>
> Could you give it a try by specifying the endpoint on vpp1 while I'm trying to find that patch?
>
> On Mon, 10 Jan 2022 at 13:01, Pim van Pelt <p...@ipng.nl> wrote:
>
>> Hoi folks,
>>
>> On a reasonably recent VPP, I'm trying to create a Wireguard tunnel; it's not working for me and I have a few questions (and found a few small bugs along the way) - I'm hoping you can help me further :)
>> This is on two instances of VPP running on a hypervisor (KVM, so interfaces are virtio) and the version is *vpp v22.02-rc0~347-gb28df767d built by pim on hippo at 2021-11-30T15:22:48*
>>
>> After bringing up basic connectivity, machine vpp1 has a loopback 192.168.10.0 and vpp2 has a loopback 192.168.10.3 -- they can reach each other fine, and there is no additional configuration (like ACLs and such):
>>
>> vpp1# ping 192.168.10.3
>> 116 bytes from 192.168.10.3: icmp_seq=1 ttl=62 time=5.7011 ms
>> 116 bytes from 192.168.10.3: icmp_seq=2 ttl=62 time=4.4814 ms
>> 116 bytes from 192.168.10.3: icmp_seq=3 ttl=62 time=4.4962 ms
>> 116 bytes from 192.168.10.3: icmp_seq=4 ttl=62 time=23.2758 ms
>> 116 bytes from 192.168.10.3: icmp_seq=5 ttl=62 time=5.6050 ms
>>
>> Statistics: 5 sent, 5 received, 0% packet loss
>>
>> vpp1# wireguard create listen-port 50869 src 192.168.10.0 generate-key
>> vpp1# set int state wg0 up
>> vpp1# set int mtu packet 1420 wg0
>> vpp1# set interface ip address wg0 10.0.123.1/24
>> vpp1# set interface ip address wg0 2001:db8::1/64
>>
>> vpp1# show wireguard interface
>> [0] wg0 src:192.168.10.0 port:50869 private-key:CJ5whwpgaWQRFGfU6PzJXYs06ix8IOfrE63iKDSl9lU= 089e70870a606964111467d4e8fcc95d8b34ea2c7c20e7eb13ade22834a5f655 public-key:x3ULwpplNvNRq5vl0ejj9ixlA5vEMLjip5M89Jvv3F0= c7750bc29a6536f351ab9be5d1e8e3f62c65039bc430b8e2a7933cf49befdc5d mac-key: ce323661f94c40e14e6efcfd5ca4827e5d4ea53cdc3cd4c3b0413462de99b539
>>
>> vpp1# wireguard peer add wg0 public-key qZz6XPwtrrEJw2rnzFHXYCm5KGm7+Cc9clpoP+B6kQc= allowed-ip 10.0.123.2/32
>> vpp1# show wireguard peer
>> [0] endpoint:[192.168.10.0:50869->202c:8103:ab7f:0:ff00:::0] wg0 keep-alive:0 flags: 0, api-clients count: 0
>>   adj:
>>   key:qZz6XPwtrrEJw2rnzFHXYCm5KGm7+Cc9clpoP+B6kQc= a99cfa5cfc2daeb109c36ae7cc51d76029b92869bbf8273d725a683fe07a9107
>>   allowed-ips: 10.0.123.2/32
>>
>> I noticed right off the bat that the endpoint seems weird: 192.168.10.0:50869->202c:8103:ab7f:0:ff00:::0 is off, considering nothing has been configured on machine vpp2 yet. That sounds like a formatting bug to me, so I continued with the other machine:
>>
>> vpp2# wireguard create listen-port 50869 src 192.168.10.3 generate-key
>> vpp2# set int state wg0 up
>> vpp2# set int mtu packet 1420 wg0
>> vpp2# set interface ip address wg0 10.0.123.2/24
>> vpp2# set interface ip address wg0 2001:db8::2/64
>>
>> vpp2# wireguard peer add wg0 public-key x3ULwpplNvNRq5vl0ejj9ixlA5vEMLjip5M89Jvv3F0= allowed-ip 10.0.123.0/24 port 50869 endpoint 192.168.10.0
>> vpp2# show wireguard peer
>> [0] endpoint:[192.168.10.3:50869->192.168.10.0:50869] wg0 keep-alive:0 flags: 0, api-clients count: 0
>>   adj:
>>   key:x3ULwpplNvNRq5vl0ejj9ixlA5vEMLjip5M89Jvv3F0= c7750bc29a6536f351ab9be5d1e8e3f62c65039bc430b8e2a7933cf49befdc5d
>>   allowed-ips: 10.0.123.0/24
>>
>> Observations:
>> * On vpp2, the relationship seems correct to me (192.168.10.3:50869->192.168.10.0:50869), but on vpp1 the relationship is still 192.168.10.0:50869->202c:8103:ab7f:0:ff00:::0
>> * If I don't specify an "allowed-ip" argument, VPP crashes.
>> It seems we can catch that and return an error instead.
>> * The usage of 'wireguard peer add' claims the argument is 'dst-port', but it's "port" instead. There's also a formatting error there (between <pub_key_other> and endpoint, missing space):
>>
>> wireguard peer add <wg_int> public-key <pub_key_other>endpoint <ip4_dst> allowed-ip <prefix>dst-port [port_dst] persistent-keepalive [keepalive_interval]
>>
>> The tunnel is not functional. If I look at the connection between vpp1 and vpp2, I do see that vpp2 is sending handshake packets:
>>
>> pim@hvn0:/srv/kvm$ sudo tcpdump -evni vpp1-vpp2 udp and port 50869
>> tcpdump: listening on vpp1-vpp2, link-type EN10MB (Ethernet), snapshot length 262144 bytes
>> 12:51:28.929809 52:54:00:02:10:01 > 52:54:00:01:10:02, ethertype IPv4 (0x0800), length 190: (tos 0x0, ttl 62, id 0, offset 0, flags [none], proto UDP (17), length 176)
>>     192.168.10.3.50869 > 192.168.10.0.50869: UDP, length 148
>> 12:51:33.945859 52:54:00:02:10:01 > 52:54:00:01:10:02, ethertype IPv4 (0x0800), length 190: (tos 0x0, ttl 62, id 0, offset 0, flags [none], proto UDP (17), length 176)
>>     192.168.10.3.50869 > 192.168.10.0.50869: UDP, length 148
>> 12:51:39.216163 52:54:00:02:10:01 > 52:54:00:01:10:02, ethertype IPv4 (0x0800), length 190: (tos 0x0, ttl 62, id 0, offset 0, flags [none], proto UDP (17), length 176)
>>     192.168.10.3.50869 > 192.168.10.0.50869: UDP, length 148
>> 12:51:44.357225 52:54:00:02:10:01 > 52:54:00:01:10:02, ethertype IPv4 (0x0800), length 190: (tos 0x0, ttl 62, id 0, offset 0, flags [none], proto UDP (17), length 176)
>>     192.168.10.3.50869 > 192.168.10.0.50869: UDP, length 148
>>
>> But vpp1 is not receiving them. "show errors" and "show logging" give me no reasonable leads. Can somebody help me figure this out?
>>
>> groet,
>> Pim
>> --
>> Pim van Pelt <p...@ipng.nl>
>> PBVP1-RIPE - http://www.ipng.nl/
>>
>
> --
> Best regards
> Stanislav Zaikin

--
Pim van Pelt <p...@ipng.nl>
PBVP1-RIPE - http://www.ipng.nl/