Pim:
My configuration is a bit more complex: it uses GRE over WireGuard to
connect home and office. Here is the VPP configuration of pc2, which has
the trouble. (pc1 has the same configuration, except for a public
WireGuard address.)

Basically: loop1 ---> gre ---> wireguard (office) ------> wireguard (home)
---> gre ---> loop1
ospfd listens on the LCP loop1 interface on each end.

set interface state TenGigabitEthernet3/0/0 up
set interface ip address TenGigabitEthernet3/0/0 192.168.1.249/24
ip route add 0.0.0.0/0 via 192.168.1.1
wireguard create listen-port 51000 private-key ****** src 192.168.1.249
set interface state wg0 up
set interface ip address wg0 172.16.0.200/16
wireguard peer add wg0 public-key ********* endpoint x.x.x.x allowed-ip 172.16.0.100/32 dst-port 51000 persistent-keepalive 25
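
(For reference, I can check the peer/handshake state with the wireguard
plugin's show commands; a sketch, assuming the stock VPP CLI:)

show wireguard interface
show wireguard peer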

create loopback interface mac 2a:ab:3c:4d:5e:6f instance 1
set int mtu 1360 loop1
set int l2 learn loop1 disable
set int state loop1 up
set int ip addr loop1 10.10.0.200/31

create gre tunnel src 172.16.0.200 dst 172.16.0.100 teb
set int state gre0 up
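
(To verify the tunnel endpoints, something like this should work, assuming
the standard GRE CLI:)

show gre tunnel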

create bridge-domain 100 learn 1 forward 1 uu-flood 1 flood 1 arp-term 0
set int l2 bridge loop1 100 bvi

set int l2 bridge gre0  100 1
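
(To confirm loop1 is the BVI and gre0 is a member port of bridge-domain
100, the standard l2 CLI shows it:)

show bridge-domain 100 detail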

lcp lcp-sync on
lcp lcp-auto-subint on

lcp create TenGigabitEthernet3/0/0 host-if ensf0
lcp create loop1 host-if loop1
ip route add 192.168.230.0/24 via 10.10.0.201
ip route add 10.0.0.0/24 via 10.10.0.201
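
(As a sanity check, the resulting FIB entries can be inspected with the
standard VPP CLI:)

show ip fib 192.168.230.0/24
show ip fib 10.0.0.0/24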

I attached the ospfd.conf in the previous email. Here is the bird.conf:
protocol ospf v2 ospf4 {
  debug all;
  ipv4 { export where source = RTS_DEVICE; import all; };
  area 0 {
    interface "lo" { stub yes; };
    interface "loop1" { type broadcast; cost 5; };
  };
}
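
(Per your earlier suggestion, the pointopoint variant for the /31 link
would look like this; an untested sketch on my side:)

protocol ospf v2 ospf4 {
  debug all;
  ipv4 { export where source = RTS_DEVICE; import all; };
  area 0 {
    interface "lo" { stub yes; };
    interface "loop1" { type pointopoint; cost 5; };
  };
}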

Best,
Chunhui

On Sat, Mar 5, 2022 at 12:09 AM Pim van Pelt <p...@ipng.nl> wrote:

> Hoi,
>
> As an aside, you'll probably want an interface type of pointopoint (as
> opposed to broadcast) on a /31 OSPF link, as there can only be two
> participants.
> I don't understand how you configured VPP. Can you share the VPP commands
> you used to create the topology?
>
> This little snippet of VPP configuration implements the topology you
> described: OSPF and OSPFv3 over a GRE v4 underlay:
>
> vpp# lcp create GigabitEthernet10/0/1 host-if e1
> vpp# set interface state GigabitEthernet10/0/1 up
> vpp# set interface ip address GigabitEthernet10/0/1 2001:678:d78:200:0:0:1:01/124
> vpp# set interface ip address GigabitEthernet10/0/1 192.168.10.17/31
> vpp# create gre tunnel src 192.168.10.17 dst 192.168.10.16
> gre0
> vpp# set interface state gre0 up
> vpp# set interface ip address gre0 10.0.0.1/31
> vpp# lcp create gre0 host-if gre0 tun
>
> And then in Linux for BIRD2:
>
> protocol ospf v2 ospf4 {
>   ipv4 { export where source = RTS_DEVICE; import all; };
>   area 0 {
>     interface "loop0" { stub yes; };
>     interface "gre0" { type pointopoint; cost 5; bfd off; };
>   };
> }
>
> protocol ospf v3 ospf6 {
>   ipv6 { export where source = RTS_DEVICE; import all; };
>   area 0 {
>     interface "loop0" { stub yes; };
>     interface "gre0" { type pointopoint; cost 5; bfd off; };
>   };
> }
>
>
> root@vpp0-1:/etc/bird# ip -br a
> lo               UNKNOWN        127.0.0.1/8 ::1/128
> loop0            UP             192.168.10.1/32 2001:678:d78:200::1/128 fe80::dcad:ff:fe00:0/64
> e0               DOWN
> e1               UP             192.168.10.17/31 2001:678:d78:200::1:1/124 fe80::5054:ff:fe01:1001/64
> e2               UP             192.168.10.18/31 2001:678:d78:200::2:1/124 fe80::5054:ff:fe01:1002/64
> e3               DOWN
> gre0             UP             10.0.0.1/31 fe80::fd16:4fa7:d382:6eed/64
>
>
> root@vpp0-1:/etc/bird# ping 10.0.0.0
> PING 10.0.0.0 (10.0.0.0) 56(84) bytes of data.
> 64 bytes from 10.0.0.0: icmp_seq=1 ttl=64 time=2.82 ms
> 64 bytes from 10.0.0.0: icmp_seq=2 ttl=64 time=3.90 ms
> 64 bytes from 10.0.0.0: icmp_seq=3 ttl=64 time=3.64 ms
> 64 bytes from 10.0.0.0: icmp_seq=4 ttl=64 time=1.83 ms
> ^C
> --- 10.0.0.0 ping statistics ---
> 4 packets transmitted, 4 received, 0% packet loss, time 3003ms
> rtt min/avg/max/mdev = 1.833/3.048/3.897/0.806 ms
>
>
> root@vpp0-1:/etc/bird# birdc show ospf nei ospf4
> BIRD 2.0.7 ready.
> ospf4:
> Router ID       Pri          State      DTime   Interface  Router IP
> 192.168.10.0      1     Full/PtP        36.196  gre0       10.0.0.0
>
> root@vpp0-1:/etc/bird# birdc show ospf nei ospf6
> BIRD 2.0.7 ready.
> ospf6:
> Router ID       Pri          State      DTime   Interface  Router IP
> 192.168.10.0      1     Full/PtP        35.241  gre0       fe80::9045:a0b1:9634:358c
>
> root@vpp0-1:/etc/bird#
>
> groet,
> Pim
>
> On Sat, Mar 5, 2022 at 4:23 AM Chunhui Zhan <chun...@emmuni.com> wrote:
>
>> Note that if I don't use the loopback--lcp--tunnel interfaces and just
>> use a plain physical interface to connect the two routers, both FRR and
>> BIRD work fine.
>>
>> It smells a little bit fishy here.
>>
>> On Fri, Mar 4, 2022 at 4:27 PM Chunhui Zhan <chun...@emmuni.com> wrote:
>>
>>> Hi Pim,
>>> I disabled ping_plugin.so, and now ICMP passes through the interface.
>>>
>>> I could not make FRR work, so I tried BIRD2, but got the same results
>>> as with FRR: the OSPF hello packets are not picked up by one of the
>>> peer routers. Here is my test topology:
>>>
>>> pc1 loop1 ---lcp--- vpp1 loopback === gre tunnel === vpp2 loopback ---lcp--- loop1 pc2
>>>      (10.10.0.201/31)                                   (10.10.0.200/31)
>>>
>>> On pc1, tcpdump shows hello packets from both sides, and bird shows the
>>> neighbor state as Init:
>>> bird> show ospf neighbors
>>> ospf4:
>>> Router ID   Pri     State     DTime Interface  Router IP
>>> 127.0.0.200  1 Init/Other 34.845 loop1      10.10.0.200
>>>
>>> On pc2, tcpdump shows hello packets being both sent and received on the
>>> interface, but the bird log only shows hello packets being sent, never
>>> received. So on pc2 the neighbor list is empty; the bird log contradicts
>>> the tcpdump.
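>>>
>>> (For reference, the capture was taken with something like this
>>> hypothetical invocation; -v prints the full hello contents:)
>>>
>>> tcpdump -i loop1 -v 'ip proto 89'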
>>>
>>> Any idea here?
>>> Thanks.
>>> Chunhui
>>>
>>> pc2 bird log (only sent packets, no received packets):
>>> 2022-03-05 00:20:21.998 <TRACE> ospf4: HELLO packet sent via loop1
>>> 2022-03-05 00:20:31.997 <TRACE> device1: Scanning interfaces
>>> 2022-03-05 00:20:31.999 <TRACE> ospf4: HELLO packet sent via loop1
>>> 2022-03-05 00:20:41.997 <TRACE> device1: Scanning interfaces
>>> 2022-03-05 00:20:41.998 <TRACE> kernel4: Scanning routing table
>>> 2022-03-05 00:20:41.998 <TRACE> kernel4: Pruning table master4
>>> 2022-03-05 00:20:41.998 <TRACE> kernel6: Pruning table master6
>>> 2022-03-05 00:20:41.998 <TRACE> ospf4: HELLO packet sent via loop1
>>>
>>> But tcpdump on pc2 (10.10.0.200) clearly shows hello packets being sent
>>> and also hellos received from pc1 (10.10.0.201):
>>> 00:10:01.999053 2a:ab:3c:4d:5e:6f (oui Unknown) > 01:00:5e:00:00:05 (oui
>>> Unknown), ethertype IPv4 (0x0800), length 78: (tos 0xc0, ttl 1, id 30153,
>>> offset 0, flags [none], proto OSPF (89), length 64)
>>>     10.10.0.200 > ospf-all.mcast.net: OSPFv2, Hello, length 44
>>> Router-ID 127.0.0.200, Backbone Area, Authentication Type: none (0)
>>> Options [External]
>>>  Hello Timer 10s, Dead Timer 40s, Mask 255.255.255.254, Priority 1
>>>
>>> 00:10:09.994781 2a:ab:3c:4d:5e:7f (oui Unknown) > 01:00:5e:00:00:05 (oui
>>> Unknown), ethertype IPv4 (0x0800), length 82: (tos 0xc0, ttl 1, id 63898,
>>> offset 0, flags [none], proto OSPF (89), length 68)
>>>     10.10.0.201 > ospf-all.mcast.net: OSPFv2, Hello, length 48
>>> Router-ID 127.0.0.201, Backbone Area, Authentication Type: none (0)
>>> Options [External]
>>>  Hello Timer 10s, Dead Timer 40s, Mask 255.255.255.254, Priority 1
>>>  Designated Router 10.10.0.201
>>>  Neighbor List:
>>>    127.0.0.200
>>>
>>>
>>> The bird.conf is basically the same on both ends:
>>> protocol ospf v2 ospf4 {
>>>   debug all;
>>>   ipv4 { export where source = RTS_DEVICE; import all; };
>>>   area 0 {
>>>     interface "lo" { stub yes; };
>>>     interface "loop1" { type broadcast; cost 5; };
>>>   };
>>> }
>>>
>>> On Fri, Mar 4, 2022 at 1:01 AM Pim van Pelt <p...@ipng.nl> wrote:
>>>
>>>> +vpp-dev
>>>>
>>>> I wasn't aware of a mailing list outage, but I'm sure it'll solve itself
>>>> soon enough :-) Putting the list back on CC.
>>>>
>>>> VPP has a ping plugin, which you are recommended to turn off when using
>>>> the Linux control plane - see the note all the way at the bottom here:
>>>>
>>>> https://s3-docs.fd.io/vpp/22.06/developer/plugins/lcp.html?highlight=ping
>>>>
>>>> Leaving the ping plugin on allows VPP to respond to pings itself (i.e.
>>>> not punt them into the TAP device for Linux to see), but as you observed,
>>>> higher-level tools like FRR will not receive the packets in this case.
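>>>>
>>>> For example, a minimal startup.conf stanza to disable it (a sketch, per
>>>> the note linked above):
>>>>
>>>> plugins {
>>>>   plugin ping_plugin.so { disable }
>>>> }
>>>>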
>>>> You didn't specify it very clearly, but for other readers, I assume
>>>> when you said 'running FRR, ... only see the hello broadcast packets' ,
>>>> that you meant to run OSPF and you saw hello multicast packets.
>>>> Incidentally, I don't know why FRR insists on pinging its neighbors before
>>>> establishing an OSPF adjacency - it seems unnecessary, and even undesirable
>>>> to me.
>>>>
>>>> groet,
>>>> Pim
>>>>
>>>> On Fri, Mar 4, 2022 at 1:15 AM Chunhui Zhan <chun...@emmuni.com> wrote:
>>>>
>>>>> Hi, Pim,
>>>>> The vpp-dev mailing list is down, so I'm DMing you here:
>>>>>
>>>>> I am using vpp 21.10 plus your private lcp repo
>>>>> github.com/pimvanpelt/lcpng.git/
>>>>>
>>>>> I have a loopback interface (e.g. 10.10.0.200/31) configured as a BVI
>>>>> on each of two boxes, with a GRE tunnel connecting them. The loopback
>>>>> interfaces are exposed to the hosts via LCP.
>>>>>
>>>>> I can SSH from one host's loopback to the other box, and ICMP ping
>>>>> works too. But the ICMP reply comes directly from the loopback in VPP;
>>>>> the ICMP packet is not forwarded to the host interface (verified with
>>>>> tcpdump).
>>>>> Running FRR on the LCP host interface failed; I only see the hello
>>>>> broadcast packets.
>>>>>
>>>>> Does LCP not work on loopback interfaces?
>>>>>
>>>>> Thanks.
>>>>> Chunhui
>>>>>
>>>>
>>>>
>>>> --
>>>> Pim van Pelt <p...@ipng.nl>
>>>> PBVP1-RIPE - http://www.ipng.nl/
>>>>
>>>
>
> --
> Pim van Pelt <p...@ipng.nl>
> PBVP1-RIPE - http://www.ipng.nl/