Hi, Neale
Thanks for your help.
Below is a script showing how I set up the network:
#!/bin/bash
# start two busybox containers with no network (--network=none)
docker run -tid --name=con1 --network=none busybox
docker run -tid --name=con2 --network=none busybox

# remove any stale netns symlinks
[ ! -d /var/run/netns/ ] && mkdir -p /var/run/netns
rm -Rf /var/run/netns/con1
rm -Rf /var/run/netns/con2

# expose the netns of a container to the host
pid=$(docker inspect -f '{{.State.Pid}}' con1)
ln -s /proc/$pid/ns/net /var/run/netns/con1
# add a new veth pair, the host-side is vpp1, container-side is eth0
ip link add name eth0 type veth peer name vpp1
ip link set dev vpp1 up
ip link set dev eth0 up netns con1
# assign the address and routes inside the container
ip netns exec con1 ip addr add 192.168.1.1/32 dev eth0
ip netns exec con1 ip route replace 169.254.1.1 dev eth0
ip netns exec con1 ip route replace default via 169.254.1.1 dev eth0

# same setup for con2
pid=$(docker inspect -f '{{.State.Pid}}' con2)
ln -s /proc/$pid/ns/net /var/run/netns/con2
ip link add name eth0 type veth peer name vpp2
ip link set dev vpp2 up
ip link set dev eth0 up netns con2
ip netns exec con2 ip addr add 192.168.1.2/32 dev eth0
ip netns exec con2 ip route replace 169.254.1.1 dev eth0
ip netns exec con2 ip route replace default via 169.254.1.1 dev eth0

vppctl create host-interface name vpp1
vppctl create host-interface name vpp2
vppctl set int state host-vpp1 up
vppctl set int state host-vpp2 up
vppctl set ip arp proxy 169.254.1.1 - 169.254.1.1
vppctl set int proxy-arp host-vpp1 enable
vppctl set int proxy-arp host-vpp2 enable
vppctl ip route add 192.168.1.1/32 via host-vpp1
vppctl ip route add 192.168.1.2/32 via host-vpp2
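
For completeness, a few commands I use to sanity-check the setup before pinging (not part of the script above; they need root and a running VPP):

```shell
#!/bin/sh
# Confirm the namespaces are visible to iproute2:
ip netns list
# Confirm each container got its /32 address and the dummy-gateway routes:
ip netns exec con1 ip -4 addr show dev eth0
ip netns exec con1 ip route
# Confirm VPP sees the host interfaces and the /32 routes:
vppctl show interface
vppctl sh ip fib 192.168.1.1
```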


The last part differs slightly between versions. In VPP 16.06 I created the af_packet interfaces via /etc/vpp/startup.conf, but in VPP 17.01 I used “create host-interface” instead, following 
https://wiki.fd.io/index.php?title=VPP/Configure_VPP_As_A_Router_Between_Namespaces&oldid=4046

Then, if I run
docker exec con1 ping -c5 192.168.1.2
VPP restarts and wipes all of my previous configuration.

Thanks a lot
Xiao Pan
From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
Sent: Wednesday, March 8, 2017 4:17 PM
To: Pan, Xiao <xiao....@intel.com>; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] issues when enable proxy-arp

Hi Pan,

I see nothing untoward there. Could you please send me instructions on how to 
re-create your setup. Then I can investigate why VPP restarts.

Thanks,
neale

From: "Pan, Xiao" <xiao....@intel.com>
Date: Wednesday, 8 March 2017 at 02:34
To: "Neale Ranns (nranns)" <nra...@cisco.com>, "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: RE: [vpp-dev] issues when enable proxy-arp

Hi, Neale
Thanks for your help.
VPP 16.06 has no commands like “sh adj” or “sh adj nbr N”, so I used VPP 17.01 to capture the two outputs below.
After my first ping from cont2 (192.168.1.2) to cont1 (192.168.1.1), no packet was received by cont1, and VPP restarted.

Below is the result:
# vppctl sh adj
[@0]
[@1] arp-ipv4: via 192.168.1.1 host-vpp1
[@2] arp-ipv4: via 192.168.1.2 host-vpp2
# vppctl sh adj nbr 1
[@1] arp-ipv4: via 192.168.1.1 host-vpp1
locks:4 node:[178]:ip4-arp next:[1]:host-vpp1-output
children:
  {path:11}
# vppctl sh adj nbr 2
[@2] arp-ipv4: via 192.168.1.2 host-vpp2
locks:4 node:[178]:ip4-arp next:[2]:host-vpp2-output
children:
  {path:12}

# vppctl sh ip fib 192.168.1.1
ipv4-VRF:0, fib_index 0, flow hash: src dst sport dport proto
192.168.1.1/32 fib:0 index:11 locks:2
  src:CLI  refs:1
    index:11 locks:2 proto:ipv4 flags:shared, uPRF-list:11 len:1 itfs:[4, ]
      index:11 pl-index:11 ipv4 weight=1 attached-nexthop:  oper-flags:resolved,
       192.168.1.1 host-vpp1
          [@0]: arp-ipv4: via 192.168.1.1 host-vpp1

forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [index:12 buckets:1 uRPF:11 to:[0:0]]
[0] [@3]: arp-ipv4: via 192.168.1.1 host-vpp1

# vppctl sh ip fib 192.168.1.2
ipv4-VRF:0, fib_index 0, flow hash: src dst sport dport proto
192.168.1.2/32 fib:0 index:12 locks:2
  src:CLI  refs:1
    index:12 locks:2 proto:ipv4 flags:shared, uPRF-list:12 len:1 itfs:[5, ]
      index:12 pl-index:12 ipv4 weight=1 attached-nexthop:  oper-flags:resolved,
       192.168.1.2 host-vpp2
          [@0]: arp-ipv4: via 192.168.1.2 host-vpp2

forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [index:13 buckets:1 uRPF:12 to:[0:0]]
    [0] [@3]: arp-ipv4: via 192.168.1.2 host-vpp2


Thanks a lot!
Xiao Pan


From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
Sent: Tuesday, March 7, 2017 6:13 PM
To: Pan, Xiao <xiao....@intel.com>; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] issues when enable proxy-arp


Hi Pan,

Could you please collect:
sh adj
sh adj nbr 5
sh ip fib 192.168.1.1

both before and after the first ping.

Thanks,
neale

From: <vpp-dev-boun...@lists.fd.io> on behalf of "Pan, Xiao" <xiao....@intel.com>
Date: Tuesday, 7 March 2017 at 09:00
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: [vpp-dev] issues when enable proxy-arp

Hi, all
  I ran into an issue when enabling proxy-arp in VPP.
  I have two containers, cont1 and cont2, each attached to a veth pair: 192.168.1.1 for cont1 and 192.168.1.2 for cont2. The default route inside each container is shown below; 169.254.1.1 is a dummy gateway address.
Destination     Gateway         Genmask          Flags Metric Ref  Use Iface
0.0.0.0         169.254.1.1     0.0.0.0          UG    0      0    0   eth0
169.254.1.1     0.0.0.0         255.255.255.255  UH    0      0    0   eth0
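
The routes above were installed inside each container roughly as follows (a sketch for cont1; cont2 is analogous, needs root):

```shell
#!/bin/sh
# 169.254.1.1 is only an on-link dummy next-hop; it is never assigned to any interface.
ip netns exec con1 ip addr add 192.168.1.1/32 dev eth0
# host route so the dummy gateway is reachable on-link:
ip netns exec con1 ip route replace 169.254.1.1 dev eth0
# default route via the dummy gateway:
ip netns exec con1 ip route replace default via 169.254.1.1 dev eth0
```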

  Then, in VPP, I attach to the two veth interfaces via Linux AF_PACKET interfaces.
vpp# show interface
              Name               Idx       State          Counter          Count
af_packet0                        5         up
af_packet1                        6         up

  Then I configure proxy-arp and the FIB:
vpp# set ip arp proxy 169.254.1.1 - 169.254.1.1
vpp# set int proxy-arp af_packet0 enable
vpp# set int proxy-arp af_packet1 enable
vpp# ip route add 192.168.1.1/32 via af_packet0
vpp# ip route add 192.168.1.2/32 via af_packet1

After that, if I ping from cont1 to cont2, cont2 receives the packets but drops them all right away. I captured them with tcpdump:
[tcpdump screenshot: the left part is the packet container1 sent; the right part is what container2 received]
You can see that the first 14 bytes are stripped by VPP: the first 12 bytes are the dst and src MAC addresses, and bytes 13-14, “0800”, are the EtherType (IPv4).
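
As a quick sanity check on the arithmetic (my sketch; the 98/84 figures are the lengths from my trace below):

```shell
#!/bin/sh
# An Ethernet II header is dst MAC (6) + src MAC (6) + EtherType (2) = 14 bytes;
# 0x0800 is the EtherType for IPv4. That matches the 98 -> 84 byte shrink in the trace.
eth_hdr=$((6 + 6 + 2))
frame_len=98  # length at dpdk-input
ip_len=84     # length at af_packet0-tx
echo "ethernet header: ${eth_hdr} bytes; stripped: $((frame_len - ip_len)) bytes"
```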

I hit this problem in VPP 16.06. In VPP 17.01, after configuring and trying to ping, VPP restarts itself and wipes all of my previous configuration ☹.

I used “vppctl trace add dpdk-input 1” to follow the packet through VPP, then pinged from cont2 (192.168.1.2) to cont1 (192.168.1.1); the result is below.

When the packet is sent out by the “af_packet0-tx” node, its length is 84 bytes. I wonder why this operation cuts off the first 14 bytes.

00:54:30:521206: dpdk-input
  af_packet1 rx queue 0
  buffer 0x17af304: current data 0, length 98, free-list 0, totlen-nifb 0, trace 0x9
  PKT MBUF: port 1, nb_segs 1, pkt_len 98
    buf_len 2176, data_len 98, ol_flags 0x0,
    packet_type 0x0
  IP4: 3e:ca:93:00:28:bd -> 02:fe:79:66:eb:9a
  ICMP: 192.168.1.2 -> 192.168.1.1
    tos 0x00, ttl 64, length 84, checksum 0x6b24
    fragment id 0x4c31, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x29a5
00:54:30:521207: ethernet-input
  IP4: 3e:ca:93:00:28:bd -> 02:fe:79:66:eb:9a
00:54:30:521208: ip4-input
  ICMP: 192.168.1.2 -> 192.168.1.1
    tos 0x00, ttl 64, length 84, checksum 0x6b24
    fragment id 0x4c31, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x29a5
00:54:30:521208: ip4-lookup
  fib 0 adj-idx 5 : af_packet0 flow hash: 0x00000000
  ICMP: 192.168.1.2 -> 192.168.1.1
    tos 0x00, ttl 64, length 84, checksum 0x6b24
    fragment id 0x4c31, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x29a5
00:54:30:521209: ip4-rewrite-transit
  tx_sw_if_index 5 adj-idx 5 : af_packet0 flow hash: 0x00000000
  0xc0a8: 40:00:3f:01:6c:24 -> 45:00:00:54:4c:31
00:54:30:521209: af_packet0-output
  af_packet0
  0xc0a8: 40:00:3f:01:6c:24 -> 45:00:00:54:4c:31
00:54:30:521210: af_packet0-tx
  af_packet0 tx queue 0
  buffer 0x17af304: current data 14, length 84, free-list 0, totlen-nifb 0, trace 0x9
  0xc0a8: 40:00:3f:01:6c:24 -> 45:00:00:54:4c:31

Best Regards
Pan, Xiao

_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
