Hi Pierre,
I have configured GRE on my backend servers, but it does not solve my
issue. I probably did not do it right.

On the client, "telnet 10.99.99.101" and "ping 10.99.99.101" did not reach
the backend server.
On the VPP LB node, "vppctl show trace" gave me "No packets in the trace
buffer".

I think I am lost. Maybe you can do a quick review of my setup below and
see where my understanding of VPP LB and GRE went wrong... :)
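One thing I realize about the empty trace buffer: as far as I understand, the buffer has to be armed with "trace add" before the test traffic is sent, otherwise "show trace" stays empty. The sequence I plan to use (assuming the stock vppctl trace commands) is:

```shell
# Arm the trace on the VPP LB node BEFORE generating traffic from the
# client; "show trace" only reports packets captured after "trace add".
vppctl trace add dpdk-input 8

# ... now run telnet/ping 10.99.99.101 from the client ...

vppctl show trace        # dump what the graph did with the packets
vppctl clear trace       # reset the buffer between test runs
```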

John




VIP is on the 10.99.99.0/24 network (virtual)

VPP LB Client:
    IP: 10.145.207.151
    ip route add ${VIP}/24 via 10.145.207.166 dev eth0   # route VIP traffic to the VPP LB node
    eth0 is on 10.145.207.0/24 (client, VPP LB, and backend servers are all on this subnet)
    telnet or ping 10.99.99.101 does not work
VPP LB:
    IP: 10.145.207.166
    lb conf ip4-src-address 10.145.207.1 timeout 10
    lb vip ${VIP}/24 encap gre4 new_len 1024
    # 3 backend servers
    lb as ${VIP}/24 10.145.207.168 10.145.207.141 10.145.207.142
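For completeness, the same configuration as I enter it through vppctl, with one "lb as" line per backend (this should be equivalent to the combined form above):

```shell
# VIP used for this test (one address inside the 10.99.99.0/24 prefix)
VIP=10.99.99.101

# GRE encap source address the LB will use, and flow timeout
vppctl lb conf ip4-src-address 10.145.207.1 timeout 10

# Declare the VIP prefix with GRE-over-IPv4 encapsulation
vppctl lb vip ${VIP}/24 encap gre4 new_len 1024

# Register the 3 application servers (backends) for this VIP
vppctl lb as ${VIP}/24 10.145.207.141
vppctl lb as ${VIP}/24 10.145.207.142
vppctl lb as ${VIP}/24 10.145.207.168
```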
On backend servers (141, 142, 168):
    I have one of the VIPs (10.99.99.101) configured on the dummy0 device
    ip tunnel add netVIP mode gre remote any local 10.145.207.168
    ip link set netVIP up
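In case it helps the review, here is how I now understand the backend side should look. This is only my sketch: it assumes the GRE packets arrive with source 10.145.207.1 (the ip4-src-address above), and that "local" should be each backend's own address rather than .168 on every server.

```shell
# Run on EACH backend, substituting that host's own eth0 address.
LOCAL_IP=10.145.207.141   # assumption: this backend's own address

# GRE tunnel that decapsulates packets from the LB; "local" must match
# the outer destination address, i.e. this host, not .168 everywhere.
ip tunnel add netVIP mode gre remote 10.145.207.1 local ${LOCAL_IP}
ip link set netVIP up

# The VIP the backend answers for, on a dummy interface so it never ARPs
# on the shared subnet.
ip link add dummy0 type dummy 2>/dev/null || true
ip addr add 10.99.99.101/32 dev dummy0
ip link set dummy0 up

# Requests arrive via GRE but replies go out directly (asymmetric path),
# so relax reverse-path filtering on the tunnel.
sysctl -w net.ipv4.conf.netVIP.rp_filter=0
sysctl -w net.ipv4.conf.all.rp_filter=0
```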


On Thu, Oct 19, 2017 at 11:55 PM, Pierre Pfister (ppfister) <
ppfis...@cisco.com> wrote:

> http://lartc.org/howto/lartc.tunnel.gre.html
>
> On 20 Oct 2017 at 08:47, John Wei <johnt...@gmail.com> wrote:
>
> No, I did not configure a GRE tunnel on the backend servers. Since I was
> able to make a connection from the VPP LB host to the backend servers, I
> thought that was not needed. So far, I have just configured the VIP on the
> dummy device on the backend servers.
>
> Can you give me instructions on how to configure the GRE tunnel? I saw
> some write-ups, but I am not sure whether they are applicable to VPP LB.
>
> John
>
> On Thu, Oct 19, 2017 at 11:27 PM, Pierre Pfister (ppfister) <
> ppfis...@cisco.com> wrote:
>
>> Can you do a trace (trace add dpdk-input 4, show trace) to see what VPP
>> is doing?
>> Did you configure the GRE tunnel on the backend servers?
>> Can you show the full tcpdump capture on the backend server (with max
>> verbosity)?
>>
>> Thanks,
>>
>> - pierre
>>
>> On 20 Oct 2017 at 04:25, John Wei <johnt...@gmail.com> wrote:
>>
>> Hi Pierre,
>> Thanks for the info. I was able to bring up the VPP LB node with 3
>> backend servers, and used ping and telnet to verify that the backend
>> servers can be reached from the VPP LB node. I'll deal with DPDK later.
>> Then I tried the same thing from a client; the VPP LB node got the
>> request but did not forward it to a backend server.
>> Below is the relevant information; let me know if you can see where
>> I went wrong.
>>
>> VIP is on 10.99.99.0/24 network
>>
>> VPP LB Client:
>>      10.145.207.151
>>       I have set up a route to reach the VIP subnet through the VPP LB node
>>             ip route add ${VIP}/24 dev eth0
>>       eth0 is on 10.145.207.0/24 (client, VPP LB, and backend servers
>> are all on this subnet)
>> VPP LB
>>     10.145.207.166
>>      lb conf ip4-src-address 10.145.207.1 timeout 10
>>
>>      # configure VIP
>>      lb vip ${VIP}/24 encap gre4 new_len 1024
>>
>>      # 3 backend servers
>>      lb as ${VIP}/24 10.145.207.141
>>      lb as ${VIP}/24 10.145.207.142
>>      lb as ${VIP}/24 10.145.207.168
>>
>> On backend servers (141, 142, 168)
>>     In addition to its own host IP, I have one of the VIPs (10.99.99.101)
>> configured on a dummy device.
>>
>> As mentioned, on the VPP LB node I can telnet or ping 10.99.99.101, but I
>> was not able to do that from the client node.
>> On the VPP LB node, I used "tcpdump -n -i eth0 port 23" and can see the
>> incoming request from 10.145.207.151 trying to reach 10.99.99.101,
>> but no further response.
>> I wonder why the VPP LB is not forwarding requests coming from outside
>> the node?
>>
>> John
>>
>>
>>
_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
