On 06/22/2016 03:42 AM, Priyanka wrote:
Hi,

We have an OpenStack Juno setup with one controller+neutron node and three
compute nodes. One VM (the LB) has ipvsadm installed, and two VMs act as
back-end servers.

On the server with ipvsadm, eth0:0 carries the IP 192.168.1.21, which acts as
the application (virtual) IP. ipvsadm uses the round-robin scheduler. This is
set up with the commands below:

# create a virtual TCP service on the application IP with round-robin scheduling
sudo ipvsadm -A -t 192.168.1.21:6000 -s rr
# add the two real servers in direct-routing ("gatewaying") mode
sudo ipvsadm -a -t 192.168.1.21:6000 -r 192.168.1.77:6000 -g
sudo ipvsadm -a -t 192.168.1.21:6000 -r 192.168.1.79:6000 -g

where 192.168.1.77 and 192.168.1.79 are the IPs of the back-end server VMs.

The problem is that the packets go out of the LB VM but never reach the
back-end servers.

You asked a similar question last week, and I asked why you weren't just using Neutron LBaaS to do this. It seems you are trying to implement your own load balancer inside a tenant VM.

Also, Juno is very old; using a newer release would give you access to Octavia (LBaaS v2), which has more advanced features.
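
For comparison, the setup you describe collapses to a few CLI calls with Juno's LBaaS v1. A rough sketch, untested; <SUBNET_ID> is a placeholder for your tenant subnet's ID:

# create a round-robin TCP pool on the tenant subnet
neutron lb-pool-create --name app-pool --lb-method ROUND_ROBIN \
    --protocol TCP --subnet-id <SUBNET_ID>

# register the two back-end servers as pool members
neutron lb-member-create --address 192.168.1.77 --protocol-port 6000 app-pool
neutron lb-member-create --address 192.168.1.79 --protocol-port 6000 app-pool

# create the VIP that clients connect to
neutron lb-vip-create --name app-vip --protocol TCP --protocol-port 6000 \
    --subnet-id <SUBNET_ID> app-pool

Because LBaaS plugs the VIP in as a proper Neutron port, none of the anti-spoofing problems below apply.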

Tcpdumps on various interfaces show that the packets reach the qbr bridge of
the LB VM but do not reach its qvo interface. Are there any rules applied at
this point that block these packets? The LB VM sends the packets from the
client VM to the back-end server by rewriting their destination MAC. The
packets that leave the LB VM for the back-end VM have the client VM's IP as
the source IP, 192.168.1.21 (the application IP) as the destination IP, the
LB VM's MAC as the source MAC, and the back-end server VM's MAC as the
destination MAC. Is this the reason the packets are blocked? Is there any way
to allow these packets to flow to the back-end server?

Neutron installs anti-spoofing rules on each port, and those are most likely causing the packets to get dropped: the packets leave the LB VM with a source IP (the client's) that does not match the LB port's fixed IP, so the iptables rules on the qbr bridge discard them.
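
If you want to keep the in-VM LVS-DR setup, the usual workaround is Neutron's allowed-address-pairs extension, which widens the set of addresses a port may send from or answer for. A sketch of what that could look like; the port IDs below are placeholders you would look up with "neutron port-list":

# let the LB port emit packets with client source IPs (LVS-DR forwards
# packets with the original client IP as source), and cover the VIP on eth0:0
neutron port-update <LB_PORT_ID> \
    --allowed-address-pairs type=dict list=true ip_address=192.168.1.0/24

# let each back-end port receive and reply for the VIP 192.168.1.21
neutron port-update <BACKEND_PORT_ID> \
    --allowed-address-pairs type=dict list=true ip_address=192.168.1.21

The MAC addresses in your dump are fine (the LB uses its own source MAC), so relaxing the IP checks should be enough for the packets to pass qbr.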

-Brian
