On 06/23/2016 03:23 AM, Priyanka wrote:
Hi,

We want a direct-routing LB, and LVS supports it, so we were trying that option.
Can we add a rule to the neutron-openvswi chain of the LB VM on the compute node
to prevent these packets from being dropped? If yes, please guide us on how to
configure such a rule. I can see a DROP rule in the chain that drops anything
other than packets with the IP and MAC of the LB VM, but our packets carry a
different IP. Would a rule also need to be added to the back-end server VM's
neutron-openvswi chain?

Doing it on your own is fine, but do know that there's only so much support we can give when you're not using the built-in tools that already exist. Luckily you have all the Neutron source code, especially since you're running an unsupported release like Juno.

Here are two things you can look at:

1) Allowed address pairs

2) Remove the port security feature on certain ports:

2a) remove the security group from the port
2b) neutron port-update $port --port-security-enabled=False

Typically you'd create the port first, then pass it along during the nova boot phase, but you should be able to update it afterwards.
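As a rough sketch of both options with the Juno-era neutron CLI ($LB_PORT is a
placeholder for your port ID, and flag availability depends on your Neutron
version and enabled extensions):

```shell
# Option 1: allowed-address-pairs -- let the port send/receive extra addresses.
# With LVS-DR the forwarded packets keep the client's source IP, so the LB
# port may need a CIDR rather than just the VIP; a pair can be a CIDR.
neutron port-update $LB_PORT \
    --allowed-address-pairs type=dict list=true ip_address=192.168.1.21

# Option 2: remove the security group from the port, then disable port
# security entirely (needs the port-security extension)
neutron port-update $LB_PORT --no-security-groups
neutron port-update $LB_PORT --port-security-enabled=False
```

Whichever you choose, remember the forwarded packets also have to get into the
back-end server ports, so the same change is needed on those ports too.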

-Brian

On Wednesday 22 June 2016 08:16 PM, Brian Haley wrote:
On 06/22/2016 03:42 AM, Priyanka wrote:
Hi,

We have an OpenStack Juno setup with one controller+neutron node and three
compute nodes. One VM (the LB) has ipvsadm installed, and two VMs act as
back-end servers.

On the VM running ipvsadm, eth0:0 is assigned 192.168.1.21, which acts as the
application IP. ipvsadm uses the round-robin scheduler, configured with the
commands below:

# Create a virtual service on the application IP with round-robin scheduling
sudo ipvsadm -A -t 192.168.1.21:6000 -s rr
# Add the real servers in direct-routing (gatewaying, -g) mode
sudo ipvsadm -a -t 192.168.1.21:6000 -r 192.168.1.77:6000 -g
sudo ipvsadm -a -t 192.168.1.21:6000 -r 192.168.1.79:6000 -g

where 192.168.1.77 and 192.168.1.79 are the back-end server VM IPs.

The problem is that the packets go out of the LB VM but never reach the
back-end servers.

You asked a similar question last week, and I asked why you weren't just
using Neutron LBaaS to do this?  It seems you are trying to implement your
own load balancer inside a tenant VM.

Also, Juno is very old; using a newer release would give you access to Octavia
(LBaaS v2), which has more advanced features.

tcpdumps on the various interfaces show that the packets reach the qbr bridge
of the LB VM but do not reach its qvo interface. Are there any rules applied
here that block these packets? The LB VM forwards packets from the client VM to
the back-end server by rewriting the destination MAC. The packets that leave
the LB VM toward the back-end VM therefore have the client VM's source IP, a
destination IP of 192.168.1.21 (the application IP), the LB VM's source MAC,
and the back-end server VM's destination MAC. Is this why the packets are
blocked? Is there any way to allow these packets to flow to the back-end server?

There are anti-spoofing rules installed that are most likely causing the
packets to get dropped.
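To see the rules in question on the compute node, you can dump the per-port
chains the security-group agent installs (the chain suffix comes from the
Neutron port UUID and will differ on your host; the rules shown are
illustrative, not verbatim):

```shell
# List the source anti-spoofing chains for all ports on this compute node
sudo iptables-save | grep neutron-openvswi-s

# A typical chain RETURNs only traffic matching the port's own fixed
# IP/MAC pair and drops everything else, roughly:
#   -A neutron-openvswi-sXXXXXXXX-X -s <port fixed IP>/32 \
#       -m mac --mac-source FA:16:3E:xx:xx:xx -j RETURN
#   -A neutron-openvswi-sXXXXXXXX-X -j DROP
```

Since the packets your LB forwards keep the client's source IP, they fail this
check on egress, which matches the drop you're seeing between qbr and qvo.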

-Brian



_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
