[openstack-dev] guest instance and L2 based network failover

2013-11-11 Thread John Gruber
I found other posts that deal with the HA topic in general, but I did not
find one that specifically discusses, or gives guidance on, guest instance
network failover mechanisms.

I'm currently developing against neutron Grizzly using the OVS plugin with
VLANs and GRE tunnelling. Flooding is working on both. Telling us to move
to Havana is not a show stopper, but it will require work.

I have guest instances that want to use their own L2 failover mechanism.
They are clustered VMs that migrate L3 fixed_ip addresses on a triggered
failover, which can happen for many different reasons. On a typical
dynamically learning Ethernet network, the VMs send out GARPs, which take
care of the network update.
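
For reference, the announcement the guests send on failover is nothing
exotic, just a gratuitous ARP for the address that moved, roughly of the
form below (assuming the iputils arping is available in the guest; the
interface and address here are placeholders):

arping -U -c 3 -I eth0 10.0.62.30

The switches relearn the MAC location and the neighbours update their ARP
caches from that frame, which is the behaviour I am trying to keep working
under neutron.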

In neutron I can make port updates to move the fixed_ips from one port to
another, but that takes time to let everything catch up and delays the
failover process significantly.  I know what ports should be allowed
traffic for specific fixed_ips on a failover event, so it would be great if
I could allow everything I need before a failover is triggered.  Currently
the ip_spoofing_rule in the iptables firewall is getting in the way as it
will only let traffic originate from fixed_ips associated with a port. I
would love to be able to associate a specific fixed_ip with multiple ports
which would adjust the iptables rule, but that's a pretty fundamental
change seeing that IPAllocation is a foreign key to port in the data model.
For that matter, on the egress rule I would also like to allow multiple
MAC addresses in the destination filter, but that's not a requirement to
make this work quickly.
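
Concretely, the 'slow path' I have today is just a pair of port updates
against the controller, something along these lines (the uuids are
placeholders, 10.0.62.30 stands in for the migrating address, and
10.0.62.20/10.0.62.22 for the addresses that stay put on each port):

quantum port-update old-port-uuid -- --fixed_ips type=dict list=true
ip_address='10.0.62.20'
quantum port-update new-port-uuid -- --fixed_ips type=dict list=true
ip_address='10.0.62.22' ip_address='10.0.62.30'

The first call takes the migrating address off the old port and the second
adds it to the new one, but by the time the notification reaches the L2
agent and the iptables rules are rewritten on the compute node, the
failover has been delayed significantly.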

Anyone have a convenient way to augment the iptables ip_spoofing_rule to
allow for my failover without waiting on port updates to the controller to
migrate fixed_ips between ports? I have a mechanism to allow each fixed_ip
address to have its own port (MAC address) if that helps, but it
complicates the orchestration of both the guest instance setup and the failover.
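
To be explicit about what 'allow everything I need before a failover is
triggered' would mean on the compute node: essentially an extra
anti-spoofing exception in the per-port output chain of every port that
could take over the address, something like the following (chain names and
the rule position are only illustrative, following the
quantum-openvswi-o<port prefix> pattern, and 10.0.62.30 is again the
migrating address):

iptables -I quantum-openvswi-oPORT-A 3 -s 10.0.62.30 -j RETURN
iptables -I quantum-openvswi-oPORT-B 3 -s 10.0.62.30 -j RETURN

With something like that in place, a failover would only need the GARP from
the guest, and the port update to the controller could follow at its own
pace. Of course, anything added by hand this way is liable to be clobbered
the next time the agent rebuilds the chains, which is part of why I am
asking for a supported way to express it.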

Has there been any discussion around secondary_fixed_ips or
clustered_fixed_ips which can be associated with more than one port at a
time that I've missed on the mailing list?

Thanks for your help, everyone.

John


[openstack-dev] Fwd: Problem with nova add-fixed-ip or quantum port-update

2013-07-27 Thread John Gruber
Forwarding to -dev from -operators.

Does anyone know why, when a fixed_ip gets added to a guest port on an
external network, all connectivity on all of the guest's fixed_ips on the
external network gets blocked outbound on the compute node?

John

-- Forwarded message --
From: John Gruber 
Date: Fri, Jul 26, 2013 at 4:39 PM
Subject: Problem with nova add-fixed-ip or quantum port-update
To: openstack-operat...@lists.openstack.org


I am using Grizzly and I have a mix of both provider external networks
(VLANs) and tenant GRE tunnels.  The provider networks are obviously set up
as public, so VMs can start with interfaces on them.

I can start VMs just fine and get addresses via the dhcp_agent on both
external and tenant networks.

Everything is working well... until I need to add additional fixed_ips to
an existing VM vif on the external networks.

While I can get commands of the form:

nova add-fixed-ip vm-uuid net-uuid
repeat for each fixed-ip needed

and

quantum port-update port-uuid -- --fixed_ips type=dict list=true
ip_address='10.1.1.6' ip_address='10.1.1.7'


to execute correctly, and can see the fixed_ip addresses either allocated
from the network allocation pool (using the nova command) or my explicitly
defined addresses (using the quantum command) associated with my VM just
fine, I have a problem with security groups.
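
For what it's worth, I am confirming the result with a port-show against
the same port (the uuid is a placeholder), and the fixed_ips field does
list all of the addresses:

quantum port-show port-uuid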

I've simplified my security groups to just one 'default' where everything
is allowed.  I can start an ICMP ping test to my VM and see it working,
until I run the commands to provision additional fixed IPs. Once the command
takes effect on the compute node, all traffic to the VM interface on that
network stops.
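
For reference, the wide-open 'default' group amounts to rules along these
lines, added with the nova client (0.0.0.0/0 is just the catch-all CIDR):

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 1 65535 0.0.0.0/0
nova secgroup-add-rule default udp 1 65535 0.0.0.0/0

So the security group definition itself should not be blocking anything.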

Interestingly, adjacent hosts can see ARP entries with the correct MAC
address for the added fixed_ips, but I cannot make any connections to
them. If I tcpdump inside the VM, I see the TCP SYN requests arrive and the
VM answer with a SYN+ACK.  On the network outside the VM (trunked to the
compute node) I see the TCP SYN request enter the compute node, but no
SYN+ACK emerges. The problem is somewhere in allowing the VM to send
packets out to the external network.
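
The captures themselves are nothing special, roughly the following on each
side (the interface names are placeholders, and the address is one of the
added fixed_ips from the port-update above):

tcpdump -nn -i eth0 host 10.1.1.7 and tcp     # inside the guest: the SYN arrives, the SYN+ACK is sent
tcpdump -nn -e -i eth1 host 10.1.1.7 and tcp  # on the trunk outside the compute node: the SYN goes in, no SYN+ACK comes out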

Can anyone tell me how to 'HUP' the security group to allow traffic to my
new list of fixed_ips?

John


Re: [openstack-dev] Problem with nova add-fixed-ip or quantum port-update

2013-07-27 Thread John Gruber
So I got it to work, but I need guidance from the OVS iptables gang on what
the reasoning was and how to fix it in a 'compliant' manner.

Q.  Why are the iptables rules on the OVS output chains for the interfaces
written as if the vif should only ever have ONE IP address assigned, when
quantum can assign multiple fixed_ips?

For the example where IP address 10.0.62.20 was assigned to my guest VM on
an external interface at boot, and then I added 10.0.62.22 via
nova add-fixed-ip vm-uuid net-uuid...

Here is what I had in my iptables rules after adding the second fixed_ip:

iptables -L quantum-openvswi-o8a508818-0 --line-numbers
Chain quantum-openvswi-o8a508818-0 (2 references)
num  target     prot opt source       destination
1    DROP       all  --  anywhere     anywhere     MAC ! FA:16:3E:41:6B:15
2    RETURN     udp  --  anywhere     anywhere     udp spt:bootpc dpt:bootps
3    DROP       all  --  !10.0.62.20  anywhere
4    DROP       all  --  !10.0.62.22  anywhere
5    DROP       udp  --  anywhere     anywhere     udp spt:bootps dpt:bootpc
6    DROP       all  --  anywhere     anywhere     state INVALID
7    RETURN     all  --  anywhere     anywhere     state RELATED,ESTABLISHED
8    RETURN     all  --  anywhere     anywhere
9    quantum-openvswi-sg-fallback  all  --  anywhere     anywhere


This obviously will not work.  The two anti-spoofing rules shadow each
other: traffic sourced from 10.0.62.20 is dropped by rule 4, and traffic
sourced from 10.0.62.22 is dropped by rule 3, so all outbound access from
the guest VM on that network is cut off.  Which is exactly what I was
observing.

Running: iptables -D quantum-openvswi-o8a508818-0 4

And my access to 10.0.62.20 came back...

Running: iptables -D quantum-openvswi-o8a508818-0 3

And my access to 10.0.62.22 started working...


Please tell me we did not intend to create a cloud where quantum has no
problem assigning multiple fixed IPs to a port, but iptables will eat them
all up!  Oh the humanity...

I know how to make it work and can hunt down the iptables rootwrap
command, but what should we do about this? I could not find an existing bug...
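
For the record, the shape I would have expected the generated rules to take
with multiple fixed_ips is a RETURN per allowed source address followed by
a single DROP, for example by hanging the anti-spoofing check off its own
chain. A rough sketch of what I mean (the quantum-openvswi-s... chain name
is just my placeholder):

iptables -N quantum-openvswi-s8a508818-0
iptables -A quantum-openvswi-s8a508818-0 -s 10.0.62.20 -j RETURN
iptables -A quantum-openvswi-s8a508818-0 -s 10.0.62.22 -j RETURN
iptables -A quantum-openvswi-s8a508818-0 -j DROP
# replace the two shadowing DROPs (rules 3 and 4 above) with a single jump
iptables -R quantum-openvswi-o8a508818-0 3 -j quantum-openvswi-s8a508818-0
iptables -D quantum-openvswi-o8a508818-0 4

That way a packet sourced from either fixed_ip returns to the per-port
chain and still hits the remaining rules, while anything with a spoofed
source is dropped.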

John