On Thu, Sep 22, 2011 at 5:12 PM, Salvatore Orlando <
salvatore.orla...@eu.citrix.com> wrote:

> Hi,
>
>
>
> I have installed an OpenStack testbed with Quantum (OVS plugin) and
> standard nova IPAM. The hypervisor backend is XenServer.
>
> Although I managed to get a couple of VMs for two different tenants up and
> running on the appropriate networks, I had to do things that I feel I
> shouldn't have had to do.
>
> I hope you can help me understand where I got my setup wrong:
>
>
>
> - I installed Quantum following the instructions in the README file.
>   Everything went quite smoothly. A few notes on the README:
>
>   o It might be worth mentioning that the OVS plugin needs MySQL-python
>     installed on the hypervisor for the agent to run.
>
>   o For XenServer, the README should mention that the xenapi_vif_driver
>     flag has to be set to the class of the OVS driver (the default is the
>     Linux bridge driver); see the flag-file sketch after this list.
>
>   o It is also worth pointing out that if Quantum is not running on the
>     same node as nova-network, there are some Quantum-manager-specific
>     flags to configure (also covered in the sketch below).
>
> - I then integrated Quantum with Nova, and nova-network started fine. The
>   problems began when I launched an instance:
>
>   o QuantumManager.allocate_for_instance was failing because the JSON
>     response from Quantum was not being deserialized. I then looked at
>     nova/network/quantum/client.py and found that if the response code was
>     202, the response was not deserialized. 202 is exactly the code the API
>     returns for create operations (there was a bug fixed before rbp on
>     this). I believe the client shipped with Quantum only skips
>     deserialization for status code 204. To work around the issue, I
>     changed the code in client.py (a rough sketch follows this list).
>
>   o I then found that _get_instance_nw_info was failing in the
>     'inefficient' loop with a KeyError on the attachment element's id. I
>     found out this was happening because the network had active ports
>     without attachments, and in that case show_attachment returns an empty
>     attachment element. I therefore changed get_port_by_attachment in
>     nova.network.quantum.quantum_connection, replacing
>         port_get_resdict["attachment"]["id"]
>     with
>         port_get_resdict["attachment"].get("id", None)
>     (see the sketch after this list).
>
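> For reference, the flag-file entries I mean look roughly like the lines
> below. The xenapi_vif_driver class path and the quantum_connection_* flag
> names are from memory, so please treat them as assumptions and check them
> against the code in your tree:
>
>     # nova flag-file excerpt (illustrative values, not a verified config)
>     --network_manager=nova.network.quantum.manager.QuantumManager
>     # XenServer: use the OVS VIF driver instead of the default Linux
>     # bridge driver (class path assumed, verify in nova/virt/xenapi/vif.py)
>     --xenapi_vif_driver=nova.virt.xenapi.vif.XenAPIOpenVswitchDriver
>     # Only needed when Quantum is not on the nova-network node (flag names
>     # assumed, verify in nova/network/quantum/quantum_connection.py)
>     --quantum_connection_host=192.168.1.10
>     --quantum_connection_port=9696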
>
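> To make the 202 issue concrete, this is a minimal sketch of the behaviour
> I ended up with in client.py; the function name and structure here are
> illustrative, not the exact nova code:
>
>     import json
>
>     def deserialize_body(status_code, body):
>         """Parse a Quantum API response body.
>
>         Only 204 (No Content) genuinely has no body; 200/201/202 all carry
>         JSON for show/create operations, so they must be deserialized.
>         """
>         if status_code == 204 or not body:
>             return None
>         return json.loads(body)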
>
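> And this is roughly the shape of the fix in get_port_by_attachment; the
> surrounding loop is simplified and the names are illustrative:
>
>     def find_port_for_attachment(port_details, interface_id):
>         """Return the first port whose attachment id matches interface_id.
>
>         Active ports without an attachment come back with an empty
>         attachment element, so .get() skips them instead of raising
>         KeyError the way the plain ["attachment"]["id"] lookup did.
>         """
>         for port_get_resdict in port_details:
>             attachment = port_get_resdict.get("attachment") or {}
>             if attachment.get("id", None) == interface_id:
>                 return port_get_resdict
>         return None
>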
> After making these changes, much to my pleasure, I saw instances happily
> running on Quantum networks :)
>
> Is there something I could have done to avoid these changes? Do you think
> there might be something wrong with my setup?
>
>
>
> Finally, I noticed something weird in the network_info for the instance:
>
> [[{u'injected': True, u'cidr': u'192.168.101.0/24', u'multi_host': False},
>  {u'broadcast': u'192.168.101.255',
>   u'ips': [{u'ip': u'10.0.0.6', u'netmask': u'255.255.255.0', u'enabled': u'1'}],
>   u'mac': u'02:16:3e:77:d0:c8',
>   u'vif_uuid': u'0805ff2c-f15e-425f-a94b-3a8ab3c15638',
>   u'dns': [u'192.168.101.1'],
>   u'dhcp_server': u'192.168.101.1',
>   u'gateway': u'192.168.101.1'}]]
>
> (Note the cidr is 192.168.101.0/24, yet the instance got the address 10.0.0.6.)
>
>
>
> This is a rather old IPAM issue with nova, as the networks table has no
> referential-integrity constraint with the fixed_ips table. This means that
> if you delete your networks using nova-manage, you should make sure all the
> IPs for the deleted network are gone from fixed_ips as well; otherwise IPAM
> might associate your instance with the wrong address! (A quick check is
> sketched below.)
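>
> For anyone who wants to check for this, a quick sketch of the query I have
> in mind, using MySQL-python; the table and column names (fixed_ips,
> networks, deleted) are my assumptions about the nova schema, so verify them
> against your database before relying on this:
>
>     # List fixed_ips rows that still reference a missing or deleted network.
>     # Connection parameters are placeholders.
>     import MySQLdb
>
>     conn = MySQLdb.connect(host="localhost", user="nova",
>                            passwd="secret", db="nova")
>     cur = conn.cursor()
>     cur.execute(
>         "SELECT f.address, f.network_id FROM fixed_ips f "
>         "LEFT JOIN networks n ON f.network_id = n.id "
>         "WHERE n.id IS NULL OR n.deleted != 0")
>     for address, network_id in cur.fetchall():
>         print("stale fixed ip %s still references network %s"
>               % (address, network_id))
>     conn.close()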
>

That is an odd quirk indeed :)  Hopefully the nova folks will merge Melange
quickly.  QuantumManager already supports using Melange for IPAM, though
currently you need to fetch Melange from a separate nova branch.

Thanks for the testing, Salvatore!

Dan




>
>
> Cheers,
>
> Salvatore
>


-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira Networks, Inc.
www.nicira.com | www.openvswitch.org
Sr. Product Manager
cell: 650-906-2650
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- 
Mailing list: https://launchpad.net/~netstack
Post to     : netstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~netstack
More help   : https://help.launchpad.net/ListHelp
