On Thu, Sep 22, 2011 at 5:25 PM, Brad Hall <b...@nicira.com> wrote:
>
>  >
> > o   QuantumManager.allocate_for_instance was failing because the JSON
> > response from Quantum was not being serialized. I then looked at
> > network.quantum.client.py and found that if the response code was 202
> the
> > response was not deserialized. 202 is exactly the code returned by the
> API
> > for create ops (there was a bug fixed before rbp on this). I believe the
> > client shipped with quantum avoids deserialization for error code 204. To
> > work around the issue, I changed the code in client.py.
>
> This is fixed in a branch proposed here:
> https://code.launchpad.net/~bgh/nova/qmanager-rbp-trunk
>
> Unfortunately, it didn't get enough reviews and didn't make it into
> nova in the diablo timeframe.
>


That's really unfortunate.  I'm guessing this is due to the changes in this
branch?
https://code.launchpad.net/~salvatore-orlando/quantum/bug834013/+merge/73788
It looks like this branch came into Quantum just a day after the
QuantumManager code went into Nova :(

My understanding is that this branch changes the return code for API create
operations from 200 (will be deserialized by client.py in Nova) to 202 (will
not be deserialized by client.py in Nova).
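To make the mismatch concrete, here is a minimal sketch of the response
handling described above; the function name and exact structure of Nova's
network/quantum/client.py are approximations, not the shipped code:

```python
import json

def deserialize(body, status_code):
    # The diablo-era client should only skip deserialization when there
    # is no body to parse (204 No Content).  Skipping it for 202 breaks
    # create ops, since Quantum now returns 202 *with* a JSON body.
    if status_code == 204:
        return body
    return json.loads(body)
```

With this shape, a 202 create response still comes back as a dict, which is
what QuantumManager.allocate_for_instance expects.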

It would be unfortunate to release a version of Quantum for diablo that
doesn't work with the Quantum integration in the Nova diablo release,
especially given that documentation for how to run Quantum is built around
running it with the QuantumManager.  Given that a lot of people install nova
from packages, it does not seem wise to have them patch the client.py in
nova or install from an alternative branch.

Since Brad's branch with the fix didn't get into nova, our two options seem
to be:

a) live with it.
b) tweak Quantum to work with diablo nova.  Looking at the patch, it seems
that this would be pretty simple.  However, I believe the motivation for
changing the return code from 200 to 202 was that 202 is more in line
with other OpenStack APIs, which seems like a good goal.  Hence, we would
also need to update the 1.0 API doc to specify 200, then switch to using 202
in v1.1 of the API.
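If we go with option b, the version-dependent status code could be as simple
as the following sketch (purely illustrative; this is not the actual Quantum
controller code):

```python
def create_status(api_version):
    # v1.0 keeps 200 so the diablo Nova client deserializes the body;
    # v1.1 can adopt 202 Accepted to match the other OpenStack APIs.
    return 200 if api_version == "1.0" else 202
```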

I think we can reasonably delay the release a day to get people's input on
this.

My bias is toward having working code to minimize the problems folks have
when first trying to play with Nova + Quantum, even if it comes at the
expense of API purity, but others may not agree.  Thoughts?

Please try to get your thoughts in by noon tomorrow, so we can make a
decision and release by 5pm Pacific tomorrow.  If necessary, we may want to
hop on IRC and have a chat, but by default we'll just have the conversation
on the email list.

Thanks!

Dan

p.s.  and to think:  some people thought we weren't going to have a last
minute release emergency :)

p.p.s I guess another option would be to consider packaging a patched
version of QuantumManager with Quantum itself (or completely separately),
then have nova.conf reference it using the network_manager flag.  This would
probably be fairly error prone, though, as the nova admin would have to make
sure PYTHONPATH was set correctly whenever they invoke nova-manage or
nova-network (and again, this probably wouldn't work well for people running
from packages).  I'm not a fan of this, but wanted to throw it out there as
an option.
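For concreteness, the workaround would look roughly like this; the install
path and module name below are hypothetical placeholders, not something we
actually ship:

```shell
# Hypothetical sketch only: path and module name are illustrative.
# 1) Make the patched manager importable before invoking any nova tool:
export PYTHONPATH=/opt/quantum-patched:$PYTHONPATH
# 2) Point nova.conf at it via the network_manager flag, e.g.:
#    --network_manager=quantum_patched.manager.QuantumManager
```

The fragility is step 1: forget the export in any shell that runs nova-manage
or nova-network and the import fails.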





>
> > o   I then found _get_instance_nw_info was failing in the 'inefficient'
> > cycle, with a KeyError referring to the attachment element's id. I found
> > out that this was happening because the network had active ports without
> > attachments, and in that case show_attachment returns an empty attachment
> > element. I therefore changed get_port_by_attachment in
> > network.quantum.quantum_connection, replacing
> > port_get_resdict["attachment"]["id"]
> > with
> > port_get_resdict["attachment"].get('id', None)
>

Yeah, definitely a bug.  I'm much less worried about this one though, as
QuantumManager handles the creation/plugging/unplugging/deletion of ports
itself, so in the standard use case the situation of having an unattached
port never happens.
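For anyone following along, a minimal sketch of the defensive lookup
Salvatore describes (the dict shape mirrors Quantum's show_attachment
response; the helper name here is just for illustration):

```python
def attachment_id(port_get_resdict):
    # An unattached port yields an empty attachment element, so a plain
    # ["id"] lookup raises KeyError; .get() returns None instead.
    return port_get_resdict["attachment"].get("id", None)
```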




> >
>
> That sounds like something went wrong during the attachment; the
> branch referenced above may fix that as well, but it's hard to tell.
>
> >
> > After making these changes, much to my pleasure, I saw instances
> > happily running on Quantum networks :)
> >
> > Is there something I could have done to avoid these changes? Do you
> > think there might be something wrong with my setup?
> >
>
> Doubt it, unfortunately :)
>
> >
> >
> > Finally, I noticed something weird in the network_info for the instance:
> >
> > |[[{u'injected': True, u'cidr': u'192.168.101.0/24', u'multi_host':
> False},
> > {u'broadcast': u'192.168.101.255', u'ips': [{u'ip': u'10.0.0.6',
> u'netmask':
> > u'255.255.255.0', u'enabled': u'1'}], u'mac': u'02:16:3e:77:d0:c8',
> > u'vif_uuid': u'0805ff2c-f15e-425f-a94b-3a8ab3c15638', u'dns':
> > [u'192.168.101.1'], u'dhcp_server': u'192.168.101.1', u'gateway':
> > u'192.168.101.1'}]]|
> >
> >
> >
> > This is a rather old IPAM issue with nova: the networks table has no
> > referential-integrity constraint with the fixed_ips table. This means
> > that if you delete your networks using nova-manage, you should make
> > sure all the IPs for the deleted network are gone from fixed_ips;
> > otherwise IPAM might associate your instance with the wrong address!
> >
> >
> >
> > Cheers,
> >
> > Salvatore
> >
>
> Thanks,
> Brad
>
> --
> Mailing list: https://launchpad.net/~netstack
> Post to     : netstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~netstack
> More help   : https://help.launchpad.net/ListHelp
>



-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira Networks, Inc.
www.nicira.com | www.openvswitch.org
Sr. Product Manager
cell: 650-906-2650
~~~~~~~~~~~~~~~~~~~~~~~~~~~