If the problem is too many round trips or the interaction being too chatty

This is a good point: is the main issue that we feel the interaction is too 
chatty, or that it’s too slow?  I seem to hear people gravitating toward one or 
the other when this topic comes up, and the two issues may have somewhat 
different solutions.  After all: in my experience, two Italians talking about 
football can say an awful lot in a very small window of time. =)  If the 
problem is chattiness, we may look at bulk ops, intent-based metadata, 
pipelining, etc.  If it’s slowness, then we probably want a deeper look at what 
bits of the operation are slow.  Mark McClain and Ian and I were chatting about 
this the other day and suspect something like offline token validation or token 
windowing (e.g. attempting to prune out roundtrips to keystone) could go a long 
way on that front.
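To make the token-windowing idea concrete, here is a minimal sketch of caching a Keystone validation result for a short window so that repeated requests with the same token skip the extra roundtrip. The names (`validate_with_keystone`, the window length) are illustrative assumptions, not the actual keystonemiddleware API:

```python
import time

TOKEN_WINDOW_SECONDS = 300  # illustrative window length, not a real default

_token_cache = {}  # token -> (validated_at, auth_context)

def validate_token(token, validate_with_keystone):
    """Return the auth context for a token, calling out to Keystone only
    when there is no cached result younger than the window."""
    now = time.time()
    cached = _token_cache.get(token)
    if cached and now - cached[0] < TOKEN_WINDOW_SECONDS:
        return cached[1]  # served from cache: no roundtrip to Keystone
    context = validate_with_keystone(token)  # the expensive remote call
    _token_cache[token] = (now, context)
    return context
```

The trade-off, of course, is that a revoked token stays valid for up to the window length, which is why any real scheme needs revocation events or a short window.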

At Your Service,

Mark T. Voelker

On Jan 28, 2015, at 12:52 AM, Brent Eagles <beag...@redhat.com> wrote:

On 25/01/2015 11:00 PM, Ian Wells wrote:
Lots of open questions in here, because I think we need a long conversation
on the subject.

On 23 January 2015 at 15:51, Kevin Benton <blak...@gmail.com> wrote:

It seems like a change to using internal RPC interfaces would be pretty
unstable at this point.

Can we start by identifying the shortcomings of the HTTP interface and see
if we can address them before making the jump to using an interface which
has been internal to Neutron so far?


I think the protocol being used is a distraction from the actual
shortcomings.

Firstly, you'd have to explain to me why HTTP is so much slower than RPC.
If HTTP is incredibly slow, can it be sped up?  If RPC is moving the data
around using the same calls, what changes?  Secondly, the problem seems
more that we make too many roundtrips - which would be the same over RPC -
and if that's true, perhaps we should be doing bulk operations - which is
not transport-specific.
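A toy sketch of why bulk operations are transport-agnostic (the client class and method names here are invented for illustration, not the actual Neutron API - the point is purely the call count):

```python
# Hypothetical client; what matters is roundtrips, not the API shape.
class Client:
    def __init__(self):
        self.roundtrips = 0

    def create_port(self, port):
        self.roundtrips += 1          # one network roundtrip per port
        return {"port": port}

    def create_ports_bulk(self, ports):
        self.roundtrips += 1          # one roundtrip for the whole batch
        return [{"port": p} for p in ports]

chatty = Client()
for name in ("a", "b", "c"):
    chatty.create_port({"name": name})           # 3 roundtrips

batched = Client()
batched.create_ports_bulk(
    [{"name": n} for n in ("a", "b", "c")])      # 1 roundtrip
```

Whether each roundtrip travels over HTTP or over AMQP, collapsing N calls into one wins the same way.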

I agree. If the problem is too many round trips or the interaction being too 
chatty, I would expect moving towards more service-oriented APIs - where HTTP 
tends to be appropriate - to help. I think we should focus on better separation 
of concerns, and on approaches such as bulk operations, using notifications 
where cross-process synchronization for a task is required. Exploring transport 
alternatives seems premature until we are satisfied that our house is in order 
architecture-wise.

Furthermore, I have some "off-the-cuff" concerns over claims that HTTP is 
slower than RPC in our case. I'm actually used to arguing that RPC is faster 
than HTTP but based on how our RPCs work, I find such an argument 
counter-intuitive. Our REST API calls are direct client->server requests with 
GET's returning results immediately. Our RPC calls involve AMQP and a messaging 
queue server, with requests and replies encapsulated in separate messages. If 
no reply is required, then the RPC *might* be dispatched more quickly from the 
client side as it is simply a message being queued. The actual servicing of the 
request (server side dispatch or "upcall" in broker-parlance) happens "some 
time later", meaning possibly never. If the RPC has a return value, then the 
client must wait for the return reply message, which again involves an AMQP 
message being constructed, published and queued, then finally consumed. At the 
very least, this implies latency dependent on the relative location and
availability of the queue server.
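The extra hops Brent describes can be sketched with in-process queues standing in for the AMQP request and reply queues (this is a model of the message flow, not of oslo.messaging itself):

```python
import queue
import threading

# Two queues stand in for the AMQP request and reply queues.
requests = queue.Queue()
replies = queue.Queue()

def server():
    # Server-side dispatch (the "upcall") happens "some time later",
    # only once the request message is consumed from the queue.
    method, args = requests.get()
    replies.put(("result", method, args))

threading.Thread(target=server, daemon=True).start()

# Client side: an RPC with a return value means two queued messages.
requests.put(("get_port", {"id": "p1"}))   # hop 1: publish the request
reply = replies.get(timeout=5)             # hop 2: block on the queued reply
```

A direct HTTP GET collapses those two queue traversals (plus the broker in the middle) into a single client->server exchange, which is why the "RPC is faster" claim deserves scrutiny here.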

As an aside (meaning you might want to skip this part), one way our RPC 
mechanism might be "better" than REST over HTTP calls is in the cost of 
constructing and encoding of requests and replies. However, this is more of a 
function of how requests are encoded and less of how they are sent. Changing how 
request payloads are constructed would close that gap. Again, reducing the 
number of requests required to "do something" would reduce the significance of 
any differences here. Unless the difference between the two methods were 
enormous (like double or an order of magnitude) then reducing the number of 
calls to perform a task still has more gain than switching methods. Another 
difference might be in how well the "transport" implementation scales. I would 
consider disastrous scaling characteristics a pretty compelling argument.

I absolutely do agree that Neutron should be doing more of the work, and
Nova less, when it comes to port binding.  (And, in fact, I'd like that we
stopped considering it 'Nova-Neutron' port binding, since in theory another
service attaching stuff to the network could request a port be bound; it
just happens at the moment that it's always Nova.)

One other problem, not yet raised, is that Nova doesn't express its needs
when it asks for a port to be bound, and this is actually becoming a
problem for me right now.  At the moment, Neutron knows, almost
psychically, what binding type Nova will accept, and hands it over; Nova
then deals with whatever binding type it receives (optimistically
expecting it's one it will support, and getting shirty if it isn't).  The
problem I'm seeing at the moment, and other people have mentioned, is that
certain forwarders can only bind a vhostuser port to a VM if the VM itself
has hugepages enabled.  They could fall back to another binding type but at
the moment that isn't an option: Nova doesn't tell Neutron anything about
what it supports, so there's no data on which to choose.  It should be
saying 'I will take these binding types in this preference order'.  I
think, in fact, that asking Neutron for bindings in a certain preference
order would give us much more flexibility - like, for instance, not
having to know exactly which binding type to deliver to which compute node
in multi-hypervisor environments, where at the moment the choice is made in
Neutron.
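Ian's "preference order" idea could look roughly like this - the payload shape and binding-type names are invented for illustration, not the actual portbindings extension: Nova sends an ordered list, and Neutron picks the first type the chosen forwarder can actually provide.

```python
def choose_binding(preferred_order, supported_by_forwarder):
    """Pick the first binding type the requester will accept that the
    forwarder can actually bind; None means there is no overlap."""
    for vif_type in preferred_order:
        if vif_type in supported_by_forwarder:
            return vif_type
    return None

# Nova's hypothetical request: vhostuser preferred (usable only with
# hugepages), with a fallback to a plain ovs binding.
nova_prefs = ["vhostuser", "ovs"]

# A forwarder that cannot bind vhostuser (e.g. the VM has no hugepages)
# still gets a working port instead of a failed binding:
print(choose_binding(nova_prefs, {"ovs", "bridge"}))  # falls back to "ovs"
```

This also covers the multi-hypervisor case: the caller no longer needs to know which binding type a given compute node wants, because the negotiation selects it.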

I scanned through the etherpad and I really like Salvatore's idea of adding
a service plugin to Neutron that is designed specifically for interacting
with Nova. All of the Nova notification interactions can be handled there
and we can add new API components designed for Nova's use (e.g. syncing
data, etc). Does anyone have any objections to that approach?


I think we should be leaning the other way, actually - working out what a
generic service - think a container management service, or an edge network
service - would want to ask when it wanted to connect to a virtual network,
and making a Neutron interface that supports that properly *without* being
tailored to Nova.  The requirements are similar in all cases, so it's not
clear that a generic interface would be any more complex.

Notifications on data changes in Neutron to prevent orphaning is another
example of a repeating pattern.  It's probably the same for any service
that binds to Neutron, but right now Neutron has Nova-specific code in it.
Broadening the scope, it's also likely the same in Cinder, and in fact it's
also pretty similar to the problem you get when you delete a project in
Keystone and all your resources get orphaned.  Is a Nova-Neutron specific
solution the right thing to do?

I have reservations. It all depends on what it is going to do. Referring to it 
as nova-centric might also be a distraction. As part of scoping out the work 
for refactoring the nova.network.neutronv2.API code (you all know about that, 
right?), I discussed refactoring the neutronclient to make it more usable as 
a client library with several people. This refactoring would prioritize nova's 
requirements when making changes, but the changes would still be generally 
usable. Instead of approaching this directly, I decided to start by wrapping 
the client in nova - working around shortcomings and rationalizing the API 
somewhat. I always saw this eventually being pushed out of nova into an 
independent client library, with the relevant changes made in the neutron API 
itself in cases where it is being "worked around". In that sense 
it is not unlike what Salvatore proposes but the approach is different and 
ultimately not nova-specific at all.

Cheers,

Brent Eagles


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
