On 5/23/14, 10:54 PM, Armando M. wrote:
On 23 May 2014 12:31, Robert Kukura <kuk...@noironetworks.com> wrote:
On 5/23/14, 12:46 AM, Mandeep Dhami wrote:
Hi Armando:
Those are good points. I will let Bob Kukura chime in on the specifics of
how we intend to do that integration. But if what you saw in the
prototype/PoC were our final design for integration with Neutron core, I
would be worried about it too. That specific part of the code
(events/notifications for DHCP) was done that way just for the prototype
- to allow us to experiment with the parts that were new and needed
experimentation: the APIs and the model.
That is the exact reason we did not initially check the code into gerrit
- so that we would not confuse the review process with the prototype
process. But other cores asked us to check in even the prototype code as
WIP patches to allow review of the API parts. That can unfortunately
create this very misunderstanding. For the review, I would recommend
looking not at the WIP patches, as they contain the prototype parts as
well, but at the final patches that are not marked WIP. If you see such
issues in that part of the code, please DO raise them, as that is code we
intend to upstream.
I believe Bob did discuss the specifics of this integration issue with you
at the summit, but like I said it is best if he represents that side
himself.
Armando and Mandeep,
Right, we do need a workable solution for the GBP driver to invoke neutron
API operations, and this came up at the summit.
We started out in the PoC directly calling the plugin, as is currently done
when creating ports for agents. But this is not sufficient because the DHCP
notifications, and I think the nova notifications, are needed for VM ports.
We also really should be generating the other notifications, enforcing
quotas, etc. for the neutron resources.
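To make that concrete, the PoC did something roughly like the following
(a simplified sketch, not the exact PoC code; the port attribute defaults
are approximate):

    from neutron.api.v2 import attributes
    from neutron.manager import NeutronManager

    def create_port_directly(context, tenant_id, network_id):
        # Look up the loaded core plugin and call it directly.
        core_plugin = NeutronManager.get_plugin()
        port_data = {'port': {'tenant_id': tenant_id,
                              'network_id': network_id,
                              'name': '',
                              'admin_state_up': True,
                              'device_id': '',
                              'device_owner': '',
                              'mac_address': attributes.ATTR_NOT_SPECIFIED,
                              'fixed_ips': attributes.ATTR_NOT_SPECIFIED}}
        # Because this bypasses the API layer, no DHCP agent or nova
        # notifications are sent and no quota enforcement happens.
        return core_plugin.create_port(context, port_data)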
I am at a loss here: if you say that it couldn't fit at the plugin
level, that is because it is the wrong level!! Sitting above it and
redoing all the glue code around it to add DHCP notifications etc.
continues the bad practice within the Neutron codebase of not having a
good separation of concerns: for instance, the DB and plugin logic are
cobbled together. I appreciate that some design decisions were made in
the past, but there's no good reason for a nice new feature like GP to
continue this bad practice; this is why I feel strongly about the current
approach being taken.
Armando, I am agreeing with you! The code you saw was a proof-of-concept
implementation intended as a learning exercise, not something intended
to be merged as-is to the neutron code base. The approach for invoking
resources from the driver(s) will be revisited before the driver code is
submitted for review.
We could just use python-neutronclient, but I think we'd prefer to avoid the
overhead. The neutron project already depends on python-neutronclient for
some tests, the debug facility, and the metaplugin, so in retrospect, we
could have easily used it in the PoC.
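For reference, using the client would look something like this (a minimal
sketch; the endpoint and credentials are placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin',
                            password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:5000/v2.0')
    # The call goes through the full API pipeline, so DHCP and nova
    # notifications, quota enforcement, etc. happen as for any REST request.
    port = neutron.create_port({'port': {'network_id': 'NETWORK_UUID',
                                         'admin_state_up': True}})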
I am not sure I understand what overhead you mean here. Could you
clarify? Actually, looking at the code, I see a mind-boggling set of
interactions going back and forth between the GP plugin, the policy
driver manager, the mapping driver, and the core plugin: they are all
entangled together. For instance, when creating an endpoint the GP
plugin ends up calling the mapping driver, which in turn ends up calling
the GP plugin itself! If this is not overhead I don't know what is!
The way the code has been structured makes it very difficult to read,
let alone maintain and extend with other policy mappers. The ML2-like
nature of the approach taken might work well in the context of the core
plugin, mechanism drivers, etc., but I would argue that it applies
poorly to the context of GP.
The overhead of using python-neutronclient is the unnecessary
serialization/deserialization it performs, as well as the socket
communication through the kernel. This is all required between
processes, but not within a single process. A well-defined and efficient
mechanism to invoke resource APIs within the process, with the same
semantics as incoming REST calls, seems like a generally useful addition
to neutron. I'm hopeful the core refactoring effort will provide this
(and am willing to help make sure it does), but we need something we can
use until that is available.
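Purely as an illustration of what I mean - nothing like this exists in
neutron today, and every name below is invented - such a mechanism might
look like:

    class ResourceAPI(object):
        """Hypothetical in-process facade with the semantics of a REST call."""

        def __init__(self, resource, plugin, notifier, quota_enforcer):
            self.resource = resource
            self.plugin = plugin
            self.notifier = notifier
            self.quota_enforcer = quota_enforcer

        def create(self, context, body):
            # The same steps the WSGI controller performs, without JSON
            # (de)serialization or a round trip through a socket.
            self.quota_enforcer.enforce(context, self.resource, body)
            obj = getattr(self.plugin, 'create_%s' % self.resource)(context, body)
            self.notifier.notify(context, '%s.create.end' % self.resource, obj)
            return obj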
One lesson we learned from the PoC is that the implicit management of
the GP resources (RDs and BDs) is completely independent from the
mapping of GP resources to neutron resources. We discussed this at the
last GP sub-team IRC meeting, and decided to package this functionality
as a separate driver that is invoked prior to the mapping_driver, and
can also be used in conjunction with other GP back-end drivers. I think
this will help improve the structure and readability of the code, and it
also shows the applicability of the ML2-like nature of the driver API.
You are certainly justified in raising the question of whether the ML2
driver API model is appropriate for the GP plugin. I raised two issues
with this in the sub-team's PoC post-mortem discussion. One was whether
calling multiple drivers is useful. The case above seems to justify
this, as well as potentially supporting heterogeneous deployments
involving multiple ways to enforce policy. The other was whether the
precommit() methods are useful. I think the jury is still out on this.
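As a rough illustration of the driver API structure being discussed (the
class and method names are approximations, not the actual PoC code):

    class PolicyDriverManager(object):
        def __init__(self, ordered_drivers):
            # e.g. [ImplicitResourceDriver(), ResourceMappingDriver()], so
            # implicit RD/BD management runs before the neutron mapping.
            self.ordered_drivers = ordered_drivers

        def create_endpoint_precommit(self, context):
            # Called inside the DB transaction: drivers may validate and
            # update the database, but must not call out to backends.
            for driver in self.ordered_drivers:
                driver.create_endpoint_precommit(context)

        def create_endpoint_postcommit(self, context):
            # Called after the transaction commits: drivers may now invoke
            # neutron APIs or external backends.
            for driver in self.ordered_drivers:
                driver.create_endpoint_postcommit(context)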
With the existing REST code, if we could find the
neutron.api.v2.base.Controller class instance for each resource, we could
simply call create(), update(), delete(), and show() on these. I didn't see
an easy way to find these Controller instances, so I threw together some
code similar to these Controller methods for the PoC. It probably wouldn't
take too much work to have neutron.manager.NeutronManager provide access to
the Controller classes if we want to go this route.
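Something along these lines is what I have in mind (a hypothetical sketch
only: NeutronManager has no get_controller() today, and build_request()
stands in for constructing the request object the WSGI layer normally
supplies):

    from neutron.manager import NeutronManager

    def create_resource(context, resource, body):
        controller = NeutronManager.get_controller(resource)  # hypothetical
        request = build_request(context)                       # hypothetical
        # create()/update()/delete()/show() would then apply the same
        # validation, defaulting, notification, and quota logic as
        # incoming REST calls.
        return controller.create(request, body=body)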
The core refactoring effort may eventually provide a nice solution, but we
can't wait for this. It seems we'll need to either use python-neutronclient
or get access to the Controller classes in the meantime.
Any thoughts on these? Any other ideas?
I am still not sure why you even need to go all the way down to the
Controller class. After all, it's almost as if GP could be a service in
its own right that makes use of Neutron to map the application-centric
abstractions on top of the networking constructs; this can happen via
the REST interface. I don't think there is a dependency on the core
refactoring here: the two can progress separately, so long as we break
the tie, from an implementation perspective, that the GP and core plugins
need to live in the same address space. Am I missing something?
Because I still cannot justify why things have been coded the way they
have.
I completely agree that we should try to avoid a hard architectural
requirement that the GP and core plugins have to be in the same address
space, and agree that if we were to use separate address spaces, using
python-neutronclient would be the obvious solution. Certain back-end
drivers for the GP plugin may be more tightly coupled with corresponding
core plugins or ML2 mechanism drivers, with both cooperating to control
the same underlying fabric, so we don't want to preclude putting them in
the same address space either.
In the PoC, I attempted to structure things so we could easily change
the mechanism used for these calls. I can't really justify why we didn't
just use python-neutronclient for the PoC, but remember, it was just a
prototype intended for learning and to facilitate these sorts of
discussions.
So, as long as we do plan to package GP as a service plugin within
neutron-server, is the overhead of going through python-neutronclient
within that process acceptable? Are there any other issues with this? If
it's workable, I think we can go with python-neutronclient for now, and
look at better alternatives as the core refactoring progresses.
Thanks,
-Bob
Thanks,
Armando
Thanks,
-Bob
Regards,
Mandeep
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev