Here's a thought - on the point of avoiding churn in the nova code (on account of Quantum plugin-specific VIF drivers), how about we try to solve this issue via packaging? What if we had a separate Quantum nova driver package which, when deployed, installs the VIF drivers in the appropriate location(s)? The existing location for VIF drivers can be used as the installation target (or some new convention can be set up). In general, this approach avoids having to add or update VIF-driver code in nova every time a Quantum plugin requires it.
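To make the packaging idea concrete, here is a minimal sketch of what such a standalone driver package might look like. The package name, module layout, and driver class below are purely illustrative, and pointing nova at the installed class through a flag along the lines of libvirt_vif_driver in nova-compute.conf is just one possible hook, not an agreed convention.

    # setup.py -- hypothetical standalone packaging of Quantum's nova-side VIF drivers.
    # Installing this package puts the drivers on nova's Python path without touching
    # the nova tree itself; nova is assumed to be installed already, since the drivers
    # import nova.virt interfaces.
    from setuptools import setup, find_packages

    setup(
        name='quantum-nova-vif-drivers',
        version='0.1',
        description='Nova-side VIF drivers for Quantum plugins',
        packages=find_packages(),  # e.g. quantum_nova_vif/, one module per switching technology
    )

    # nova-compute.conf would then select a driver shipped by this package, for example:
    #   libvirt_vif_driver=quantum_nova_vif.ovs.OVSQuantumVIFDriver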
Note also that I am suggesting a separate driver package, and not making it part of the Quantum server or client/common packages, since these drivers are nova-specific and need to be installed only where nova is installed, and only after nova is installed.

Thanks,
~Sumit.

From: netstack-bounces+snaiksat=cisco....@lists.launchpad.net [mailto:netstack-bounces+snaiksat=cisco....@lists.launchpad.net] On Behalf Of Dan Wendlandt
Sent: Wednesday, April 04, 2012 4:39 PM
To: Ryota MIBU
Cc: netstack@lists.launchpad.net
Subject: Re: [Netstack] quantum community projects & folsom summit plans

Thanks for your thoughts on this Ryota. Some additional comments below. Happy to chat more about it at the summit as well.

On Tue, Apr 3, 2012 at 11:42 PM, Ryota MIBU <r-m...@cq.jp.nec.com> wrote:

> > However, I think the goal should be that code that is in Nova is not specific to the plugin. As we talked about earlier in the thread, you may need different types of vif-plugging to attach to different types of switching technologies (e.g., OVS vs. bridge vs. VEPA/VNTAG), but I think that different plugins that use the same switching technology should be able to use the same vif-plugging (for example, there are several plugins that all use the OVS vif-plugging). Our goal here should be to minimize churn in the Nova codebase.
>
> Yes, I checked the Cisco driver and the portprofile extension. The Cisco driver seems to pass and retrieve the plugin-specific data with the PUT method. I just thought that this kind of vif-plugging could be a general model for plugins that work without an "agent". I agree with you that we should minimize churn in the Nova codebase. But I still feel that the "agent" model, especially polling, is not so good. Although there are many topics for the summit, I hope that we can have a discussion about vif-plugging changes.

One important thing to remember is that there's no requirement that a plugin runs an agent. I believe at least two plugins already don't use agents at all, as they have other ways of remotely configuring the switches when needed. I think the polling performed by some plugins (e.g., OVS) that you mention is actually really easy to remove by sending notifications to agents using something like RabbitMQ. This is something that's already planned for Folsom.

I think the issue of "agents" is more fundamental. It may be that the term "agent" is confusing, as it is really just code that runs as a service on the hypervisor, just like nova-compute. The question is really whether there should be a single python process (all network logic embedded in nova), or one for nova and one for quantum. In a way, having a quantum agent on the hypervisor is similar to what happens when running nova-network in multi-host mode.

There are two key points here in my mind:

1) We want to minimize network-related code churn in Nova. The vswitch configuration supported by Quantum plugins will continue to grow over time, and our goal should be that adding a new capability to a Quantum plugin rarely requires Nova changes.

2) It's likely that more advanced plugins need to make changes to the vswitch at times other than vif-plug and vif-unplug. For example, consider that quantum already exposes the ability to put a port in "admin down" (i.e., no packets forwarded) at any point if a tenant makes an API request.

It may be that having a more flexible vif-plugging mechanism is still valuable despite these points, so let's chat more about it at the summit. Thanks again for your thoughts.

Dan
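To illustrate the point above about replacing agent polling with pushed notifications, here is a minimal sketch of a push-based agent loop, assuming a RabbitMQ broker and the kombu library; the exchange name, queue name, and payload fields are hypothetical, not anything the plugins have standardized on.

    # Sketch of a Quantum agent that reacts to pushed port events instead of
    # polling the plugin database. Assumes a RabbitMQ broker on localhost and
    # the kombu messaging library.
    from kombu import Connection, Exchange, Queue

    port_exchange = Exchange('quantum.port_updates', type='topic')  # hypothetical name
    agent_queue = Queue('quantum-agent.host1', exchange=port_exchange,
                        routing_key='port.#')

    def on_port_update(body, message):
        # A plugin might publish something like:
        #   {'port_id': '...', 'network_id': '...', 'admin_state_up': False}
        # and the agent would reprogram the local vswitch accordingly.
        print('port event: %s' % body)
        message.ack()

    with Connection('amqp://guest:guest@localhost//') as conn:
        with conn.Consumer(agent_queue, callbacks=[on_port_update]):
            while True:
                conn.drain_events()  # blocks until the next event arrives

A port "admin down" change, for example, could be delivered over the same channel rather than waiting for the next poll.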
> I think there is another sub topic, but I am not sure yet. I agree that a configurable VIF driver is much better. For designing the configuration of vif-plugging, we need to discuss the granularity of selecting the VIF driver. Should the granularity of selecting the VIF driver be per node, per VM, or per VIF? Currently, the VIF driver is configured in nova-compute.conf. This means that the granularity is per hypervisor node. To be more flexible, we might consider the case where VIF1 of VM1 connects to a bridge and VIF2 of VM1 maps to a physical NIC directly. If so, it may raise another issue: how to determine the connection type of a VIF.
>
> > That's an interesting use case, and something that we haven't tried to deal with yet. In your use case, who would determine how a VIF was mapped? Would it be a policy described by the service provider? Would it be part of the VM flavor? Adding this kind of flexibility is certainly possible, though you are the first person who has expressed a need for this type of flexibility.
>
> It could be mixed. I think that a cloud user would specify a vNIC option like "physical NIC mapping" as part of the VM flavor, and then the service provider would determine a hypervisor node and its available physical NIC. It is not suitable for the cloud user to specify the physical NIC itself. But this thought is not yet clear enough to deserve its own session; I hope that we can discuss this issue in the vif-plugging session or somewhere else at the summit.
>
> Is there any blueprint for Security Groups in NetStack? I have a primitive proposal for a firewall model and API, and a firewall implementation on the Quantum OpenFlow plugin. It is just a prototype based on the Quantum L2 API, but the proposal shows how a firewalling API could look. I think the first question in designing firewalling models is which entity each rule is associated with. Is the entity a network or a port? I hope that our discussions on firewalling lead to much more functionality and richer APIs than security groups currently offer.
>
> > Dave Lapsley (on netstack list) is doing a session on this at the summit. Feel free to join as a driver. My thinking on the topic is that each Quantum port could be assigned one or more security groups. There is also scope, I believe, for more advanced ACLs that could be associated with each port, essentially consisting of inbound/outbound lists of rules that each have a "match" and an "action" (allow/deny). There is also the topic of NAT, which in my mind makes the most sense to think about in terms of our "L3 forwarding" discussion (see etherpad).
>
> I found it. I'll join the sessions. Thanks!
>
> Finally, I would like to suggest that the Security Group session be held after the Quantum L3 session. I think Security Groups are a cross-layer function, and their design should be closely coupled with the L3 design.
>
> > I think our first task needs to be properly scoping each of these discussions, particularly on the many topics loosely called "L3". I've sent another email to the list with thoughts on breaking up those discussions.
>
> Thanks,
> Ryota MIBU

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~
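To picture the per-port security group model described above (one or more groups per Quantum port, each an inbound/outbound list of rules with a "match" and an "action"), here is a rough illustration; the field names and rule shapes are hypothetical, not a proposed API.

    # Illustrative data model only: a security group as inbound/outbound rule lists,
    # each rule carrying a match and an allow/deny action, associated with a port
    # rather than with a network.
    web_tier = {
        'name': 'web-tier',
        'inbound': [
            {'match': {'protocol': 'tcp', 'port_range': (80, 80)},   'action': 'allow'},
            {'match': {'protocol': 'tcp', 'port_range': (443, 443)}, 'action': 'allow'},
            {'match': {'protocol': 'any'},                           'action': 'deny'},
        ],
        'outbound': [
            {'match': {'protocol': 'any'}, 'action': 'allow'},
        ],
    }

    # Association hangs off the port, so the same group can be reused across ports:
    port = {'id': 'port-uuid', 'security_groups': [web_tier['name']]}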