I haven't read the entire proposal yet, but I wanted to clarify
something quickly (inline).

Thanks,
Brad

On Mon, Oct 17, 2011 at 9:49 AM, Salvatore Orlando
<salvatore.orla...@eu.citrix.com> wrote:
> Hi Sumit,
>
>
>
> Thanks for your feedback!
>
> First of all please keep in mind that the wiki page started with the
> following disclaimer sentence:
>
> Caution: most of these thoughts were gathered after midnight, so some of
> them could turn out to be just nonsense. And by writing 'some' I'm being
> quite optimistic.
>
> For more detailed replies, please see inline!
>
>
>
> From: Sumit Naiksatam (snaiksat) [mailto:snaik...@cisco.com]
> Sent: 17 October 2011 16:09
> To: Salvatore Orlando; netstack@lists.launchpad.net
> Subject: RE: [Netstack] 'Basic VLAN' plugin - some thoughts
>
>
>
> Hi Salvatore,
>
>
>
> Thanks for starting this. A few comments/questions –
>
>
>
> * You mention – “This model however implies that the Quantum plugin will be
> doing not just the 'logical' plugging of the VIF/vNIC, but also the
> 'physical' plugging, which is not what happens today.” Maybe I am not
> reading this correctly, but the underlying plugin implementation today does
> perform the physical plugging, right? So how would that be different in the
> “basic plugin”? Just want to make sure that I understand you correctly.
>
>
>
> [Salvatore]: My statement here derives from discussions we had in the Diablo
> development timeframe and an analysis of the Open vSwitch plugin. I just
> want to say that I’m probably misusing the term “physical” plugging. I think
> that the act of plugging a VIF into a Quantum network is outside Quantum’s
> scope. What I meant with this term is the configuration required on the
> compute host for the Quantum network to run. I don’t have a thorough
> knowledge of the UCS plugin, but I believe that for this plugin the ‘plug’
> operation configures networking on the compute host (i.e., applying the port
> profile). On the other hand, I reckon the OVS plugin relies on the plugin
> agent for doing this when an interface is plugged into the ‘integration
> bridge’.
>
>
>
> * You mention – “Compute Manager invokes Network manager for allocating
> networks for instance (compute.manager._make_network_info)”. This step would
> result in the creation of a Quantum network (with the allocation of a VLAN),
> right?
>
>
>
> [Salvatore]: In _make_network_info an RPC call is sent to the network node
> to allocate a network for an instance. This results in a call to
> ‘allocate_for_instance’ in the nova-network process. For QuantumManager,
> this implies that port creation and VIF plug operations are performed.
> Whether a VLAN is allocated just in the DB or actually on virtual
> switches/bridges is plugin-dependent.
>
>
>
> * Per steps 2 to 6 in the workflow, I gather that the proposal is to
> logically associate a VIF with a port first (but what is the trigger for
> doing this; create_port()? invoked from where?), and the physical plugging
> happens via the VIF driver. Is that correct? If so, how will the VIF ID be
> available to Quantum (since the VM is not up at that point)?
>
>
>
> [Salvatore]: That is really not a proposal, but an analysis of how things
> currently work with QuantumManager! I concede this analysis might be wrong
> anyway :)
>

Salvatore's understanding of how this works seems right.  In
QuantumManager (allocate_for_instance) a VIF is created in the nova
DB, a port is created via quantum::create_port(), and then the uuid
for the VIF is passed to quantum::attach.

            vif_rec = manager.FlatManager.add_virtual_interface(self,
                                  context, instance_id, network_ref['id'])

            # talk to Quantum API to create and attach port.
            q_tenant_id = project_id or FLAGS.quantum_default_tenant_id
            self.q_conn.create_and_attach_port(q_tenant_id, quantum_net_id,
                                               vif_rec['uuid'])

So, the VM doesn't need to be up/running in order for us to attach the
port.  We just have to know the uuid that the VIF is going to have
when it is plugged.
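
To make that ordering concrete, here is a rough sketch (hypothetical
names, not nova's actual code) of why the uuid is already known before
the VM boots: it is minted when the VIF row is written to the DB, and
only consumed later by the VIF driver when the interface is plugged.

    import uuid

    def add_virtual_interface_sketch(instance_id, network_id):
        # the uuid is minted here, when the VIF record is created,
        # long before any VM is running
        vif = {'uuid': str(uuid.uuid4()),
               'instance_id': instance_id,
               'network_id': network_id}
        # ... persist vif in the nova DB ...
        return vif

    vif_rec = add_virtual_interface_sketch('instance-1', 'net-1')
    # this uuid can be handed to create_and_attach_port() right away;
    # the VIF driver reuses the same uuid when the VM is eventually spawned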

>
> * You mention – “Given this workflow, it turns out that the operations that
> the Basic VLAN plugin would perform are exactly the same as the ones
> performed by the VIF driver.” Per my understanding, the VIF driver only
> creates the VIF configuration (including populating any information on where
> the VIF would be connected to when it comes up). So I am a little confused
> by the earlier statement that the operations are “exactly the same”.
>
>
>
> [Salvatore]: I think I was not precise enough in discussing this point.  I
> agree that VIF drivers (libvirt, xenapi, and also vmwareapi) set up the VIF
> configuration, as you correctly pointed out, but they also configure
> networking on the compute host to allow that VIF to run in the
> appropriate network. For instance, this is what is achieved with calls to
> ensure_vlan_bridge. Even though it does not physically plug the VIF into the
> network (I don’t think we want Quantum to do that anyway), it makes sure the
> network(s) where the hypervisor will plug the VIFs for instances being
> spawned are properly configured.
>
> This translates into brctl/vconfig configuration for libvirt, xapi network
> and PIF objects for XenAPI, and port group objects for VMwareAPI.
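
For readers unfamiliar with that step, below is a minimal sketch of the
kind of per-host configuration Salvatore describes for the libvirt case.
This is not nova's ensure_vlan_bridge implementation; interface and
bridge names are illustrative, and it only shows the brctl/vconfig-style
commands involved (root privileges and those tools are assumed).

    import subprocess

    def ensure_vlan_bridge_sketch(vlan_id, phys_if='eth0'):
        vlan_if = '%s.%d' % (phys_if, vlan_id)   # e.g. eth0.100
        bridge = 'br%d' % vlan_id                # e.g. br100

        # create the 802.1q sub-interface carrying the VLAN
        subprocess.call(['vconfig', 'add', phys_if, str(vlan_id)])
        subprocess.call(['ip', 'link', 'set', vlan_if, 'up'])

        # create a Linux bridge and enslave the VLAN sub-interface;
        # the VIF created by nova-compute is later plugged into this bridge
        subprocess.call(['brctl', 'addbr', bridge])
        subprocess.call(['brctl', 'addif', bridge, vlan_if])
        subprocess.call(['ip', 'link', 'set', bridge, 'up'])
        return bridge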
>
>
>
> * You mention - “Therefore a first potentially extremely simple
> implementation of this plugin would consist of a simple 'VLAN ID' tracker,
> as it will just provide the VIF driver with the appropriate VLAN ID to
> configure on each compute host.” How will the plugin make the VLAN ID
> available to the driver?
>
>
>
> [Salvatore]: I have a few ideas, which I’m still fleshing out. In any case,
> I think we could leverage the fact that the current VIF drivers look for a
> ‘vlan’ field in the network info data structure.
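
As a rough illustration of that idea (field names here are guesses, not
a spec), the VLAN ID allocated by the plugin would simply ride along in
the per-network dict of network_info that existing VIF drivers already
inspect:

    # hypothetical network_info entry; only the 'vlan' key matters here
    network_info = [
        ({'id': 'net-1', 'bridge': 'br100', 'vlan': 100},   # from the plugin
         {'label': 'private', 'mac': 'fa:16:3e:00:00:01',
          'ips': [{'ip': '10.0.0.2', 'netmask': '255.255.255.0'}]}),
    ]

    for network, mapping in network_info:
        vlan_id = network.get('vlan')
        if vlan_id is not None:
            # a libvirt-style VIF driver would call its
            # ensure_vlan_bridge equivalent with this value
            pass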
>
>
>
> * You mention – “The ideal destination point would be ending up in a
> situation in which we would not need a VIF driver at all anymore.” I am not
> sure I agree with this (unless of course my understanding of the need for
> the VIF driver is incorrect). Isn’t the VIF owned by compute (nova)?
>
>
>
> [Salvatore]: Please allow me to backtrack and slightly reformulate this
> sentence. I understand a VIF driver will always be needed as we want
> nova-compute to create VIFs. We don’t want Quantum to create  VIFs. My ideal
> target is a point in which Quantum manages network virtualization on the
> hypervisor, whereas the compute service manages server virtualization.
>
>
>
> * I am not clear as to how the plug/unplug operations on a port are going to
> be realized in the above proposal. Could you kindly elaborate?
>
>
>
> [Salvatore]: I will provide a sort-of sequence diagram on the wiki page
> keeping in mind your feedback.
>
>
>
> Thanks,
>
> ~Sumit.
>
>
>
> From: netstack-bounces+snaiksat=cisco....@lists.launchpad.net
> [mailto:netstack-bounces+snaiksat=cisco....@lists.launchpad.net] On Behalf
> Of Salvatore Orlando
> Sent: Monday, October 17, 2011 6:55 AM
> To: netstack@lists.launchpad.net
> Subject: [Netstack] 'Basic VLAN' plugin - some thoughts
>
>
>
> Hi all,
>
>
>
> I put some thoughts regarding the design and the implementation of this
> plugin on a wiki page: http://wiki.openstack.org/Quantum-BasicVlanPlugin
>
> Please let me have your feedback. If everything goes according to plan, I
> plan to start implementing this plugin in two weeks’ time.
>
>
>
> Regards,
>
> Salvatore
>

-- 
Mailing list: https://launchpad.net/~netstack
Post to     : netstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~netstack
More help   : https://help.launchpad.net/ListHelp
