Hi Mickey

I was going with the assumption that the “localnet” logical port on each HV
has a unique name tied to the HV/logical switch tuple; localnet
configuration uses multiple logical switches to support a single localnet.

My reference to base this assumption on was this link 
http://openvswitch.org/pipermail/git/2015-September/007480.html
Also, if you check the OVN tests for localnet, the configuration uses the same 
approach.

If this assumption holds, then even for localnet, a logical port is only bound 
to a single physical endpoint (chassis/port/encap)
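
Under that assumption, the NB configuration might look like the following
sketch; the switch and port names (provnet1-1, provnet1-2, ...) are
hypothetical, patterned on the naming used in the tests linked above:

```shell
# Hypothetical per-HV naming: one logical switch per hypervisor for the
# same physical network, so each localnet port name is unique per
# HV/logical-switch tuple.  All names here are illustrative.

# Access point for HV1
ovn-nbctl lswitch-add provnet1-1
ovn-nbctl lport-add provnet1-1 provnet1-1-physnet1
ovn-nbctl lport-set-type provnet1-1-physnet1 localnet
ovn-nbctl lport-set-options provnet1-1-physnet1 network_name=physnet1

# Access point for HV2
ovn-nbctl lswitch-add provnet1-2
ovn-nbctl lport-add provnet1-2 provnet1-2-physnet1
ovn-nbctl lport-set-type provnet1-2-physnet1 localnet
ovn-nbctl lport-set-options provnet1-2-physnet1 network_name=physnet1
```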

I think you are suggesting that a single logical port name for a localnet
network be used across the HVs?

Then the logical port becomes a single name, such as your example below:
provnet1-physnet1

I understand the advantage of using a single logical port name in some
cases, as it slightly simplifies the configuration at the NB, although we
would still need to configure each HV access point uniquely with ovs-vsctl
for physical endpoints.


However, in general, I think there are reasons to retain unique logical
port names for localnet access points: to support different addresses and
port security per logical access point, per the NB schema.
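
For example, with unique names each access point could carry its own
addresses and port security settings, using the existing NB commands (the
port names and MAC values here are made up for illustration):

```shell
# Per-access-point addresses and port security, which is only possible
# when each localnet access point has its own logical port name.
ovn-nbctl lport-set-addresses provnet1-1-physnet1 unknown
ovn-nbctl lport-set-port-security provnet1-1-physnet1 00:00:00:00:01:01

ovn-nbctl lport-set-addresses provnet1-2-physnet1 unknown
ovn-nbctl lport-set-port-security provnet1-2-physnet1 00:00:00:00:02:01
```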


Let me know if you think being able to have a single localnet logical port
name across hypervisors is a hard requirement.


I think the explicit bridge-mapping for localnet could be eliminated. The
network_name is associated with a logical port, and a logical port is bound
to a physical endpoint chassis, which is the localnet access bridge.
However, I did not mention it in this patch because, for localnet, I just
wanted to focus on the encapsulation/tag for now.
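
As a sketch of what I mean, using the proposed ovn-sbctl syntax from this
thread (the chassis, port, and endpoint names are illustrative):

```shell
# The phys endpoint already names the chassis and the chassis port, i.e.
# the localnet access point on that chassis (proposed syntax; names and
# VLAN values are illustrative).
ovn-sbctl phys-endpt-add endpt_0 chassis_0 chassis_port_0 single_vlan 42 42

# Binding the localnet logical port to that endpoint then carries the
# network_name -> access-bridge association, with no separate explicit
# bridge-mapping entry needed on the chassis.
ovn-sbctl lport-bind-phys-endpt provnet1-1-physnet1 endpt_0
```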


Thanks Darrell




From: Mickey Spiegel <emspi...@us.ibm.com<mailto:emspi...@us.ibm.com>>
Date: Tuesday, March 1, 2016 at 3:43 PM
To: Darrel Ball <db...@vmware.com<mailto:db...@vmware.com>>
Cc: Russell Bryant <russ...@ovn.org<mailto:russ...@ovn.org>>, Darrell Lu 
<dlu...@gmail.com<mailto:dlu...@gmail.com>>, 
"dev@openvswitch.org<mailto:dev@openvswitch.org>" 
<dev@openvswitch.org<mailto:dev@openvswitch.org>>
Subject: Re: [ovs-dev] [OVS-dev]: OVN: RFC re: logical and physical endpoint 
separation proposal

Darrell,

After seeing your latest RFC and patches, I still do not understand if/how you 
intend to address the questions below with regard to "localnet" support. I see 
from the proposed code that each logical port is only bound to one physical 
endpoint. I also see that you intend to deprecate the "tag" column in the 
"Port_Binding" table, suggesting that you believe that the current way of 
binding to physical networks should go away.

In existing OVN, VMs can connect directly to provider networks, requiring each 
"localnet" logical port to be instantiated on each ovn-controller, i.e. there 
are multiple chassis/chassis-port bindings, each one done locally on each 
hypervisor based on the local ovn-bridge-mappings configuration.
Do you intend to support this case?
Or are you proposing that all provider network traffic must flow through a 
"localnet" port bound to one particular physical endpoint representing one 
gateway chassis or gateway chassis pair?
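
(For reference, the per-hypervisor mapping in the first case is what
Russell's earlier example in this thread configures; the bridge name is
illustrative:)

```shell
# Run locally on each hypervisor: map the physical network name to a
# local OVS bridge so ovn-controller instantiates the localnet port there.
ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet1:br-eth1
```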

Mickey

-----Mickey Spiegel/San Jose/IBM wrote: -----
To: Darrell Ball <db...@vmware.com<mailto:db...@vmware.com>>
From: Mickey Spiegel/San Jose/IBM
Date: 02/17/2016 08:33PM
Cc: Russell Bryant <russ...@ovn.org<mailto:russ...@ovn.org>>, Darrell Lu 
<dlu...@gmail.com<mailto:dlu...@gmail.com>>, 
"dev@openvswitch.org<mailto:dev@openvswitch.org>" 
<dev@openvswitch.org<mailto:dev@openvswitch.org>>
Subject: Re: [ovs-dev] [OVS-dev]: OVN: RFC re: logical and physical endpoint 
separation proposal

Darrell,

Thanks for your replies.

A few more questions for clarification.

You said:
>> There are multiple phys_endpts, one per chassis
>> chassis and chassis_port are set per chassis
>> Each localnet port on a chassis is bound to a phys_endpt on that chassis

In your earlier reply to Russell, your example:
>> ovn-sbctl    lport-bind-phys-endpt   provnet1-1-physnet1   endpt_0

If the command is ovn-sbctl, how do you apply this per chassis?
Can you have multiple "ovn-sbctl lport-bind-phys-endpt" commands for the same 
logical port?
If so, would you leave chassis in the port binding empty?
Why is the logical port name in this example "provnet1-1-physnet1" when the 
logical port was defined earlier as "provnet1-physnet1"?
Was this intentional or a typo?

Mickey


-----Darrell Ball <db...@vmware.com<mailto:db...@vmware.com>> wrote: -----
To: Mickey Spiegel/San Jose/IBM@IBMUS
From: Darrell Ball <db...@vmware.com<mailto:db...@vmware.com>>
Date: 02/17/2016 07:47PM
Cc: Russell Bryant <russ...@ovn.org<mailto:russ...@ovn.org>>, Darrell Lu 
<dlu...@gmail.com<mailto:dlu...@gmail.com>>, 
"dev@openvswitch.org<mailto:dev@openvswitch.org>" 
<dev@openvswitch.org<mailto:dev@openvswitch.org>>
Subject: Re: [ovs-dev] [OVS-dev]: OVN: RFC re: logical and physical endpoint 
separation proposal

Hi Mickey

Thanks for your questions/comments

Darrell

From: Mickey Spiegel <emspi...@us.ibm.com<mailto:emspi...@us.ibm.com>>
Date: Tuesday, February 16, 2016 at 2:32 PM
To: Darrel Ball <db...@vmware.com<mailto:db...@vmware.com>>
Cc: Russell Bryant <russ...@ovn.org<mailto:russ...@ovn.org>>, Darrell Lu 
<dlu...@gmail.com<mailto:dlu...@gmail.com>>, 
"dev@openvswitch.org<mailto:dev@openvswitch.org>" 
<dev@openvswitch.org<mailto:dev@openvswitch.org>>
Subject: Re: [ovs-dev] [OVS-dev]: OVN: RFC re: logical and physical endpoint 
separation proposal

Darrell,

Just catching up on this thread. A few things are still unclear.


  1.  The example that you gave bound the one "localnet" logical port to one 
physical endpoint. Perhaps this is what you are intending for the L3 gateway 
case (still waiting for that proposal).
In existing OVN, VMs can connect directly to provider networks, requiring each 
"localnet" logical port to be instantiated on each ovn-controller, i.e. there 
are multiple chassis/chassis-port bindings, each one done locally on each 
hypervisor based on the local ovn-bridge-mappings configuration.
Does your proposal support this case?
If so, which of the following do you do?
     *   The chassis and chassis_port columns are empty. On each hypervisor, 
ovn-bridge-mappings still needs to be configured.
     *   There is a list of phys_endpts for each localnet, one per chassis? 
This replaces the bridge mapping configured on each hypervisor?

>> There are multiple phys_endpts, one per chassis
>> chassis and chassis_port are set per chassis
>> Each localnet port on a chassis is bound to a phys_endpt on that chassis
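
[A sketch of the per-chassis arrangement described in the quoted answer,
using the proposed ovn-sbctl syntax; all chassis, port, and endpoint names
are illustrative:]

```shell
# One phys_endpt per chassis; each chassis's localnet port is bound to
# the endpoint on that chassis.  Names and VLAN values are illustrative.
ovn-sbctl phys-endpt-add endpt_hv1 chassis_hv1 chassis_port_0 single_vlan 42 42
ovn-sbctl phys-endpt-add endpt_hv2 chassis_hv2 chassis_port_0 single_vlan 42 42

ovn-sbctl lport-bind-phys-endpt provnet1-1-physnet1 endpt_hv1
ovn-sbctl lport-bind-phys-endpt provnet1-2-physnet1 endpt_hv2
```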


2. What is the relationship between the "chassis" column in port bindings, and 
the "chassis" column in physical endpoints?

>> They are identical, which I described in the original proposal e-mail as 
>> somewhat redundant.
>> The only reason to leave the chassis field in the port_binding record is so 
>> that non-localnet/non-gateway support is unaffected.

3. For L2 gateway, I think I am beginning to understand how this would work. 
The L2 gateway still has to populate MACs up in southbound DB.
    For L3 gateway, without a detailed proposal, I don't know how this fits yet.
    Are you adding a new port type for L3 external gateway ports?

>> Since the behavior of these ports will be different (for example, in the 
>> L3/L2 binding process details), it will likely be necessary; L3 ports are 
>> logical ports.

    Are those ports bound to a chassis rather than run locally on each 
ovn-controller?

>> These ports are bound to a gateway chassis

    Are the provider networks run on only one chassis rather than each 
ovn-controller?

>> A provider network runs on one gateway chassis, or on a gateway chassis 
>> pair in the redundant case.
>> From each HV, there should be/could be a tunnel to each gateway transport 
>> node in the pair; possibly active/standby (A/S) by default.


To the extent that this proposal is meant to replace the “tag” column with 
something more generic that can support different encapsulations, this is a 
very good thing. As Kyle mentioned, we are interested in supporting VXLAN from 
OpenStack/OVN to upstream physical routers.

Mickey


-----"dev" <dev-boun...@openvswitch.org<mailto:dev-boun...@openvswitch.org>> 
wrote: -----
To: Russell Bryant <russ...@ovn.org<mailto:russ...@ovn.org>>, Darrell Lu 
<dlu...@gmail.com<mailto:dlu...@gmail.com>>, 
"dev@openvswitch.org<mailto:dev@openvswitch.org>" 
<dev@openvswitch.org<mailto:dev@openvswitch.org>>
From: Darrell Ball
Sent by: "dev"
Date: 02/11/2016 01:51PM
Subject: Re: [ovs-dev] [OVS-dev]: OVN: RFC re: logical and physical endpoint 
separation proposal

On 2/11/16, 12:20 PM, "Russell Bryant" 
<russ...@ovn.org<mailto:russ...@ovn.org>> wrote:


>On 02/10/2016 09:56 PM, Darrell Ball wrote:
>> Hi Russell
>>
>> Please see inline
>>
>> Thanks Darrell
>>
>>
>>
>> On 2/8/16, 12:38 PM, "Russell Bryant" 
>> <russ...@ovn.org<mailto:russ...@ovn.org>> wrote:
>>
>>> On 02/08/2016 12:05 PM, Darrell Ball wrote:
>>>> On 2/5/16, 12:23 PM, "Russell Bryant" 
>>>> <russ...@ovn.org<mailto:russ...@ovn.org>> wrote:
>>>>> I agree with this sort of separation in principle.  Some specific
>>>>> examples would help me understand the proposal, though.  You mention
>>>>> that this applies to both localnet and gateway cases.  Can we lay out
>>>>> some clear workflows before and after the proposed changes?
>>>>>
>>>>> The simplest localnet example would be connecting a single VM to a
>>>>> physical network locally attached to a hypervisor.
>>>>>
>>>>> On the hypervisor running, ovn-controller, we set:
>>>>>
>>>>>    $ ovs-vsctl set open . \
>>>>>    > external-ids:ovn-bridge-mappings=physnet1:br-eth1
>>>>>
>>>>> Then, we set up the logical connectivity with:
>>>>>
>>>>>    $ ovn-nbctl lswitch-add provnet1
>>>>>
>>>>>    $ ovn-nbctl lport-add provnet1 provnet1-lp1
>>>>>    $ ovn-nbctl lport-set-addresses provnet1-lp1 $MAC
>>>>>    $ ovn-nbctl lport-set-port-security provnet1-lp1 $MAC
>>>>>
>>>>>    $ ovn-nbctl lport-add provnet1 provnet1-physnet1
>>>>>    $ ovn-nbctl lport-set-addresses provnet1-physnet1 unknown
>>>>>    $ ovn-nbctl lport-set-type provnet1-physnet1 localnet
>>>>>    $ ovn-nbctl lport-set-options provnet1-physnet1 \
>>>>>    > network_name=physnet1
>>>>>
>>>>> Then we can create the VIF on the hypervisor like usual.
>>>>>
>>>>> How does your proposal modify the workflow for this use case?
>>>>
>>>> Localnet case: The NB programming is unchanged, as intended.
>>>>
>>>> The SB programming using sb-ctl in lieu of CMS might be of
>>>> the form below.
>>>
>>> In this case, the CMS is only interfacing with the NB database.
>>>
>>>> This example assumes that we use the legacy endpoint type of
>>>> single_vlan and vlan 42 is used on chassis_port_0 on chassis_only
>>>> (which is our HV in this example).
>>>>
>>>> ovn-sbctl phys-endpt-add endpt_0 chassis_only chassis_port_0 single_vlan  
>>>> 42   42
>>>>
>>>>
>>>> ovn-sbctl    lport-bind-phys-endpt   provnet1-1-physnet1   endpt_0
>>>
>>> I'm sorry if I'm being dense, but I'm afraid that I don't understand
>>> what this is replacing.
>
>Note the above question.
>
>
>>>
>>>>> It would be nice to see the same sort of thing for gateways.  The
>>>>> OpenStack driver already has code for the current vtep gateway
>>>>> integration.  We set vtep_logical_switch and vtep_physical_switch on a
>>>>> logical port.  What new workflow would we need to implement?
>>>>
>>>>
>>>> Gateway case: Consider ls0-port2 is a logical endpt on a gateway
>>>>
>>>>
>>>> ovn-nbctl lswitch-add ls0
>>>> .
>>>> .
>>>> ovn-nbctl lport-add ls0 ls0-port2
>>>> .
>>>> .
>>>> ovn-nbctl lport-set-addresses ls0-port2 52:54:00:f3:1c:c6
>>>> .
>>>> .
>>>> ovn-nbctl lport-set-type ls0-port2 vtep
>>>>
>>>>
>>>> The NB programming lport-set-options, of the form:
>>>> “ovn-nbctl lport-set-options ls0-port2 vtep-physical-switch=br-int 
>>>> vtep-logical-switch=ls0”
>>>> could be omitted and the same information could be derived from
>>>> other logical/physical binding. SB programming semantics, assuming that we 
>>>> use
>>>> the legacy endpoint type and vlan 42 is used on chassis_port_0 on 
>>>> chassis_0 (a gateway):
>>>>
>>>>
>>>> ovn-sbctl phys-endpt-add endpt_0 chassis_0 chassis_port_0 single_vlan   42 
>>>>   42
>>>>
>>>>
>>>> ovn-sbctl    lport-bind-phys-endpt   ls0-port2  endpt_0
>>>
>>> Is this right?
>>>
>>> 1) We're dropping the use of vtep-physical-switch and
>>> vtep-logical-switch options and instead getting the same information
>>> from logical-to-physical mappings in the southbound database.
>>
>> That’s the proposal
>> The logical port association to
>> 1) vtep Physical switch, can be derived from the port_binding/chassis tables 
>> in the SB DB
>> 2) vtep logical switch, can come down to the SB DB via information in the
>>   NB DB Logical Switch/Logical Port Tables
>>
>>
>>>
>>> 2) (I'm less sure on this part) We're replacing direct management of the
>>> hardware_vtep schema with defining endpoints in the physical endpoint
>>> table in OVN's southbound db?
>>
>>
>> For the SW gateway, we don’t plan to support the hardware_vtep schema; we 
>> want to use a common code path b/w gateway and HV transport nodes, as much 
>> as possible. Hence the SB DB is one option to house the physical endpt 
>> table, which is closely associated with the port binding table.
>> The gateway or gateway pair/cluster supports the overall network.
>
>Are you planning on dropping hardware_vtep support?  Are there two
>separate workflows (software gateways vs hardware_vtep)?


There are two separate workflows for SW and HW gateways.

Hardware_vtep support remains for hardware gateways; the vtep schema will 
certainly evolve as well to support the hardware gateways.
Since there is only minimal usage of the VTEP schema today from a SW gateway 
POV, in the form of the "vtep-emulator", not much is lost by abandoning the 
VTEP schema w.r.t. the new software gateway development.

There will be some loss of OVN gateway reference behavior for the hardware 
vendors, since SW and HW gateways will be working from different DB schemas. 
But since hardware vendor designs/implementations differ among themselves 
and from SW approaches, there is limited value in not splitting SW and HW 
support.


>
>If you think it'd be easier to just proceed with your implementation and
>then it will be easier to understand, that's fine with me.

Ok, thanks


>
>--
>Russell Bryant
_______________________________________________
dev mailing list
dev@openvswitch.org<mailto:dev@openvswitch.org>
http://openvswitch.org/mailman/listinfo/dev

