Re: [Openstack] How to utilize Neutron independently with veths

2017-05-23 Thread duhongwei
Thanks Kevin! I've made a big step forward!


So far, I've successfully connected the vNIC directly to br-int without qbr, qvo, 
and qvb, and it works well.


However, following your script (connect the vNIC to qbr, then connect qbr to 
br-int) exposes another problem. In this scenario, qbr won't forward packets 
from the vNIC to br-int (the packets seem to be dropped on qbr).


After some troubleshooting, it turns out that iptables is dropping the packets on 
qbr. Reviewing the FORWARD chain in the filter table, packets coming from the vNIC 
don't match any rule in neutron-filter-top or neutron-openvswi-FORWARD, so the 
default DROP policy applies.
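For reference, this is the kind of check that shows it (illustrative diagnostic commands, run as root; chain names as above):

```shell
# List per-rule packet counters; if no rule in these chains matches the
# vNIC's traffic, only the FORWARD chain's default-policy counter grows.
iptables -L FORWARD -v -n --line-numbers
iptables -L neutron-filter-top -v -n
iptables -L neutron-openvswi-FORWARD -v -n
```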


So, after setting up all of qbr, qvo, qvb, and the vNIC, it seems some iptables 
rules are still missing. The question is:


Who adds these iptables rules, Nova or Neutron? And how can I make it happen?


Regards,
Dastan
 
 
-- Original --
From:  "Kevin Benton";
Date:  Mon, May 22, 2017 10:47 PM
To:  "duhongwei"; 
Cc:  "openstack"; "Vallachorum 
Tyranorum"; 
Subject:  Re: [Openstack] How to utilize Neutron independently with veths

 
Yes, the only thing that needs to use the correct MAC is whatever is actually 
sending traffic. 

On May 21, 2017 22:06, "duhongwei"  wrote:


Thanks for your patience, Kevin.


So qvo can be any veth and its MAC address doesn't matter, but the veth/tap must 
have exactly the same MAC address as the port, otherwise its traffic will be 
blocked by the anti-spoofing rules.


qvo's attributes (external-ids) tell neutron which logical port qvo is 
connected to, so neutron knows how to add flows to the OVS bridges br-int and br-tun.


Am I correct?


Regards,
Dastan
 
-- Original --
From:  "Kevin Benton";
Date:  Sat, May 20, 2017 03:26 AM
To:  "duhongwei"; 
Cc:  "openstack"; "Vallachorum 
Tyranorum"; 
Subject:  Re: [Openstack] How to utilize Neutron independently with veths

 
>After all these, we create a veth/tap (as the vm/container's vNIC) and plug it into 
>qbr, then we're able to talk with other vms/containers on the same network 
>through the veth/tap, am I understanding it right?

Yes, this last step of creating a veth/tap is missing from my script because I 
didn't need actual dataplane communication for the tests I was doing.


>1) isn't it necessary that the veth/tap's MAC address be the same as the neutron 
>port's MAC address?


Yeah, if you attach something to qbr to behave like the VM interface, you will 
need it to use the MAC address of the neutron port, or else the neutron 
anti-spoofing rules will prevent it from communicating.




>2) after we plug qvo into the OVS br-int, does neutron just automatically add flows 
>into the OVS bridge?


Yes, the agent will receive the new port event from OVS, retrieve the port 
details from the server, and then set up the flows.

On Fri, May 19, 2017 at 12:09 AM, duhongwei  wrote:


This script seems easy and cool!


So first we have to create a logical neutron port, then create qbr, qvo, and 
qvb, plug qvb into qbr, and finally plug qvo into the OVS br-int. After all that, 
we create a veth/tap (as the vm/container's vNIC) and plug it into qbr, then we're 
able to talk with other vms/containers on the same network through the veth/tap. 
Am I understanding it right?
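The steps above could be sketched roughly like this (a hypothetical sketch, not the exact commands Nova runs; the device names, port UUID, and MAC are made up, and it needs root, iproute2, and Open vSwitch):

```shell
PORT="12345678-90"                                # first 11 chars of the port UUID
PORT_UUID="12345678-90ab-cdef-1234-567890abcdef"  # Neutron port UUID (made up)
MAC="fa:16:3e:aa:bb:cc"                           # must equal the Neutron port's MAC

# 1) Create the qvb<->qvo veth pair and the Linux bridge qbr
ip link add qvb$PORT type veth peer name qvo$PORT
ip link add name qbr$PORT type bridge
ip link set qvb$PORT master qbr$PORT

# 2) Plug qvo into br-int, with the external-ids the Neutron agent looks for
ovs-vsctl add-port br-int qvo$PORT -- set Interface qvo$PORT \
    external-ids:iface-id=$PORT_UUID \
    external-ids:iface-status=active \
    external-ids:attached-mac=$MAC

# 3) Create the vNIC with the port's MAC and plug it into qbr
ip tuntap add tap$PORT mode tap
ip link set dev tap$PORT address $MAC
ip link set tap$PORT master qbr$PORT

# 4) Bring everything up
for dev in qbr$PORT qvb$PORT qvo$PORT tap$PORT; do ip link set "$dev" up; done
```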


Questions,


1) isn't it necessary that the veth/tap's MAC address be the same as the neutron 
port's MAC address? 
2) after we plug qvo into the OVS br-int, does neutron just automatically add flows 
into the OVS bridge?


Regards,
Dastan
 
-- Original --
From:  "Kevin Benton";
Date:  Sat, May 13, 2017 07:46 AM
To:  "duhongwei"; 
Cc:  "openstack"; "Vallachorum 
Tyranorum"; 
Subject:  Re: [Openstack] How to utilize Neutron independently with veths



 
Nova is only responsible for creating the interface and plugging it into the 
OVS bridge. It's the neutron agent (or an alternative neutron backend like OVN) 
that is responsible for setting up all of the flows.

Here is a hacky script that I had used to create and delete a bunch of ports 
like Nova would that you can probably start with: 
http://paste.openstack.org/show/609478/


On Fri, May 12, 2017 at 4:25 AM, duhongwei  wrote:


Thanks Kevin!


I'll dig into neutron.agent.linux.interface to see how it works. Before that, 
could you give me a preview of what steps should be taken to add a veth to an 
existing Neutron network?


Furthermore, is it Neutron that adds the veth to the OVS bridge, or is it the 
Neutron caller (such as Nova)?


Who adds the flows to the OVS bridge, Neutron or the caller?


Regards,
Dastan 
 
-- Original --
From:  "Kevin Benton";
Date:  Fri, May 12, 2017 10:45 AM
To:  "duhongwei"; 
Cc:  "openstack"; "Vallachorum 
Tyranorum"; 
Subject:  Re: [Openstack] How to utilize Neutron independently with veths



 
You want to look in neutron.agent.linux.interface to see how things are plugged 
into OVS. That's the module used by the L3 agent to plug into OVS/linux 
bridge/etc. 

There is a well-defined interface name format corresponding to the port ID, and 
the port ID, MAC address, and a couple of oth

Re: [Openstack] Openstack Routed Provider Networks Question

2017-05-23 Thread Chris Marino
Thanks Kevin, very helpful. Other comments inline.
CM

On Mon, May 22, 2017 at 9:15 PM, Kevin Benton  wrote:

> On May 22, 2017 9:34 AM, "Chris Marino"  wrote:
>
> I'm digging into how Routed Provider Networks work and have some questions
> as well. I will be presenting on this at the OpenStack Meetup
> on Wednesday and
> want to make sure I have my facts straight.
>
> From the doc page
> 
>  it
> shows a multi-segment network with segment 1 on 203.0.113.0/24 and
> segment 2 on 198.51.100.0/24. It also suggests using the same VLAN ID for
> these segments.
>
> I find both of these things really confusing.
>
>
> What do you find confusing about this? It's a pretty standard L3 to ToR
> and L2 in rack setup. L2 is limited to the rack so you can use or not use
> whatever VLANs in that scope. We can fix the docs to clear up whatever
> confusion you have.
>

My confusion was based on my somewhat narrow understanding of the use case.
My thinking was that the current L2 provider networks would be cast as a set
of L2 segments (with contiguous CIDRs) on an L3 network. Seeing both the
203. and 198. networks consolidated into a single network is pretty
disorienting; together they are not a single network by the traditional
definition. Then describing the 'segment ID' as a VLAN ID was confusing for
two reasons. First, it carries forward the idea that the segments are VLANs,
which they might be, but don't have to be. Second, using the same 'VLAN ID'
for different segments (even though it's a segment ID) on different networks
implies that they might even be the same VLAN, which they are not.


> But ignoring that for a minute, I'm more interested in the expected use
> case for this feature. I see from the original spec/blueprint
> 
>  that the goal was to allow for a single Provider Network to be made up
> from multiple network segments, where external routing provided
> connectivity among the segments. And Routed Provider Networks provide
> this.  Great.
>
> But the use cases I'm curious about are where the operator wants to take
> their current L2/VLAN Provider Networks, but deploy it as an L3 Provider
> Network. Same CIDR as the L2 provider network, but in a fully routed
> deployment (i.e. no L2 adjacency). It might be L3 to the ToR and (untagged)
> L2 to the host. Or L3 to the host.
>
> Both of these configurations are gaining popularity, and I'm wondering how they
> would need to be configured. For the L3 to ToR, the network segments would
> have to be split across the ToRs as described in the doc, but what about L3
> to host? Guessing a segment per host, but I'm wondering how practical that's
> going to be without better coordination of IP/segments with Nova?
>
>
> L3 to the host and one segment per host is possible, but it's going to
> have a severe limitation of not being able to migrate VMs without an IP
> change. To get migration at that point you will need some form of dynamic
> routing.
>

Yes, dynamic routing is going to help here as well as the config/setup.


>
> L3 to ToR and L2 in rack is definitely the target use case as of now.
>

Yes, I see that more clearly now.



Re: [Openstack] Openstack Routed Provider Networks Question

2017-05-23 Thread John Griessen

On 05/23/2017 08:31 AM, Chris Marino wrote:

L3 to ToR and L2 in rack


So, when you refer to providers of VMs, do you still use these terms? ovh.com 
offers VMs built on OpenStack, 
where one can create a vRack (virtual rack). Does L3 to top of rack and L2 in 
rack apply to such vRacks, 
or only to physical networks?

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Openstack Routed Provider Networks Question

2017-05-23 Thread Chris Marino
John, I'm not really familiar with the OVH offering, but my quick scan
indicated their vRack is a collection of servers on one or more VLANs. I doubt
that these vRacks are tied in any way to actual physical racks. My use of the
terms 'top of rack' and ToR refers to the physical devices and would not apply
to these virtual racks.

CM


Re: [Openstack] How to utilize Neutron independently with veths

2017-05-23 Thread Dmitry Sutyagin
AFAIK, the iptables rules are set by Nova, and the driver is chosen via the
firewall_driver option in nova.conf.


Re: [Openstack] Openstack Routed Provider Networks Question

2017-05-23 Thread Sławek Kapłoński
Hello,

vRack-based networks are a little bit different. We built it ourselves at OVH. It 
allows users to create tenant networks and connect them with, e.g., dedicated 
servers. From the OpenStack user's point of view it is similar to a VLAN network, 
but it's done a little bit differently: traffic from hosts in different racks can 
be tagged with different VLAN IDs for ports on the same network.
For the user it's an L2 network everywhere.

—
Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl






Re: [Openstack] How to utilize Neutron independently with veths

2017-05-23 Thread Kevin Benton
Neutron sets up the iptables rules if you have security groups enabled and
the agent firewall is set to iptables_hybrid (i.e.
neutron.agent.linux.iptables_firewall:OVSHybridIptablesFirewallDriver).
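For reference, that choice lives in the OVS agent's securitygroup section (an illustrative fragment; the file path varies by deployment):

```ini
# e.g. /etc/neutron/plugins/ml2/openvswitch_agent.ini (path is deployment-specific)
[securitygroup]
enable_security_group = true
firewall_driver = iptables_hybrid
```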

What are you naming your vNIC? The iptables rules set up by the agent match
specifically on 'tap' + a port UUID prefix. So if the bridge is qvb1234567890,
then the vNIC you plug into it needs to be named tap1234567890.
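A small sketch of that naming rule (assuming each name is the three-letter prefix plus the first 11 characters of the port UUID, which matches the qvb/tap example above; the UUID here is made up):

```python
def hybrid_device_names(port_id):
    """Derive tap/qbr/qvb/qvo device names from a Neutron port UUID.

    Assumption: each name is a 3-letter prefix plus the first 11
    characters of the UUID, as in the qvb.../tap... example above.
    """
    prefix = port_id[:11]
    return {kind: kind + prefix for kind in ("tap", "qbr", "qvb", "qvo")}

names = hybrid_device_names("12345678-90ab-cdef-1234-567890abcdef")
print(names)
# The tap device and the qvb bridge leg share the same UUID suffix,
# which is what the agent's iptables rules key on.
```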


Re: [Openstack] Openstack Routed Provider Networks Question

2017-05-23 Thread Kevin Benton
>Then describing the 'segment ID' as a VLAN ID

Segment ID is not a VLAN ID. A segment ID is a UUID for a segment, which can
carry a segmentation ID that is a VLAN ID or a VXLAN VNI, or the segment might
even be a flat network.

It is unfortunate that we have both segment ID and segmentation ID, which is
what led to your confusion. The first uniquely identifies the segment; the
second describes how things will be encapsulated on the wire. I think updating
the docs to refer to the first as segment UUID might go a long way toward
disambiguating the two.

>Then using the same 'VLAN ID' for different segments (even though it's a
segment ID) on different networks implies that they might even be the same
VLAN, which they are not.

With VLANs the segmentation ID can be re-used on different physical
networks because they are not wired to the same L2 domain. They will have
different segment UUIDs and different 'physical_network' values, but may
have the same VLAN ID (segmentation_id).
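As a minimal sketch of that distinction (plain Python dicts standing in for segments, not real Neutron API objects; the names and values are made up):

```python
import uuid

# Two segments of one routed network: each has a unique segment UUID and its
# own physical_network, but they may reuse the same VLAN segmentation_id
# because each VLAN is scoped to its own rack's L2 domain.
segment_rack1 = {
    "id": str(uuid.uuid4()),      # segment UUID: globally unique
    "physical_network": "rack1",
    "network_type": "vlan",
    "segmentation_id": 2016,      # VLAN ID: unique only within a physnet
}
segment_rack2 = {
    "id": str(uuid.uuid4()),
    "physical_network": "rack2",
    "network_type": "vlan",
    "segmentation_id": 2016,      # same VLAN ID, different L2 domain
}

assert segment_rack1["id"] != segment_rack2["id"]
assert segment_rack1["physical_network"] != segment_rack2["physical_network"]
assert segment_rack1["segmentation_id"] == segment_rack2["segmentation_id"]
```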


[Openstack] [MassivelyDistributed] IRC Meeting tomorrow 15:00 UTC

2017-05-23 Thread lebre . adrien
Dear all, 

A gentle reminder for our meeting tomorrow. 
As usual, the agenda is available at: 
https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2017 (line 
597)
Please feel free to add items.

Best, 
ad_rien_
