Erik,
I agree that these are overlay addresses. Over what VNI is it running?

David



On Nov 7, 2014, at 8:35 PM, Erik Nordmark <[email protected]> wrote:

On 11/7/14 2:23 AM, David Mozes wrote:
Correct, they have MAC and IP configuration; see below.
You think these are only overlay addresses, not underlay?

David,

Having default IP and MAC addresses wouldn't make any sense for the underlay. 
So I think they are overlay addresses.

   Erik




Thx
David

“
BFD Local Configuration:
       The  HSC  writes  the  key-value  pairs in the bfd_config_local column to
       specify  the  local configurations to be used for  BFD  sessions  on  this
       tunnel.

       bfd_config_local : bfd_dst_mac: optional string
              Set  to  an  Ethernet address in the form xx:xx:xx:xx:xx:xx to set
              the MAC expected as destination for received BFD packets.

       bfd_config_local : bfd_dst_ip: optional string
              Set to an IPv4 address to set the IP address that is  expected  as
              destination for received BFD packets.  The default is 169.254.1.0.

BFD Remote Configuration:
       The   bfd_config_remote   column   is   the  remote  counterpart  of  the
       bfd_config_local column.  The NVC writes  the  key-value  pairs  in  this
       column.

       bfd_config_remote : bfd_dst_mac: optional string
              Set  to  an  Ethernet address in the form xx:xx:xx:xx:xx:xx to set
              the destination MAC to be used for transmitted BFD  packets.   The
              default is 00:23:20:00:00:01.

       bfd_config_remote : bfd_dst_ip: optional string
              Set  to  an IPv4 address to set the IP address used as destination
              for transmitted BFD packets.  The default is 169.254.1.1.

“
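For illustration, here is a minimal sketch (in Python; the helper names are my own invention, not part of the schema) of the key-value maps the HSC and NVC would write, using the defaults quoted above:

```python
import re

# Defaults taken from the VTEP schema excerpt quoted above.
DEFAULT_LOCAL_DST_IP = "169.254.1.0"
DEFAULT_REMOTE_DST_MAC = "00:23:20:00:00:01"
DEFAULT_REMOTE_DST_IP = "169.254.1.1"

_MAC_RE = re.compile(r"^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$")

def bfd_config_local(dst_mac=None, dst_ip=DEFAULT_LOCAL_DST_IP):
    """Key-value pairs the HSC would write into bfd_config_local."""
    cfg = {"bfd_dst_ip": dst_ip}
    if dst_mac is not None:
        if not _MAC_RE.match(dst_mac):
            raise ValueError("not an xx:xx:xx:xx:xx:xx MAC: %r" % dst_mac)
        cfg["bfd_dst_mac"] = dst_mac
    return cfg

def bfd_config_remote(dst_mac=DEFAULT_REMOTE_DST_MAC,
                      dst_ip=DEFAULT_REMOTE_DST_IP):
    """Key-value pairs the NVC would write into bfd_config_remote."""
    return {"bfd_dst_mac": dst_mac, "bfd_dst_ip": dst_ip}
```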


From: Erik Nordmark [mailto:[email protected]]
Sent: Friday, November 07, 2014 8:16 AM
To: David Mozes; Erik Nordmark
Cc: [email protected]<mailto:[email protected]>; Marc Binderberger; Tom Herbert
Subject: Re: [nvo3] Concerns about NVO3 dataplane requirements document +BFD

On 11/6/14 10:37 AM, David Mozes wrote:
Sorry about the confusion.
However, I was referring to BFD on the dataplane.

Ah - sorry for being a bit narrow-minded.

My understanding is that BFD is done by having some BFD-over-foo documents (BFD 
over IP, BFD over TRILL, etc).

BFD could potentially be run (multi-hop) between a pair of VTEP IPs on the 
underlay, or one could define a BFD-over-NVO3-dataplane specification for how it 
would be carried as an NVO3 payload.
I think that implies that the NVO3 dataplane would need to have some implicit or 
explicit way to identify that the payload is BFD.

I think the VTEP OVSDB BFD parameters do this implicitly, since they define the 
MAC addresses and IP addresses used to identify the BFD packets.
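As a sketch of that implicit demultiplexing (assuming a bfd_config_local map as in the schema excerpt; the helper name is mine):

```python
def is_bfd_packet(dst_mac, dst_ip, bfd_config_local):
    """Implicit BFD demultiplexing: a decapsulated packet is treated as
    BFD when its destination MAC and IP match the configured values."""
    want_mac = bfd_config_local.get("bfd_dst_mac")
    if want_mac is not None and dst_mac.lower() != want_mac.lower():
        return False
    # Default destination IP from the schema excerpt.
    return dst_ip == bfd_config_local.get("bfd_dst_ip", "169.254.1.0")
```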

Thanks,
  Erik



Thx
David


On Nov 6, 2014, at 8:31 PM, "Erik Nordmark" <[email protected]> wrote:
On 11/6/14 1:42 AM, David Mozes wrote:

Hi,
Recently I saw that BFD parameters have been added to the VTEP OVSDB.
Are they aligned with what we are defining here?

I'm not sure I understand the context. At one of the interim NVO3 meetings the 
chairs had a slide suggesting the OVSDB (with associated schemas I assume) 
could be considered as a potential NVO3 *controlplane* protocol (I think they 
listed LISP and OpFlex on the same slide.)

But this thread is about the *dataplane* protocol. Hence I am confused about the 
context.

  Erik




 Thx
David

-----Original Message-----
From: nvo3 [mailto:[email protected]] On Behalf Of Erik Nordmark
Sent: Saturday, October 25, 2014 3:45 AM
To: Marc Binderberger; Erik Nordmark; Tom Herbert
Cc: [email protected]<mailto:[email protected]>
Subject: Re: [nvo3] Concerns about NVO3 dataplane requirements document

On 10/22/14 5:20 PM, Marc Binderberger wrote:
To pick up some of the points:

VNI: we live with "flat" IP addresses and yet they support the rich
structure in the name space. I don't see why this should be different
with overlay
headers: the control plane (or the configuration) will know about any
structure and will program the data plane accordingly; VNIs are then
just reference numbers (read: flat).
I guess we still need to have some idea how many bits would be required up 
front (24, 32, more?) and whether we think this field needs to be extensible.
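To make the bit-budget question concrete, here is a VXLAN-style sketch of a 24-bit VNI packed into the upper bits of a 32-bit header word; growing past 24 bits would force a new format or an extension mechanism (illustrative, not a proposal):

```python
def pack_vni24(vni):
    """Pack a 24-bit VNI into a 4-byte word, VXLAN-style: VNI in the
    upper 24 bits, low 8 bits reserved (zero)."""
    if not 0 <= vni < (1 << 24):
        raise ValueError("VNI does not fit in 24 bits")
    return (vni << 8).to_bytes(4, "big")

def unpack_vni24(word):
    """Recover the VNI from the 4-byte header word."""
    return int.from_bytes(word, "big") >> 8
```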


QoS: I would consider additional QoS bits in the NVO3 overlay header
as redundant. Either the tenant frame and underlay header have some
QoS already, then we have the requirement for the data plane to be
able to map QoS values (probably some small table). Or the
tenant-frame has no QoS - well, sounds like a fixed mapping then.
OK
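A sketch of the "small table" mapping mentioned above (the table contents are hypothetical; in practice the control plane would program them per tenant/VNI):

```python
# Hypothetical per-VNI QoS map programmed by the control plane:
# tenant DSCP -> underlay DSCP. Unmapped values fall back to best effort.
QOS_MAP = {46: 46, 26: 26}   # e.g. EF and AF31 preserved
DEFAULT_DSCP = 0

def underlay_dscp(tenant_dscp, qos_map=QOS_MAP):
    """Map the tenant frame's DSCP to the underlay header DSCP."""
    return qos_map.get(tenant_dscp, DEFAULT_DSCP)
```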
Security:  could we re-use IPSEC ESP/AH ?  In tunnel mode as we would
add already an underlay IPv4/IPv6 header?
(I'm no expert in this area but why not re-using other peoples work)
One question is whether the higher assurance is just for the VNI or for the 
whole encapsulated frame. Using something like ESP/AH takes us down the path of 
protecting the whole frame, which might be overkill.

ECMP: with leaf-spine topologies in mind and IP as an underlay I would say
being able to use already existing IP ECMP methods is a plus to simplify
deployments. I would make it a requirement.
OK
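The usual trick for reusing existing IP ECMP is to fold a hash of the inner flow into the outer UDP source port, so underlay routers spread tunnelled flows without any new machinery; a sketch (CRC32 and the port range are arbitrary choices here):

```python
import zlib

def entropy_src_port(inner_five_tuple):
    """Derive a UDP source port from a hash of the inner flow so that
    existing underlay IP ECMP spreads encapsulated flows. Sketch only;
    real implementations hash the raw header fields."""
    h = zlib.crc32(repr(inner_five_tuple).encode()) & 0xFFFF
    return 49152 + (h % 16384)   # keep within the ephemeral range
```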


Meta-Data: I probably missed some discussions (sorry!) but what data would
this be?
As I tried to clarify in my response to Tom the meta-data discussion in
the IETF was mostly about vendor-specific service meta-data, but perhaps
this term is being used for more general extensibility?

I think there should be ways to add better assurance (checksum, keyed
hashes) for the NVO3 header. But perhaps that can be in fixed fields in
a fixed length header.

In terms of the overall architecture there is a desire to carry some
service meta-data with frames. The sfc WG is thinking about doing that
using a separate NSH header.

It would be good for the NVO3 WG to have a clear understanding of what
data needs to be carried with each encapsulated frame. That helps
determine how flexible and extensible the packet format needs to be.
The experience with extensibility for protocols that are in the
dataplane (be it IPv4 options, IPv6 extension headers, TRILL options,
etc) is that they don't tend to get implemented in hardware. And the
dataplane protocols tend to have a mixture of hardware and software
implementations - which is different than TCP which is mostly software.
One observation is that we (the IETF + industry) seem to be able to
redefine fixed-fields (e.g., IPv4 TOS->DSCP+ECN, MPLS labels with new
semantics like the entropy label) a lot easier than implementing new
options or extension headers.

Anyway, it sounds TLV-like, and having a variable overhead length may be a
problem for the overlay MTU. Assuming that this Meta-Data is orthogonal to
the VNI, would another "MD-ID" field help?  The control-/config plane could
then map this MD-ID to the Meta-Data and program the data plane accordingly.
One would have to require that the underlay MTU exceeds the overlay MTU
by the maximum encapsulation overhead. Thus a large max size of
options/extensions has some cost.
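The arithmetic is simple but worth writing down; assuming an IPv4+UDP underlay and a hypothetical 8-byte fixed NVO3 header:

```python
def required_underlay_mtu(overlay_mtu, nvo3_header=8, max_options=0,
                          outer_ip=20, outer_udp=8):
    """Underlay MTU must exceed the overlay MTU by the worst-case
    encapsulation overhead; a large maximum options/extensions size
    raises the bar for every packet. The 8-byte fixed header is an
    illustrative assumption, not a defined format."""
    return overlay_mtu + outer_ip + outer_udp + nvo3_header + max_options
```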

I think another point, which was mentioned on the list, is the
fragmentation/reassembly or MTU problem. For simplicity I would prefer the
NVO3 header has no support for this. If your tenant frame is IPv4/v6 then
fragmentation/reassembly should happen on this level. For Ethernet tenant
frames - no idea but I assume Ethernet networks solve the MTU problem by
"correct configuration"? So the NVO3 "link" would just be another interface
with an unusual MTU (?).
That seems to be how the hardware encapsulations handle things.
If it was all software on the endpoints then there would be more
options, but for efficiency we typically want to avoid fragmentation.

The document also mentions the "learning bridge" behaviour. I would have seen
the details of MAC learning as "control plane" (albeit not necessarily the
"centralized authority" of the charter).  For the data plane it is a
requirement to punt packets to the control plane. Well, actually forward the
packet and punt a copy to control plane. I wonder if we have other
requirement to trigger such a copy/punt? (e.g. an OAM/alert flag, as
discussed in VXLAN-gpe)

While the option of "learning bridge" behavior might be useful, it
doesn't have anything to do with the dataplane encapsulation format.

Your question about OAM/alert flags is a good one. I think it makes
sense to define some flags.
Perhaps we also want a "drop packet if you don't know about this flags"
flag; in many cases the control plane can be used to determine the
capabilities hence one can avoid sending dataplane packets with some new
OAM or other feature to endpoints that don't know about it. In such a
case it is sufficient to have flags that have the "ignore if you don't
know about it" semantics.

    Erik

_______________________________________________
nvo3 mailing list
[email protected]<mailto:[email protected]>
https://www.ietf.org/mailman/listinfo/nvo3











