>
"To say that storage is out of scope is just insane. Taking a forward looking 
view of how things might be (as Robert suggested) would be very wise. "
>
Storage has moved, and continues to move, to the front-end network through I/O 
consolidation using 10GE. Even when a separate storage fabric is in place, the 
edge, i.e. server to first-hop switch, may be shared. This trend is continuing 
with 20GE and 40GE to the server.

One may take the position that FC and its derivative FCoE will not be part of 
any cloud offering. On the other hand, if FCoE is in scope, it will put a 
requirement on the underlay service interface for lossless delivery (per 
David's comments). This may not be as bad as it looks: DCB may be available; 
the scope may be limited to the edge, with hand-off to a separate fabric; and 
even with "TOR switches should be just basic IP nodes providing only transport 
between servers", the links will be Ethernet! And in the time frame of this 
working group, lossless Ethernet will likely be supported across the 
datacenter. So, looking forward, even IP-based storage, e.g. iSCSI, can 
benefit from the lossless feature of the underlay and use it to scale.

I don't see any harm in having storage, in its broadest sense, in scope, given 
the time frames and the evolution of datacenter and storage networking. If 
not, we should at least clearly articulate why it was left out or limited in 
scope.

Siamack

From: [email protected] [mailto:[email protected]] On Behalf Of Jon 
Hudson
Sent: Monday, August 27, 2012 4:49 AM
To: Stiliadis, Dimitrios (Dimitri)
Cc: Ivan Pepelnjak; Black, David; [email protected]; Linda Dunbar
Subject: Re: [nvo3] Let's refocus on real world (was: Comments on Live 
Migration and VLAN-IDs)

oh man, this is getting good....

First I will say I think Ivan is dead on.

Second, to Robert's input:

> However I am afraid authors of this document are not likely to admit that TOR 
> switches should be just basic IP nodes providing only transport between 
> servers.
>
> Likewise they will not likely to admit that all logic of encapsulation should 
> happen on the hyper-visors as they are simply not in that technology space.
This has been a fundamental issue for a while now. The people who will decide 
which method prevails will be the Virtualization Admins, not the Network 
Admins. And what the Virt Admins want is: point -> click -> drag -> drop = 
poof, virtual magic network overlay.

Since most Virt Admins don't have access to the ToR or any other network 
hardware, I would argue that if control and encap/decap don't happen in the 
hypervisor, then NVo3 won't see much adoption. Especially since this is what 
is happening now in the proprietary offerings.
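To make that concrete: the "virtual magic" the Virt Admin sees reduces, on the hypervisor side, to a per-frame table lookup plus encapsulation in the vswitch. A minimal sketch using the VXLAN header layout from the draft; the forwarding table contents and addresses here are purely illustrative, and in practice a controller (or flood-and-learn) would populate them:

```python
import struct

# Hypothetical forwarding table the controller programs into the vswitch:
# inner destination MAC -> (VNI, remote VTEP IP). Contents are illustrative.
FIB = {b"\xaa\xbb\xcc\x00\x00\x01": (5001, "10.0.0.2")}

def vxlan_encap(inner_frame: bytes):
    """Look up the inner destination MAC and prepend the 8-byte VXLAN
    header: I flag (0x08) in byte 0, 24-bit VNI in bytes 4-6."""
    vni, vtep = FIB[inner_frame[:6]]       # dst MAC = first 6 bytes of frame
    header = struct.pack("!BxxxL", 0x08, vni << 8)
    return header + inner_frame, vtep      # caller ships this in a UDP
                                           # datagram to the remote VTEP
```

The point is how little of this the Virt Admin ever touches: everything interesting lives in how the FIB gets populated, which is exactly where the control-plane (and vendor lock-in) questions arise.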

And then Dimitri!

>Correct, and here is where IETF as a standard body fails. There is no
>easy way (any time soon) for a VXLAN based solution to talk to an NVGRE
>or MAC/GRE, or Cloudstack MAC/GRE or STT  (you forgot this one), based
>solution.
>Proprietary approaches that drive enterprises to vendor lock ins. And
>instead
>of trying to address the first problem that is about "interoperability",
>we completely throw it under the rug as "not important". And by the time
>we are done with NVO3, there will be a controller lock in as well, and the
>death of interoperability..

THANK YOU

Pure single-hypervisor environments are not going to exist very often. You 
will have a mixed model of VMware, Xen & Hyper-V (KVM?) in most if not all 
shops.

This makes the current environment of VXLAN/NVGRE/STT very, very scary. Do I 
really want to run a separate technology for each hypervisor? Do people 
really assume that whatever the NVo3 group produces will result in all those 
vendors just dropping their versions and running with open arms to the 
standard?

Offering a method of interoperability between the three would be HUGE. That 
would be a reason for the user community to really get excited about NVo3. 
Right now many just look at it as a standards version of what they can 
already get (correct or incorrect as that vision may be).
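For what it's worth, the data-plane half of such interoperability is nearly mechanical, since VXLAN's VNI and NVGRE's VSID are both 24-bit fields; the hard part is reconciling the control planes. A rough sketch of the header translation (layouts per the VXLAN and NVGRE drafts; everything else here is illustrative, not a proposed gateway design):

```python
import struct

GRE_PROTO_TEB = 0x6558   # Transparent Ethernet Bridging, used by NVGRE

def vxlan_to_nvgre(vxlan_payload: bytes, flow_id: int = 0) -> bytes:
    """Re-encapsulate the payload of a VXLAN UDP datagram as NVGRE,
    mapping the 24-bit VNI to the 24-bit VSID 1:1. (The outer headers
    change too: UDP becomes IP protocol 47/GRE, not shown here.)"""
    if not vxlan_payload[0] & 0x08:          # I flag: VNI field is valid
        raise ValueError("VXLAN header without a valid VNI")
    vni = int.from_bytes(vxlan_payload[4:7], "big")
    inner = vxlan_payload[8:]                # the original Ethernet frame
    # GRE header: Key-present bit (0x2000), proto 0x6558, then the Key
    # field holding the 24-bit VSID above an 8-bit FlowID.
    gre = struct.pack("!HHL", 0x2000, GRE_PROTO_TEB, (vni << 8) | flow_id)
    return gre + inner
```

A dozen lines per direction, in other words. What no amount of header shuffling gives you is agreement on how endpoints learn the VNI/VSID-to-tunnel mappings, which is exactly the interoperability gap being discussed.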

And finally, a quick comment on the storage issue. I won't say much, as I 
think others have said it well. But I will restate the most important part: 
to say that storage is out of scope is just insane. Taking a forward-looking 
view of how things might be (as Robert suggested) would be very wise.


On Sun, Aug 26, 2012 at 9:55 PM, Stiliadis, Dimitrios (Dimitri) 
<[email protected]> wrote:
Ivan:

I agree and at the same time disagree with some of the statements
below. I would like to understand your view.

See inline:

On 8/25/12 8:22 AM, "Ivan Pepelnjak" <[email protected]> wrote:

>On 8/24/12 11:11 PM, Linda Dunbar wrote:
>[...]
>
>> But most, if not all, data centers today don't have the Hypervisors
>> which can encapsulate the NVo3 defined header. The deployment to all
>> 100% NVo3 header based servers won't happen overnight. One thing for
>> sure that you will see data centers with mixed types of servers for
>> very long time.
>>
>> If NVEs are in the ToR, you will see mixed scenario of blade servers,
>> servers with simple virtual switches, or even IEEE802.1Qbg's VEPA. So
>> it is necessary for NVo3 to deal with the "L2 Site" defined in this
>> draft.
>
>There are two hypothetical ways of implementing NVO3: existing layer-2
>technologies (with well-known scaling properties that prompted the
>creation of NVO3 working group) or something-over-IP encapsulation.
>
>I might be myopic, but from what I see most data centers today (at least
>based on market shares of individual vendors) don't have ToR switches
>that would be able to encapsulate MAC frames or IP datagrams in UDP, GRE
>or MPLS envelopes. I am not familiar enough with the commonly used
>merchant silicon hardware to understand whether that's a software or
>hardware limitation. In any case, I wouldn't expect switch vendors to
>roll out NVO3-like something-over-IP solutions any time soon.
>
>On the hypervisor front, VXLAN is shipping for months, NVGRE is included
>in the next version of Hyper-V and MAC-over-GRE is available (with Open
>vSwitch) for both KVM and Xen. Open vSwitch is also part of standard
>Linux kernel distribution and thus available to any other Linux-based
>hypervisor product.
>
>So: all major hypervisors have MAC-over-IP solutions, each one using a
>proprietary encapsulation because there's no standard way of doing it,
>and yet we're spending time discussing and documenting the history of
>evolution of virtual networking. Maybe we should be a bit more
>forward-looking, acknowledge the world has changed, and come up with a
>relevant hypervisor-based solution.
Correct, and here is where the IETF as a standards body fails. There is no
easy way (any time soon) for a VXLAN-based solution to talk to an NVGRE,
MAC/GRE, CloudStack MAC/GRE, or STT (you forgot this one) based solution.
These are proprietary approaches that drive enterprises into vendor lock-in.
And instead of trying to address the first problem, which is
interoperability, we completely sweep it under the rug as "not important".
By the time we are done with NVO3, there will be controller lock-in as well,
and the death of interoperability. If I were on the deployment side, that's
the number one flexibility I would like to see: I don't want to be forced to
buy all my hypervisors from a single vendor, given that not all applications
are served equally by all hypervisors (for several $$$ reasons, I might add,
which can be related to the licensing options of different OSes on top of
different hypervisors).

>
>Furthermore, performing something-in-IP encapsulation in the hypervisors
>greatly simplifies the data center network, removes the need for
>bridging (each ToR switch can be a L3 switch) and all associated
>bridging kludges (including large-scale bridging solutions). Maybe we
>should remember that "Perfection is achieved, not when there is nothing
>more to add, but when there is nothing left to take away" along with a
>few lessons from RFC 3439.
>
>I am positive a decade from now we'll see ancient servers still using
>VLAN-only hypervisor switches (or untagged interfaces), so there might
>definitely be an need for an NVO3-to-VLAN gateway, but we shouldn't
>continuously focus our efforts on something that's probably going to be
>a rare corner case a few years from now.
You are absolutely correct.
I think that if the gateways were trying to solve the interoperability with
legacy servers, then they are obviously doomed, since they have a limited
lifetime, as you correctly point out.

But I would argue that the real reason for the gateways is different: it has
to do with the point of separation of trust. I believe people have several
use cases in mind where there is no hypervisor involved. Some examples:
ARM/low-power servers, where the unit of computation is the processor and
there is no hypervisor; offerings of "bare metal servers as a service", where
the handover is the physical wire and what the server puts on the wire
cannot be trusted; etc.

Another real problem, described at the last OpenStack conference by one of
the panelists: "We want test & dev and QA systems to run over VMs, and
production systems to run over the same data center network, but on bare
metal." Obviously, they want to scale bare-metal usage the same way as VMs,
depending on load (i.e., scale down the available test & dev resources when
production needs them).
The same separation problems exist there too.
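A toy illustration of that separation-of-trust point: when the attachment is a bare-metal server, the gateway cannot trust anything the server puts on the wire, so the virtual-network ID has to be derived from the attachment itself, e.g. the (ingress port, VLAN) pair. The table, port names, and numbers below are hypothetical:

```python
# Hypothetical provisioning table at the trust boundary: the gateway, not
# the untrusted bare-metal server, decides which virtual network a frame
# belongs to, keyed purely on where the frame physically arrived.
PORT_VLAN_TO_VNI = {
    ("port-1", 100): 5001,   # production, bare metal
    ("port-2", 200): 5002,   # test&dev, reclaimable for production
}

def classify(ingress_port: str, vlan_id: int) -> int:
    """Map (ingress port, customer VLAN) to a virtual-network ID,
    ignoring any identifier the server itself may claim."""
    try:
        return PORT_VLAN_TO_VNI[(ingress_port, vlan_id)]
    except KeyError:
        raise PermissionError(
            f"attachment not provisioned: {ingress_port}/VLAN {vlan_id}")
```

Repointing "port-2" from test & dev to production is then a one-row table change, which is exactly the elasticity the panelist was asking for.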

So yes, spending too much time worrying about VMs moving around and doing
encapsulations on ToRs is probably a waste of time. But spending a lot of
time understanding interoperability between hypervisor-based environments
and use cases such as the above that require gateways is, I think, a
real-world problem.

Dimitri

>
>... or I may be completely wrong. Wouldn't be the first time.
>Ivan
>_______________________________________________
>nvo3 mailing list
>[email protected]<mailto:[email protected]>
>https://www.ietf.org/mailman/listinfo/nvo3




--
"Do not lie. And do not do what you hate."
