David, 

I agree that the first phase of NVO3 should not consider encapsulating FCoE 
traffic. 

That means servers need to have separate physical ports for FCoE traffic, so 
that the NVE won't need any mechanism to recognize FCoE traffic. 

If servers don't have separate physical ports for FCoE traffic, they dedicate 
some VLANs to FCoE traffic instead. Then the NVEs have to be configured not to 
process those VLANs, along the lines of the sketch below. Is that correct? 
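
To make sure I understand the model, here is a minimal sketch of the ingress 
decision I have in mind. The VLAN IDs and helper functions are hypothetical, 
purely for illustration: 

# Hypothetical sketch of an NVE ingress decision that bypasses FCoE VLANs.
# The VLAN IDs and helper functions below are made up for illustration.

FCOE_VLANS = {1002, 1003}  # VLANs the operator has dedicated to FCoE


def forward_natively(frame: bytes) -> None:
    """Stub: hand the frame to the underlying (DCB) Ethernet fabric as-is."""


def encapsulate_and_tunnel(frame: bytes) -> None:
    """Stub: normal NVE processing - encapsulate, send over the IP underlay."""


def nve_ingress(vlan_id: int, frame: bytes) -> None:
    if vlan_id in FCOE_VLANS:
        forward_natively(frame)        # FCoE never enters the overlay
    else:
        encapsulate_and_tunnel(frame)  # everything else is overlay traffic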

What about iSCSI and NFS traffic? Should they be handled in the same way as 
FCoE traffic, or do they simply ride inside the encapsulation, as in the 
sketch below? 
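
For concreteness, here is a rough scapy sketch of what I mean by "riding 
inside the encapsulation". It uses VXLAN purely as a stand-in, since no NVO3 
encapsulation is standardized yet, and every address, MAC, and VNI below is 
invented: 

# Rough sketch: an iSCSI/TCP segment riding inside a MAC-over-UDP overlay.
# VXLAN stands in for an NVO3 encapsulation; all addresses and the VNI are
# invented for illustration.
from scapy.layers.inet import IP, TCP, UDP
from scapy.layers.l2 import Ether
from scapy.layers.vxlan import VXLAN

# Inner frame: a VM's iSCSI initiator talking to a target (TCP port 3260).
inner = (Ether(src="00:00:00:aa:00:01", dst="00:00:00:aa:00:02") /
         IP(src="10.0.0.1", dst="10.0.0.2") /
         TCP(dport=3260))

# Outer headers: the NVE-to-NVE tunnel across the IP underlay.
outer = (Ether() /
         IP(src="192.0.2.1", dst="192.0.2.2") /
         UDP(sport=49152, dport=4789) /  # 4789 = IANA-assigned VXLAN port
         VXLAN(vni=5000))

packet = outer / inner
packet.show()  # to the underlay, the storage traffic is just opaque payload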

Linda

> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of
> Black, David
> Sent: Sunday, August 26, 2012 1:02 AM
> To: [email protected]
> Cc: [email protected]
> Subject: [nvo3] Storage (part of: Let's refocus on real world)
> 
> Robert,
> 
> > Also, as you have pointed out, the storage discussion cannot just be
> > swept under the carpet and addressed with "storage issues are out of
> > the scope".
> 
> I agree ... and that looks like my cue to say something ... e.g., see the
> domain part of my email address ;-).
> 
> iSCSI and NFS use TCP/IP in the storage stack and hence will run fine over
> all of the data encapsulations being discussed here.  If the iSCSI
> initiator or NFS client is in the VM, that's most of the discussion.
> That's not always the case, for a number of reasons - the obvious one is
> that a hypervisor iSCSI initiator or NFS client is required if the VM's
> executable image is being loaded and/or paged using one of those
> protocols.  It's also the case that many hypervisors simplify the storage
> interface presented to VMs so that it looks like direct-attached or
> internal disk drives, and map those disks to networked storage using a
> hypervisor iSCSI initiator or NFS client.  Ensuring that the VM migration
> destination hypervisor has appropriate connectivity to storage is mostly
> a configuration concern.  The upshot is that iSCSI and NFS run fine over
> nvo3-encapsulated networks.
> 
> In contrast, as I said at the microphone at the nvo3 BOF in Paris, I
> suggest that the WG not initially consider FCoE, in order to defer
> spending time on discussing how to deliver DCB Ethernet service/behavior
> through the encapsulation(s).  (DCB is required by FCoE - ordinary
> non-DCB Ethernet isn't sufficient, because FCoE is *very* sensitive to
> drops.)
> 
> Thanks,
> --David
> 
> 
> > -----Original Message-----
> > From: [email protected] [mailto:[email protected]] On Behalf Of
> > Robert Raszuk
> > Sent: Saturday, August 25, 2012 11:55 AM
> > To: Ivan Pepelnjak
> > Cc: Black, David; [email protected]; Linda Dunbar
> > Subject: Re: [nvo3] Let's refocus on real world
> >
> > Ivan,
> >
> >  > ... or I may be completely wrong.
> >
> > I think you are actually spot-on correct.
> >
> > However, I am afraid the authors of this document are not likely to
> > admit that ToR switches should be just basic IP nodes providing only
> > transport between servers.
> >
> > Likewise, they are not likely to admit that all of the encapsulation
> > logic should happen on the hypervisors, as they are simply not in that
> > technology space.
> >
> > Similarly, I very much agree with and support providing a clear
> > distinction between "cold" and "hot" VM mobility cases, and perhaps
> > even enumerating the sub-classes of ways hot VM mobility can be
> > accomplished today - clearly there is more than one way.
> >
> > Also, as you have pointed out, the storage discussion cannot just be
> > swept under the carpet and addressed with "storage issues are out of
> > the scope".
> >
> > While Linda was perhaps right to say that most storage today comes to
> > servers via the back end, that is what I would call a very inefficient
> > and legacy approach. If we are to think ahead, one needs to observe how
> > the industry is advancing storage virtualization via front-end IP, very
> > often not co-located with the compute racks.
> >
> > In my view, the network-related mobility discussion is not about ToR
> > switches or about VLANs. It is about an IP layer above the IP transport
> > which would carry all the necessary information about the actual
> > location of the VMs, and which would in fact play the main role in
> > shortening or eliminating the triangular routing problem.
> >
> > Rgs,
> > R.
> >
> >
> >
> > > On 8/24/12 11:11 PM, Linda Dunbar wrote:
> > > [...]
> > >
> > >> But most, if not all, data centers today don't have hypervisors
> > >> which can encapsulate the NVO3-defined header. The deployment of
> > >> 100% NVO3-header-based servers won't happen overnight. One thing is
> > >> for sure: you will see data centers with mixed types of servers for
> > >> a very long time.
> > >>
> > >> If NVEs are in the ToR, you will see a mixed scenario of blade
> > >> servers, servers with simple virtual switches, or even IEEE
> > >> 802.1Qbg's VEPA. So it is necessary for NVO3 to deal with the "L2
> > >> Site" defined in this draft.
> > >
> > > There are two hypothetical ways of implementing NVO3: existing
> > > layer-2 technologies (with the well-known scaling properties that
> > > prompted the creation of the NVO3 working group) or something-over-IP
> > > encapsulation.
> > >
> > > I might be myopic, but from what I see, most data centers today (at
> > > least based on the market shares of individual vendors) don't have
> > > ToR switches that would be able to encapsulate MAC frames or IP
> > > datagrams in UDP, GRE or MPLS envelopes. I am not familiar enough
> > > with the commonly used merchant silicon hardware to understand
> > > whether that's a software or hardware limitation. In any case, I
> > > wouldn't expect switch vendors to roll out NVO3-like something-over-IP
> > > solutions any time soon.
> > >
> > > On the hypervisor front, VXLAN has been shipping for months, NVGRE is
> > > included in the next version of Hyper-V, and MAC-over-GRE is available
> > > (with Open vSwitch) for both KVM and Xen. Open vSwitch is also part of
> > > the standard Linux kernel distribution and thus available to any other
> > > Linux-based hypervisor product.
> > >
> > > So: all major hypervisors have MAC-over-IP solutions, each one using
> > > a proprietary encapsulation because there's no standard way of doing
> > > it, and yet we're spending time discussing and documenting the
> > > history of the evolution of virtual networking. Maybe we should be a
> > > bit more forward-looking, acknowledge the world has changed, and come
> > > up with a relevant hypervisor-based solution.
> > >
> > > Furthermore, performing something-in-IP encapsulation in the
> > > hypervisors greatly simplifies the data center network, removes the
> > > need for bridging (each ToR switch can be an L3 switch) and all the
> > > associated bridging kludges (including large-scale bridging
> > > solutions). Maybe we should remember that "Perfection is achieved,
> > > not when there is nothing more to add, but when there is nothing left
> > > to take away", along with a few lessons from RFC 3439.
> > >
> > > I am positive that a decade from now we'll see ancient servers still
> > > using VLAN-only hypervisor switches (or untagged interfaces), so
> > > there may well be a need for an NVO3-to-VLAN gateway, but we
> > > shouldn't continuously focus our efforts on something that's probably
> > > going to be a rare corner case a few years from now.
> > >
> > > ... or I may be completely wrong. Wouldn't be the first time.
> > > Ivan
_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3
