Thanks a lot, guys. I will report eventually on the steps we choose.

Thanks

On 5 October 2017 at 14:40, Voloshanenko Igor <igor.voloshane...@gmail.com> wrote:

> Thanks, Remi!
>
> Brilliant advice!
>
> Thu, 5 Oct 2017 at 15:38, Remi Bergsma <rberg...@schubergphilis.com>:
>
> > Hi,
> >
> > We solved this problem by splitting the network into an underlay and
> > an overlay network. The underlay is the physical network, including
> > the management and storage traffic from the hypervisors and such. The
> > simpler, the better. The overlay holds your services layer, for
> > example the guest networks for your clients. Since you're already
> > using VXLAN, that shouldn't be hard. In our setup, each POD is a rack
> > with top-of-rack routing, which means an L2 domain stays within a
> > rack. If something goes wrong, only one rack (aka POD) has issues,
> > not the rest. We have many PODs in several zones. The overlay creates
> > tunnels (we use Nicira, but VXLAN, OVN or Nuage can do the same) and
> > those can be created over L3-interconnected PODs. Effectively, this
> > gives you L2 (overlay) over L3 (underlay). Storage-wise we have both
> > cluster-wide storage (in the POD) and zone-wide storage (basically a
> > POD with just storage).
> >
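> > To make the mechanics concrete, here is a minimal sketch (Python with
> > pyroute2; the interface name, VNI and multicast group are made-up
> > values) of how an L2-over-L3 VXLAN tunnel endpoint can be brought up
> > on a Linux/KVM host - CloudStack or an SDN controller automates this
> > step for you:
> >
> >     from pyroute2 import IPRoute
> >
> >     # Assumed values: VNI 7001 for one guest network, eth0 as the
> >     # routed underlay uplink, a multicast group for BUM traffic.
> >     ipr = IPRoute()
> >     underlay = ipr.link_lookup(ifname="eth0")[0]
> >     ipr.link("add", ifname="vxlan7001", kind="vxlan",
> >              vxlan_id=7001,             # VNI, one per guest network
> >              vxlan_link=underlay,       # encapsulate out via eth0
> >              vxlan_group="239.0.70.1",  # underlay must route multicast
> >              vxlan_port=4789)           # standard VXLAN UDP port
> >     ipr.link("set", index=ipr.link_lookup(ifname="vxlan7001")[0],
> >              state="up")
> >
> > Since the tunnel endpoints only need IP reachability to each other,
> > the guest broadcast domain no longer has to be stretched across racks.
> >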
> > VMs in a given guest network can run in any of the PODs and still
> > have an L2 connection between them, even though the actual physical
> > network is L3. This is one of the great benefits of SDN
> > (software-defined networking). It's pretty much the best of both
> > worlds. We scaled this quite a bit and it's rock solid.
> >
> > Regards,
> > Remi
> >
> >
> > On 05/10/2017, 13:57, "Rafael Weingärtner" <raf...@autonomiccs.com.br>
> > wrote:
> >
> >     Exactly; the management IP range is already defined per POD. For
> >     the public network you could work it out by dedicating domains
> >     per POD and then dedicating a pool of IPs to each domain. The
> >     guest networking problem is solved if you force users from, let's
> >     say, the same domain to stay in the same POD.
> >
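> >     As a rough sketch of that idea, this is how a public IP range
> >     could be dedicated to one domain through the API (using the
> >     community "cs" Python client; the endpoint, keys, UUIDs, VLAN tag
> >     and addresses are placeholders, and the exact createVlanIpRange
> >     parameters should be checked against your ACS version):
> >
> >         from cs import CloudStack
> >
> >         # Hypothetical endpoint and credentials.
> >         api = CloudStack(endpoint="https://acs.example.com/client/api",
> >                          key="API_KEY", secret="SECRET_KEY")
> >
> >         # Dedicate one public range to a domain, so that domain's VMs
> >         # (pinned to one POD) draw public IPs from "their" rack.
> >         api.createVlanIpRange(zoneid="ZONE_UUID",
> >                               forvirtualnetwork=True,
> >                               vlan="301",              # public VLAN tag
> >                               gateway="203.0.113.1",
> >                               netmask="255.255.255.0",
> >                               startip="203.0.113.10",
> >                               endip="203.0.113.50",
> >                               domainid="DOMAIN_UUID")  # dedicates it
> >
> >     Whether that range then really stays rack-local still depends on
> >     how the public VLAN is delivered to that POD.
> >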
> >     The other approach, as you said, would be a zone per POD.
> >
> >     Please keep us posted on your tests; your findings may be
> >     valuable for spotting improvements in the ACS design and for
> >     helping others with more complex deployments.
> >
> >
> >     On 10/5/2017 6:51 AM, Andrija Panic wrote:
> >     > Thanks Rafael,
> >     >
> >     > yes, that is my expectation as well (same broadcast domain for
> >     > the Guest network), so it doesn't really solve my problem (the
> >     > same is expected for the Public network at least, if not for
> >     > the other networks too).
> >     > The other option seems to be a zone per every X racks...
> >     >
> >     > Will see.
> >     >
> >     > Thanks
> >     >
> >     > On 4 October 2017 at 22:25, Rafael Weingärtner
> >     > <raf...@autonomiccs.com.br> wrote:
> >     >
> >     >> I think this can cause problems if not properly managed,
> >     >> unless you concentrate domains/users in PODs. Otherwise, you
> >     >> might end up with some VMs of the same user/domain/project in
> >     >> different PODs, and if they are all in the same VPC, for
> >     >> instance, we would expect them to be in the same broadcast
> >     >> domain.
> >     >>
> >     >> Applying what you want may require some design and testing,
> >     >> but it feels feasible with ACS.
> >     >>
> >     >>
> >     >> On 10/4/2017 5:19 PM, Andrija Panic wrote:
> >     >>
> >     >>> Anyone? I know I'm trying to squeeze some free paid
> >     >>> consulting here :), but I am trying to understand whether
> >     >>> PODs make sense in this situation....
> >     >>>
> >     >>> Thx
> >     >>>
> >     >>> On 2 October 2017 at 10:21, Andrija Panic
> >     >>> <andrija.pa...@gmail.com> wrote:
> >     >>>
> >     >>>> Hi guys,
> >     >>>> Sorry for the long post below...
> >     >>>>
> >     >>>> I was wondering if someone could shed some light on a
> >     >>>> multiple-POD networking design (L2 vs L3) for me - the idea
> >     >>>> is to get smaller L2 broadcast domains (is there any other
> >     >>>> reason for PODs?).
> >     >>>>
> >     >>>> We might decide to transition from the current single-POD,
> >     >>>> single-cluster (single-zone) setup to a multiple-POD design
> >     >>>> (or not...) - we will eventually grow to over 50 racks'
> >     >>>> worth of KVM hosts (1000+ hosts), so I'm trying to
> >     >>>> understand the best options to avoid having insanely huge
> >     >>>> L2 broadcast domains...
> >     >>>>
> >     >>>> The mgmt network is routed between PODs; that part is clear.
> >     >>>>
> >     >>>> We have dedicated Primary and Secondary Storage networks
> >     >>>> (VLAN interfaces configured locally on all KVM hosts,
> >     >>>> providing a direct L2 connection, not shared with the mgmt
> >     >>>> network), and the same for the Public and Guest networks...
> >     >>>> (advanced networking in the zone, VXLAN used as isolation).
> >     >>>>
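> >     >>>> For reference, this is roughly what one such per-host
> >     >>>> storage VLAN interface amounts to - a sketch in Python with
> >     >>>> pyroute2, where the uplink name, VLAN tag and addressing
> >     >>>> are made-up examples:
> >     >>>>
> >     >>>>     from pyroute2 import IPRoute
> >     >>>>
> >     >>>>     # Assumed per-host values: bond0 uplink, VLAN 120 for
> >     >>>>     # the primary storage network.
> >     >>>>     ipr = IPRoute()
> >     >>>>     uplink = ipr.link_lookup(ifname="bond0")[0]
> >     >>>>     ipr.link("add", ifname="bond0.120", kind="vlan",
> >     >>>>              link=uplink, vlan_id=120)
> >     >>>>     vif = ipr.link_lookup(ifname="bond0.120")[0]
> >     >>>>     ipr.addr("add", index=vif, address="10.10.120.11",
> >     >>>>              prefixlen=24)
> >     >>>>     ipr.link("set", index=vif, state="up")
> >     >>>>
> >     >>>> Every host carrying that VLAN tag sits in one L2 segment,
> >     >>>> which is exactly the broadcast domain that would have to be
> >     >>>> stretched across racks.
> >     >>>>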
> >     >>>> Now, with multiple PODs, since the Public and Guest
> >     >>>> networks are defined at the zone level (not the POD level),
> >     >>>> and Primary Storage currently has the same zone-wide
> >     >>>> setup... what would be the best way to make this traffic
> >     >>>> stay inside the PODs as much as possible, and is this
> >     >>>> possible at all? Perhaps I would need to look into multiple
> >     >>>> zones instead of PODs.
> >     >>>>
> >     >>>> My humble conclusion, based on having all these dedicated
> >     >>>> networks, is that I would need to stretch (L2-attach as a
> >     >>>> VLAN interface) the Primary and Secondary Storage networks
> >     >>>> across all racks/PODs, and also stretch the Guest VLAN
> >     >>>> (which carries all the guest VXLAN tunnels), and again the
> >     >>>> same for the Public network... and this again creates huge
> >     >>>> broadcast domains and doesn't solve my issue... I don't see
> >     >>>> another way to make networking work across PODs.
> >     >>>>
> >     >>>> Any suggestion is most welcome (and if it is of any use as
> >     >>>> info - we don't plan for any Xen, VMware, etc.; we will
> >     >>>> stay purely with KVM).
> >     >>>>
> >     >>>> Thanks
> >     >>>> Andrija
> >     >>>>
> >     >>>>
> >     >>>
> >     >> --
> >     >> Rafael Weingärtner
> >     >>
> >     >>
> >     >
> >
> >     --
> >     Rafael Weingärtner
> >
> >
> >
> >
>



-- 

Andrija Panić
