On Thu, Apr 20, 2017 at 2:13 AM, Maxime Guyot <maxime.gu...@elits.com>
wrote:

> > 2) Why did you choose to run the Ceph nodes on loopback interfaces as
> > opposed to the /24 for the "public" interface?
>
> I can’t speak for this example, but in a Clos fabric you generally want to
> assign the routed IPs to a loopback rather than to physical interfaces. This
> way, if one of the links goes down (e.g. the public interface), the routed
> IP is still advertised over the other link(s).
>
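
For reference, the setup Maxime describes usually looks something like the
fragment below. This is a minimal sketch, assuming FRR/Cumulus Quagga with
BGP unnumbered sessions on the fabric-facing interfaces; the ASN and the
10.0.0.1/32 loopback address are placeholders, not values from this thread:

    ! bgpd/frr configuration fragment (illustrative only)
    router bgp 65001
     bgp router-id 10.0.0.1
     ! one unnumbered eBGP session per fabric uplink
     neighbor enp3s0f0 interface remote-as external
     neighbor enp3s0f1 interface remote-as external
     address-family ipv4 unicast
      ! advertise the /32 that lives on lo/dummy0
      network 10.0.0.1/32
     exit-address-family

As long as at least one uplink (and its BGP session) stays up, that /32
remains reachable from the rest of the fabric.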

Announcing the routed IP from a loopback only makes sense if you're running
multiple ToR switches per rack for the public leaf network. Multiple public
ToR switches per rack is not very common; most Clos crossbar networks run a
single ToR switch per rack. Several guides on the topic (including Arista's
and Cisco's) suggest running something like MLAG in a layer 2 domain between
the switches if you need switch redundancy inside the rack. That adds
complexity, and most people decide it's not worth it and instead scale out
across racks to get the redundancy and survivability that multiple ToR
switches would offer.

On Thu, Apr 20, 2017 at 4:04 AM, Jan Marquardt <j...@artfiles.de> wrote:

>
> Maxime, thank you for clarifying this. Each server is configured like this:
>
> lo/dummy0: Loopback interface; holds the IP address used with Ceph,
> which is announced into the fabric via BGP.
>
> enp5s0: Management interface, used only for managing the box.
> There should not be any Ceph traffic on this one.
>
> enp3s0f0: connected to sw01 and used for BGP
> enp3s0f1: connected to sw02 and used for BGP
> enp4s0f0: connected to sw01 and used for BGP
> enp4s0f1: connected to sw02 and used for BGP
>
> These four interfaces are supposed to transport the Ceph traffic.
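
For anyone reproducing this, the loopback/dummy part of such a setup is
usually just a few iproute2 commands. Again only a sketch; the 10.0.0.1/32
address is a placeholder, not taken from Jan's mail:

    # create a dummy interface and put the Ceph address on it
    ip link add dummy0 type dummy
    ip addr add 10.0.0.1/32 dev dummy0
    ip link set dummy0 up

Ceph then binds to that address (public addr / cluster addr in ceph.conf)
while BGP advertises the /32 over the four uplinks.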


See above. Why are you running multiple public ToR switches in this rack?
I'd suggest either joining them in a single layer 2 domain so they
participate in the Clos fabric as a single unit, or scaling out across racks
(preferred). Why bother with multiple switches in a rack when you can just
use multiple racks? That's the beauty of Clos: just add more spines if you
need more leaf-to-leaf bandwidth.

How many OSDs, servers, and racks are planned for this deployment?

-richard
