Please make list posts in plain text.
> I am working on the plan for a 3 datacenter setup using Ceph (in Proxmox
> nodes).
>
> Each datacenter has 3 physical nodes to start with and 100Gbit switches. I
> will also have 2 x 100 Gbit/s connectivity between the datacenters (each
> datacenter to each other).
> The physical nodes have 2 x 100Gbit/s for the public network and 2 x
> 100Gbit/s for the cluster network.

You almost certainly don't need a cluster / replication network unless these
are exceptionally large nodes.

> About this setup I have 2 questions.
>
> Is it even necessary to evaluate a stretched cluster, since the WAN
> connections are as fast as the local ones (including the latency, since it
> is only 25km)?

There's more to latency than just distance. What is the measured latency?
A:B, B:C, C:A?

> If using a stretched pool across all 3 datacenters, what happens if one
> datacenter fails? I did read the documentation and the question came up
> because I do not understand the sentence "Individual Stretch Pools do not
> support I/O operations during a netsplit scenario between two or more
> zones" completely. Does it mean there is no I/O already if one datacenter
> fails?

That sentence refers to a non-stretch cluster.

Tell us why you're spreading across three DCs, what you're trying to
accomplish, and what your performance requirements are.

AIUI a stretch 3-site cluster requires all pools to be replicated, size=6.

Explicit stretch mode treats the mon quorum in a different way. With two OSD
sites you deploy a tiebreaker at a third site, which is possibly just a cloud
VM. With three OSD sites, I might speculate that one would deploy 7 mons:
2 at each OSD site + tiebreaker.

Operations on a stretch cluster can be slow. Sometimes separate clusters with
asynchronous replication make more sense.

> If I am on the wrong path, maybe someone has a link for me, where I can
> find information on this setup?
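For the two-OSD-site + tiebreaker layout, the mon side looks roughly like the
below, per the Ceph stretch mode docs. The mon names (a-e) and the
datacenter bucket names are hypothetical, and "stretch_rule" has to be a
CRUSH rule you've already created that replicates across both OSD sites;
treat this as a sketch, not a recipe:

    # hypothetical mon names and CRUSH locations, sketch only
    ceph mon set election_strategy connectivity
    ceph mon set_location a datacenter=site1
    ceph mon set_location b datacenter=site1
    ceph mon set_location c datacenter=site2
    ceph mon set_location d datacenter=site2
    ceph mon set_location e datacenter=site3
    ceph mon enable_stretch_mode e stretch_rule datacenter

For the three-OSD-site case there is no enable_stretch_mode equivalent AFAIK;
you'd rely on CRUSH rules spreading replicas across the datacenter buckets.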
>
> Cheers
> Soeren
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io