On Wed, Feb 8, 2017 at 5:34 PM, Marcus Furlong <furlo...@gmail.com> wrote:

> On 9 February 2017 at 09:34, Trey Palmer <t...@mailchimp.com> wrote:
> > The multisite configuration available starting in Jewel sounds more
> > appropriate for your situation.
> >
> > But then you need two separate clusters, each large enough to contain
> all of
> > your objects.
>
> On that note, is anyone aware of documentation that details the
> differences between federated gateway and multisite? And where each
> would be most appropriate?


They are similar, but the code was largely reworked and simplified for
multisite. Unlike the federated setup, a multisite configuration allows
writes to non-master zones.
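For reference, the Jewel multisite setup is driven entirely by radosgw-admin.
A rough sketch of the steps (the realm, zonegroup, zone names, endpoints and
keys below are placeholders I made up, not anything from this thread):

  # On the master zone side:
  radosgw-admin realm create --rgw-realm=example --default
  radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw-master:80 --master --default
  radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=master-zone --endpoints=http://rgw-master:80 --master --default
  radosgw-admin user create --uid=sync-user --display-name="Sync User" --system
  radosgw-admin period update --commit

  # On the secondary zone side, pull the realm/period and create the local zone
  # using the system user's credentials:
  radosgw-admin realm pull --url=http://rgw-master:80 --access-key=<key> --secret=<secret>
  radosgw-admin period pull --url=http://rgw-master:80 --access-key=<key> --secret=<secret>
  radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=secondary-zone --endpoints=http://rgw-secondary:80 --access-key=<key> --secret=<secret>
  radosgw-admin period update --commit

Each gateway then gets its zone set via rgw_zone in its ceph.conf section and
is restarted, after which data and metadata sync between the zones in both
directions.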


>
> Regards,
> Marcus.
> --
> Marcus Furlong
>

On Tue, Feb 7, 2017 at 12:17 PM, Daniel Picolli Biazus
<picol...@gmail.com> wrote:
> Hi Guys,
>
> I have been planning to deploy a Ceph Cluster with the following hardware:
>
> OSDs:
>
> 4 Servers Xeon D 1520 / 32 GB RAM / 5 x 6TB SAS 2 (6 OSD daemons per
> server)
>
> Monitor/Rados Gateways
>
> 5 Servers Xeon D 1520 / 32 GB RAM / 2 x 1TB SAS 2 (5 MON daemons / 4
> RadosGW daemons)
>
> Usage: Object Storage only
>
>     However, I need to deploy 2 OSD and 3 MON servers in the Miami
> datacenter and another 2 OSD and 2 MON servers in the Montreal datacenter.
> The latency between these datacenters is 50 milliseconds.
>    Considering this scenario, should I use Federated Gateways or should I
> use a single cluster?

There's nothing stopping you from separating the datacenters with CRUSH root
definitions, so that one zone serves from pools that exist solely in one
datacenter and the other zone serves the other. That way the transfer of data
and metadata happens at the RadosGW level (which is more latency-tolerant
than RADOS; this is what it was designed to do), while still being managed
at one point.
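
To illustrate the CRUSH side of that (the bucket, rule and pool names here
are hypothetical, and the rule IDs would come from "ceph osd crush rule dump"):

  # One CRUSH root per datacenter, with the local hosts moved under it:
  ceph osd crush add-bucket miami root
  ceph osd crush add-bucket montreal root
  ceph osd crush move mia-osd-host1 root=miami
  ceph osd crush move mtl-osd-host1 root=montreal

  # A replicated rule per root, with host as the failure domain:
  ceph osd crush rule create-simple miami-rule miami host
  ceph osd crush rule create-simple montreal-rule montreal host

  # Point each zone's RGW data pool at its local rule (Jewel still calls this
  # crush_ruleset):
  ceph osd pool set miami-zone.rgw.buckets.data crush_ruleset <miami rule id>
  ceph osd pool set montreal-zone.rgw.buckets.data crush_ruleset <montreal rule id>

The same would apply to each zone's index, meta and log pools, so every pool
a zone touches stays on OSDs in its own datacenter.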

It's worth testing both configurations, as well as the effects of latency
on your monitors. In some cases I'd consider trying to source another MON
and running two separate clusters, but simply put, YMMV.

>
> Thanks in advance
> Daniel

-- 
Brian Andrus | Cloud Systems Engineer | DreamHost
brian.and...@dreamhost.com | www.dreamhost.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
