Hello Marc,

In my view, that's exactly the main reason people use Ceph: it gets
more reliable the more nodes you put in the cluster. You should take a look
at the documentation and try to make use of CRUSH placement rules, erasure
coding, or whatever fits your needs. I'm still fairly new to Ceph (been
using it for about a year), but I'd say your idea *may be* good, though it
may be a little overkill too =D
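For instance, instead of splitting into separate clusters, you can isolate
failure domains within one cluster. A rough sketch, assuming a CRUSH
hierarchy with racks and hypothetical pool/profile names (myprofile, ecpool,
rack-rule), might look like:

```shell
# Define an erasure-code profile that spreads chunks across racks
# (k=4 data + m=2 coding chunks tolerates two simultaneous failures)
ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=rack

# Create an erasure-coded pool using that profile
ceph osd pool create ecpool 128 128 erasure myprofile

# Or, for replicated pools, a CRUSH rule that keeps each replica
# in a different rack
ceph osd crush rule create-replicated rack-rule default rack
ceph osd pool set mypool crush_rule rack-rule
```

With rules like these, a whole-rack failure shouldn't take out all copies of
any object, which addresses much of the blast-radius concern without running
multiple clusters.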

Regards,

On Mon, May 14, 2018 at 2:26 PM Michael Kuriger <mk7...@dexyp.com> wrote:

> The more servers you have in your cluster, the less impact a failure
> causes to the cluster. Monitor your systems and keep them up to date.  You
> can also isolate data with clever crush rules and creating multiple zones.
>
>
>
> *Mike Kuriger*
>
>
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *Marc Boisis
> *Sent:* Monday, May 14, 2018 9:50 AM
> *To:* ceph-users
> *Subject:* [ceph-users] a big cluster or several small
>
>
>
>
> Hi,
>
>
>
>
> Currently we have a 294 OSD (21 hosts/3 racks) cluster with RBD clients
> only, 1 single pool (size=3).
>
>
>
> We want to divide this cluster into several to minimize the risk in case
> of failure/crash.
>
> For example, a cluster for the mail, another for the file servers, a test
> cluster ...
>
> Do you think it's a good idea?
>
>
>
> Do you have experience feedback on multiple clusters in production on the
> same hardware:
>
> - containers (LXD or Docker)
>
> - multiple clusters on the same host without virtualization (with
> ceph-deploy ... --cluster ...)
>
> - multiple pools
>
> ...
>
>
>
> Do you have any advice?
>
>
>
>
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
-- 

João Paulo Bastos
DevOps Engineer at Mav Tecnologia
Belo Horizonte - Brazil
+55 31 99279-7092
