The default CRUSH rule uses "host" as the failure domain, so in order
to deploy on one host you will need to create a CRUSH rule that
specifies "osd" instead. Then simply adding more hosts with OSDs will
result in automatic rebalancing. Once you have enough hosts to satisfy
the default rule (3 for replicated size = 3), you can switch the
pool(s) back to it.
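As a sketch, that might look like the following (the pool name "mypool"
and rule name "replicated_osd" are just examples; "replicated_rule" is
the usual name of the default rule, but check "ceph osd crush rule ls"
on your cluster):

```shell
# Create a replicated rule whose failure domain is "osd" rather than "host"
ceph osd crush rule create-replicated replicated_osd default osd

# Point the pool at the new rule while running on a single host
ceph osd pool set mypool crush_rule replicated_osd

# Later, once you have >= 3 hosts, move the pool back to the default rule
ceph osd pool set mypool crush_rule replicated_rule
```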

On Thu, Nov 21, 2019 at 7:46 AM Alfredo De Luca
<alfredo.del...@gmail.com> wrote:
>
> Hi all.
> We are doing some tests on how to scale out nodes on Ceph Nautilus.
> Basically we want to try installing Ceph on one node and then scaling
> out to two or more nodes. How can we do that?
>
> Every node has 6 disks; maybe we can use the CRUSH map to achieve this?
>
> Any thoughts/ideas/recommendations?
>
>
> Cheers
>
>
> --
> Alfredo
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com