Re: [ceph-users] Scaling out

2019-11-21 Thread Alfredo De Luca
Thanks heaps Nathan. That's what we thought and what we wanted to implement, but I wanted to double check with the community. Cheers On Thu, Nov 21, 2019 at 2:42 PM Nathan Fish wrote: > The default crush rule uses "host" as the failure domain, so in order > to deploy on one host you will need to make

Re: [ceph-users] Scaling out

2019-11-21 Thread Nathan Fish
The default crush rule uses "host" as the failure domain, so in order to deploy on one host you will need to make a crush rule that specifies "osd". Then simply adding more hosts with OSDs will result in automatic rebalancing. Once you have enough hosts to satisfy the crush rule (3 for replicated pools by default), you can switch the failure domain back to "host".
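
[Editor's note: a minimal sketch of the commands Nathan describes; the rule name replicated_osd and pool name mypool are placeholders, not from the thread.]

    # Create a replicated rule whose failure domain is "osd" rather than
    # "host", so replicas may share a host during the single-node phase.
    ceph osd crush rule create-replicated replicated_osd default osd

    # Point an existing pool at the new rule (pool name is a placeholder).
    ceph osd pool set mypool crush_rule replicated_osd

    # Confirm the rule is in place.
    ceph osd crush rule ls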

[ceph-users] Scaling out

2019-11-21 Thread Alfredo De Luca
Hi all. We are doing some tests on how to scale out nodes on Ceph Nautilus. Basically we want to install Ceph on one node and then scale up to 2+ nodes. How can we do so? Every node has 6 disks, and maybe we can use the crushmap to achieve this? Any thoughts/ideas/recommendations? Cheers --
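
[Editor's note: for the scale-out step Alfredo asks about, a hedged sketch of reverting to host-level failure domains once three or more hosts carry OSDs; rule and pool names are again placeholders.]

    # With 3+ hosts in the cluster, create a rule that spreads replicas
    # across hosts again.
    ceph osd crush rule create-replicated replicated_host default host

    # Switching the pool to the new rule triggers automatic rebalancing.
    ceph osd pool set mypool crush_rule replicated_host

    # Watch recovery/rebalancing progress.
    ceph -s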