Re: [ceph-users] Adding multiple OSD

2017-12-05 Thread Richard Hesketh
On 05/12/17 09:20, Ronny Aasen wrote:
> On 05. des. 2017 00:14, Karun Josy wrote:
>> Thank you for the detailed explanation!
>>
>> Got another doubt.
>>
>> This is the total space available in the cluster:
>>
>> TOTAL : 23490G
>> Used  : 10170G
>> Avail : 13320G
>>
>> But ecpool shows max avail

Re: [ceph-users] Adding multiple OSD

2017-12-05 Thread Ronny Aasen
On 05. des. 2017 00:14, Karun Josy wrote:

Thank you for the detailed explanation!

Got another doubt. This is the total space available in the cluster:

TOTAL : 23490G
Used  : 10170G
Avail : 13320G

But ecpool shows max avail as just 3 TB. What am I missing?

==
$ ceph df
GLOBAL:
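
The rest of Ronny's reply is truncated in this preview. As a rough sketch of why an EC pool's MAX AVAIL can sit far below the raw Avail figure (assuming the k=5, m=3 profile quoted later in this thread; the numbers below are just the figures from the ceph df output above), the accounting looks roughly like this:

$ ceph df detail     # MAX AVAIL is reported per pool, after EC/replication overhead
# Raw space free in the cluster:   13320G
# Usable fraction for k=5, m=3:    5 / (5 + 3) = 0.625
# Upper bound on EC pool data:     13320G * 0.625 = 8325G
#
# MAX AVAIL is not simply that upper bound: it is derived from the
# most-full OSD under the pool's CRUSH rule, so uneven data
# distribution or uneven OSD sizes can pull it well below 8 TB,
# e.g. the ~3 TB reported here.
$ ceph osd df        # check per-OSD %USE; the fullest OSD caps MAX AVAIL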

Re: [ceph-users] Adding multiple OSD

2017-12-04 Thread Karun Josy
Thank you for the detailed explanation!

Got another doubt. This is the total space available in the cluster:

TOTAL : 23490G
Used  : 10170G
Avail : 13320G

But ecpool shows max avail as just 3 TB. What am I missing?

==
$ ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USE

Re: [ceph-users] Adding multiple OSD

2017-12-04 Thread Karun Josy
Thank you for the detailed explanation!

Got another doubt. This is the total space available in the cluster:

TOTAL : 23490G
Used  : 10170G
Avail : 13320G

But ecpool shows max avail as just 3 TB.

Karun Josy

On Tue, Dec 5, 2017 at 1:06 AM, David Turner wrote:
> No, I would only add disks to 1 f

Re: [ceph-users] Adding multiple OSD

2017-12-04 Thread David Turner
No, I would only add disks to 1 failure domain at a time. So in your situation, where you're adding 2 more disks to each node, I would recommend adding the 2 disks to 1 node at a time. Your failure domain is crush-failure-domain=host, so you can lose a host and only lose 1 copy of the data.
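
A minimal sketch of what "one failure domain at a time" looks like in practice (host and device names here are hypothetical, not taken from the thread):

$ ceph osd crush rule dump                 # confirm the rule's failure domain is "host"
# On the first host only, create the two new OSDs:
$ ceph-volume lvm create --data /dev/sdX   # hypothetical device
$ ceph-volume lvm create --data /dev/sdY   # hypothetical device
# Let backfill finish before moving on:
$ ceph -s                                  # proceed to the next host once the cluster is HEALTH_OK
# Repeat on each remaining host, one host at a time.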

Re: [ceph-users] Adding multiple OSD

2017-12-04 Thread Karun Josy
Thanks for your reply! I am using an erasure-coded profile with k=5, m=3 settings:

$ ceph osd erasure-code-profile get profile5by3
crush-device-class=
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=5
m=3
plugin=jerasure
technique=reed_sol_van
w=8

Cluster has 8 nod
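
For reference, with k=5 and m=3 every object is split into 5 data chunks plus 3 coding chunks, and with crush-failure-domain=host each chunk lands on a different host. This profile therefore needs at least 8 hosts (matching the cluster size mentioned above), tolerates the loss of up to 3 hosts, and stores data at 8/5 = 1.6x raw overhead. A sketch of how such a profile and pool are typically created (the profile name mirrors the one quoted above; the PG count of 256 is only an illustrative value):

$ ceph osd erasure-code-profile set profile5by3 \
      k=5 m=3 crush-failure-domain=host plugin=jerasure technique=reed_sol_van
$ ceph osd pool create ecpool 256 256 erasure profile5by3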

Re: [ceph-users] Adding multiple OSD

2017-12-04 Thread David Turner
Depending on how well you burn-in/test your new disks, I like to add only 1 failure domain of disks at a time, in case some of the disks you're adding turn out to be bad. If you are confident that your disks aren't likely to fail during the backfilling, then you can go with more. I just added 8 servers (16 OSD
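
When a whole batch of new disks goes in at once, a common way to keep the resulting data movement gentle (a hedged sketch using standard cluster flags and OSD options, not something prescribed in this thread) is:

$ ceph osd set norebalance        # hold off rebalancing while the new OSDs are created
$ ceph osd set nobackfill
# ... create the new OSDs on the new servers ...
$ ceph osd unset nobackfill
$ ceph osd unset norebalance
# Throttle recovery so client I/O stays responsive while data moves:
$ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'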