Does the new host show up under the proper CRUSH bucket?  Do its OSDs?  Please
send the output of `ceph osd tree`.
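
For reference, something along these lines (all standard Ceph CLI, just a
sketch) should show whether the new OSDs actually joined with weight and
whether raw capacity grew:

    # The new host and its OSDs should appear under the expected CRUSH
    # bucket, be up/in, and have non-zero CRUSH weights:
    ceph osd tree
    ceph osd df tree

    # Raw cluster capacity vs. what the pools can actually use:
    ceph df

If the host landed under the wrong bucket, or its OSDs came up with CRUSH
weight 0, the pool won't gain any capacity no matter how many PGs it has.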


>> 
>> 
>>      > Hello guys,
>>      > Let's say I have a cluster with 4 nodes with 24 SSDs each, and a
>>      > single pool that consumes all OSDs of all nodes. After adding
>>      > another host, I noticed that no extra space was added. Can this
>>      > be a result of the number of PGs I am using?
>>      >
>>      > I mean, when adding more hosts/OSDs, should I always consider
>>      > increasing the number of PGs of a pool?
>>      >
>> 
>>      ceph osd tree
>> 
>>      shows all OSDs up and with the correct weights?
>> 
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
