Please send the outputs of

        ceph osd df
        ceph df
        ceph osd crush dump
        ceph osd tree

First, I'll ask whether you intend for size=2 to be final, as that is a 
recipe for data loss.
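
For context: the MAX AVAIL column in ceph df output is derived from the free 
space of the fullest OSDs under the pool's CRUSH root, divided by the pool's 
replica count, so going from size=1 to size=2 roughly halves what the pool 
reports as available. Adding OSDs will raise it again once CRUSH starts 
placing data on them.

If you do settle on the usual size=3 / min_size=2, something like the 
following should do it (assuming the data pool is named cephfs_data; 
substitute your actual pool name):

        # check the current replica count (pool name is hypothetical)
        ceph osd pool get cephfs_data size
        # raise to three replicas; the extra copies backfill in the background
        ceph osd pool set cephfs_data size 3
        # keep serving I/O as long as two replicas are available
        ceph osd pool set cephfs_data min_size 2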

> On Feb 27, 2025, at 12:40 PM, quag...@bol.com.br wrote:
> 
> Hello, 
>     I recently installed a new cluster.
> 
>     After the first node was working, I started transferring the files I 
> needed. Since I was in a hurry to run rsync, I set size=1 on the CephFS 
> data pool. 
>     After a few days, once I was able to add a second node, I set size=2 
> for that pool.
> 
>     Replicas of the existing objects are already being written, but the 
> available space has not been updated yet.
> 
>     The available space should increase automatically as I add more 
> disks, right?
> 
>     Could you help me identify where I'm going wrong?
> 
> Rafael.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
