Let me correct a few small things:

we have 6 nodes: 3 OSD nodes and 3 gateway nodes (which run the RGW, MDS, and NFS services).
You are correct: 2 of the 3 OSD nodes each have one new 10 TiB disk.

About your suggestion to add another OSD host: we will. But first we need to end this nightmare; my NFS share, which holds 10 TiB of data, is down :(
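For the record, assuming a cephadm-managed cluster (I am not certain that matches every setup; the hostname and device path below are just placeholders), adding a new OSD host would look roughly like:

ceph orch host add osd-node-4
ceph orch daemon add osd osd-node-4:/dev/sdb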

My ratios:
ceph osd dump | grep ratio
full_ratio 0.95
backfillfull_ratio 0.92
nearfull_ratio 0.85
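From what I have read, these ratios can be raised temporarily to get I/O flowing again while data is moved off the full OSDs. The values below are only an example, and I understand that pushing full_ratio higher is risky and should be reverted once the cluster has room again:

ceph osd set-nearfull-ratio 0.87
ceph osd set-backfillfull-ratio 0.93
ceph osd set-full-ratio 0.96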