Hello,
I am running a Nautilus cluster with 5 OSD nodes / 90 disks that is
used exclusively for S3. The disks are identical, but per-OSD
utilization ranges from 9% to 82%, and I am starting to get
backfill_toofull errors even though only 150TB of the cluster's 650TB
is in use.
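For context, here is how I am eyeballing the spread and the PG count
per OSD; this assumes jq is installed, and that 'ceph osd df -f json'
exposes per-OSD 'utilization' and 'pgs' fields as it does on my
Nautilus boxes:

    # per-OSD name, %USE and PG count, sorted by utilization
    ceph osd df -f json | \
        jq -r '.nodes[] | "\(.name) \(.utilization) \(.pgs)"' | sort -nk2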
- Other than manually CRUSH-reweighting OSDs (a sketch of what I mean
follows below, after these questions), what other options do I have?
- What would cause this uneven distribution? Is there any
documentation on how to track down what's going on?
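For reference, the manual reweighting I mean is along these lines
(osd.42 and the new weight are placeholders; the CRUSH weights here
normally track disk capacity in TiB, so I nudge them down slightly on
the overfull OSDs):

    # lower the CRUSH weight of an overfull OSD so PGs migrate off it
    ceph osd crush reweight osd.42 6.5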
output of 'ceph osd df" is at https://pastebin.com/17HWFR12
Thank you!