Hi,

One of our users is migrating 1.2 billion objects into a single bucket from another 
system (Cassandra), and we are seeing BlueFS spillover on about 50% of the OSDs 
in our clusters.
We have 600-900 GB DB devices, but it seems they can't hold everything.
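
We spot it via the health warning and the bluefs perf counters, e.g. on the host 
of one of the affected OSDs (osd.12 is just an example id, jq only for readability):

  ceph health detail | grep BLUEFS_SPILLOVER
  ceph daemon osd.12 perf dump | jq '.bluefs | {db_total_bytes, db_used_bytes, slow_used_bytes}'
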
The cluster is also very unstable; I can't set recovery operations or backfills 
above 1, because OSDs start rebooting and that makes the recovery very slow.
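I.e. roughly this is the most I can run with right now (shown in the ceph config 
style; these are the usual recovery/backfill throttles):

  ceph config set osd osd_max_backfills 1
  ceph config set osd osd_recovery_max_active 1
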
1.
What did I miss when planning for this? The OSDs are 15.3 TB SSDs.
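(For scale, that puts the DBs at roughly 600/15,300 ≈ 3.9% up to 900/15,300 ≈ 5.9% 
of each OSD.)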

2.
If I remove the DB device from the NVMe with ceph-bluestore-tool and keep 
everything on the block device, would spillover still be an issue? I guess if they 
stay together there is nowhere to spill over to.
I guess the spilled-over OSDs need to be compacted before removing the DB.
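Something like this is what I have in mind (osd.12 and the default paths are just 
placeholders; I assume the OSD has to be down for the migrate step):

  # compact first, while the OSD is still up
  ceph tell osd.12 compact

  # then, with the OSD stopped, merge the DB back onto the block device
  ceph-bluestore-tool bluefs-bdev-migrate \
      --path /var/lib/ceph/osd/ceph-12 \
      --devs-source /var/lib/ceph/osd/ceph-12/block.db \
      --dev-target /var/lib/ceph/osd/ceph-12/block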

3.
Correct me if I'm wrong, but the separate DB device just holds the metadata that 
tells BlueStore where each piece of data lives on the block device. So if I remove 
it, the OSD will still know where the data is, it will just look that metadata up 
on the block device itself rather than on the separate NVMe.
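(As a sanity check afterwards, I'd expect the OSD to stop reporting a dedicated 
DB device, e.g. via something like:

  ceph osd metadata 12 | grep bluefs

though I'm going from memory on what that reports.)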

Thank you

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
