[ceph-users] Rebuilding data resiliency after adding new OSDs stuck for so long at 5%

2023-09-13 Thread sharathvuthpala
We have a user-provisioned instance (bare-metal installation) of an OpenShift cluster running on version 4.12, and we are using OpenShift Data Foundation as the storage system. Earlier we had 3 disks attached to the storage system and 3 OSDs available in the cluster. Today, while adding additional disks, the Rebuilding Data Resiliency process got stuck at 5%.
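For checking whether a rebuild like this is actually making progress, the usual starting point is the Ceph status output. A minimal sketch, assuming access to the Rook/ODF toolbox pod in the openshift-storage namespace (the deployment name rook-ceph-tools is an assumption; it varies by setup):

    oc rsh -n openshift-storage deployment/rook-ceph-tools   # enter the toolbox pod (name assumed)
    ceph -s                                                  # cluster health plus recovery/backfill progress
    ceph osd pool stats                                      # per-pool recovery and client I/O rates

If the recovery counters in ceph -s are moving between runs, the rebuild is progressing even when the dashboard percentage appears frozen.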

[ceph-users] Re: Rebuilding data resiliency after adding new OSDs stuck for so long at 5%

2023-09-13 Thread sharathvuthpala
Hi, we have HDD disks. Today, after almost 36 hours, Rebuilding Data Resiliency is at 58% and still going. The good news is that it is no longer stuck at 5%. Does it take this long to complete the rebuilding-resiliency process whenever there is maintenance in the cluster?

[ceph-users] Re: Rebuilding data resiliency after adding new OSDs stuck for so long at 5%

2023-09-14 Thread sharathvuthpala
We are using ceph version 16.2.10-172.el8cp (00a157ecd158911ece116ae43095de793ed9f389) pacific (stable).

[ceph-users] Re: Rebuilding data resiliency after adding new OSDs stuck for so long at 5%

2023-09-17 Thread sharathvuthpala
Hi guys, thanks for your responses. The issue has been resolved. We increased the number of backfill threads from the default value (1) to 5 and noticed an increase in the speed of rebalancing. Even so, the entire rebalancing process took almost three and a half days, which we believe would not have taken this long had the backfill setting been raised from the start.
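For reference, a sketch of that tuning as it would look from the Ceph CLI, assuming a Pacific cluster and access to the toolbox pod; osd_max_backfills is the standard option behind the "backfill threads" setting, and the value of 5 mirrors the count mentioned above:

    ceph config set osd osd_max_backfills 5        # concurrent backfill ops per OSD (Pacific default: 1)
    ceph config show osd.0 osd_max_backfills       # verify the value a running OSD actually picked up
    ceph config rm osd osd_max_backfills           # optional: revert to the default once recovery finishes

Raising this speeds up backfill at the cost of client I/O latency, which is especially noticeable on HDDs, so reverting it after recovery completes is the conservative choice.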