Hello all,
We have an EC (4+2) pool for RGW data, on HDDs with SSDs for WAL/DB. The
pool spans 9 servers, each with twelve 16 TB disks. About 10 days ago we lost a
server and we've removed its OSDs from the cluster. Ceph has started to
remap and backfill as expected, but the process has been getting slower.
----

Hi,
I have many 'not {deep-}scrubbed in time' warnings and 1 PG remapped+backfilling,
and I don't understand why this backfilling is taking so long.
root@hbgt-ceph1-mon3:/# ceph -s
cluster:
id: c300532c-51fa-11ec-9a41-0050569c3b55
health: HEALTH_WARN
15 pgs not deep-scrubbed in time
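For anyone hitting the same thing: a few commands that can help narrow down a slow backfill. This is only a sketch — the throttle values below are examples, not recommendations, and the `ceph config` option names assume a reasonably recent release (Octopus or later); on older clusters you would use `injectargs` instead.

```shell
# Which PGs are backfilling, and on which OSDs?
ceph pg ls backfilling

# Current recovery/backfill throttles (the defaults are deliberately
# conservative so client I/O is not starved)
ceph config get osd osd_max_backfills
ceph config get osd osd_recovery_max_active

# Temporarily allow more concurrent backfills per OSD
# (example value -- revert once the cluster is healthy again)
ceph config set osd osd_max_backfills 4
```

Note that a single remapped PG backfills from only a handful of OSDs, so raising cluster-wide concurrency may not help much; `ceph pg ls backfilling` at least tells you which OSDs to watch.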