> ... why the scrubbing never finishes. Perhaps it's really a good idea, just
> as you proposed, to shut down the corresponding OSDs. But that's just my
> thinking. Perhaps some Ceph pro can shed some light on the possible reasons
> why a scrub might get stuck and how to r...
>
> The Ceph experts say scrubbing is important. Don't know why, but I just
> believe them. They've built this complex stuff after all :-)
>
> Thus, you can use "noscrub"/"nodeepscrub" to quickly get a hung server back
> to work, but you should not let it ...
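
(For reference, on the command line the second flag is spelled "nodeep-scrub".
A minimal sketch of setting and clearing both flags cluster-wide, assuming an
admin keyring is available on the node you run this from:

    ceph osd set noscrub          # stop scheduling new scrubs
    ceph osd set nodeep-scrub     # stop scheduling new deep scrubs
    # ... investigate / restart the affected OSDs, then re-enable:
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub

While the flags are set the cluster stays in HEALTH_WARN and skips the data
consistency checks scrubbing exists for, which is presumably why the advice
above is not to leave them in place permanently.)
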
Hello,
I have a Ceph cluster with the specifications below:
3 x monitor nodes
6 x storage nodes (6 disks per storage node, 6 TB SATA disks, all disks have
SSD journals)
Separate public and private (cluster) networks. All NICs are 10 Gbit/s.
osd pool default size = 3
osd pool default min size = 2 (both set in ceph.conf, as sketched below)
Ceph version is ...
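
(A minimal sketch of how those two defaults typically appear in the [global]
section of ceph.conf; the rest of the file is omitted here:

    [global]
        osd pool default size = 3
        osd pool default min size = 2

Note that these defaults only apply to pools created afterwards; existing
pools keep their own size/min_size, which can be changed per pool with
"ceph osd pool set".)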