[ceph-users] Re: 18.2.4 regression: 'diskprediction_local' has failed: No module named 'sklearn'
Same issue here
[ceph-users] Ceph is constantly scrubbing 1/4 of all PGs and still has PGs not scrubbed in time
I recently switched from 16.2.x to 18.2.x and migrated to cephadm. Since the switch the cluster has been scrubbing constantly, 24/7, with up to 50 PGs being scrubbed and up to 20 deep scrubs running simultaneously in a cluster that has only 12 (in use) OSDs. Despite this, it still regularly raises a warning that PGs have not been scrubbed in time.
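
For context, a minimal way to inspect the relevant scrub state and settings (a sketch assuming the standard ceph CLI on a cephadm-managed cluster; the option names below are the stock scrub settings, not values specific to this cluster):

    # current scrub concurrency and interval settings
    ceph config get osd osd_max_scrubs
    ceph config get osd osd_scrub_min_interval
    ceph config get osd osd_scrub_max_interval
    ceph config get osd osd_deep_scrub_interval

    # count PGs currently scrubbing or deep-scrubbing
    ceph pg dump pgs 2>/dev/null | grep -c 'scrubbing'

    # list the PGs behind the "not scrubbed in time" warning
    ceph health detail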