Hi list,
Looking into the diskprediction_local module, I see that it only
predicts three states: Good, Warning and Bad:
ceph/src/pybind/mgr/diskprediction_local/predictor.py:
if score > 10:
    return "Bad"
if score > 4:
    return "Warning"
return "Good"
The predicted fail date is just a derivative of these three states.
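From what I can tell, the module maps that state onto a fixed
life-expectancy window (reported through set_device_life_expectancy()).
A rough sketch of the mapping; the week boundaries are my reading of
module.py and may be off, so treat them as illustrative:

from datetime import datetime, timedelta

def life_expectancy_window(state):
    # Map the coarse prediction onto a (min, max) failure window.
    # Boundaries are illustrative, not authoritative.
    now = datetime.utcnow()
    week = timedelta(weeks=1)
    if state == "Bad":
        # expected to fail within ~2 weeks
        return now, now + 2 * week
    if state == "Warning":
        # expected to fail in ~2-6 weeks
        return now + 2 * week, now + 6 * week
    # "Good": only a lower bound, no upper estimate
    return now + 6 * week, None

print(life_expectancy_window("Warning"))

In other words, the date you see is only ever as fine-grained as those
three buckets.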
And a broader question: is anyone using diskprediction (local or cloud)?
On Wed, May 20, 2020 at 7:35 PM Paul Emmerich wrote:
>
>
>
> On Wed, May 20, 2020 at 5:36 PM Vytenis A wrote:
>>
>> Is it possible to get any finer prediction date?
>
>
> related ques
Hi list,
We have the balancer plugin in upmap mode running for a while now:
health: HEALTH_OK
pgs:
1973 active+clean
194 active+remapped+backfilling
73 active+remapped+backfill_wait
recovery: 588 MiB/s, 343 objects/s
Our objects are stored on an EC pool. We got a PG_NOT_DEEP_SCRUBBED warning.
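For reference, my understanding of when PG_NOT_DEEP_SCRUBBED fires
(based on the check in src/mon/PGMap.cc; treat the exact formula as an
assumption): a PG is flagged once it has gone more than
(1 + mon_warn_pg_not_deep_scrubbed_ratio) * osd_deep_scrub_interval
without a deep scrub. With the defaults that works out to:

from datetime import datetime, timedelta

# Defaults; verify yours with `ceph config get osd osd_deep_scrub_interval`
# and `ceph config get mon mon_warn_pg_not_deep_scrubbed_ratio`.
osd_deep_scrub_interval = timedelta(days=7)
mon_warn_pg_not_deep_scrubbed_ratio = 0.75

# PGs whose last deep scrub is older than this cutoff get flagged.
cutoff = datetime.utcnow() - (
    1 + mon_warn_pg_not_deep_scrubbed_ratio) * osd_deep_scrub_interval
print("flagged if last deep-scrubbed before", cutoff)  # ~12.25 days ago

Long backfills delay scrubbing (scrubs are deferred while recovery is
running unless osd_scrub_during_recovery is set), which is likely why
the warning shows up.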
Hi cephists,
We have a 10-node cluster running Nautilus 14.2.9.
All objects are on an EC pool. We have the mgr balancer plugin in upmap mode
doing its rebalancing:
health: HEALTH_OK
pgs:
1985 active+clean
190 active+remapped+backfilling
65 active+remapped+backfill_wait
Forgot to mention the Ceph version we're running: Nautilus 14.2.9.
On Fri, May 29, 2020 at 12:44 AM Vytenis A wrote:
>
> Hi list,
>
> We have balancer plugin in upmap mode running for a while now:
>
> health: HEALTH_OK
>
> pgs:
> 1973 active+clean
> 194 active+remapped+backfilling
Thanks for pointing this out!
One thing to mention is that we're not using cache tiering, as
described in https://tracker.ceph.com/issues/44286, but it's a good
lead.
This means we can't restart OSDs (or afford to have them crash)
during rebalancing.
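If we do have to take an OSD down mid-rebalance, we can at least ask
the cluster first whether that would leave any PG unable to serve I/O.
A sketch using the librados Python binding and the `osd ok-to-stop`
command (the conf path and OSD id are placeholders):

import json
import rados

# Connect as a client; assumes a readable /etc/ceph/ceph.conf + keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Ask the mons whether stopping osd.12 would make any PG unavailable.
ret, out, errs = cluster.mon_command(
    json.dumps({"prefix": "osd ok-to-stop", "ids": ["12"]}), b'')
print("ok to stop" if ret == 0 else "not ok to stop: %s" % errs)
cluster.shutdown()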
On Fri, May 29, 2020 at 4:18 AM Chad Willia
during recovery"?
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
>
> On Fri, May 29, 2020