No, I mean Ceph sees it as a failure and marks the OSD out for a while.
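
If it helps, the knobs I'd look at are sketched below; this isn't from the thread, so verify the option names against your Ceph release, and osd.7 is only a placeholder id. Raising mon_osd_down_out_interval gives a briefly-failed OSD more time to come back before the monitors mark it out and trigger backfill, and per-OSD noout scopes the flag to the affected disk instead of the whole cluster:

    # show how long the monitors wait before marking a down OSD out (default 600 seconds)
    ceph config get mon mon_osd_down_out_interval

    # example: wait 30 minutes, so a drive that resets and comes back
    # doesn't cause the data to rebalance out and then back in
    ceph config set mon mon_osd_down_out_interval 1800

    # or set noout on just the flapping OSD while you investigate, then clear it
    ceph osd add-noout osd.7
    ceph osd rm-noout osd.7

Note that the noout variants may still surface a health warning while the flag is set; the longer down-out interval is the change that avoids the flag warning, although the cluster will of course still report the OSD as down while it is gone.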

On Thu, Sep 5, 2019 at 11:00 AM Ashley Merrick <singap...@amerrick.co.uk>
wrote:

> Is your HD actually failing and vanishing from the OS and then coming back
> shortly?
>
> Or do you just mean your OSD is crashing and then restarting itself
> shortly afterwards?
>
>
> ---- On Fri, 06 Sep 2019 01:55:25 +0800 solarflo...@gmail.com wrote ----
>
> One of the things I've come to notice is that when HDD drives fail, they often
> recover after a short time and get added back to the cluster. This causes the
> data to rebalance back and forth, and if I set the noout flag I get a
> health warning. Is there a better way to avoid this?
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
