For Ceph, this is fortunately not a major issue. Drives failing is
considered entirely normal, and Ceph will automatically rebuild your data
from redundancy onto a new replacement drive. If you're able to predict the
imminent failure of a drive, adding a new drive/OSD will automatically
start flowing data onto it.
Destroy this OSD, replace disk, deploy OSD.
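
Spelled out as commands, that cycle might look roughly like this (the OSD
id 12 and the device /dev/sdX are placeholders, not taken from this thread):

    ceph osd out 12                              # stop mapping PGs to the failing OSD
    systemctl stop ceph-osd@12                   # run on the OSD's host
    ceph osd destroy 12 --yes-i-really-mean-it   # keep the id, wipe the auth key
    # physically swap the disk, then redeploy reusing the same id:
    ceph-volume lvm create --osd-id 12 --data /dev/sdX
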
> On 8 Dec 2020, at 15:13, huxia...@horebdata.cn wrote:
>
> Hi, dear cephers,
>
> On one Ceph cluster I have a failing disk, whose SMART information signals an
> impending failure but which is still available for reads and writes. I am setting
Thanks a lot. I got it.
huxia...@horebdata.cn
From: Janne Johansson
Date: 2020-12-08 13:38
To: huxia...@horebdata.cn
CC: ceph-users
Subject: Re: [ceph-users] How to copy an OSD from one failing disk to another one
"ceph osd set norebalance" "ceph osd set nobackfill"
Add new OSD, set osd weig
"ceph osd set norebalance" "ceph osd set nobackfill"
Add new OSD, set osd weight to 0 on old OSD
unset the norebalance and nobackfill options,
and the cluster will do it all for you.
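
In command form, that sequence might look roughly like this (osd.7 as the
old OSD is a placeholder, and using CRUSH reweight to drain it is my
assumption):

    ceph osd set norebalance
    ceph osd set nobackfill
    # bring the new OSD up on its host, e.g. with ceph-volume lvm create
    ceph osd crush reweight osd.7 0    # drain the old, failing OSD
    ceph osd unset norebalance
    ceph osd unset nobackfill
    # the cluster now backfills data from the old OSD onto the new one
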
On Tue, 8 Dec 2020 at 13:13, huxia...@horebdata.cn <
huxia...@horebdata.cn> wrote:
> Hi, dear cephers,
>
> On