[ceph-users] Re: Ceph does not recover from OSD restart

Sorry for the many small e-mails: requested IDs in the commands, 288-296. One
new OSD per host.

Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________
From: Frank Schilder
Sent: 03 August 2020 16:59:04
To: Eric Smith; ceph-users
Subject: [ceph-users] Re: Ceph does not recover from OSD restart
Hi Eric,

the procedure for re-discovering all objects is:

# Flag: norebalance
ceph osd crush move osd.288 host=bb-04
ceph osd crush move osd.289 host=bb-05
ceph osd crush move osd.290 host=bb-06
ceph osd crush move osd.291

(The commands continue through osd.296; one new OSD per host.)
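The batch of crush-move commands above can be generated as a dry run. This is a hypothetical sketch: it assumes the host names continue the bb-NN pattern visible for osd.288 through osd.290 (osd.288 on bb-04, osd.289 on bb-05, ...), which should be verified against `ceph osd tree` before anything is applied.

```shell
# Dry-run generator for the crush-move batch above.
# ASSUMPTION: host names continue the bb-NN pattern past bb-06
# (osd.288 -> bb-04, osd.289 -> bb-05, ...); verify with `ceph osd tree`.
# The thread sets the norebalance flag beforehand: ceph osd set norebalance
for id in $(seq 288 296); do
    host="bb-$(printf '%02d' "$((id - 284))")"
    echo "ceph osd crush move osd.${id} host=${host}"
done
```

The `echo` keeps this a harmless preview; removing it would execute the moves on a live cluster.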
Excerpt from the crush rule steps (`ceph osd crush rule dump`):

        {
            "op": "set_choose_tries",
            "num": 100
        },
        {
            "op": "take",
            "item": -53,
            "item_name": "ServerRoom~hdd"
        },
        {
            "op": "chooseleaf_indep",
            "num": 0,
After moving the newly added OSDs out of the crush tree and back in again, I
get to exactly what I want to see:

  cluster:
    id:     e4ece518-f2cb-4708-b00f-b6bf511e91d9
    health: HEALTH_WARN
            norebalance,norecover flag(s) set
            53030026/1492404361 objects misplaced (3.55%)
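Once the misplaced objects are all accounted for, recovery is typically re-enabled by clearing the two flags shown in the HEALTH_WARN output above. A dry-run sketch (echoed rather than executed):

```shell
# Dry run: clear the flags shown in HEALTH_WARN so backfill/recovery
# can proceed. Echoed rather than executed; drop the echo on a live cluster.
for flag in norebalance norecover; do
    echo "ceph osd unset ${flag}"
done
```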
Can you post the output of these commands:
ceph osd pool ls detail
ceph osd tree
ceph osd crush rule dump
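The three requested outputs could be collected into a single file for the list; a small sketch (the file name is arbitrary, and the ceph invocations are left commented so the script is a harmless dry run):

```shell
# Gather the requested diagnostics into one report file.
out="ceph-diag.txt"   # hypothetical output file name
: > "${out}"          # truncate/create the report
for cmd in "ceph osd pool ls detail" "ceph osd tree" "ceph osd crush rule dump"; do
    echo "== ${cmd} ==" >> "${out}"
    # ${cmd} >> "${out}"   # uncomment on a host with ceph admin access
done
```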
-----Original Message-----
From: Frank Schilder
Sent: Monday, August 3, 2020 9:19 AM
To: ceph-users
Subject: [ceph-users] Re: Ceph does not recover from OSD restart

After moving the newly added OSDs out of the crush tree and back in again,