Hi, thanks for the replies. I guess this would also explain why, every time
an OSD failed, it wouldn't stay down and would add itself back in again.



On Tue, Sep 3, 2019 at 11:45 AM Frank Schilder <fr...@dtu.dk> wrote:

> "ceph osd down" will mark an OSD down once, but not shut it down. Hence,
> it will continue to send heartbeats and request to be marked up again after
> a couple of seconds. To keep it down, there are 2 ways:
>
> - either set "ceph osd set noup",
> - or actually shut the OSD down.
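>
> For example, a minimal sketch assuming osd.48 and a systemd-managed OSD
> (the unit name may differ in your deployment):
>
> Option 1, keep the daemon running but prevent it from being marked up:
> # ceph osd set noup
> # ceph osd down osd.48
>
> Option 2, actually stop the daemon on the host that carries it:
> # systemctl stop ceph-osd@48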
>
> The first option will allow the OSD to keep running, so you can talk to
> the daemon while it is marked "down". Be aware that the OSD will be marked
> "out" after a while. You might need to mark it "in" manually when you are
> done with maintenance.
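>
> When you are done, something like this restores things (assuming the
> cluster-wide flag from option 1 was used):
> # ceph osd unset noup
> # ceph osd in osd.48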
>
> I believe that with Nautilus it is possible to set the noup flag on a
> specific OSD, which is much safer.
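>
> If I remember correctly, that would look something like this; the exact
> subcommand may depend on your release:
> # ceph osd add-noup osd.48
> # ceph osd rm-noup osd.48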
>
> Best regards,
>
> =================
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
> ________________________________________
> From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of
> solarflow99 <solarflo...@gmail.com>
> Sent: 03 September 2019 19:40:59
> To: Ceph Users
> Subject: [ceph-users] forcing an osd down
>
> I noticed this has happened before; this time I can't get it to stay down
> at all, it just keeps coming back up:
>
> # ceph osd down osd.48
> marked down osd.48.
>
> # ceph osd tree |grep osd.48
> 48   3.64000         osd.48         down        0          1.00000
>
> # ceph osd tree |grep osd.48
> 48   3.64000         osd.48           up        0          1.00000
>
>
>
> health HEALTH_WARN
>             2 pgs backfilling
>             1 pgs degraded
>             2 pgs stuck unclean
>             recovery 18/164089686 objects degraded (0.000%)
>             recovery 1467405/164089686 objects misplaced (0.894%)
>      monmap e1: 3 mons at {0=192.168.4.10:6789/0,1=192.168.4.11:6789/0,2=192.168.4.12:6789/0}
>             election epoch 210, quorum 0,1,2 0,1,2
>      mdsmap e166: 1/1/1 up {0=0=up:active}, 2 up:standby
>      osdmap e25733: 45 osds: 45 up, 44 in; 2 remapped pgs
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
