IIRC cephadm refreshes its daemons within 15 minutes, at least that
was my last impression. So sometimes you have to be patient. :-)
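If waiting isn't desirable, the periodic refresh can usually be triggered by hand; a minimal sketch (flag availability may vary by release):

ceph orch ps --refresh
ceph orch device ls --refresh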
Quote from Satish Patel:
Hi Eugen,
My error cleared up by itself. Looks like it took some time, but now I am not
seeing any errors and the output is very clean. Thank you so much.
On Fri, Oct 21, 2022 at 1:46 PM Eugen Block wrote:
Do you still see it with 'cephadm ls' on that node? If yes, you could
try 'cephadm rm-daemon --name osd.3'. Or you could try it with the
orchestrator: ceph orch daemon rm…
I don't have the exact command at the moment, you should check the docs.
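Roughly, the two variants would look like this; a sketch only, where <fsid> is your cluster fsid and --force may or may not be needed:

# directly on the affected host
cephadm rm-daemon --name osd.3 --fsid <fsid> --force
# or via the orchestrator from an admin node
ceph orch daemon rm osd.3 --force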
Quote from Satish Patel:
Hi Eugen,
I have deleted the osd.3 directory from the datastorn4 node as you mentioned,
but I am still seeing that duplicate osd in the ps output.
root@datastorn1:~# ceph orch ps | grep osd.3
osd.3    datastorn4    stopped    5m ago    3w    -    42.6G
osd.3
Hi,
it looks like the OSDs haven't been cleaned up after removing them. Do
you see the osd directory in /var/lib/ceph/<fsid>/osd.3 on datastorn4?
Just remove the osd.3 directory, then cephadm won't try to activate it.
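A minimal sketch of that cleanup on datastorn4, assuming you first confirm with cephadm ls that osd.3 there really is a leftover:

cephadm ls | grep -A 5 osd.3
rm -rf /var/lib/ceph/<fsid>/osd.3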
Quote from Satish Patel:
Folks,
I have deployed a 15 OSD node cluster using
Great, thanks!
Don't ask me how many commands I have typed to fix my issue. Finally I did
it. Basically I fixed /etc/hosts and then removed the mgr service using the
following command:
ceph orch daemon rm mgr.ceph1.xmbvsb
And cephadm auto-deployed a new working mgr. I found ceph orch ps was
hanging and t
I'm not sure exactly what needs to be done to fix that, but I'd imagine
just editing the /etc/hosts file on all your hosts to be correct would be
the start (the cephadm shell would have taken its /etc/hosts off of
whatever host you ran the shell from). Unfortunately I'm not much of a
networking exp
Hi Adam,
You are correct, looks like it was a naming issue in my /etc/hosts file. Is
there a way to correct it?
As you can see, I have ceph1 two times. :(
10.73.0.191 ceph1.example.com ceph1
10.73.0.192 ceph2.example.com ceph1
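For reference, assuming 10.73.0.192 really is the ceph2 host, the corrected file would presumably read:

10.73.0.191 ceph1.example.com ceph1
10.73.0.192 ceph2.example.com ceph2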
On Thu, Sep 1, 2022 at 8:06 PM Adam King wrote:
the naming for daemons is a bit different for each daemon type, but for mgr
daemons it's always "mgr.<hostname>.<random-string>". The daemons cephadm
will be able to find for something like a daemon redeploy are pretty much
always whatever is reported in "ceph orch ps". Given that
"mgr.ceph1.xmbvsb" isn't listed there, it
Hi Adam,
I have also noticed a very strange thing, which is a duplicate name in the
following output. Is this normal? I don't know how it got here. Is there
a way I can rename them?
root@ceph1:~# ceph orch ps
NAME    HOST    PORTS    STATUS    REFRESHED    AGE    MEM USE    MEM LIM
Hi Adam,
Getting the following error, not sure why it's not able to find it.
root@ceph1:~# ceph orch daemon redeploy mgr.ceph1.xmbvsb
Error EINVAL: Unable to find mgr.ceph1.xmbvsb daemon(s)
On Thu, Sep 1, 2022 at 5:57 PM Adam King wrote:
> what happens if you run `ceph orch daemon redeploy mgr
what happens if you run `ceph orch daemon redeploy mgr.ceph1.xmbvsb`?
On Thu, Sep 1, 2022 at 5:12 PM Satish Patel wrote:
Hi Adam,
Here is the requested output:
root@ceph1:~# ceph health detail
HEALTH_WARN 4 stray daemon(s) not managed by cephadm
[WRN] CEPHADM_STRAY_DAEMON: 4 stray daemon(s) not managed by cephadm
stray daemon mon.ceph1 on host ceph1 not managed by cephadm
stray daemon osd.0 on host ceph1 not man
cephadm deploys the containers with --rm so they will get removed if you
stop them. As for getting the 2nd mgr back, if it still lists the 2nd one
in `ceph orch ps` you should be able to do a `ceph orch daemon redeploy
<daemon-name>` where <daemon-name> should match the name given in
the orch ps output for the one that isn'
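In other words, for the mgr that is still listed the rough pattern would be (a sketch; <daemon-name> is a placeholder for the exact name from orch ps):

ceph orch daemon redeploy <daemon-name>
# restarting should also bring the container back, since the unit recreates it
ceph orch daemon restart <daemon-name>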
Adam,
I posted a question related to upgrading earlier and this thread is
related to that. I opened a new one because I found that error in the logs
and thought the upgrade might be stuck because of the duplicate OSDs.
root@ceph1:~# ls -l /var/lib/ceph/f270ad9e-1f6f-11ed-b6f8-a539d87379ea/
total
Are there any extra directories in /var/lib/ceph or /var/lib/ceph/<fsid>
that appear to be for those OSDs on that host? When cephadm builds the info
it uses for "ceph orch ps" it's actually scraping those directories. The
output of "cephadm ls" on the host with the duplicates could also
potentially have
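A rough way to compare what is on disk with what cephadm reports (a sketch; assumes jq is installed and <fsid> is your cluster fsid):

ls /var/lib/ceph/<fsid>/
cephadm ls | jq -r '.[].name'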