To: Brent Kennedy
Cc: Eugen Block; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Cephadm - Adding host to migrated cluster

Do the journal logs for the OSDs say anything about why they couldn't start up? ("cephadm ls --no-detail" run on the host will give the systemd units.)
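As an aside, "cephadm ls" can also emit JSON, which makes it easy to map each daemon to its systemd unit for journalctl. A minimal throwaway sketch — the field names ("name", "systemd_unit") and the sample records below are assumptions standing in for real output, so check them against your cephadm version:

```python
import json

# Sample standing in for `cephadm ls --no-detail` output on the host;
# the fsid and records here are made up for illustration.
sample = json.loads("""
[
  {"style": "cephadm:v1", "name": "osd.37",
   "systemd_unit": "ceph-11111111-2222-3333-4444-555555555555@osd.37"},
  {"style": "cephadm:v1", "name": "mon.host1",
   "systemd_unit": "ceph-11111111-2222-3333-4444-555555555555@mon.host1"}
]
""")

for daemon in sample:
    # Each unit can then be inspected with: journalctl -u <systemd_unit>
    print(daemon["name"], "->", daemon["systemd_unit"])
```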
> 2022-10-17T17:31:45.043+ 7f035fb44700 -1 mgr get_metadata_python Requested missing service osd.37
> 2022-10-17T17:31:45.589+ 7f036d35f700 0 log_channel(cluster) log [DBG] : pgmap v1006975: 656 pgs: 656 active+clean; 12 TiB data, 178 TiB avail
>
> -Brent
>
>
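Those "Requested missing service" entries can be pulled out of a mgr log mechanically to see which daemons the mgr has no metadata for. A quick throwaway sketch; the sample lines simply mirror the excerpt above:

```python
import re

# Sample mgr log lines mirroring the excerpt above
# (timestamps truncated exactly as in the mail).
log_lines = [
    "2022-10-17T17:31:45.043+ 7f035fb44700 -1 mgr get_metadata_python "
    "Requested missing service osd.37",
    "2022-10-17T17:31:45.589+ 7f036d35f700 0 log_channel(cluster) log [DBG] "
    ": pgmap v1006975: 656 pgs: 656 active+clean; 12 TiB data,",
]

# Collect every daemon the mgr reports as missing metadata for
missing = [m.group(1)
           for line in log_lines
           for m in [re.search(r"Requested missing service (\S+)", line)]
           if m]
print(missing)  # these are the daemons worth chasing in journalctl on the host
```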
The mons persist in the cephadm dashboard until I delete them manually from the node. It's as if the container doesn't spin up on the node for each of the disks.
-Brent
-----Original Message-----
From: Eugen Block
Sent: Monday, October 17, 2022 12:52 PM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Cephadm - Adding host to migrated cluster
Does the cephadm.log on that node reveal anything useful? What about the
(active) mgr log?
Quoting Brent Kennedy:
Greetings everyone,
We recently moved a ceph-ansible cluster running pacific on centos 8 to centos 8 stream and then upgraded to quincy using cephadm after converting to cephadm.