Hi,

I'm trying to upgrade a (cephadm) cluster from 18.2.2 to 18.2.6 using 'ceph orch upgrade'. When I enter the command 'ceph orch upgrade start --ceph-version 18.2.6', I receive a message saying that the upgrade has been initiated (with a similar message in the logs), but nothing happens after that. 'ceph orch upgrade status' says:

-------

[root@ijc-mon1 ~]# ceph orch upgrade status
{
    "target_image": "quay.io/ceph/ceph:v18.2.6",
    "in_progress": true,
    "which": "Upgrading all daemon types on all hosts",
    "services_complete": [],
    "progress": "",
    "message": "",
    "is_paused": false
}
-------

The first time I entered the command, the cluster status was HEALTH_WARN because of 2 stray daemons (caused by OSDs destroyed with 'ceph orch osd rm --replace'). I set mgr/cephadm/warn_on_stray_daemons to false to ignore these 2 daemons; the cluster is now HEALTH_OK, but it doesn't help. Following a Red Hat KB entry, I tried failing over the mgr, then stopped and restarted the upgrade, but without any improvement. I have not seen anything relevant in the logs, except an INF entry every 10s about one of the destroyed OSDs saying:
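For reference, this is roughly the sequence of commands I ran (recapped from memory, so take the exact order with a grain of salt):

```shell
# Hide the 2 stray daemons left behind by the destroyed OSDs
ceph config set mgr mgr/cephadm/warn_on_stray_daemons false

# Fail over to the standby mgr, as suggested by the Red Hat KB entry
ceph mgr fail

# Stop and restart the stuck upgrade
ceph orch upgrade stop
ceph orch upgrade start --ceph-version 18.2.6
```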

------

2025-04-24T21:30:54.161988+0000 mgr.ijc-mon1.yyfnhz (mgr.55376028) 14079 : cephadm [INF] osd.253 now down
2025-04-24T21:30:54.162601+0000 mgr.ijc-mon1.yyfnhz (mgr.55376028) 14080 : cephadm [INF] Daemon osd.253 on dig-osd4 was already removed
2025-04-24T21:30:54.164440+0000 mgr.ijc-mon1.yyfnhz (mgr.55376028) 14081 : cephadm [INF] Successfully destroyed old osd.253 on dig-osd4; ready for replacement
2025-04-24T21:30:54.164536+0000 mgr.ijc-mon1.yyfnhz (mgr.55376028) 14082 : cephadm [INF] Zapping devices for osd.253 on dig-osd4
-----

Since I restarted the mgr, these messages seem to concern only one of the 2 destroyed OSDs. Could this be the cause of the stuck upgrade? What can I do to fix this?
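In case it is relevant, the OSDs in question were scheduled for replacement with commands along these lines (the --zap flag would explain the "Zapping devices" entry in the log above):

```shell
# Mark osd.253 destroyed but keep its id for replacement, zapping its devices
ceph orch osd rm 253 --replace --zap

# Any pending (or looping) removal should show up in the removal queue
ceph orch osd rm status
```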

Thanks in advance for any hint. Best regards,

Michel
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
