> "drain_done_at": null, "process_started_at":
> > "2024-08-19T07:21:27.794688Z"}]
> >
> > Here you see the original_weight which the orchestrator apparently
> > failed to read. (Note that these are only small 20 GB OSDs, hence the
> > small weights.)
A cluster with pending OSD removals may also fail during an upgrade.
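For reference, cephadm keeps that removal queue in the mon config-key
store, so it can be inspected even while the orchestrator module is down.
A minimal sketch, assuming the key follows cephadm's usual mgr/cephadm/*
naming (worth verifying on your release):

  ceph config-key get mgr/cephadm/osd_remove_queue | python3 -m json.tool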
> Just trying to understand what could cause this (I haven’t upgraded
> production clusters to Reef yet, only test clusters). Have you stopped
> the upgrade to cancel the process entirely? Can you share this
> information please:
>
> ceph versions
&
t argument.
Please let me know if you guys have any thoughts.
Thank you!
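For anyone hitting the same thing: these are all plain ceph CLI calls that
should still work with the orchestrator module down, and together they
capture the state the reply above asks about:

  ceph versions        # per-daemon versions, shows how far the upgrade got
  ceph health detail   # MGR_MODULE_ERROR here summarizes a failed mgr module
  ceph mgr module ls   # whether cephadm is enabled / loaded
  ceph crash ls        # any recent mgr crashes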
On Wed, Aug 14, 2024, 7:37 PM Benjamin Huth wrote:
> Hey there, so I went to upgrade my ceph from 18.2.2 to 18.2.4 and have
> encountered a problem with my managers. After they had been upgraded, my
> ceph orch module broke because the cephadm module would not load.
Hey there, so I went to upgrade my ceph from 18.2.2 to 18.2.4 and have
encountered a problem with my managers. After they had been upgraded, my
ceph orch module broke because the cephadm module would not load. This
obviously halted the update because you can't really update without the
orchestrator.
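In case it helps with debugging: when a mgr module fails to load, the full
Python traceback usually ends up in the active mgr's log. A hedged sketch
for a cephadm-deployed mgr (the fsid and daemon name below are
placeholders):

  journalctl -u ceph-<fsid>@mgr.<host>.<id>.service | grep -A 20 Traceback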