[ceph-users] Re: Non existing host in maintenance

2025-01-17 Thread Dominique Ramaekers
ore > > > > I assume I should remove these 'history' keys too? > > > > Note => for now the removed host is still in the warning... > > #ceph health detail > > HEALTH_WARN 1 host is in maintenance mode [WRN] > HOST_IN_MAINTENANCE: 1 > > hos
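If those 'history' keys are cephadm entries in the monitor config-key store, a rough sketch of how they could be located and removed is below. The key name shown is only an example; list the keys first and verify what they hold before removing anything:

    # list keys that still reference the removed host
    ceph config-key ls | grep hvs004
    # remove a leftover entry (example key name, take the real one from the ls output)
    ceph config-key rm mgr/cephadm/host.hvs004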

[ceph-users] Re: Non existing host in maintenance

2025-01-17 Thread Eugen Block
nance mode hvs004 is in maintenance -----Original message----- From: Eugen Block Sent: Friday, 17 January 2025 12:17 To: ceph-users@ceph.io Subject: [ceph-users] Re: Non existing host in maintenance Hi, there's no need to wipe OSDs from a failed host. Just reinstall the OS
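For a host the orchestrator still knows about, the maintenance flag can normally be cleared with the orchestrator itself. A sketch, using the hostname from this thread; check the host list first to see whether the entry still exists:

    # show hosts known to the orchestrator and their status
    ceph orch host ls
    # take the host out of maintenance mode if it is still listed
    ceph orch host maintenance exit hvs004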

[ceph-users] Re: Non existing host in maintenance

2025-01-17 Thread Dominique Ramaekers
#ceph health detail HEALTH_WARN 1 host is in maintenance mode [WRN] HOST_IN_MAINTENANCE: 1 host is in maintenance mode hvs004 is in maintenance > -----Original message----- > From: Eugen Block > Sent: Friday, 17 January 2025 12:17 > To: ceph-users@ceph.io > Subject: [ceph-us
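If the host has already been removed but a stale entry keeps the warning alive, one possible cleanup is to drop the host from the orchestrator as offline. This is a sketch, not the confirmed fix for this case, and should be used with care:

    # remove a host entry that no longer exists and cannot be reached
    ceph orch host rm hvs004 --offline --force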

[ceph-users] Re: Non existing host in maintenance

2025-01-17 Thread Eugen Block
Hi, there's no need to wipe OSDs from a failed host. Just reinstall the OS, configure it to your needs, install cephadm and podman/docker, add the cephadm pub key and then just reactivate the OSDs: ceph cephadm osd activate. I just did that yesterday. To clear the warning, I would check t
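A rough sketch of that sequence, assuming root SSH access and that hvs004 is the reinstalled host (adjust hostname and user to your setup):

    # copy the cephadm public key to the reinstalled host
    ceph cephadm get-pub-key > ~/ceph.pub
    ssh-copy-id -f -i ~/ceph.pub root@hvs004
    # add the host back to the orchestrator, then reactivate its existing OSDs
    ceph orch host add hvs004
    ceph cephadm osd activate hvs004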