Wondering if anyone knows or has put together a way to wipe an Octopus
install? I’ve looked for documentation on the process, but if it exists, I
haven’t found it yet. I’m going through some test installs - working through
the ins and outs of cephadm and containers, and would love an ea[…]

> […] [2].
>
> Regards,
> Eugen
>
>
> [1] https://docs.ceph.com/en/latest/man/8/cephadm/
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1881192
>
>
> Zitat von Samuel Taylor Liston :
>
>> Wondering if anyone knows or has put together a way to wipe an Octopus
>> install? […]

Eugen Block wrote:
Ah, if you run 'cephadm rm-cluster --fsid ...' on each node it will remove all
containers and configs (ceph-salt comes in handy with this). You'll still have
to wipe the drives, but it's a little quicker than doing it all manually.
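The procedure above can be sketched roughly as follows. The fsid, host names, and device are placeholders (assumptions, not from this thread), and the commands are echoed rather than executed so nothing is wiped by accident:

```shell
# Placeholder fsid and hosts -- substitute your cluster's actual values.
FSID="00000000-0000-0000-0000-000000000000"
HOSTS="node1 node2 node3"

for h in $HOSTS; do
    # Remove this cluster's containers and configs on each host.
    CMD="ssh $h cephadm rm-cluster --force --fsid $FSID"
    echo "$CMD"    # dry run: print instead of executing

    # The drives still hold data afterwards; zap them separately.
    WIPE="ssh $h ceph-volume lvm zap --destroy /dev/sdb"
    echo "$WIPE"   # dry run
done
```

Drop the echo indirection to actually run it; ceph-salt (or cephadm's own host inventory) can supply the host list.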
Zitat von […]:
I did a dumb thing and removed OSDs across a failover domain and as a
result have 4 remapped+incomplete pgs. The data is still on the drives. Is
there a way to add one of these OSDs back into the cluster?
I’ve made an attempt to re-add the keyring using ‘ceph auth’ and […]
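For what it's worth, a sketch of what re-registering a surviving OSD's key and bringing it back up can look like. The OSD id, caps, and keyring path are assumptions for illustration, and the commands are echoed rather than executed:

```shell
# Placeholder OSD id and keyring path -- adjust to the real OSD.
OSD_ID=12
KEYRING="/var/lib/ceph/osd/ceph-$OSD_ID/keyring"

# Re-register the OSD's key with the monitors.
AUTH="ceph auth add osd.$OSD_ID osd 'allow *' mon 'allow profile osd' -i $KEYRING"
echo "$AUTH"        # dry run: print instead of executing

# Re-activate the OSD's LVM volumes so it rejoins with its data intact.
ACTIVATE="ceph-volume lvm activate --all"
echo "$ACTIVATE"    # dry run
```

Whether the remapped+incomplete PGs recover depends on the OSDs still holding authoritative copies, so this is only the mechanical re-add step, not a guarantee.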