[ceph-users] Re: help with failed osds after reboot

2020-06-15 Thread Paul Emmerich
On Mon, Jun 15, 2020 at 7:01 PM  wrote:
> Ceph version 10.2.7
>
> ceph.conf
> [global]
> fsid = 75d6dba9-2144-47b1-87ef-1fe21d3c58a8
> (...)
> mount_activate: Failed to activate
> ceph-disk: Error: No cluster conf found in /etc/ceph with fsid
> e1d7b4ae-2dcd-40ee-bea5-d103fe1fa9c9

--
Paul
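To see the mismatch being quoted here for yourself, one way (only a sketch, assuming the OSD data partition is /dev/sdb1 and is not currently mounted) is to compare the fsid in the local conf with the ceph_fsid file that ceph-disk wrote onto the OSD's data partition:

    grep fsid /etc/ceph/ceph.conf
    mount /dev/sdb1 /mnt
    cat /mnt/ceph_fsid
    umount /mnt

As far as I understand, ceph-disk activate refuses to proceed unless it finds a conf in /etc/ceph whose fsid matches that ceph_fsid file, which is what the "No cluster conf found" error is complaining about.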

[ceph-users] Re: help with failed osds after reboot

2020-06-15 Thread seth . duncan2
Ceph version 10.2.7

ceph.conf
[global]
fsid = 75d6dba9-2144-47b1-87ef-1fe21d3c58a8
mon_initial_members = chad, jesse, seth
mon_host = 192.168.10.41,192.168.10.40,192.168.10.39
mon warn on legacy crush tunables = false
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_require
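If the monitors are still reachable, it may also be worth checking (just a sketch) that the fsid they report matches the one in this conf:

    ceph fsid
    grep fsid /etc/ceph/ceph.conf

If both return 75d6dba9-2144-47b1-87ef-1fe21d3c58a8, the conf matches the running cluster, and the e1d7b4ae... fsid in the ceph-disk error is coming from the disks themselves rather than from the conf.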

[ceph-users] Re: help with failed osds after reboot

2020-06-12 Thread Marc Roos
Maybe you have the same issue?
https://tracker.ceph.com/issues/44102#change-167531

In my case an update(?) disabled the osd runlevels:

    systemctl is-enabled ceph-osd@0
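If the units really were disabled, as in that tracker issue, something like the following should bring them back. This is only a sketch, assuming OSDs 0-3 live on this host and that the ceph-osd@ units are what is supposed to start them on this setup:

    for id in 0 1 2 3; do
        systemctl enable ceph-osd@$id
        systemctl start ceph-osd@$id
    done
    systemctl enable ceph-osd.target

Only do this if 'systemctl is-enabled' actually reports the units as disabled; otherwise the problem lies elsewhere.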

[ceph-users] Re: help with failed osds after reboot

2020-06-12 Thread Eugen Block
Hi, which ceph release are you using? You mention ceph-disk, so your OSDs are not LVM based, I assume? I've seen these messages a lot when testing in my virtual lab environment, although I don't believe it's the cluster's fsid but rather the OSD's fsid that appears in the error message (the OSDs have th
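One quick way to answer the ceph-disk vs. LVM question (again, just a sketch) is to run the following on the OSD host:

    ceph-disk list
    lsblk

ceph-disk list shows the data and journal partitions it knows about; if the OSDs instead sit on LVM logical volumes, they were deployed with ceph-volume, which only exists from Luminous onward, so on 10.2.7 (Jewel) ceph-disk is the expected tool anyway.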