> […] ceph-volume, to create a legacy osd (you'd
> get warnings about a stray daemon). If that works, adopt the osd with
> cephadm.
> I don't have a better idea right now.
>
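For reference, a minimal sketch of that sequence, run on the OSD node (the device path and OSD id below are placeholders, not values from this thread):

  # recreate the legacy OSD with ceph-volume
  ceph-volume lvm create --osd-id 123 --data /dev/sdX

  # then hand it over to cephadm
  cephadm adopt --style legacy --name osd.123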
> Quoting Bob Gibson:
>
>> Here are the contents from the same directory on our osd node:
[…] unmanaged, and the
cluster is otherwise healthy, so unless anyone has any other ideas to offer I
guess I’ll just leave things as-is until the maintenance window.
Cheers,
/rjg
On Oct 25, 2024, at 10:31 AM, Bob Gibson wrote:
[…]
My hunch is that some persistent state is corrupted, or there’s some […]
[…] 'ceph orch host ls' output? I wonder
if it could be a connection issue, MTU mismatch, apparmor or firewall...
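If connectivity were the culprit, the usual first checks would look roughly like this (the hostname is a placeholder):

  ceph orch host ls                      # hosts should show as online, not offline
  ceph cephadm check-host osd-node-01    # have the mgr verify it can reach the host
  cephadm check-host                     # run locally on the suspect node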
Quoting Bob Gibson:
Sorry to resurrect this thread, but while I was able to get the
cluster healthy again by manually creating the osd, I'm still unable
to manage osds using the orchestrator.
Hi Frédéric,
> I think this message shows up because this very specific post-adoption 'osd'
> service has already been marked as 'deleted'. Maybe when you ran the command
> for the first time.
> The only reason it still shows up on 'ceph orch ls' is that 95 OSDs are still
> referencing this service.
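To see which spec those OSDs are still bound to, something along these lines should do (a sketch, not taken from the thread):

  ceph orch ls osd --export          # dump the osd service specs; the post-adoption one typically carries 'unmanaged: true'
  ceph orch ps --daemon-type osd     # list the osd daemons cephadm is tracking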
[…] cephadm debug logs - see
https://docs.ceph.com/en/latest/cephadm/operations/#watching-cephadm-log-messages
Cheers,
tobi
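Per the page linked above, the cephadm debug channel can be followed roughly like this:

  ceph config set mgr mgr/cephadm/log_to_cluster_level debug
  ceph -W cephadm --watch-debug
  # reset the level once done:
  ceph config set mgr mgr/cephadm/log_to_cluster_level info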
On Wed, Oct 23, 2024 at 20:15, Bob Gibson <r...@oicr.on.ca> wrote:
Sorry to resurrect this thread, but while I was able to get the cluster healthy
again by manually creating the osd, I'm still unable to manage osds using the orchestrator.
[…] Eugen Block wrote:
Glad to hear it worked out for you!
Quoting Bob Gibson:
I’ve been away on vacation and just got back to this. I’m happy to
report that manually recreating the OSD with ceph-volume and then
adopting it with cephadm fixed the problem.
Thanks again for your help.
Regards,
Eugen
Quoting Bob Gibson:
Hi,
We recently converted a legacy cluster running Quincy v17.2.7 to
cephadm. The conversion went smoothly and left all osds unmanaged by
the orchestrator as expected. We’re now in the process of converting
the osds to be managed by the orchestrator. We successfully converted a few of them, but […]
[…] Nothing seems wrong with that
per se.
Regarding the stuck device list, do you see the mgr logging anything
suspicious? Especially when you say that it only returns output after
a failover. Those two osd specs are not conflicting since the first is
"unmanaged" after adoption.
Is there something […]
[…] 'ceph-volume inventory' locally on that node? Do you see any
hints in the node's syslog? Maybe try a reboot or something?
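A sketch of those checks (assuming a containerized cephadm node; adjust to taste):

  cephadm ceph-volume inventory      # run locally on the osd node via the cephadm wrapper, or plain 'ceph-volume inventory'
  ceph orch device ls --refresh      # ask the orchestrator for a fresh device scan
  ceph mgr fail                      # last resort: fail over the mgr if the device list stays stale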
Quoting Bob Gibson:
Thanks for your reply Eugen. I’m fairly new to cephadm so I wasn’t
aware that we could manage the drives without rebuilding them.
However, we thought […]
Hi,
We recently converted a legacy cluster running Quincy v17.2.7 to cephadm. The
conversion went smoothly and left all osds unmanaged by the orchestrator as
expected. We’re now in the process of converting the osds to be managed by the
orchestrator. We successfully converted a few of them, but […]
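For reference, adopted OSDs are usually brought under management by applying an OSD service spec; a minimal illustrative example (not the spec used on this cluster) might look like:

  # osd-spec.yaml
  service_type: osd
  service_id: default_drive_group
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      all: true

  ceph orch apply -i osd-spec.yaml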