What is your cluster status (ceph -s)? I assume that either your cluster is not healthy or your crush rules don't cover an osd failure. Sometimes it helps to fail the active mgr (ceph mgr fail). Can you also share your 'ceph osd tree'? Do you use the default replicated_rule or any additional crush rules?
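Roughly the commands I have in mind (adjust the rule name if you use something other than the default replicated_rule; ceph mgr fail without a name fails the currently active mgr):

# ceph -s
# ceph osd tree
# ceph osd crush rule ls
# ceph osd crush rule dump replicated_rule
# ceph mgr fail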

Quoting Budai Laszlo <laszlo.bu...@gmail.com>:

Dear All,

I'm testing Ceph Quincy and I'm having problems with the cephadm orchestrator backend: when I try to use it to start or stop OSD daemons, nothing happens.

I have a "brand new" cluster deployed with cephadm. So far everything else that I tried worked just like in Pacific, but the ceph orch daemon start osd.X, ceph orch daemon stop osd.X is not working.

# ceph version
ceph version 17.2.3 (dff484dfc9e19a9819f375586300b3b79d80034d) quincy (stable)

# ceph orch status --detail
Backend: cephadm
Available: Yes
Paused: No
Host Parallelism: 10

# ceph osd stat
8 osds: 8 up (since 112m), 8 in (since 112m); epoch: e481

# ceph orch daemon stop osd.2
Scheduled to stop osd.2 on host 'storage3'

After several minutes nothing has changed:

# ceph osd stat
8 osds: 8 up (since 113m), 8 in (since 113m); epoch: e481
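
In case it helps, something along these lines should show whether the scheduled stop ever reached the host (the <fsid> below is just a placeholder for the actual cluster fsid):

# ceph orch ps storage3
# ssh storage3 systemctl status ceph-<fsid>@osd.2.service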


I have also tried to capture logs, but I don't see anything explaining why the request is not being fulfilled:

# ceph config set mgr mgr/cephadm/log_to_cluster_level debug
# ceph --watch-debug -W '*'
...

2022-09-29T12:28:02.756949+0000 mgr.monitor1.ossyne [DBG] from='client.15018 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "stop", "name": "osd.0", "target": ["mon-mgr", ""]}]: dispatch
2022-09-29T12:28:02.757415+0000 mgr.monitor1.ossyne [INF] Schedule stop daemon osd.0
2022-09-29T12:28:02.757477+0000 mgr.monitor1.ossyne [DBG] _kick_serve_loop

This is all I get.
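
For completeness, the cephadm module also has its own cluster log channel; commands along these lines should show more of what the serve loop is actually doing (the mgr daemon name is taken from the log lines above):

# ceph -W cephadm --watch-debug
# ceph log last 100 debug cephadm
# cephadm logs --name mgr.monitor1.ossyne   (run on the host where the active mgr is running)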

What am I doing wrong?


Thank you,
Laszlo
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


