Thanks, but it seems there is another issue.
Is there any way to upgrade the mgr (or any other service) without the orchestrator?
The cluster is online, but we cannot issue any commands to it.
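To make the question concrete, one manual workaround that might be what we need looks roughly like the following (just a sketch, not something we have run: it assumes the default cephadm layout, <fsid> is a placeholder, and mgr.ceph01.eydqvm is taken from the output below). Is something like this safe, or is there a better way?

    # 1. Edit the daemon's unit.run so its container image points at the
    #    17.2.8 image (the OSDs below are already on 17.2.8):
    sudo vi /var/lib/ceph/<fsid>/mgr.ceph01.eydqvm/unit.run

    # 2. Restart the daemon's systemd unit so it comes back on the new image:
    sudo systemctl restart ceph-<fsid>@mgr.ceph01.eydqvm.service

For reference, here is the current "ceph orch ps" output: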
NAME                         HOST    PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION    IMAGE ID      CONTAINER ID
alertmanager.ceph01          ceph01  *:9093,9094  running (10h)  6h ago     21M  63.1M    -        0.25.0     c8568f914cd2  0bf1d544a775
ceph-exporter.ceph01         ceph01               running (10h)  6h ago     21M  59.0M    -        17.2.6     c9a1062f7289  de3f438c448c
ceph-exporter.ceph02         ceph02               running (10h)  6h ago     21M  59.5M    -        17.2.6     c9a1062f7289  ba1edb035a66
crash.ceph01                 ceph01               running (7h)   6h ago     21M  8640k    -        17.2.6     2747c7f13104  e692b2314548
crash.ceph02                 ceph02               running (7h)   6h ago     21M  8568k    -        17.2.6     2747c7f13104  24535dbbfd76
crash.ceph03                 ceph03               running (7h)   6h ago     21M  7394k    -        17.2.6     2747c7f13104  86b981a9254c
grafana.ceph01               ceph01  *:3000       running (10h)  6h ago     21M  212M     -        9.4.7      954c08fa6188  7ef73e324096
mds.cephfs1.ceph01.niiova    ceph01               stopped        6h ago     11h  -        -        <unknown>  <unknown>     <unknown>
mds.cephfs1.ceph02.qsrlyo    ceph02               stopped        6h ago     21M  -        -        <unknown>  <unknown>     <unknown>
mds.cephfs1.ceph03.kuysxq    ceph03               stopped        6h ago     21M  -        -        <unknown>  <unknown>     <unknown>
mgr.ceph01.eydqvm            ceph01  *:8443,9283  running (7h)   6h ago     21M  439M     -        17.2.6     2747c7f13104  6093b54fda30
mgr.ceph02.wwiqqs            ceph02  *:8443,9283  running (7h)   6h ago     21M  576M     -        17.2.6     2747c7f13104  bd8a6a0e0dc7
mgr.ceph03.orwlyv            ceph03  *:8443,9283  running (7h)   6h ago     21M  438M     -        17.2.6     2747c7f13104  61d563cc5fd2
mon.ceph01                   ceph01               running (7h)   6h ago     21M  343M     2048M    17.2.6     2747c7f13104  33c26b68b7e0
mon.ceph02                   ceph02               running (7h)   6h ago     21M  324M     2048M    17.2.6     2747c7f13104  479224b3c6d9
mon.ceph03                   ceph03               running (7h)   6h ago     21M  314M     2048M    17.2.6     2747c7f13104  affa57e31300
node-exporter.ceph01         ceph01  *:9100       running (10h)  6h ago     21M  37.5M    -        1.5.0      0da6a335fe13  a868bdcacffb
node-exporter.ceph02         ceph02  *:9100       running (10h)  6h ago     21M  36.5M    -        1.5.0      0da6a335fe13  ffd26fa4c977
node-exporter.ceph03         ceph03  *:9100       running (10h)  6h ago     21M  38.3M    -        1.5.0      0da6a335fe13  432fb5b9e903
osd.0                        ceph01               running (10h)  6h ago     21M  7501M    9461M    17.2.8     259b35566514  3a451ff18bbf
osd.1                        ceph02               running (10h)  6h ago     21M  8623M    9589M    17.2.8     259b35566514  a420f4373061
osd.2                        ceph03               running (10h)  6h ago     7h   3270M    9845M    17.2.8     259b35566514  57eba1a3fcaf
osd.3                        ceph02               running (10h)  6h ago     7h   4580M    9589M    17.2.8     259b35566514  6e4ffb81fee5
osd.4                        ceph03               running (10h)  6h ago     21M  3928M    9845M    17.2.8     259b35566514  111b8d5ffb54
osd.5                        ceph01               running (10h)  6h ago     21M  11.1G    9461M    17.2.8     259b35566514  50727e330c7a
osd.6                        ceph02               running (10h)  6h ago     21M  8811M    9589M    17.2.8     259b35566514  602b72c69ab6
osd.7                        ceph03               running (10h)  6h ago     7h   4491M    9845M    17.2.8     259b35566514  1c16697e2c4c
osd.8                        ceph01               error          6h ago     21M  -        9461M    <unknown>  <unknown>     <unknown>
osd.9                        ceph02               running (10h)  6h ago     21M  9769M    9589M    17.2.8     259b35566514  3e812c1dd841
osd.10                       ceph03               running (10h)  6h ago     6h   2877M    9845M    17.2.8     259b35566514  2102dcc50ead
osd.11                       ceph01               running (10h)  6h ago     21M  18.6G    9461M    17.2.8     259b35566514  9422bb2e4dca
osd.12                       ceph02               running (10h)  6h ago     21M  24.8G    9589M    17.2.8     259b35566514  736ae6924f2e
osd.13                       ceph03               running (6h)   6h ago     21M  40.0G    9845M    17.2.8     259b35566514  b0fd0200dd0e
osd.14                       ceph01               running (10h)  6h ago     6h   4578M    9461M    17.2.8     259b35566514  ec79e5a13a94
osd.15                       ceph02               running (10h)  6h ago     21M  16.7G    9589M    17.2.8     259b35566514  a9c876d93119
osd.16                       ceph03               running (10h)  6h ago     21M  4951M    9845M    17.2.8     259b35566514  49151f7eee3c
osd.17                       ceph01               running (10h)  6h ago     21M  4980M    9461M    17.2.8     259b35566514  46888e5de208
osd.18                       ceph02               running (10h)  6h ago     6h   3290M    9589M    17.2.8     259b35566514  8a37412c3c1a
osd.19                       ceph03               running (10h)  6h ago     21M  8666M    9845M    17.2.8     259b35566514  7f6b7da46bd7
osd.20                       ceph01               running (10h)  6h ago     21M  6821M    9461M    17.2.8     259b35566514  7a82a72dd3fe
osd.21                       ceph03               running (10h)  6h ago     21M  7521M    9845M    17.2.8     259b35566514  6ca1c1e49295
osd.22                       ceph02               running (6h)   6h ago     21M  13.8G    9589M    17.2.8     259b35566514  b533f0882529
osd.23                       ceph01               running (10h)  6h ago     21M  11.1G    9461M    17.2.8     259b35566514  34abef4862bb
prometheus.ceph02            ceph02  *:9095       running (7h)   6h ago     21M  312M     -        2.43.0     a07b618ecd1d  8cf2bb6067e1
rgw.s3service.ceph01.sqnfig  ceph01  *:8081       error          6h ago     21M  -        -        <unknown>  <unknown>     <unknown>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io