On 02/01/2025 16:37, Redouane Kachach wrote:
Just to comment on ceph.target: technically, in a containerized Ceph
deployment a node can host daemons from *many Ceph clusters* (each with its
own ceph_fsid).

ceph.target is a global unit, and it is the root for all the clusters
running on the node. There is also a separate target specific to each
cluster (ceph-<fsid>.target). From my test environment, where I created two
clusters and forced maintenance mode on the first one only:

[root@ceph-node-2 ~]# systemctl list-dependencies ceph.target
ceph.target
○ ├─ceph-789c5638-bec0-11ef-9350-5254002ff0d8.target
○ │ ├─ceph-789c5638-bec0-11ef-9350-5254002ff...@ceph-exporter.ceph-node-2.service
○ │ ├─ceph-789c5638-bec0-11ef-9350-5254002ff...@crash.ceph-node-2.service
× │ ├─ceph-789c5638-bec0-11ef-9350-5254002ff...@mgr.ceph-node-2.ptlcoi.service
○ │ ├─ceph-789c5638-bec0-11ef-9350-5254002ff...@mon.ceph-node-2.service
× │ └─ceph-789c5638-bec0-11ef-9350-5254002ff...@node-exporter.ceph-node-2.service
● └─ceph-a3cf42a0-becc-11ef-9470-52540012a496.target
●   ├─ceph-a3cf42a0-becc-11ef-9470-52540012a...@ceph-exporter.ceph-node-2.service
●   ├─ceph-a3cf42a0-becc-11ef-9470-52540012a...@crash.ceph-node-2.service
●   ├─ceph-a3cf42a0-becc-11ef-9470-52540012a...@mgr.ceph-node-2.bodyuz.service
●   ├─ceph-a3cf42a0-becc-11ef-9470-52540012a...@mon.ceph-node-2.service
●   └─ceph-a3cf42a0-becc-11ef-9470-52540012a...@node-exporter.ceph-node-2.service
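(As an aside, the per-cluster target name is derived directly from the
cluster fsid. A minimal sketch of the naming scheme; the helper function
here is hypothetical, not actual cephadm code:)

```python
def cluster_target(fsid: str) -> str:
    # Hypothetical helper illustrating cephadm's unit naming: each
    # containerized cluster gets its own "ceph-<fsid>.target", and all of
    # these hook into the umbrella "ceph.target" on the node.
    return f"ceph-{fsid}.target"

print(cluster_target("789c5638-bec0-11ef-9350-5254002ff0d8"))
# ceph-789c5638-bec0-11ef-9350-5254002ff0d8.target
```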

*Global target:*
[root@ceph-node-2 ~]# systemctl is-active ceph.target
active

*First cluster:*
[root@ceph-node-2 ~]# systemctl is-active ceph-789c5638-bec0-11ef-9350-5254002ff0d8.target
inactive

*Second cluster:*
[root@ceph-node-2 ~]# systemctl is-active ceph-a3cf42a0-becc-11ef-9470-52540012a496.target
active


Right, so in my view that's one more reason *not* to use maintenance mode for a distro upgrade: stopping ceph.target ensures that all Ceph-related services on a node are stopped, even in the (somewhat uncommon) case where that node runs services belonging to multiple clusters. Wouldn't you agree?

Cheers,
Florian
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io