Not sure why you need the ceph.conf in a cephadm installation (as most of
the config is done through the config db). Anyway, if you are using a
(modern) cephadm installation, you can just
put the "_admin" label on the hosts where you would like to have the
config, and the ceph.conf and keyring will be copied to
/var/lib/ceph/<fsid>/config/
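
For illustration (with a hypothetical host named "node1"), the label can
be added through the orchestrator:

    # apply the _admin label; cephadm then distributes the ceph.conf
    # and the admin keyring to that host
    ceph orch host label add node1 _admin

    # verify which hosts carry the label (see the LABELS column)
    ceph orch host ls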

When you start a cephadm shell, it normally first checks whether there
is a running monitor; if one is found, it uses its config file,
otherwise it checks the directory above to get the fsid.
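
If you need to point the shell at a specific cluster explicitly, it can
be done along these lines (keeping the <fsid> placeholder from above):

    cephadm shell --fsid <fsid> \
        -c /var/lib/ceph/<fsid>/config/ceph.conf \
        -k /var/lib/ceph/<fsid>/config/ceph.client.admin.keyring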

Regards,
Redo.

On Tue, Nov 18, 2025 at 2:28 PM Janne Johansson <[email protected]> wrote:

> > > ... that said, before Mimic brought us the central config db,
> > > maintaining <clustername>.conf across clusters was a pain. When
> > > testing or troubleshooting one would need to persist changes across
> > > all nodes, and in the heat of an escalation it was all too easy to
> > > forget to persist changes. Even with the file templated in
> > > automation, consider what happens when an important change is made
> > > while one or more nodes are down ... then they come back up.
> >
> > No doubt, but my question is what happens if I put the ceph.conf in
> > place: could there be any unexpected side effects?
>
> ceph.conf keeps data in sections: all the global stuff gets read by
> all daemons and clients, then you can have OSD specifics, down to
> instance-specific data for osd.123 in a section of its own if need be,
> so it is certainly designed so that one file can serve all the parts of
> a ceph cluster. But as said, now with the ceph config db you only
> need a super short conf with the IPs of the mons and possibly the fsid,
> and it will pick up the rest from the config db itself.
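>
> For illustration, such a sectioned file might look roughly like this
> (all values are placeholders):
>
>     [global]
>     fsid = <fsid>
>     mon_host = 10.0.0.1, 10.0.0.2, 10.0.0.3
>
>     [osd]
>     # read by every OSD
>     osd_memory_target = 4294967296
>
>     [osd.123]
>     # read only by this one instance
>     debug_osd = 10
>
> The minimal variant is then just the [global] block with mon_host and
> possibly fsid.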
>
> --
> May the most significant bit of your life be positive.