> > ... that said, before Mimic brought us the central config db, maintaining 
> > <clustername>.conf across clusters was a pain. When testing or 
> > troubleshooting one would need to persist changes across all nodes, and in 
> > the heat of an escalation it was all too easy to forget to persist changes. 
> >  Even with the file templated in automation, consider what happens when an 
> > important change happens while one or more nodes is down ... then it comes 
> > back up.
>
> No doubt, but my question is: what happens if I keep settings in
> ceph.conf? Can there be any unexpected side effects?

ceph.conf keeps data in sections: everything in [global] is read by all
daemons and clients, then you can narrow down to OSD-specific settings,
and even to instance-specific settings for osd.123 in a section of its
own if need be. So it is certainly designed so that one file can serve
every part of a Ceph cluster. But as said, now with the ceph config db
you only need a very short conf with the IPs of the mons (and possibly
the fsid), and the daemons will pick up the rest from the config db
itself.
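To make the section layout concrete, here is a sketch of a sectioned
ceph.conf; the fsid, mon IPs, and the osd.123 override are placeholder
values for illustration, not from this thread:

```ini
[global]
# Read by every daemon and client in the cluster.
fsid = 9f2c1a4e-0000-0000-0000-000000000000   ; placeholder
mon_host = 10.0.0.1,10.0.0.2,10.0.0.3         ; placeholder mon IPs

[osd]
# Applies to all OSD daemons.
osd_memory_target = 4294967296                ; 4 GiB

[osd.123]
# Overrides the [osd] section for this one OSD instance only.
osd_memory_target = 8589934592                ; 8 GiB
```

And with the config db holding everything else, the minimal file can
shrink to roughly:

```ini
[global]
fsid = 9f2c1a4e-0000-0000-0000-000000000000   ; placeholder
mon_host = 10.0.0.1,10.0.0.2,10.0.0.3         ; placeholder mon IPs
```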

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
