I think heartbeats will fail over to the public network if the private network doesn't work -- it may not always have done that.
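If in doubt, the OSD metadata shows which addresses a daemon has registered for its front/back and heartbeat connections. A minimal check, assuming OSD id 0 and jq available on the host:

  # show the network addresses osd.0 has registered (id 0 is just an example)
  ceph osd metadata 0 | jq '{front_addr, back_addr, hb_front_addr, hb_back_addr}'

With cluster_network removed, back_addr and hb_back_addr should end up on the public range once the OSD has restarted.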
>> Hi
>>
>> Cephadm Reef 18.2.0.
>>
>> We would like to remove our cluster_network without stopping the
>> cluster and without having to route between the networks.
>>
>>   global advanced cluster_network 192.168.100.0/24 *
>>   global advanced public_network  172.21.12.0/22   *
>>
>> The documentation[1] states:
>>
>> "
>> You may specifically assign static IP addresses or override
>> cluster_network settings using the cluster_addr setting for specific
>> OSD daemons.
>> "
>>
>> So for one OSD at a time I could set cluster_addr to override the
>> cluster_network IP and use the public_network IP instead? As the
>> containers are using host networking they have access to both IPs and
>> will just switch the traffic at layer 2, avoiding routing?
>>
>> When all OSDs are running with a public_network IP set via
>> cluster_addr we can just delete the cluster_network setting and then
>> remove all the cluster_addr settings, as with no cluster_network
>> setting the public_network setting will be used?
>>
>> We tried it with one OSD and it seems to work. Does anyone see a
>> problem with this approach?
>
> It turned out to be even simpler for this setup, where the OSD
> containers have access to both host networks:
>
> 1) ceph config rm global cluster_network -> nothing happened, no
>    automatic redeploy or restart
>
> 2) Restart the OSDs
>
> Best regards,
>
> Torkil
>
>> Thanks
>>
>> Torkil
>>
>> [1]
>> https://docs.ceph.com/en/latest/rados/configuration/network-config-ref/#id3
>
> --
> Torkil Svensgaard
> Sysadmin
> MR-Forskningssektionen, afs. 714
> DRCMR, Danish Research Centre for Magnetic Resonance
> Hvidovre Hospital
> KettegĂ„rd AllĂ© 30
> DK-2650 Hvidovre
> Denmark
> Tel: +45 386 22828
> E-mail: tor...@drcmr.dk
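For the archives, here are the steps above spelled out as commands. The OSD id, the address, and the "osd" orch service name are placeholders, so adjust them to your cluster:

  # per-OSD variant from the original question: point one OSD's back
  # side at its public IP, then restart that daemon
  ceph config set osd.0 cluster_addr 172.21.12.10
  ceph orch daemon restart osd.0

  # simpler variant that worked here: drop the cluster network globally
  # (this does not trigger any redeploy or restart by itself) ...
  ceph config rm global cluster_network
  # ... and restart the OSDs, e.g. via their orch service
  ceph orch restart osd

  # finally, remove any per-OSD overrides again
  ceph config rm osd.0 cluster_addr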