(newbie warning - my first go-round with Ceph, doing a lot of reading) I have a small Ceph cluster, four storage nodes total, three dedicated to data (OSDs) and one for metadata. One client machine.
I made a network change. When I installed and configured the cluster, it was done using each system's 10Gb interface information. I now have everything on a 100Gb network (IB in Ethernet mode). My question is: what is the most expedient way to change the Ceph config so that all nodes use the 100Gb network? Can I shut down the cluster, edit one or more .conf files, and restart, or do I need to reconfigure from scratch?

Thanks,
Jim

cepher@srv-01:~$ sudo ceph --version
ceph version 11.2.1 (e0354f9d3b1eea1d75a7dd487ba8098311be38a7)

cepher@srv-01:~$ sudo ceph -s
    cluster f201e454-9c73-4b29-abe1-48dd609266a6
     health HEALTH_OK
     monmap e4: 3 mons at {dgx-srv-04=10.33.3.46:6789/0,dgx-srv-05=10.33.3.48:6789/0,dgx-srv-06=10.33.3.50:6789/0}
            election epoch 12, quorum 0,1,2 dgx-srv-04,dgx-srv-05,dgx-srv-06
      fsmap e5: 1/1/1 up {0=dgx-srv-03=up:active}
        mgr active: dgx-srv-06
            standbys: dgx-srv-04, dgx-srv-05
     osdmap e114: 18 osds: 18 up, 18 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
      pgmap v7946: 3072 pgs, 3 pools, 2148 bytes data, 20 objects
            99684 MB used, 26717 GB / 26814 GB avail
                3072 active+clean

cepher@srv-01:~$ uname -a
Linux srv-01 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
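P.S. From my reading so far, it looks like the change would mostly come down to the settings below. This is only my rough sketch, with a made-up 10.44.3.0/24 subnet standing in for the 100Gb network (the hostnames are my real ones, the addresses are not) -- corrections very welcome:

# /etc/ceph/ceph.conf on every node (hypothetical 100Gb addresses)
[global]
    public network  = 10.44.3.0/24       # client <-> MON/MDS/OSD traffic
    cluster network = 10.44.3.0/24       # OSD replication/heartbeat; could be a separate subnet
    mon host = 10.44.3.46, 10.44.3.48, 10.44.3.50
    mon initial members = dgx-srv-04, dgx-srv-05, dgx-srv-06

# My understanding is that the OSDs and MDS simply bind to whatever interface
# matches public/cluster network when they restart, but the monitor addresses
# are also recorded in the monmap, so for the MONs something like this seems
# to be needed (with the monitors stopped):
#   ceph-mon -i dgx-srv-04 --extract-monmap /tmp/monmap
#   monmaptool --rm dgx-srv-04 --rm dgx-srv-05 --rm dgx-srv-06 /tmp/monmap
#   monmaptool --add dgx-srv-04 10.44.3.46:6789 --add dgx-srv-05 10.44.3.48:6789 \
#              --add dgx-srv-06 10.44.3.50:6789 /tmp/monmap
#   ceph-mon -i <mon id> --inject-monmap /tmp/monmap    # run on every monitor
# Is that roughly right, or would rebuilding from scratch be saner?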