I figured out why this was happening. When I went through the quick start
guide, I created a directory on the admin node that was /home/ceph/storage,
and this is where ceph.conf, ceph.log, keyrings, etc. ended up. What I
realized, though, is that when I was running the ceph commands on the admin
node [...]
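(A note for anyone hitting the same thing: the ceph CLI can pick up
ceph.conf, and depending on configuration the admin keyring, from its
working directory when they aren't in the standard locations, so which
directory you run the commands from matters. A minimal sketch, assuming
the quick-start layout above; "admin-node" is a placeholder hostname:

    # Run ceph from the directory ceph-deploy wrote to, or point it at
    # the files explicitly:
    cd /home/ceph/storage
    ceph -c ./ceph.conf --keyring ./ceph.client.admin.keyring -s

    # Or push the config and admin keyring to /etc/ceph on that host so
    # ceph works from any directory ("admin-node" is a placeholder):
    ceph-deploy admin admin-node
    ceph -s
)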
> [...] {node01=10.32.0.181:6789/0,node02=10.32.0.182:6789/0,node03=10.32.0.183:6789/0},
> election epoch 14, quorum 0,1,2 node01,node02,node03
>
> Wolfgang
>
> On 01/01/2014 10:29 PM, Matt Rabbitt wrote:
> > I only have four because I want to remove the original one I used to
> > create the cluster. I tried [...]
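(On actually removing that original monitor once the remaining three hold
quorum, one common sequence looks roughly like this; the old monitor's
hostname "node00" is a placeholder:

    # Drop the old monitor from the monmap, then clean up its daemon and
    # data directory with ceph-deploy:
    ceph mon remove node00
    ceph-deploy mon destroy node00

    # Afterwards, remove it from mon_initial_members / mon_host in
    # ceph.conf and push the updated config to the remaining nodes.
)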
> mon_initial_members = node01,node02,node03
> mon_host = 10.32.0.181,10.32.0.182,10.32.0.183
>
> hth
> wogri
> --
> http://www.wogri.at
>
> On 01 Jan 2014, at 21:55, Matt Rabbitt wrote:
>
> > I created a cluster, four monitors, and 12 OSDs using the ceph-deploy
> > tool. I initially [...]
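(To make wogri's suggestion concrete, the relevant ceph.conf fragment
would look roughly like this; the public network subnet is an assumption
based on the 10.32.0.x addresses in this thread:

    [global]
    mon_initial_members = node01,node02,node03
    mon_host = 10.32.0.181,10.32.0.182,10.32.0.183
    # assumed subnet; per the original post, needed so ceph-deploy can
    # add the extra monitors
    public network = 10.32.0.0/24
)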
I created a cluster, four monitors, and 12 OSDs using the ceph-deploy tool.
I initially created this cluster with one monitor, then added a "public
network" statement in ceph.conf so that I could use ceph-deploy to add the
other monitors. When I run ceph -w now everything checks out and all
monitors [...]
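(For readers following along, the workflow described above looks roughly
like this with ceph-deploy; the hostnames match the thread, while the
subnet and the exact mon subcommand depend on your ceph-deploy version:

    # Create the cluster with a single initial monitor:
    ceph-deploy new node01
    ceph-deploy mon create-initial

    # Add "public network = 10.32.0.0/24" (assumed subnet) under [global]
    # in ceph.conf, push the config, then add the remaining monitors:
    ceph-deploy --overwrite-conf config push node01 node02 node03
    ceph-deploy mon create node02 node03
)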