Dear Joao, dear Ceph users,
Thanks for your fast reply.
I couldn't get to my Ceph cluster until now; I was at the OpenStack Summit
in Austin, TX, and simply had no time.
I just fixed the monitor by removing and re-adding it, and it is up and running
again.
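For the archives, the remove/re-add went roughly like this (a sketch from memory rather than my exact shell history; the mon name ceph2 and the default store path match my setup, and the systemctl calls assume a systemd-based node):
--
# remove the broken monitor from the monmap (run from a node that has quorum)
ceph mon remove ceph2

# on ceph2: stop the daemon and wipe the old monitor store
systemctl stop ceph-mon@ceph2          # or your init system's equivalent
rm -rf /var/lib/ceph/mon/ceph-ceph2    # default store path for cluster "ceph"

# fetch the current monmap and the mon. keyring from the running cluster
ceph mon getmap -o /tmp/monmap
ceph auth get mon. -o /tmp/mon.keyring

# rebuild the monitor store and start the daemon again;
# it takes its address from ceph.conf (or pass --public-addr explicitly)
ceph-mon -i ceph2 --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
chown -R ceph:ceph /var/lib/ceph/mon/ceph-ceph2   # if the daemons run as the ceph user
systemctl start ceph-mon@ceph2
--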
I still wonder though
Hi Ceph users,
This is my first post on this mailing list. Hope it's the correct one. Please
redirect me to the right place in case it is not.
I am running a small Ceph cluster (3 nodes, with 3 OSDs and 1 monitor on each
of them).
Guess what, it is used as Cinder/Glance/Nova RBD storage for OpenStack.
Hi Mihai, Grüezi Ivan :)
Thanks to both of you for the fast reply. It's appreciated.
When I bootstrapped the cluster I used
--
osd_pool_default_size = 3
osd_pool_default_min_size = 2
--
in ceph.conf. This is also set for each pool at the moment.
I understood from the docs that this means each object is stored three times
(size = 3) and that I/O continues as long as at least two replicas are
available (min_size = 2).
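Since those defaults only apply when a pool is created, I also checked what the pools actually use; something like the following should show (and, if needed, correct) the values. The pool name rbd here is only an example, not necessarily one of my pools:
--
# replicated pools list their size and min_size in the osd dump
ceph osd dump | grep 'replicated size'

# or query a single pool (pool name is an example)
ceph osd pool get rbd size
ceph osd pool get rbd min_size

# adjust a pool that was created with other values
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2
--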
Dear Ceph Users,
I have the following situation in my small 3-node cluster:
--snip
root@ceph2:~# ceph status
    cluster d1af2097-8535-42f2-ba8c-0667f90cab61
     health HEALTH_WARN
            1 mons down, quorum 0,1 ceph0,ceph1
     monmap e1: 3 mons at {ceph0=10.0.0.30:6789/0,ceph1=10.0.0.31: