Re: [ceph-users] Upgrade from 0.61.4 to 0.61.6 mon failed. Upgrade to 0.61.7 mon still failed.

2013-07-26 Thread Joao Eduardo Luis
On 26-07-2013 03:49, Keith Phua wrote: Hi all, 2 days ago, I upgraded one of my mons from 0.61.4 to 0.61.6. The mon failed to start. I checked the mailing list and found reports of mons failing after upgrading to 0.61.6, so I waited for the next release and upgraded the failed mon from 0.61.6 to

Re: [ceph-users] ceph-deploy and bugs 5195/5205: mon.host1 does not exist in monmap, will attempt to join an existing cluster

2013-07-26 Thread Sage Weil
On Thu, 25 Jul 2013, Josh Holland wrote: > Hi Sage, > > On 25 July 2013 17:21, Sage Weil wrote: > > I suspect the difference here is that the DNS names you are specifying in > > ceph-deploy new do not match. > > Aha, this could well be the problem. The current DNS names resolve to > the address
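A sketch of the check Sage is suggesting; the hostnames here are hypothetical. The names passed to ceph-deploy new should match each monitor host's short hostname and resolve to the address the monitor will bind to:

    ceph-deploy new mon1 mon2 mon3   # names must match `hostname -s` on each monitor node
    getent hosts mon1                # verify the name resolves to the intended monitor IP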

[ceph-users] Cinder volume creation issues

2013-07-26 Thread johnu
Hi all, I need to know whether someone else has also faced the same issue. I tried an OpenStack + Ceph integration and found that I could create volumes from Horizon, and they are created in RADOS. When I check the created volumes in the admin panel, all volumes are shown to be created on the same h
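A quick way to confirm the volumes really land in RADOS (a sketch; the pool name 'volumes' is an assumption and depends on the rbd_pool setting in cinder.conf, and the volume id is a placeholder):

    rbd --pool volumes ls                   # list the RBD images Cinder created
    rbd --pool volumes info volume-<uuid>   # inspect one of them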

Re: [ceph-users] Cinder volume creation issues

2013-07-26 Thread Gregory Farnum
On Fri, Jul 26, 2013 at 9:17 AM, johnu wrote: > Hi all, > I need to know whether someone else has also faced the same issue. > > I tried an OpenStack + Ceph integration and found that I could create > volumes from Horizon, and they are created in RADOS. > > When I check the created volumes in ad

Re: [ceph-users] Cinder volume creation issues

2013-07-26 Thread johnu
Greg, I verified on all cluster nodes that rbd_secret_uuid is the same as what virsh secret-list shows. And if I do virsh secret-get-value of this uuid, I get back the auth key for client.volumes. What did you mean by same configuration? Did you mean the same secret for all compute nodes? when w
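For reference, the verification described above looks roughly like this on each node (a sketch; the uuid is a placeholder for whatever virsh secret-list reports):

    virsh secret-list                 # uuid shown here should equal rbd_secret_uuid in cinder.conf
    virsh secret-get-value <uuid>     # should print the same key as:
    ceph auth get-key client.volumes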

Re: [ceph-users] Cinder volume creation issues

2013-07-26 Thread Gregory Farnum
On Fri, Jul 26, 2013 at 9:35 AM, johnu wrote: > Greg, > I verified on all cluster nodes that rbd_secret_uuid is the same as what > virsh secret-list shows. And if I do virsh secret-get-value of this uuid, I > get back the auth key for client.volumes. What did you mean by same > configuration? Did y

Re: [ceph-users] Cinder volume creation issues

2013-07-26 Thread johnu
Greg, Yes, the outputs match.

master node:

    ceph auth get-key client.volumes
    AQC/ze1R2EOWNBAAmLUE4U7zO1KafZ/CzVVTqQ==

    virsh secret-get-value bdf77f5d-bf0b-1053-5f56-cd76b32520dc
    AQC/ze1R2EOWNBAAmLUE4U7zO1KafZ/CzVVTqQ==

/etc/cinder/cinder.conf:

    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    rbd

Re: [ceph-users] Cinder volume creation issues

2013-07-26 Thread Gregory Farnum
On Fri, Jul 26, 2013 at 10:11 AM, johnu wrote: > Greg, > Yes, the outputs match Nope, they don't. :) You need the secret_uuid to be the same on each node, because OpenStack is generating configuration snippets on one node (which contain these secrets) and then shipping them to another node where
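In other words (a sketch, reusing the uuid quoted earlier in the thread): the single uuid written in cinder.conf must already exist as a libvirt secret on every compute node that might run the instance:

    # cinder.conf on the node running cinder-volume
    rbd_secret_uuid=bdf77f5d-bf0b-1053-5f56-cd76b32520dc

    # on EVERY compute node, that same uuid must be known to libvirt
    virsh secret-list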

Re: [ceph-users] Cinder volume creation issues

2013-07-26 Thread johnu
Greg, :) I am not seeing where the mistake in the configuration is. virsh secret-define gave different secrets on each node:

    sudo virsh secret-define --file secret.xml
    sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.volumes.key)

On Fri, Jul 26, 2013 at 10:16 AM, Gr

Re: [ceph-users] Cinder volume creation issues

2013-07-26 Thread Gregory Farnum
IIRC, secret-define will give you a uuid to use, but you can also just tell it to use a predefined one. You need to do so. -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Fri, Jul 26, 2013 at 10:32 AM, johnu wrote: > Greg, > :) I am not getting where was the mis
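A sketch of the sequence Greg describes, run identically on each compute node; it assumes secret.xml pins the uuid (as in Mike Dawson's example below) and that the client.volumes key is retrievable:

    sudo virsh secret-define --file secret.xml    # uses the uuid fixed inside secret.xml
    sudo virsh secret-set-value \
        --secret bdf77f5d-bf0b-1053-5f56-cd76b32520dc \
        --base64 $(ceph auth get-key client.volumes)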

Re: [ceph-users] Basic questions

2013-07-26 Thread Hariharan Thantry
Hi John, Thanks for the responses. For (a), I remember reading somewhere that one can only run a maximum of one monitor per node; I assume that implies the single monitor process will be responsible for ALL Ceph clusters on that node, correct? So (b) isn't really a Ceph issue; that's nice to know. An

Re: [ceph-users] Cinder volume creation issues

2013-07-26 Thread Mike Dawson
You can specify the uuid in the secret.xml file like:

    <secret ephemeral='no' private='no'>
      <uuid>bdf77f5d-bf0b-1053-5f56-cd76b32520dc</uuid>
      <usage type='ceph'>
        <name>client.volumes secret</name>
      </usage>
    </secret>

Then use that same uuid on all machines in cinder.conf:

    rbd_secret_uuid=bdf77f5d-bf0b-1053-5f56-cd76b32520dc

Also, the column you are referring to in the Open

Re: [ceph-users] Basic questions

2013-07-26 Thread John Wilkins
(a) This is true when using ceph-deploy for a cluster: it sets up one Ceph monitor per node. You can have many Ceph monitors, but the typical high-availability cluster has 3-5 monitor nodes. With a manual install, you could conceivably install multiple monitors onto a single node for t
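A minimal sketch of the manual multi-monitor layout John mentions (hostnames, addresses, and ports are hypothetical); each monitor on the host needs its own port, and each id gets its own default data directory:

    [mon.a]
        host = node1
        mon addr = 192.168.0.10:6789

    [mon.b]
        host = node1
        mon addr = 192.168.0.10:6790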

Re: [ceph-users] Basic questions

2013-07-26 Thread Hariharan Thantry
John, Thanks for the really insightful responses! It would be nice to know what the dominant deployment scenario is for the native case (my question (c)). Do they usually end up with something like OCFS2 on top of RBD, or do they go with CephFS? Thanks, Hari On Fri, Jul 26