On 26-07-2013 03:49, Keith Phua wrote:
Hi all,
Two days ago I upgraded one of my mons from 0.61.4 to 0.61.6. The mon failed to
start. I checked the mailing list and found reports of mons failing to start
after upgrading to 0.61.6, so I waited for the next release and upgraded the
failed mon from 0.61.6 to
On Thu, 25 Jul 2013, Josh Holland wrote:
> Hi Sage,
>
> On 25 July 2013 17:21, Sage Weil wrote:
> > I suspect the difference here is that the DNS names you are specifying in
> > ceph-deploy new do not match.
>
> Aha, this could well be the problem. The current DNS names resolve to
> the address
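For reference, a minimal sanity check (not from the thread; the monitor
hostnames mon1-mon3 are hypothetical): the names passed to ceph-deploy new
should resolve to the addresses the monitors will actually bind to, and match
each host's own short hostname.
# names given to ceph-deploy must resolve consistently (hostnames are examples)
ceph-deploy new mon1 mon2 mon3
# on each monitor host, check what the name resolves to ...
getent hosts mon1
# ... and that it matches the host's short name
hostname -s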
Hi all,
I need to know whether someone else has also faced the same issue.
I tried OpenStack + Ceph integration. I have seen that I can create
volumes from Horizon and they are created in RADOS.
When I check the created volumes in the admin panel, all volumes are shown to
be created on the same h
On Fri, Jul 26, 2013 at 9:17 AM, johnu wrote:
> Hi all,
> I need to know whether someone else has also faced the same issue.
>
>
> I tried OpenStack + Ceph integration. I have seen that I can create
> volumes from Horizon and they are created in RADOS.
>
> When I check the created volumes in ad
Greg,
I verified on all cluster nodes that rbd_secret_uuid is the same as the one in
virsh secret-list. And if I do virsh secret-get-value on this uuid, I
get back the auth key for client.volumes. What did you mean by the same
configuration? Did you mean the same secret for all compute nodes?
when w
On Fri, Jul 26, 2013 at 9:35 AM, johnu wrote:
> Greg,
> I verified on all cluster nodes that rbd_secret_uuid is the same as the one in
> virsh secret-list. And if I do virsh secret-get-value on this uuid, I
> get back the auth key for client.volumes. What did you mean by the same
> configuration? Did y
Greg,
Yes, the outputs match
master node:
ceph auth get-key client.volumes
AQC/ze1R2EOWNBAAmLUE4U7zO1KafZ/CzVVTqQ==
virsh secret-get-value bdf77f5d-bf0b-1053-5f56-cd76b32520dc
AQC/ze1R2EOWNBAAmLUE4U7zO1KafZ/CzVVTqQ==
/etc/cinder/cinder.conf
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd
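For context, a minimal sketch of the rbd section of /etc/cinder/cinder.conf
from that era; the pool and user names below are assumptions, only the driver
and the secret uuid come from the thread:
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes        # assumed pool name
rbd_user=volumes        # assumed cephx user (matches client.volumes above)
rbd_secret_uuid=bdf77f5d-bf0b-1053-5f56-cd76b32520dc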
On Fri, Jul 26, 2013 at 10:11 AM, johnu wrote:
> Greg,
> Yes, the outputs match
Nope, they don't. :) You need the secret_uuid to be the same on each
node, because OpenStack is generating configuration snippets on one
node (which contain these secrets) and then shipping them to another
node where
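One way to check this (a sketch, not from the thread) is to compare the
rbd_secret_uuid that Cinder hands out in the connection info with the secrets
libvirt actually knows about on each compute node:
# uuid Cinder will include in the connection info
grep rbd_secret_uuid /etc/cinder/cinder.conf
# secrets registered with libvirt on this compute node
virsh secret-list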
Greg,
:) I am not seeing where the mistake in the configuration was.
virsh secret-define gave different secrets:
sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.volumes.key)
On Fri, Jul 26, 2013 at 10:16 AM, Gr
IIRC, secret-define will give you a uuid to use, but you can also just
tell it to use a predefined one. You need to do so.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Fri, Jul 26, 2013 at 10:32 AM, johnu wrote:
> Greg,
> :) I am not getting where was the mis
Hi John,
Thanks for the responses.
For (a), I remember reading somewhere that one can only run a maximum of one
monitor per node. I assume that implies the single monitor process will be
responsible for ALL Ceph clusters on that node, correct?
So (b) isn't really a Ceph issue; that's nice to know. An
You can specify the uuid in the secret.xml file like:
<secret ephemeral='no' private='no'>
  <uuid>bdf77f5d-bf0b-1053-5f56-cd76b32520dc</uuid>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>
Then use that same uuid on all machines in cinder.conf:
rbd_secret_uuid=bdf77f5d-bf0b-1053-5f56-cd76b32520dc
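Each compute node can then load the key under that fixed uuid (a sketch; the
client.volumes.key file path is an assumption):
# define the secret from the xml that carries the fixed uuid
sudo virsh secret-define --file secret.xml
# attach the cephx key to that uuid
sudo virsh secret-set-value --secret bdf77f5d-bf0b-1053-5f56-cd76b32520dc --base64 $(cat client.volumes.key)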
Also, the column you are referring to in the Open
(a) This is true when using ceph-deploy for a cluster: it sets up one Ceph
monitor for the cluster per node. You can have many Ceph monitors,
but the typical high availability cluster has 3-5 monitor nodes. With
a manual install, you could conceivably install multiple monitors onto
a single node for t
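As an illustration only (hostname, address and ports below are made up, not
from the thread), an old-style ceph.conf for a manual install could list
several monitor sections on the same host, each on its own port:
[mon.a]
    host = node1
    mon addr = 192.168.1.10:6789
[mon.b]
    host = node1
    mon addr = 192.168.1.10:6790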
John,
Thanks for the really insightful responses!
It would be nice to know what the dominant deployment scenario is for the
native case (my question (c)).
Do they usually end up with something like OCFS2 on top of RBD for the
native case, or do they go with CephFS?
Thanks,
Hari
On Fri, Jul 26