Not a problem at all, sometimes all we need is a second pair of eyes!
;)
On Mon, 19 Jun 2017 21:23:34 -0400 tribe...@tribecc.us wrote
That was it! Thank you so much for your help, Marko! What a silly thing for me
to miss!
<3 Trilliams
Sent from my iPhone
On Jun 19, 2017, at 8:12 PM, Marko Sluga wrote:
Sorry,
rbd_user = volumes
Not client.volumes
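The rule behind Marko's fix, as a small sketch: Ceph reports the user as "client.volumes" in its own output, but cinder.conf's rbd_user takes the bare name without that prefix (the strings below are just this thread's example):

```shell
# Ceph names the entity "client.volumes"; cinder.conf's rbd_user
# wants only the part after the "client." prefix.
entity="client.volumes"
rbd_user="${entity#client.}"    # strip the "client." prefix
echo "rbd_user = ${rbd_user}"   # prints: rbd_user = volumes
```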
On Mon, 19 Jun 2017 21:09:38 -0400 ma...@markocloud.com wrote
Hi Nichole,
Yeah, your setup looks ok, so the only thing here could be an auth issue. So
I went through the config again, and I see you have set up the client.volumes
ceph user with rwx permissions on the volumes pool.
In your cinder.conf the setup is:
rbd_user = cinder
Unless the cinder cep
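For reference, the two sides that have to agree can be sketched like this (the caps shown are illustrative, based only on what the thread describes):

```
# ceph side -- create/inspect the user (illustrative caps):
#   ceph auth get-or-create client.volumes \
#       mon 'allow r' osd 'allow rwx pool=volumes'
#   ceph auth get client.volumes
#
# cinder.conf side -- must reference the same user, without "client.":
#   [ceph]
#   rbd_user = volumes
```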
Hi Marko!
Here’s my details:
OpenStack Newton deployed with PackStack [controller + network node]
Ceph Kraken 3-node setup deployed with ceph-ansible
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)
# ceph --version
ceph version 11.2.0 (f223e27eeb35991352ebc1f67423
Hi Nichole,
Since your config is ok, I'm going to need more details on the OpenStack
release, the hypervisor, and the Linux and librados versions.
You could also test whether you can mount a volume from your OS and/or
hypervisor, and from the machine that runs the cinder volume service, to start with.
R
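The connectivity test Marko suggests could look something like the sketch below; the image name is an assumption, not something from the thread:

```
# on the hypervisor and on the cinder-volume host:
#   rbd -p volumes create conn-test --size 128   # create a tiny test image
#   rbd map volumes/conn-test                    # map it as a block device
#   rbd unmap /dev/rbd/volumes/conn-test         # clean up
#   rbd rm volumes/conn-test
```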
Hi Marko,
Here’s my ceph config:
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_u
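The paste cuts off here; for context, a typical RBD backend section also carries user and secret settings, roughly like the sketch below. These are placeholders, not Nichole's actual values:

```
rbd_user = <cephx-user>                  # placeholder
rbd_secret_uuid = <libvirt-secret-uuid>  # placeholder
```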
you might want to configure cinder.conf with
verbose = true
debug = true
and see /var/log/cinder/cinder-volume.log after a "systemctl restart
cinder-volume" to see the real cause.
best.
alejandrito
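The two options alejandrito mentions normally live in the [DEFAULT] section of cinder.conf; a minimal sketch (the section placement is assumed, since the mail does not show it):

```ini
[DEFAULT]
verbose = true
debug = true
```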
On Mon, Jun 19, 2017 at 6:25 PM, T. Nichole Williams
wrote:
> Hello,
>
> I’m having trouble con
Hi Nichole,
I can help; I have been working on my own OpenStack connected to Ceph. Can you
send over the config in your /etc/cinder/cinder.conf file, especially the rbd
relevant section starting with:
volume_driver = cinder.volume.drivers.rbd.RBDDriver
Also, make sure your rbd_secret_uu
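Marko's last point gets cut off, but checking the libvirt side of the secret usually looks something like this sketch (the UUID placeholder is illustrative):

```
# list the secrets libvirt knows about and confirm one matches
# the secret UUID referenced in cinder.conf:
#   virsh secret-list
#   virsh secret-get-value <uuid-from-cinder.conf>
```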