Hi,

Can I see your ceph.conf?
I suspect that [client.cinder] and [client.glance] sections are missing.
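
For reference, those sections usually only need to point each client at its
keyring, along these lines (a minimal sketch, assuming the default /etc/ceph
paths shown in your directory listing):

[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring

[client.glance]
keyring = /etc/ceph/ceph.client.glance.keyring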

Cheers.
–––– 
Sébastien Han 
Cloud Engineer 

"Always give 100%. Unless you're giving blood.” 

Phone: +33 (0)1 49 70 99 72 
Mail: sebastien....@enovance.com 
Address: 10, rue de la Victoire - 75009 Paris 
Web: www.enovance.com - Twitter: @enovance 

On 16 Feb 2014, at 06:55, Ashish Chandra <mail.ashishchan...@gmail.com> wrote:

> Hi Jean,
> 
> Here is the output for ceph auth list for client.cinder
> 
> client.cinder
>         key: AQCKaP9ScNgiMBAAwWjFnyL69rBfMzQRSHOfoQ==
>         caps: [mon] allow r
>         caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images
> 
> 
> Here is the output of ceph -s:
> 
> ashish@ceph-client:~$ ceph -s
>     cluster afa13fcd-f662-4778-8389-85047645d034
>      health HEALTH_OK
>      monmap e1: 1 mons at {ceph-node1=10.0.1.11:6789/0}, election epoch 1, quorum 0 ceph-node1
>      osdmap e37: 3 osds: 3 up, 3 in
>       pgmap v84: 576 pgs, 6 pools, 0 bytes data, 0 objects
>             106 MB used, 9076 MB / 9182 MB avail
>                  576 active+clean
> 
> I created all the keyrings and copied them over as suggested by the guide.
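> That is, roughly the following commands from the guide (the hostname is a 
> placeholder for my OpenStack node):
> 
> ceph auth get-or-create client.glance | ssh {openstack-node} sudo tee /etc/ceph/ceph.client.glance.keyring
> ssh {openstack-node} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
> ceph auth get-or-create client.cinder | ssh {openstack-node} sudo tee /etc/ceph/ceph.client.cinder.keyring
> ssh {openstack-node} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring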
> 
> On Sun, Feb 16, 2014 at 3:08 AM, Jean-Charles LOPEZ <jc.lo...@inktank.com> wrote:
> Hi,
> 
> What do you get when you run a 'ceph auth list' command for the user name 
> (client.cinder) you created for cinder? Are the caps and the key for this 
> user correct? Is there a typo in the hostname (host=) in the cinder.conf 
> file? Did you copy the keyring to the node running cinder? (I can't really 
> tell from your output, and there is no 'ceph -s' output from that node to 
> check the monitor names.)
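> 
> A quick way to test the key outside of cinder is to run something like this 
> on the cinder node (user and pool names assumed from your setup); if either 
> command fails, the problem is in the keyring or ceph.conf rather than in 
> cinder itself:
> 
> ceph -s --id cinder
> rbd ls volumes --id cinder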
> 
> It could just be a typo in the ceph auth get-or-create command that’s causing 
> it.
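> 
> For comparison, the command from the guide looks roughly like this (pool 
> names assumed to match yours):
> 
> ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'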
> 
> Rgds
> JC
> 
> 
> 
> On Feb 15, 2014, at 10:35, Ashish Chandra <mail.ashishchan...@gmail.com> wrote:
> 
>> Hi Cephers,
>> 
>> I am trying to configure Ceph RBD as the backend for cinder and glance by 
>> following the steps mentioned in:
>> 
>> http://ceph.com/docs/master/rbd/rbd-openstack/
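>> 
>> The cinder.conf settings that page adds are roughly the following (the 
>> rbd_secret_uuid value is elided here):
>> 
>> volume_driver=cinder.volume.drivers.rbd.RBDDriver
>> rbd_pool=volumes
>> rbd_ceph_conf=/etc/ceph/ceph.conf
>> rbd_user=cinder
>> rbd_secret_uuid=<elided>
>> glance_api_version=2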
>> 
>> Before I start, all OpenStack services are running normally and the ceph 
>> cluster health shows "HEALTH_OK".
>> 
>> But once I am done with all the steps and restart the OpenStack services, 
>> cinder-volume fails to start and throws an error:
>> 
>> 2014-02-16 00:01:42.582 TRACE cinder.volume.drivers.rbd Traceback (most recent call last):
>> 2014-02-16 00:01:42.582 TRACE cinder.volume.drivers.rbd   File "/opt/stack/cinder/cinder/volume/drivers/rbd.py", line 262, in check_for_setup_error
>> 2014-02-16 00:01:42.582 TRACE cinder.volume.drivers.rbd     with RADOSClient(self):
>> 2014-02-16 00:01:42.582 TRACE cinder.volume.drivers.rbd   File "/opt/stack/cinder/cinder/volume/drivers/rbd.py", line 234, in __init__
>> 2014-02-16 00:01:42.582 TRACE cinder.volume.drivers.rbd     self.cluster, self.ioctx = driver._connect_to_rados(pool)
>> 2014-02-16 00:01:42.582 TRACE cinder.volume.drivers.rbd   File "/opt/stack/cinder/cinder/volume/drivers/rbd.py", line 282, in _connect_to_rados
>> 2014-02-16 00:01:42.582 TRACE cinder.volume.drivers.rbd     client.connect()
>> 2014-02-16 00:01:42.582 TRACE cinder.volume.drivers.rbd   File "/usr/lib/python2.7/dist-packages/rados.py", line 185, in connect
>> 2014-02-16 00:01:42.582 TRACE cinder.volume.drivers.rbd     raise make_ex(ret, "error calling connect")
>> 2014-02-16 00:01:42.582 TRACE cinder.volume.drivers.rbd Error: error calling connect: error code 95
>> 2014-02-16 00:01:42.582 TRACE cinder.volume.drivers.rbd
>> 2014-02-16 00:01:42.591 ERROR cinder.volume.manager [req-8134a4d7-53f8-4ada-b4b5-4d96d7cad4bc None None] Error encountered during initialization of driver: RBDDriver
>> 2014-02-16 00:01:42.592 ERROR cinder.volume.manager [req-8134a4d7-53f8-4ada-b4b5-4d96d7cad4bc None None] Bad or unexpected response from the storage volume backend API: error connecting to ceph cluster
>> 2014-02-16 00:01:42.592 TRACE cinder.volume.manager Traceback (most recent call last):
>> 2014-02-16 00:01:42.592 TRACE cinder.volume.manager   File "/opt/stack/cinder/cinder/volume/manager.py", line 190, in init_host
>> 2014-02-16 00:01:42.592 TRACE cinder.volume.manager     self.driver.check_for_setup_error()
>> 2014-02-16 00:01:42.592 TRACE cinder.volume.manager   File "/opt/stack/cinder/cinder/volume/drivers/rbd.py", line 267, in check_for_setup_error
>> 2014-02-16 00:01:42.592 TRACE cinder.volume.manager     raise exception.VolumeBackendAPIException(data=msg)
>> 2014-02-16 00:01:42.592 TRACE cinder.volume.manager VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: error connecting to ceph cluster
>> 
>> 
>> Here is the content of /etc/ceph on my OpenStack node: 
>> 
>> ashish@ubuntu:/etc/ceph$ ls -lrt
>> total 16
>> -rw-r--r-- 1 cinder cinder 229 Feb 15 23:45 ceph.conf
>> -rw-r--r-- 1 glance glance  65 Feb 15 23:46 ceph.client.glance.keyring
>> -rw-r--r-- 1 cinder cinder  65 Feb 15 23:47 ceph.client.cinder.keyring
>> -rw-r--r-- 1 cinder cinder  72 Feb 15 23:47 ceph.client.cinder-backup.keyring
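>> 
>> Each keyring just holds the client section and its key, along these lines:
>> 
>> [client.cinder]
>>         key = AQCKaP9ScNgiMBAAwWjFnyL69rBfMzQRSHOfoQ==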
>> 
>> I am really stuck and have tried a lot. What could I possibly be doing wrong?
>> 
>> 
>> HELP.
>> 
>> 
>> Thanks and Regards
>> Ashish Chandra
>> 
> 
> 


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
