Did you add the virsh-secret?
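If not, the client.cinder key has to be registered as a libvirt secret on every
compute node, and the same UUID set as rbd_secret_uuid in cinder.conf. A minimal
sketch, reusing the example UUID from the rbd-openstack guide (adjust the UUID
and key file to your setup):

# describe the secret for libvirt
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

# register it, load the actual key, and check that it is listed
virsh secret-define --file secret.xml
virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 \
  --base64 $(cat client.cinder.key)
virsh secret-list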

Also, look at the libvirt-bin logs.
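On Ubuntu they usually end up in /var/log/libvirt/libvirtd.log (or in syslog,
depending on log_outputs), plus the per-instance logs under /var/log/libvirt/qemu/,
for example:

grep -iE 'rbd|secret|error' /var/log/libvirt/libvirtd.log
tail -n 50 /var/log/libvirt/qemu/instance-*.log

Since the trace ends in DeviceIsBusy for vdb, it is also worth checking whether
vdb is already taken on that instance (virsh domblklist <instance-name>), or
letting Nova pick the device by passing auto instead of /dev/vdb to
nova volume-attach, if your client supports it.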
–––– 
Sébastien Han 
Cloud Engineer 

"Always give 100%. Unless you're giving blood.” 

Phone: +33 (0)1 49 70 99 72 
Mail: sebastien....@enovance.com 
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance 

On 03 Apr 2014, at 03:48, Tomokazu HIRAI <tomokazu.hi...@gmail.com> wrote:

> Thanks for the reply, Sebastien and Don,
> 
> I resolved this issue by including the client keyring section in ceph.conf,
> and now I can create volumes.
> 
> But I get an error when I attach a volume to an instance.
> 
> ---
> 2014-04-03 10:42:05.793 7783 ERROR nova.openstack.common.rpc.amqp 
> [req-4d65c390-6f13-45f3-a5d2-ce1c6a0a6b31 f6ba1c9d4725495e8daecab859488edb 
> a39f1717c4114ce49a2957187ec07fe7] Exception during message handling
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp Traceback 
> (most recent call last):
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp   File 
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 
> 461, in _process_data
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp     **args)
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp   File 
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", 
> line 172, in dispatch
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp     result 
> = getattr(proxyobj, method)(ctxt, **kwargs)
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp   File 
> "/usr/lib/python2.7/dist-packages/nova/exception.py", line 90, in wrapped
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp     payload)
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp   File 
> "/usr/lib/python2.7/dist-packages/nova/exception.py", line 73, in wrapped
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp     return 
> f(self, context, *args, **kw)
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp   File 
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 243, in 
> decorated_function
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp     pass
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp   File 
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 229, in 
> decorated_function
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp     return 
> function(self, context, *args, **kwargs)
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp   File 
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 271, in 
> decorated_function
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp     e, 
> sys.exc_info())
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp   File 
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 258, in 
> decorated_function
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp     return 
> function(self, context, *args, **kwargs)
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp   File 
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3655, in 
> attach_volume
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp     
> context, instance, mountpoint)
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp   File 
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3650, in 
> attach_volume
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp     
> mountpoint, instance)
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp   File 
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3697, in 
> _attach_volume
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp     
> connector)
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp   File 
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3687, in 
> _attach_volume
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp     
> encryption=encryption)
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp   File 
> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1112, in 
> attach_volume
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp     raise 
> exception.DeviceIsBusy(device=disk_dev)
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp 
> DeviceIsBusy: The supplied device (vdb) is busy.
> 2014-04-03 10:42:05.793 7783 TRACE nova.openstack.common.rpc.amqp
> ---
> 
> So I checked this bug report and confirmed that I had already added the mon
> addresses to ceph.conf.
> 
> https://bugs.launchpad.net/nova/+bug/1077817
> 
> Here is my ceph.conf.
> 
> --
> [global]
> fsid = 6525466f-cd8d-497c-aeed-4a48195c0377
> mon_initial_members = ceph01, ceph02, ceph03
> mon_host = 10.200.10.116,10.200.10.117,10.200.10.118
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> filestore_xattr_use_omap = true
> public_network = 10.200.10.0/24
> cluster_network = 10.200.9.0/24
> 
> [mon.a]
> host = ceph01
> mon_addr = 10.200.10.116:6789
> 
> [mon.b]
> host = ceph02
> mon_addr = 10.200.10.117:6789
> 
> [mon.c]
> host = ceph03
> mon_addr = 10.200.10.118:6789
> 
> [osd.0]
> public_addr = 10.200.10.116
> cluster_addr = 10.200.9.116
> 
> [osd.1]
> public_addr = 10.200.10.117
> cluster_addr = 10.200.9.117
> 
> [osd.2]
> public_addr = 10.200.10.118
> cluster_addr = 10.200.9.118
> 
> [mds.a]
> host = ceph01
> 
> [mds.b]
> host = ceph02
> 
> [mds.c]
> host = ceph03
> ---
> 
> Does anyone have an idea?
> 
> Thanks,
> 
> -- Tomokazu HIRAI (@jedipunkz)
> 
> 
> 
> 2014-04-03 3:19 GMT+09:00 Sebastien Han <sebastien....@enovance.com>:
> The section should be
> 
> [client.cinder]
>   keyring = <path-to-keyring>
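> 
> On most setups that path is /etc/ceph/ceph.client.cinder.keyring (adjust if
> yours lives elsewhere), and the file must be readable by the user that
> cinder-volume runs as. It is also worth double-checking the rbd settings in
> cinder.conf, something along these lines (pool name and UUID are placeholders):
> 
> volume_driver = cinder.volume.drivers.rbd.RBDDriver
> rbd_pool = volumes
> rbd_user = cinder
> rbd_secret_uuid = <your-secret-uuid>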
> 
> Then restart cinder-volume afterwards.
> 
> ––––
> Sébastien Han
> Cloud Engineer
> 
> "Always give 100%. Unless you're giving blood.”
> 
> Phone: +33 (0)1 49 70 99 72
> Mail: sebastien....@enovance.com
> Address : 11 bis, rue Roquépine - 75008 Paris
> Web : www.enovance.com - Twitter : @enovance
> 
> On 02 Apr 2014, at 10:41, Tomokazu HIRAI <tomokazu.hi...@gmail.com> wrote:
> 
> > I integrated Ceph + OpenStack following this document.
> >
> > https://ceph.com/docs/master/rbd/rbd-openstack/
> >
> > I could put an image into Glance on the Ceph cluster, but I cannot create
> > any volumes with Cinder.
> >
> > The error messages are the same as in this URL:
> >
> > http://comments.gmane.org/gmane.comp.file-systems.ceph.user/7641
> >
> > ---
> >
> > 2014-04-02 17:31:57.799 22321 ERROR cinder.volume.drivers.rbd 
> > [req-b18d0e8d-c818-4fb4-9dd8-dbdd938f919b None None] error connecting to 
> > ceph cluster
> > 2014-04-02 17:31:57.799 22321 TRACE cinder.volume.drivers.rbd Traceback 
> > (most recent call last):
> > 2014-04-02 17:31:57.799 22321 TRACE cinder.volume.drivers.rbd   File 
> > "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 262, 
> > in check_for_setup_error
> > 2014-04-02 17:31:57.799 22321 TRACE cinder.volume.drivers.rbd     with 
> > RADOSClient(self):
> > 2014-04-02 17:31:57.799 22321 TRACE cinder.volume.drivers.rbd   File 
> > "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 234, 
> > in __init__
> > 2014-04-02 17:31:57.799 22321 TRACE cinder.volume.drivers.rbd     
> > self.cluster, self.ioctx = driver._connect_to_rados(pool)
> > 2014-04-02 17:31:57.799 22321 TRACE cinder.volume.drivers.rbd   File 
> > "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 282, 
> > in _connect_to_rados
> > 2014-04-02 17:31:57.799 22321 TRACE cinder.volume.drivers.rbd     
> > client.connect()
> > 2014-04-02 17:31:57.799 22321 TRACE cinder.volume.drivers.rbd   File 
> > "/usr/lib/python2.7/dist-packages/rados.py", line 408, in connect
> > 2014-04-02 17:31:57.799 22321 TRACE cinder.volume.drivers.rbd     raise 
> > make_ex(ret, "error calling connect")
> > 2014-04-02 17:31:57.799 22321 TRACE cinder.volume.drivers.rbd 
> > ObjectNotFound: error calling connect
> > 2014-04-02 17:31:57.799 22321 TRACE cinder.volume.drivers.rbd
> > 2014-04-02 17:31:57.800 22321 ERROR cinder.volume.manager 
> > [req-b18d0e8d-c818-4fb4-9dd8-dbdd938f919b None None] Error encountered 
> > during initialization of driver: RBDDriver
> > 2014-04-02 17:31:57.801 22321 ERROR cinder.volume.manager 
> > [req-b18d0e8d-c818-4fb4-9dd8-dbdd938f919b None None] Bad or unexpected 
> > response from the storage volume backend API: error connecting to ceph 
> > cluster
> > 2014-04-02 17:31:57.801 22321 TRACE cinder.volume.manager Traceback (most 
> > recent call last):
> > 2014-04-02 17:31:57.801 22321 TRACE cinder.volume.manager   File 
> > "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 190, in 
> > init_host
> > 2014-04-02 17:31:57.801 22321 TRACE cinder.volume.manager     
> > self.driver.check_for_setup_error()
> > 2014-04-02 17:31:57.801 22321 TRACE cinder.volume.manager   File 
> > "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 267, 
> > in check_for_setup_error
> > 2014-04-02 17:31:57.801 22321 TRACE cinder.volume.manager     raise 
> > exception.VolumeBackendAPIException(data=msg)
> > 2014-04-02 17:31:57.801 22321 TRACE cinder.volume.manager 
> > VolumeBackendAPIException: Bad or unexpected response from the storage 
> > volume backend API: error connecting to ceph cluster
> >
> > So I added these lines to /etc/ceph/ceph.conf:
> >
> > [client.cinder]
> >         key = <key_id>
> >
> > But I still could not create any volumes with Cinder.
> >
> > Does anyone have an idea?
> >
> > Thanks from cloudy Tokyo.
> >
> > -- Tomokazu HIRAI (@jedipunkz)
> >
> 
> 


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
