Hi Vasiliy,
  Of course. From cinder-volume.log:

 2015-11-06 12:28:52.865 366 WARNING oslo_config.cfg
[req-41a4bbbb-4bec-40d2-a7c1-6e8d73644b4c b7aadbb4a85745feb498b74e437129cc
ce2dd2951bd24c1ea3b43c3b3716f604 - - -] Option "lock_path" from group
"DEFAULT" is deprecated. Use option "lock_path" from group
"oslo_concurrency".
2015-11-06 13:09:31.863 15534 WARNING oslo_config.cfg
[req-dd47624d-cf25-4beb-9d9e-70f532b2e8f9 - - - - -] Option "lock_path"
from group "DEFAULT" is deprecated. Use option "lock_path" from group
"oslo_concurrency".
2015-11-06 13:09:44.375 15544 WARNING oslo_config.cfg
[req-696a1282-b84c-464c-a220-d4e41a7dbd02 b7aadbb4a85745feb498b74e437129cc
ce2dd2951bd24c1ea3b43c3b3716f604 - - -] Option "lock_path" from group
"DEFAULT" is deprecated. Use option "lock_path" from group
"oslo_concurrency".
2015-11-06 13:11:02.024 15722 WARNING oslo_config.cfg
[req-db3c3775-3607-4fb7-acc9-5dba207bde56 - - - - -] Option "lock_path"
from group "DEFAULT" is deprecated. Use option "lock_path" from group
"oslo_concurrency".
2015-11-06 13:11:40.042 15729 WARNING oslo_config.cfg
[req-45458cfd-4e3a-4be2-b858-cece77072829 b7aadbb4a85745feb498b74e437129cc
ce2dd2951bd24c1ea3b43c3b3716f604 - - -] Option "lock_path" from group
"DEFAULT" is deprecated. Use option "lock_path" from group
"oslo_concurrency".
2015-11-06 13:16:49.331 15729 WARNING cinder.quota
[req-4e2c2f71-5bfa-487e-a99f-a6bb63bf1bc1 - - - - -] Deprecated: Default
quota for resource: gigabytes_rbd is set by the default quota flag:
quota_gigabytes_rbd, it is now deprecated. Please use the default quota
class for default quota.
2015-11-06 13:16:49.332 15729 WARNING cinder.quota
[req-4e2c2f71-5bfa-487e-a99f-a6bb63bf1bc1 - - - - -] Deprecated: Default
quota for resource: volumes_rbd is set by the default quota flag:
quota_volumes_rbd, it is now deprecated. Please use the default quota class
for default quota.
2015-11-06 13:18:16.163 16635 WARNING oslo_config.cfg
[req-503543b9-c2df-4483-a8b3-11f622a9cbe8 - - - - -] Option "lock_path"
from group "DEFAULT" is deprecated. Use option "lock_path" from group
"oslo_concurrency".
2015-11-06 14:17:08.288 16970 WARNING oslo_config.cfg
[req-a4ce4dbf-4119-427b-b555-930e66b9a2e3 58981d56c6cd4c5cacd59e518220a0eb
4d778e83692b44778f71cbe44da0bc0b - - -] Option "lock_path" from group
"DEFAULT" is deprecated. Use option "lock_path" from group
"oslo_concurrency".
2015-11-06 14:17:08.674 16970 WARNING cinder.quota
[req-fe21f3ad-7160-45b4-8adf-4cbe4bb85fc3 - - - - -] Deprecated: Default
quota for resource: gigabytes_rbd is set by the default quota flag:
quota_gigabytes_rbd, it is now deprecated. Please use the default quota
class for default quota.
2015-11-06 14:17:08.676 16970 WARNING cinder.quota
[req-fe21f3ad-7160-45b4-8adf-4cbe4bb85fc3 - - - - -] Deprecated: Default
quota for resource: volumes_rbd is set by the default quota flag:
quota_volumes_rbd, it is now deprecated. Please use the default quota class
for default quota.
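
(As an aside, those repeated lock_path warnings only mean the option should
move to the [oslo_concurrency] section of cinder.conf; a minimal sketch, with
the lock directory here being an assumption to adapt to your deployment:

[oslo_concurrency]
# example path; point this at whatever lock directory you already use
lock_path = /var/lock/cinder

They should be unrelated to the attach problem.)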

And from nova-compute.log:

2015-11-06 12:28:20.260 25915 INFO oslo_messaging._drivers.impl_rabbit
[req-dd85618c-ab24-43df-8192-b069d00abeeb - - - - -] Connected to AMQP
server on rabbitmq01:5672
2015-11-06 12:28:51.864 25915 INFO nova.compute.manager
[req-030d8966-cbe7-46c3-9d95-a1c886553fbd b7aadbb4a85745feb498b74e437129cc
ce2dd2951bd24c1ea3b43c3b3716f604 - - -] [instance:
08f6fef5-7c98-445b-abfe-636c4c6fee89] Detach volume
4d26bb31-91e8-4646-8010-82127b775c8e from mountpoint /dev/xvdd
2015-11-06 12:29:18.255 25915 INFO nova.compute.resource_tracker
[req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Auditing locally
available compute resources for node cms01.ifca.es
2015-11-06 12:29:18.480 25915 INFO nova.compute.resource_tracker
[req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Total usable vcpus:
24, total allocated vcpus: 24
2015-11-06 12:29:18.481 25915 INFO nova.compute.resource_tracker
[req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Final resource view:
name=cms01.ifca.es phys_ram=49143MB used_ram=47616MB phys_disk=270GB
used_disk=220GB total_vcpus=24 used_vcpus=24
pci_stats=<nova.pci.stats.PciDeviceStats object at 0x7fc458153d50>
2015-11-06 12:29:18.508 25915 INFO nova.scheduler.client.report
[req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Compute_service record
updated for ('cms01', 'cms01.ifca.es')
2015-11-06 12:29:18.508 25915 INFO nova.compute.resource_tracker
[req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Compute_service record
updated for cms01:cms01.ifca.es
2015-11-06 12:29:49.825 25915 INFO nova.compute.manager
[req-92d8810c-bea8-4eba-b682-c0d4e9d90c89 b7aadbb4a85745feb498b74e437129cc
ce2dd2951bd24c1ea3b43c3b3716f604 - - -] [instance:
08f6fef5-7c98-445b-abfe-636c4c6fee89] Attaching volume
4d26bb31-91e8-4646-8010-82127b775c8e to /dev/xvdd
2015-11-06 12:30:20.389 25915 INFO nova.compute.resource_tracker
[req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Auditing locally
available compute resources for node cms01.ifca.es
2015-11-06 12:30:20.595 25915 INFO nova.compute.resource_tracker
[req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Total usable vcpus:
24, total allocated vcpus: 24
2015-11-06 12:30:20.596 25915 INFO nova.compute.resource_tracker
[req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Final resource view:
name=cms01.ifca.es phys_ram=49143MB used_ram=47616MB phys_disk=270GB
used_disk=220GB total_vcpus=24 used_vcpus=24
pci_stats=<nova.pci.stats.PciDeviceStats object at 0x7fc458153d50>
2015-11-06 12:30:20.622 25915 INFO nova.scheduler.client.report
[req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Compute_service record
updated for ('cms01', 'cms01.ifca.es')
2015-11-06 12:30:20.623 25915 INFO nova.compute.resource_tracker
[req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Compute_service record
updated for cms01:cms01.ifca.es
2015-11-06 12:31:21.421 25915 INFO nova.compute.resource_tracker
[req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Auditing locally
available compute resources for node cms01.ifca.es
2015-11-06 12:31:21.721 25915 INFO nova.compute.resource_tracker
[req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Total usable vcpus:
24, total allocated vcpus: 24

.........................

I can attach the full log if you want.

2015-11-06 13:48 GMT+01:00 Vasiliy Angapov <anga...@gmail.com>:

> There must be something in /var/log/cinder/volume.log or
> /var/log/nova/nova-compute.log that points to the problem. Can you
> post it here?
>
> 2015-11-06 20:14 GMT+08:00 Iban Cabrillo <cabri...@ifca.unican.es>:
> > Hi Vasiliy,
> >   Thanks, but I still see the same error:
> >
> > cinder.conf (of course I restarted the cinder-volume service after the change)
> >
> > # default volume type to use (string value)
> >
> > [rbd-cephvolume]
> > rbd_user = cinder
> > rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxxx
> > volume_backend_name=rbd
> > volume_driver = cinder.volume.drivers.rbd.RBDDriver
> > rbd_pool = volumes
> > rbd_ceph_conf = /etc/ceph/ceph.conf
> > rbd_flatten_volume_from_snapshot = false
> > rbd_max_clone_depth = 5
> > rbd_store_chunk_size = 4
> > rados_connect_timeout = -1
> > glance_api_version = 2
> >
> >
> >   xen be: qdisk-51760: error: Could not open
> > 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or
> > directory
> > xen be: qdisk-51760: initialise() failed
> > xen be: qdisk-51760: error: Could not open
> > 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or
> > directory
> > xen be: qdisk-51760: initialise() failed
> > xen be: qdisk-51760: error: Could not open
> > 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or
> > directory
> > xen be: qdisk-51760: initialise() failed
> > xen be: qdisk-51760: error: Could not open
> > 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or
> > directory
> > xen be: qdisk-51760: initialise() failed
> > xen be: qdisk-51760: error: Could not open
> > 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or
> > directory
> > xen be: qdisk-51760: initialise() failed
> > xen be: qdisk-51760: error: Could not open
> > 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or
> > directory
> > xen be: qdisk-51760: initialise() failed
> >
> > Regards, I
> >
> > 2015-11-06 13:00 GMT+01:00 Vasiliy Angapov <anga...@gmail.com>:
> >>
> >> In cinder.conf you should place these options:
> >>
> >> rbd_user = cinder
> >> rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxxx
> >>
> >> in the [rbd-cephvolume] section instead of [DEFAULT].
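> >>
> >> Roughly, so the whole backend block reads as one unit (a sketch; the
> >> enabled_backends line in [DEFAULT] is an assumption based on the usual
> >> multi-backend layout, keep whatever you already have there):
> >>
> >> [DEFAULT]
> >> enabled_backends = rbd-cephvolume
> >>
> >> [rbd-cephvolume]
> >> volume_driver = cinder.volume.drivers.rbd.RBDDriver
> >> rbd_pool = volumes
> >> rbd_user = cinder
> >> rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxxx
> >> volume_backend_name = rbd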
> >>
> >> 2015-11-06 19:45 GMT+08:00 Iban Cabrillo <cabri...@ifca.unican.es>:
> >> > Hi,
> >> >   One more step in debugging this issue (the hypervisor/nova-compute node
> >> > is Xen 4.4.2):
> >> >
> >> >   I think the problem is that libvirt is not getting the correct user or
> >> > credentials to access the pool; in the instance's qemu log I see:
> >> >
> >> > xen be: qdisk-51760: error: Could not open
> >> > 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or
> >> > directory
> >> > xen be: qdisk-51760: initialise() failed
> >> > xen be: qdisk-51760: error: Could not open
> >> > 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or
> >> > directory
> >> > xen be: qdisk-51760: initialise() failed
> >> > xen be: qdisk-51760: error: Could not open
> >> > 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or
> >> > directory
> >> >
> >> > But listing the volumes pool as the cinder user works:
> >> >
> >> > rbd ls -p volumes --id cinder
> >> > test
> >> > volume-4d26bb31-91e8-4646-8010-82127b775c8e
> >> > volume-5e2ab5c2-4710-4c28-9755-b5bc4ff6a52a
> >> > volume-7da08f12-fb0f-4269-931a-d528c1507fee
> >> >
> >> > Using:
> >> > qemu-img info -f rbd rbd:volumes/test
> >> > does not work, but passing the user cinder and the ceph.conf file
> >> > directly works fine:
> >> >
> >> > qemu-img info -f rbd rbd:volumes/test:id=cinder:conf=/etc/ceph/ceph.conf
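> >> >
> >> > (A sketch of an intermediate test that may help narrow this down: librados
> >> > should pick up /etc/ceph/ceph.conf and the default keyring path on its own
> >> > once a user is given, so passing only the id ought to work too:
> >> >
> >> > qemu-img info -f rbd rbd:volumes/test:id=cinder
> >> >
> >> > If that works, only the user/secret is missing at attach time, not the
> >> > conf path.)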
> >> >
> >> > I think nova.conf is set correctly (the [libvirt] section):
> >> > images_rbd_pool = volumes
> >> > images_rbd_ceph_conf = /etc/ceph/ceph.conf
> >> > hw_disk_discard=unmap
> >> > rbd_user = cinder
> >> > rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-XXXXXXXXXXXX
> >> >
> >> > And looking at libvirt:
> >> >
> >> > # virsh secret-list
> >> > setlocale: No such file or directory
> >> >  UUID                                  Usage
> >> > --------------------------------------------------------------------------------
> >> >  67a6d4a1-e53a-42c7-9bc9-XXXXXXXXXXXX  ceph client.cinder secret
> >> >
> >> >
> >> > virsh secret-get-value 67a6d4a1-e53a-42c7-9bc9-XXXXXXXXXXXX
> >> > setlocale: No such file or directory
> >> > AQAonAdWS3iMJxxxxxxj9iErv001a0k+vyFdUg==
> >> > cat /etc/ceph/ceph.client.cinder.keyring
> >> > [client.cinder]
> >> > key = AQAonAdWS3iMJxxxxxxj9iErv001a0k+vyFdUg==
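> >> >
> >> > (One more thing that may be worth checking, as a sketch: whether nova
> >> > actually injects that secret into the guest definition. The attached
> >> > volume in "virsh dumpxml <instance>" should contain something along these
> >> > lines; the monitor address and target device below are only examples:
> >> >
> >> > <disk type='network' device='disk'>
> >> >   <driver name='qemu' type='raw'/>
> >> >   <auth username='cinder'>
> >> >     <secret type='ceph' uuid='67a6d4a1-e53a-42c7-9bc9-XXXXXXXXXXXX'/>
> >> >   </auth>
> >> >   <source protocol='rbd' name='volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e'>
> >> >     <host name='10.10.3.1' port='6789'/>  <!-- example monitor -->
> >> >   </source>
> >> >   <target dev='xvdd' bus='xen'/>  <!-- example target -->
> >> > </disk>
> >> >
> >> > If the <auth> element is missing there, qemu would fall back to
> >> > client.admin, which could explain the "Could not open" errors from the
> >> > qdisk backend.)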
> >> >
> >> >
> >> > Any idea will be welcomed.
> >> > regards, I
> >> >
> >> > 2015-11-04 10:51 GMT+01:00 Iban Cabrillo <cabri...@ifca.unican.es>:
> >> >>
> >> >> Dear Cephers,
> >> >>
> >> >>    I still can't attach volumes to my cloud machines; the Ceph version is
> >> >> 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43) and OpenStack is Juno.
> >> >>
> >> >>    Nova+Cinder are able to create volumes on Ceph:
> >> >> cephvolume:~ # rados ls --pool volumes
> >> >> rbd_header.1f7784a9e1c2e
> >> >> rbd_id.volume-5e2ab5c2-4710-4c28-9755-b5bc4ff6a52a
> >> >> rbd_directory
> >> >> rbd_id.volume-7da08f12-fb0f-4269-931a-d528c1507fee
> >> >> rbd_header.23d5e33b4c15c
> >> >> rbd_id.volume-4d26bb31-91e8-4646-8010-82127b775c8e
> >> >> rbd_header.20407190ce77f
> >> >>
> >> >> cloud:~ # cinder list
> >> >> +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
> >> >> |                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
> >> >> +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
> >> >> | 4d26bb31-91e8-4646-8010-82127b775c8e | in-use |     None     |  2   |     rbd     |  false   | 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb |
> >> >> +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
> >> >>
> >> >>    nova:~ # nova volume-attach 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb
> >> >> 4d26bb31-91e8-4646-8010-82127b775c8e auto
> >> >> +----------+--------------------------------------+
> >> >> | Property |                Value                 |
> >> >> +----------+--------------------------------------+
> >> >> | device   | /dev/xvdd                            |
> >> >> | id       | 4d26bb31-91e8-4646-8010-82127b775c8e |
> >> >> | serverId | 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb |
> >> >> | volumeId | 4d26bb31-91e8-4646-8010-82127b775c8e |
> >> >> +----------+--------------------------------------+
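> >> >>
> >> >> (A quick sanity check on the OpenStack side after that call, as a sketch
> >> >> with an illustrative grep pattern:
> >> >>
> >> >> cinder show 4d26bb31-91e8-4646-8010-82127b775c8e | grep -iE 'status|attach'
> >> >>
> >> >> It reports the volume as in-use and attached to the instance, even though
> >> >> the guest itself never sees the disk.)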
> >> >>
> >> >> From the nova-compute node (Ubuntu 14.04 LTS) I see the attaching/detaching:
> >> >>
> >> >> cloud01:~ # dpkg -l | grep ceph
> >> >> ii  ceph-common    0.94.5-1trusty  amd64  common utilities to mount and interact with a ceph storage cluster
> >> >> ii  libcephfs1     0.94.5-1trusty  amd64  Ceph distributed file system client library
> >> >> ii  python-cephfs  0.94.5-1trusty  amd64  Python libraries for the Ceph libcephfs library
> >> >> ii  librbd1        0.94.5-1trusty  amd64  RADOS block device client library
> >> >> ii  python-rbd     0.94.5-1trusty  amd64  Python libraries for the Ceph librbd library
> >> >>
> >> >> In cinder.conf:
> >> >>
> >> >>  rbd_user = cinder
> >> >> rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxxx
> >> >>
> >> >> [rbd-cephvolume]
> >> >> volume_backend_name=rbd
> >> >> volume_driver = cinder.volume.drivers.rbd.RBDDriver
> >> >> rbd_pool = volumes
> >> >> rbd_ceph_conf = /etc/ceph/ceph.conf
> >> >> rbd_flatten_volume_from_snapshot = false
> >> >> rbd_max_clone_depth = 5
> >> >> rbd_store_chunk_size = 4
> >> >> rados_connect_timeout = -1
> >> >> glance_api_version = 2
> >> >>
> >> >> in nova.conf
> >> >> rbd_user=cinder
> >> >>
> >> >> # The libvirt UUID of the secret for the rbd_uservolumes
> >> >> # (string value)
> >> >> rbd_secret_uuid=67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxxx
> >> >>
> >> >> images_rbd_pool=volumes
> >> >>
> >> >> # Path to the ceph configuration file to use (string value)
> >> >> images_rbd_ceph_conf=/etc/ceph/ceph.conf
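> >> >>
> >> >> (In Juno those rbd_* and images_rbd_* options are expected under the
> >> >> [libvirt] header, so a sketch of a quick check that they sit below that
> >> >> header and not under [DEFAULT]:
> >> >>
> >> >> grep -A20 '^\[libvirt\]' /etc/nova/nova.conf | grep -E 'rbd|images_rbd'
> >> >> )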
> >> >>
> >> >> ls -la /etc/libvirt/secrets
> >> >> total 16
> >> >> drwx------ 2 root root 4096 Nov  4 10:28 .
> >> >> drwxr-xr-x 7 root root 4096 Oct 22 13:15 ..
> >> >> -rw------- 1 root root   40 Nov  4 10:28
> >> >> 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxx.base64
> >> >> -rw------- 1 root root  170 Nov  4 10:25
> >> >> 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxx.xml
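> >> >>
> >> >> (For reference, this is roughly how that secret gets loaded into libvirt;
> >> >> a sketch assuming a secret.xml built as in the Ceph/OpenStack guide, and
> >> >> re-running set-value is harmless if the value is already correct:
> >> >>
> >> >> virsh secret-define --file secret.xml
> >> >> virsh secret-set-value --secret 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxxx \
> >> >>   --base64 AQAonAdWS3iMJxxxxxxj9iErv001a0k+vyFdUg==
> >> >> )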
> >> >>
> >> >>
> >> >>
> >> >> 2015-11-04 10:39:42.573 11653 INFO nova.compute.manager
> >> >> [req-8b2a9793-4b39-4cb0-b291-e492c350387e
> >> >> b7aadbb4a85745feb498b74e437129cc
> >> >> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] [instance:
> >> >> 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb] Detach volume
> >> >> 4d26bb31-91e8-4646-8010-82127b775c8e from mountpoint /dev/xvdd
> >> >> 2015-11-04 10:40:43.266 11653 INFO nova.compute.manager
> >> >> [req-35218de0-3f26-496b-aad9-5c839143da17
> >> >> b7aadbb4a85745feb498b74e437129cc
> >> >> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] [instance:
> >> >> 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb] Attaching volume
> >> >> 4d26bb31-91e8-4646-8010-82127b775c8e to /dev/xvdd
> >> >>
> >> >> but on the cloud machine (SL6) the volume never shows up (xvdd).
> >> >> [root@cloud5 ~]# cat /proc/partitions
> >> >> major minor  #blocks  name
> >> >>
> >> >>  202        0   20971520 xvda
> >> >>  202       16 209715200 xvdb
> >> >>  202       32   10485760 xvdc
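> >> >>
> >> >> (On the Xen side, it might also be worth checking whether the backend for
> >> >> the new virtual block device ever reaches the Connected state; a sketch,
> >> >> where the domain name is whatever "xl list" reports for that instance:
> >> >>
> >> >> xl list
> >> >> xl block-list <domain-name>
> >> >>
> >> >> A missing or stalled entry there would mean the block backend never
> >> >> attached on the hypervisor.)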
> >> >>
> >> >> Thanks in advance, I
> >> >>
> >> >> 2015-11-03 11:18 GMT+01:00 Iban Cabrillo <cabri...@ifca.unican.es>:
> >> >>>
> >> >>> Hi all,
> >> >>>     During the last week I have been trying to integrate our pre-existing
> >> >>> Ceph cluster with our OpenStack instance.
> >> >>>     The Ceph-Cinder integration was easy (or at least I think so!!).
> >> >>>     There is only one pool (volumes) used to attach block storage to our
> >> >>> cloud machines.
> >> >>>
> >> >>>     The client.cinder user has permissions on this pool (following the
> >> >>> guides):
> >> >>>     ...............
> >> >>>     client.cinder
> >> >>> key: AQAonXXXXXXXRAAPIAj9iErv001a0k+vyFdUg==
> >> >>> caps: [mon] allow r
> >> >>> caps: [osd] allow class-read object_prefix rbd_children, allow rwx
> >> >>> pool=volumes
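> >> >>>
> >> >>> (Those caps match the Ceph/OpenStack guide; for the record, a sketch of
> >> >>> how to inspect or re-apply them from an admin node:
> >> >>>
> >> >>> ceph auth get client.cinder
> >> >>> ceph auth caps client.cinder mon 'allow r' \
> >> >>>   osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes'
> >> >>> )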
> >> >>>
> >> >>>    ceph.conf file seems to be OK:
> >> >>>
> >> >>> [global]
> >> >>> fsid = 6f5a65a7-316c-4825-afcb-428608941dd1
> >> >>> mon_initial_members = cephadm, cephmon02, cephmon03
> >> >>> mon_host = 10.10.3.1,10.10.3.2,10.10.3.3
> >> >>> auth_cluster_required = cephx
> >> >>> auth_service_required = cephx
> >> >>> auth_client_required = cephx
> >> >>> filestore_xattr_use_omap = true
> >> >>> osd_pool_default_size = 2
> >> >>> public_network = 10.10.0.0/16
> >> >>> cluster_network = 192.168.254.0/27
> >> >>>
> >> >>> [osd]
> >> >>> osd_journal_size = 20000
> >> >>>
> >> >>> [client.cinder]
> >> >>> keyring = /etc/ceph/ceph.client.cinder.keyring
> >> >>>
> >> >>> [client]
> >> >>> rbd cache = true
> >> >>> rbd cache writethrough until flush = true
> >> >>> admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
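> >> >>>
> >> >>> (With that [client.cinder] keyring entry in place, a rough one-line
> >> >>> sanity check from any node that has the cinder keyring would be:
> >> >>>
> >> >>> ceph --id cinder -s
> >> >>>
> >> >>> If it connects, the keyring path and cephx setup are fine for that user.)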
> >> >>>
> >> >>>
> >> >>> The trouble seems to be that the blocks are created using client.admin
> >> >>> instead of client.cinder.
> >> >>>
> >> >>> From cinder machine:
> >> >>>
> >> >>> cinder:~ # rados ls --pool volumes
> >> >>> rbd_id.volume-5e2ab5c2-4710-4c28-9755-b5bc4ff6a52a
> >> >>> rbd_directory
> >> >>> rbd_id.volume-7da08f12-fb0f-4269-931a-d528c1507fee
> >> >>> rbd_header.23d5e33b4c15c
> >> >>> rbd_header.20407190ce77f
> >> >>>
> >> >>> But if I try the same listing as the cinder client:
> >> >>>
> >> >>>
> >> >>>   cinder:~ #rados ls --pool volumes --secret client.cinder
> >> >>>   "empty answer"
> >> >>>
> >> >>> cinder:~ # ls -la /etc/ceph
> >> >>> total 24
> >> >>> drwxr-xr-x   2 root   root   4096 nov  3 10:17 .
> >> >>> drwxr-xr-x 108 root   root   4096 oct 29 09:52 ..
> >> >>> -rw-------   1 root   root     63 nov  3 10:17 ceph.client.admin.keyring
> >> >>> -rw-r--r--   1 cinder cinder   67 oct 28 13:44 ceph.client.cinder.keyring
> >> >>> -rw-r--r--   1 root   root    454 oct  1 13:56 ceph.conf
> >> >>> -rw-r--r--   1 root   root     73 sep 27 09:36 ceph.mon.keyring
> >> >>>
> >> >>>
> >> >>> From a client (I have assumed that this machine only needs the cinder
> >> >>> key...):
> >> >>>
> >> >>> cloud28:~ # ls -la /etc/ceph/
> >> >>> total 28
> >> >>> drwx------   2 root root  4096 nov  3 11:01 .
> >> >>> drwxr-xr-x 116 root root 12288 oct 30 14:37 ..
> >> >>> -rw-r--r--   1 nova nova    67 oct 28 11:43 ceph.client.cinder.keyring
> >> >>> -rw-r--r--   1 root root   588 nov  3 10:59 ceph.conf
> >> >>> -rw-r--r--   1 root root    92 oct 26 16:59 rbdmap
> >> >>>
> >> >>> cloud28:~ # rbd -p volumes ls
> >> >>> 2015-11-03 11:01:58.782795 7fc6c714b840 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
> >> >>> 2015-11-03 11:01:58.782800 7fc6c714b840  0 librados: client.admin initialization error (2) No such file or directory
> >> >>> rbd: couldn't connect to the cluster!
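> >> >>>
> >> >>> (That failure is expected with the defaults: without a user, rbd tries
> >> >>> client.admin and there is no admin keyring on this node. A sketch of the
> >> >>> same test as the cinder user:
> >> >>>
> >> >>> rbd ls -p volumes --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring
> >> >>> )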
> >> >>>
> >> >>> Any help will be welcome.
> >> >>>
>



-- 
############################################################################
Iban Cabrillo Bartolome
Instituto de Fisica de Cantabria (IFCA)
Santander, Spain
Tel: +34942200969
PGP PUBLIC KEY:
http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
############################################################################
Bertrand Russell:
*"El problema con el mundo es que los estúpidos están seguros de todo y los
inteligentes están llenos de dudas*"
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
