cephvolume:~ # cinder-manage service list    (run on the cinder host)
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/base.py:20:
DeprecationWarning: The oslo namespace package is deprecated. Please use
oslo_config instead.
  from oslo.config import cfg
2015-11-16 13:01:42.203 23787 DEBUG oslo_db.api
[req-b2aece98-8f3d-4a2c-b50b-449281d8aeed - - - - -] Loading backend
'sqlalchemy' from 'cinder.db.sqlalchemy.api' _load_backend
/usr/lib/python2.7/dist-packages/oslo_db/api.py:214
2015-11-16 13:01:42.428 23787 DEBUG oslo_db.sqlalchemy.session
[req-b2aece98-8f3d-4a2c-b50b-449281d8aeed - - - - -] MySQL server mode set
to
STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
_check_effective_sql_mode
/usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/session.py:513
Binary           Host                                 Zone   Status     State  Updated At
cinder-volume    cloudvolume01@iscsi-cloudvolume01    nova   enabled    :-)    2015-11-16 12:01:36
cinder-scheduler cinder01                             nova   enabled    XXX    2015-10-05 18:44:25
cinder-scheduler cloud01                              nova   enabled    XXX    2015-10-29 13:05:42
cinder-volume    cephvolume                           nova   disabled   XXX    2015-10-02 08:33:06
cinder-volume    cephvolume                           nova   enabled    :-)    2015-11-16 12:01:38   <-- this should be the right one
cinder-volume    cloudvolume01@iscsi-cloudvolume01    nova   enabled    XXX    2015-10-01 14:50:32
cinder-scheduler cinder01                             nova   enabled    :-)    2015-11-16 12:01:41
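The duplicate cinder-volume rows above (the disabled cephvolume entry and the stale cloudvolume01 one) are leftovers from earlier attempts. A minimal clean-up sketch, assuming the standard cinder database schema; Juno's cinder-manage has no "service remove" subcommand (newer releases do), so stale rows are usually deleted by hand:

# run against the cinder database (host and credentials here are assumptions)
mysql -u root -p cinder -e "SELECT id, host, disabled, updated_at FROM services;"
mysql -u root -p cinder -e "DELETE FROM services WHERE host='cephvolume' AND disabled=1;"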


2015-11-16 12:42 GMT+01:00 M Ranga Swami Reddy <swamire...@gmail.com>:

> Hi,
> Can you share the output of below command:
>
> cinder-manage service list
>
>
> On Mon, Nov 16, 2015 at 4:45 PM, Iban Cabrillo <cabri...@ifca.unican.es>
> wrote:
>
>> cloud:~ # cinder list
>>
>> +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
>> |                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
>> +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
>> | 6e1c86d5-efb6-469a-bbad-58b1011507bf | available |  volumetest  |  5   |     rbd     |  false   |             |
>> +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
>> cloud:~ # nova volume-attach 08f6fef5-7c98-445b-abfe-636c4c6fee89
>> 6e1c86d5-efb6-469a-bbad-58b1011507bf auto
>> +----------+--------------------------------------+
>> | Property | Value                                |
>> +----------+--------------------------------------+
>> | device   | /dev/xvdd                            |
>> | id       | 6e1c86d5-efb6-469a-bbad-58b1011507bf |
>> | serverId | 08f6fef5-7c98-445b-abfe-636c4c6fee89 |
>> | volumeId | 6e1c86d5-efb6-469a-bbad-58b1011507bf |
>> +----------+--------------------------------------+
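>>
>> (Two quick cross-checks, only as a sketch with the IDs above, to see what nova and
>> cinder think happened before going to the hypervisor; the volume-attachments
>> subcommand is assumed to be present in this novaclient:)
>>
>> nova volume-attachments 08f6fef5-7c98-445b-abfe-636c4c6fee89
>> cinder show 6e1c86d5-efb6-469a-bbad-58b1011507bf | grep -E 'status|attach'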
>>
>>
>> I just increased the cinder-volume log level to see if anything is wrong...
>>
>> 2015-11-16 12:01:13.333 21152 DEBUG oslo_messaging._drivers.amqp [-]
>> unpacked context: {u'read_only': False, u'domain': None, u'project_name':
>> u'ifca.es:service:ge', u'user_id': u'b7aadbb4a85745feb498b74e437129cc',
>> u'show_deleted': False, u'roles': [u'_member_'], u'user_identity':
>> u'b7aadbb4a85745feb498b74e437129cc ce2dd2951bd24c1ea3b43c3b3716f604 - - -',
>> u'project_domain': None, u'timestamp': u'2015-11-16T11:01:13.282590',
>> u'auth_token': u'***', u'remote_address': u'10.10.11.1', u'quota_class':
>> None, u'resource_uuid': None, u'project_id':
>> u'ce2dd2951bd24c1ea3b43c3b3716f604', u'is_admin': False, u'user':
>> u'b7aadbb4a85745feb498b74e437129cc', u'service_catalog':
>> [{u'endpoints_links': [], u'endpoints': [{u'adminURL': u'
>> https://cloud.ifca.es:8774/v1.1/ce2dd2951bd24c1ea3b43c3b3716f604',
>> u'region': u'RegionOne', u'publicURL': u'
>> https://cloud.ifca.es:8774/v1.1/ce2dd2951bd24c1ea3b43c3b3716f604',
>> u'internalURL': u'
>> https://cloud.ifca.es:8774/v1.1/ce2dd2951bd24c1ea3b43c3b3716f604',
>> u'id': u'1b7f14c87d8c42ad962f4d3a5fd13a77'}], u'type': u'compute', u'name':
>> u'nova'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'
>> https://keystone.ifca.es:35357/v2.0', u'region': u'RegionOne',
>> u'publicURL': u'https://keystone.ifca.es:5000/v2.0', u'internalURL': u'
>> https://keystone.ifca.es:5000/v2.0', u'id':
>> u'510c45b865ba4f40997b91a85552f3e2'}], u'type': u'identity', u'name':
>> u'keystone'}], u'request_id': u'req-3b848e28-6cad-4a11-a68c-3ebff034b91e',
>> u'user_domain': None, u'read_deleted': u'no', u'tenant':
>> u'ce2dd2951bd24c1ea3b43c3b3716f604'} unpack_context
>> /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:197
>> 2015-11-16 12:01:13.581 21152 DEBUG cinder.volume.manager
>> [req-3b848e28-6cad-4a11-a68c-3ebff034b91e b7aadbb4a85745feb498b74e437129cc
>> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] Volume
>> 6e1c86d5-efb6-469a-bbad-58b1011507bf: creating export initialize_connection
>> /usr/lib/python2.7/dist-packages/cinder/volume/manager.py:1084
>> 2015-11-16 12:01:13.605 21152 DEBUG oslo_concurrency.processutils
>> [req-3b848e28-6cad-4a11-a68c-3ebff034b91e b7aadbb4a85745feb498b74e437129cc
>> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] Running cmd (subprocess): ceph mon
>> dump --format=json --id cinder --conf /etc/ceph/ceph.conf execute
>> /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:199
>> 2015-11-16 12:01:13.938 21152 DEBUG oslo_concurrency.processutils
>> [req-3b848e28-6cad-4a11-a68c-3ebff034b91e b7aadbb4a85745feb498b74e437129cc
>> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] CMD "ceph mon dump --format=json
>> --id cinder --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute
>> /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:225
>> 2015-11-16 12:01:13.941 21152 DEBUG cinder.volume.drivers.rbd
>> [req-3b848e28-6cad-4a11-a68c-3ebff034b91e b7aadbb4a85745feb498b74e437129cc
>> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] connection data:
>> {'driver_volume_type': 'rbd', 'data': {'secret_type': 'ceph', 'name':
>> u'volumes/volume-6e1c86d5-efb6-469a-bbad-58b1011507bf', 'secret_uuid':
>> '67a6d4a1-e53a-42c7-9bc9-9bcc4191d7e4', 'auth_enabled': True, 'hosts':
>> [u'10.10.3.1', u'10.10.3.2', u'10.10.3.3'], 'auth_username': 'cinder',
>> 'ports': [u'6789', u'6789', u'6789']}} initialize_connection
>> /usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py:768
>> 2015-11-16 12:01:14.063 21152 DEBUG oslo_messaging._drivers.amqp
>> [req-70a45729-21c0-419b-ae09-a691f29a5970 - - - - -] UNIQUE_ID is
>> 7c262c46392f4b0c8d92cdd5fb136a4c. _add_unique_id
>> /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:252
>> 2015-11-16 12:01:14.067 21152 DEBUG oslo_messaging._drivers.amqp
>> [req-70a45729-21c0-419b-ae09-a691f29a5970 - - - - -] UNIQUE_ID is
>> e7b4bfe62cfa42ccbd5f5895d6807cb6. _add_unique_id
>> /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:252
>> 2015-11-16 12:01:14.820 21152 DEBUG oslo_messaging._drivers.amqp [-]
>> unpacked context: {u'read_only': False, u'domain': None, u'project_name':
>> u'ifca.es:service:ge', u'user_id': u'b7aadbb4a85745feb498b74e437129cc',
>> u'show_deleted': False, u'roles': [u'_member_'], u'user_identity':
>> u'b7aadbb4a85745feb498b74e437129cc ce2dd2951bd24c1ea3b43c3b3716f604 - - -',
>> u'project_domain': None, u'timestamp': u'2015-11-16T11:01:14.346924',
>> u'auth_token': u'***', u'remote_address': u'10.10.11.1', u'quota_class':
>> None, u'resource_uuid': None, u'project_id':
>> u'ce2dd2951bd24c1ea3b43c3b3716f604', u'is_admin': False, u'user':
>> u'b7aadbb4a85745feb498b74e437129cc', u'service_catalog':
>> [{u'endpoints_links': [], u'endpoints': [{u'adminURL': u'
>> https://cloud.ifca.es:8774/v1.1/ce2dd2951bd24c1ea3b43c3b3716f604',
>> u'region': u'RegionOne', u'publicURL': u'
>> https://cloud.ifca.es:8774/v1.1/ce2dd2951bd24c1ea3b43c3b3716f604',
>> u'internalURL': u'
>> https://cloud.ifca.es:8774/v1.1/ce2dd2951bd24c1ea3b43c3b3716f604',
>> u'id': u'1b7f14c87d8c42ad962f4d3a5fd13a77'}], u'type': u'compute', u'name':
>> u'nova'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'
>> https://keystone.ifca.es:35357/v2.0', u'region': u'RegionOne',
>> u'publicURL': u'https://keystone.ifca.es:5000/v2.0', u'internalURL': u'
>> https://keystone.ifca.es:5000/v2.0', u'id':
>> u'510c45b865ba4f40997b91a85552f3e2'}], u'type': u'identity', u'name':
>> u'keystone'}], u'request_id': u'req-0cc7d025-2aa2-41cb-9218-2e4a04ff2a8d',
>> u'user_domain': None, u'read_deleted': u'no', u'tenant':
>> u'ce2dd2951bd24c1ea3b43c3b3716f604'} unpack_context
>> /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:197
>> 2015-11-16 12:01:14.823 21152 WARNING oslo_config.cfg
>> [req-0cc7d025-2aa2-41cb-9218-2e4a04ff2a8d b7aadbb4a85745feb498b74e437129cc
>> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] Option "lock_path" from group
>> "DEFAULT" is deprecated. Use option "lock_path" from group
>> "oslo_concurrency".
>> 2015-11-16 12:01:14.824 21152 DEBUG oslo_concurrency.lockutils
>> [req-0cc7d025-2aa2-41cb-9218-2e4a04ff2a8d b7aadbb4a85745feb498b74e437129cc
>> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] Acquired file lock
>> "/var/lock/cinder/cinder-6e1c86d5-efb6-469a-bbad-58b1011507bf" after
>> waiting 0.000s acquire
>> /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:212
>> 2015-11-16 12:01:14.824 21152 DEBUG oslo_concurrency.lockutils
>> [req-0cc7d025-2aa2-41cb-9218-2e4a04ff2a8d b7aadbb4a85745feb498b74e437129cc
>> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] Lock
>> "6e1c86d5-efb6-469a-bbad-58b1011507bf" acquired by "do_attach" :: waited
>> 0.002s inner
>> /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:444
>> 2015-11-16 12:01:15.373 21152 DEBUG oslo_concurrency.lockutils
>> [req-0cc7d025-2aa2-41cb-9218-2e4a04ff2a8d b7aadbb4a85745feb498b74e437129cc
>> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] Releasing file lock
>> "/var/lock/cinder/cinder-6e1c86d5-efb6-469a-bbad-58b1011507bf" after
>> holding it for 0.550s release
>> /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:227
>> 2015-11-16 12:01:15.374 21152 DEBUG oslo_concurrency.lockutils
>> [req-0cc7d025-2aa2-41cb-9218-2e4a04ff2a8d b7aadbb4a85745feb498b74e437129cc
>> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] Lock
>> "6e1c86d5-efb6-469a-bbad-58b1011507bf" released by "do_attach" :: held
>> 0.550s inner
>> /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:456
>> 2015-11-16 12:01:15.375 21152 DEBUG oslo_messaging._drivers.amqp
>> [req-0cc7d025-2aa2-41cb-9218-2e4a04ff2a8d b7aadbb4a85745feb498b74e437129cc
>> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] UNIQUE_ID is
>> 50031f0472224477a5fe424a55b73358. _add_unique_id
>> /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:252
>> 2015-11-16 12:01:15.379 21152 DEBUG oslo_messaging._drivers.amqp
>> [req-0cc7d025-2aa2-41cb-9218-2e4a04ff2a8d b7aadbb4a85745feb498b74e437129cc
>> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] UNIQUE_ID is
>> 702bb2fe12b64dbfbdeb2cc82547beb2. _add_unique_id
>> /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:252
>> 2015-11-16 12:01:45.164 21152 DEBUG cinder.openstack.common.periodic_task
>> [req-17e48216-69b8-48db-bc86-d772cfd9b0d3 - - - - -] Running periodic task
>> VolumeManager._publish_service_capabilities run_periodic_tasks
>> /usr/lib/python2.7/dist-packages/cinder/openstack/common/periodic_task.py:219
>> 2015-11-16 12:01:45.165 21152 DEBUG cinder.manager
>> [req-17e48216-69b8-48db-bc86-d772cfd9b0d3 - - - - -] Notifying Schedulers
>> of capabilities ... _publish_service_capabilities
>> /usr/lib/python2.7/dist-packages/cinder/manager.py:140
>> 2015-11-16 12:01:45.166 21152 DEBUG oslo_messaging._drivers.amqp
>> [req-17e48216-69b8-48db-bc86-d772cfd9b0d3 - - - - -] UNIQUE_ID is
>> fae6b9633d274ad4941e645ddadd4ae4. _add_unique_id
>> /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:252
>> 2015-11-16 12:01:45.171 21152 DEBUG cinder.openstack.common.periodic_task
>> [req-17e48216-69b8-48db-bc86-d772cfd9b0d3 - - - - -] Running periodic task
>> VolumeManager._report_driver_status run_periodic_tasks
>> /usr/lib/python2.7/dist-packages/cinder/openstack/common/periodic_task.py:219
>> 2015-11-16 12:01:45.172 21152 INFO cinder.volume.manager
>> [req-17e48216-69b8-48db-bc86-d772cfd9b0d3 - - - - -] Updating volume status
>> 2015-11-16 12:01:45.172 21152 DEBUG cinder.volume.drivers.rbd
>> [req-17e48216-69b8-48db-bc86-d772cfd9b0d3 - - - - -] opening connection to
>> ceph cluster (timeout=-1). _connect_to_rados
>> /usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py:300
>> 2015-11-16 12:02:45.165 21152 DEBUG cinder.openstack.common.periodic_task
>> [req-d0f83a9c-54dc-47e6-a338-fafae8d6996e - - - - -] Running periodic task
>> VolumeManager._publish_service_capabilities run_periodic_tasks
>> /usr/lib/python2.7/dist-packages/cinder/openstack/common/periodic_task.py:219
>> 2015-11-16 12:02:45.166 21152 DEBUG cinder.manager
>> [req-d0f83a9c-54dc-47e6-a338-fafae8d6996e - - - - -] Notifying Schedulers
>> of capabilities ... _publish_service_capabilities
>> /usr/lib/python2.7/dist-packages/cinder/manager.py:140
>> 2015-11-16 12:02:45.167 21152 DEBUG oslo_messaging._drivers.amqp
>> [req-d0f83a9c-54dc-47e6-a338-fafae8d6996e - - - - -] UNIQUE_ID is
>> 0e76ab62004f4c64aa92eef6360889ae. _add_unique_id
>> /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:252
>> 2015-11-16 12:02:45.174 21152 DEBUG cinder.openstack.common.periodic_task
>> [req-d0f83a9c-54dc-47e6-a338-fafae8d6996e - - - - -] Running periodic task
>> VolumeManager._report_driver_status run_periodic_tasks
>> /usr/lib/python2.7/dist-packages/cinder/openstack/common/periodic_task.py:219
>> 2015-11-16 12:02:45.175 21152 INFO cinder.volume.manager
>> [req-d0f83a9c-54dc-47e6-a338-fafae8d6996e - - - - -] Updating volume status
>> 2015-11-16 12:02:45.176 21152 DEBUG cinder.volume.drivers.rbd
>> [req-d0f83a9c-54dc-47e6-a338-fafae8d6996e - - - - -] opening connection to
>> ceph cluster (timeout=-1). _connect_to_rados
>> /usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py:300
>> 2015-11-16 12:03:45.166 21152 DEBUG cinder.openstack.common.periodic_task
>> [req-eb2b62e5-33e2-46c1-80da-1ba81a0d6a2e - - - - -] Running periodic task
>> VolumeManager._publish_service_capabilities run_periodic_tasks
>> /usr/lib/python2.7/dist-packages/cinder/openstack/common/periodic_task.py:219
>> 2015-11-16 12:03:45.167 21152 DEBUG cinder.manager
>> [req-eb2b62e5-33e2-46c1-80da-1ba81a0d6a2e - - - - -] Notifying Schedulers
>> of capabilities ... _publish_service_capabilities
>> /usr/lib/python2.7/dist-packages/cinder/manager.py:140
>> 2015-11-16 12:03:45.168 21152 DEBUG oslo_messaging._drivers.amqp
>> [req-eb2b62e5-33e2-46c1-80da-1ba81a0d6a2e - - - - -] UNIQUE_ID is
>> b3124d95de0b47ce90a3b9184b3ef884. _add_unique_id
>> /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:252
>> 2015-11-16 12:03:45.173 21152 DEBUG cinder.openstack.common.periodic_task
>> [req-eb2b62e5-33e2-46c1-80da-1ba81a0d6a2e - - - - -] Running periodic task
>> VolumeManager._report_driver_status run_periodic_tasks
>> /usr/lib/python2.7/dist-packages/cinder/openstack/common/periodic_task.py:219
>> 2015-11-16 12:03:45.174 21152 INFO cinder.volume.manager
>> [req-eb2b62e5-33e2-46c1-80da-1ba81a0d6a2e - - - - -] Updating volume status
>> 2015-11-16 12:03:45.175 21152 DEBUG cinder.volume.drivers.rbd
>> [req-eb2b62e5-33e2-46c1-80da-1ba81a0d6a2e - - - - -] opening connection to
>> ceph cluster (timeout=-1). _connect_to_rados
>> /usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py:300
>>
>> Of course, the attach has failed again on the hypervisor (Xen)...
>>
>> xen be: qdisk-51760: error: Could not open
>> 'volumes/volume-6e1c86d5-efb6-469a-bbad-58b1011507bf': No such file or
>> directory
>>
>> 01:~ # rbd ls -p volumes --id cinder
>> test
>> volume-6e1c86d5-efb6-469a-bbad-58b1011507bf
>> volume-7da08f12-fb0f-4269-931a-d528c1507fee
>>
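>> So the image exists in the pool, yet the Xen qdisk backend tries to open
>> 'volumes/volume-...' as a local path. A minimal check on the Xen/nova-compute
>> host (the domain name below is hypothetical and the libxl log path is an
>> assumption):
>>
>> virsh list --all
>> virsh dumpxml instance-0000004a | grep -A 10 6e1c86d5
>> # a working rbd attach should show <source protocol='rbd' ...> together with an
>> # <auth username='cinder'> element; a bare path means the rbd connection info
>> # never made it into the disk definition handed to Xen
>> grep -i rbd /var/log/libvirt/libxl/*.log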
>>
>> 2015-11-10 21:08 GMT+01:00 Iban Cabrillo <cabri...@ifca.unican.es>:
>>
>>> Hi Vasily,
>>>    Did you see anything interesting in the logs? I don't really know where
>>> else to look; everything seems OK to me.
>>> Any help would be very much appreciated.
>>>
>>>
>>> 2015-11-06 15:29 GMT+01:00 Iban Cabrillo <cabri...@ifca.unican.es>:
>>>
>>>> Hi Vasily,
>>>>   Of course,
>>>> from cinder-volume.log
>>>>
>>>>  2015-11-06 12:28:52.865 366 WARNING oslo_config.cfg
>>>> [req-41a4bbbb-4bec-40d2-a7c1-6e8d73644b4c b7aadbb4a85745feb498b74e437129cc
>>>> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] Option "lock_path" from group
>>>> "DEFAULT" is deprecated. Use option "lock_path" from group
>>>> "oslo_concurrency".
>>>> 2015-11-06 13:09:31.863 15534 WARNING oslo_config.cfg
>>>> [req-dd47624d-cf25-4beb-9d9e-70f532b2e8f9 - - - - -] Option "lock_path"
>>>> from group "DEFAULT" is deprecated. Use option "lock_path" from group
>>>> "oslo_concurrency".
>>>> 2015-11-06 13:09:44.375 15544 WARNING oslo_config.cfg
>>>> [req-696a1282-b84c-464c-a220-d4e41a7dbd02 b7aadbb4a85745feb498b74e437129cc
>>>> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] Option "lock_path" from group
>>>> "DEFAULT" is deprecated. Use option "lock_path" from group
>>>> "oslo_concurrency".
>>>> 2015-11-06 13:11:02.024 15722 WARNING oslo_config.cfg
>>>> [req-db3c3775-3607-4fb7-acc9-5dba207bde56 - - - - -] Option "lock_path"
>>>> from group "DEFAULT" is deprecated. Use option "lock_path" from group
>>>> "oslo_concurrency".
>>>> 2015-11-06 13:11:40.042 15729 WARNING oslo_config.cfg
>>>> [req-45458cfd-4e3a-4be2-b858-cece77072829 b7aadbb4a85745feb498b74e437129cc
>>>> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] Option "lock_path" from group
>>>> "DEFAULT" is deprecated. Use option "lock_path" from group
>>>> "oslo_concurrency".
>>>> 2015-11-06 13:16:49.331 15729 WARNING cinder.quota
>>>> [req-4e2c2f71-5bfa-487e-a99f-a6bb63bf1bc1 - - - - -] Deprecated: Default
>>>> quota for resource: gigabytes_rbd is set by the default quota flag:
>>>> quota_gigabytes_rbd, it is now deprecated. Please use the default quota
>>>> class for default quota.
>>>> 2015-11-06 13:16:49.332 15729 WARNING cinder.quota
>>>> [req-4e2c2f71-5bfa-487e-a99f-a6bb63bf1bc1 - - - - -] Deprecated: Default
>>>> quota for resource: volumes_rbd is set by the default quota flag:
>>>> quota_volumes_rbd, it is now deprecated. Please use the default quota class
>>>> for default quota.
>>>> 2015-11-06 13:18:16.163 16635 WARNING oslo_config.cfg
>>>> [req-503543b9-c2df-4483-a8b3-11f622a9cbe8 - - - - -] Option "lock_path"
>>>> from group "DEFAULT" is deprecated. Use option "lock_path" from group
>>>> "oslo_concurrency".
>>>> 2015-11-06 14:17:08.288 16970 WARNING oslo_config.cfg
>>>> [req-a4ce4dbf-4119-427b-b555-930e66b9a2e3 58981d56c6cd4c5cacd59e518220a0eb
>>>> 4d778e83692b44778f71cbe44da0bc0b - - -] Option "lock_path" from group
>>>> "DEFAULT" is deprecated. Use option "lock_path" from group
>>>> "oslo_concurrency".
>>>> 2015-11-06 14:17:08.674 16970 WARNING cinder.quota
>>>> [req-fe21f3ad-7160-45b4-8adf-4cbe4bb85fc3 - - - - -] Deprecated: Default
>>>> quota for resource: gigabytes_rbd is set by the default quota flag:
>>>> quota_gigabytes_rbd, it is now deprecated. Please use the default quota
>>>> class for default quota.
>>>> 2015-11-06 14:17:08.676 16970 WARNING cinder.quota
>>>> [req-fe21f3ad-7160-45b4-8adf-4cbe4bb85fc3 - - - - -] Deprecated: Default
>>>> quota for resource: volumes_rbd is set by the default quota flag:
>>>> quota_volumes_rbd, it is now deprecated. Please use the default quota class
>>>> for default quota.
>>>>
>>>> And from nova-compute.log
>>>>
>>>> 2015-11-06 12:28:20.260 25915 INFO oslo_messaging._drivers.impl_rabbit
>>>> [req-dd85618c-ab24-43df-8192-b069d00abeeb - - - - -] Connected to AMQP
>>>> server on rabbitmq01:5672
>>>> 2015-11-06 12:28:51.864 25915 INFO nova.compute.manager
>>>> [req-030d8966-cbe7-46c3-9d95-a1c886553fbd b7aadbb4a85745feb498b74e437129cc
>>>> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] [instance:
>>>> 08f6fef5-7c98-445b-abfe-636c4c6fee89] Detach volume
>>>> 4d26bb31-91e8-4646-8010-82127b775c8e from mountpoint /dev/xvdd
>>>> 2015-11-06 12:29:18.255 25915 INFO nova.compute.resource_tracker
>>>> [req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Auditing locally
>>>> available compute resources for node cms01.ifca.es
>>>> 2015-11-06 12:29:18.480 25915 INFO nova.compute.resource_tracker
>>>> [req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Total usable vcpus:
>>>> 24, total allocated vcpus: 24
>>>> 2015-11-06 12:29:18.481 25915 INFO nova.compute.resource_tracker
>>>> [req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Final resource view:
>>>> name=cms01.ifca.es phys_ram=49143MB used_ram=47616MB phys_disk=270GB
>>>> used_disk=220GB total_vcpus=24 used_vcpus=24
>>>> pci_stats=<nova.pci.stats.PciDeviceStats object at 0x7fc458153d50>
>>>> 2015-11-06 12:29:18.508 25915 INFO nova.scheduler.client.report
>>>> [req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Compute_service record
>>>> updated for ('cms01', 'cms01.ifca.es')
>>>> 2015-11-06 12:29:18.508 25915 INFO nova.compute.resource_tracker
>>>> [req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Compute_service record
>>>> updated for cms01:cms01.ifca.es
>>>> 2015-11-06 12:29:49.825 25915 INFO nova.compute.manager
>>>> [req-92d8810c-bea8-4eba-b682-c0d4e9d90c89 b7aadbb4a85745feb498b74e437129cc
>>>> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] [instance:
>>>> 08f6fef5-7c98-445b-abfe-636c4c6fee89] Attaching volume
>>>> 4d26bb31-91e8-4646-8010-82127b775c8e to /dev/xvdd
>>>> 2015-11-06 12:30:20.389 25915 INFO nova.compute.resource_tracker
>>>> [req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Auditing locally
>>>> available compute resources for node cms01.ifca.es
>>>> 2015-11-06 12:30:20.595 25915 INFO nova.compute.resource_tracker
>>>> [req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Total usable vcpus:
>>>> 24, total allocated vcpus: 24
>>>> 2015-11-06 12:30:20.596 25915 INFO nova.compute.resource_tracker
>>>> [req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Final resource view:
>>>> name=cms01.ifca.es phys_ram=49143MB used_ram=47616MB phys_disk=270GB
>>>> used_disk=220GB total_vcpus=24 used_vcpus=24
>>>> pci_stats=<nova.pci.stats.PciDeviceStats object at 0x7fc458153d50>
>>>> 2015-11-06 12:30:20.622 25915 INFO nova.scheduler.client.report
>>>> [req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Compute_service record
>>>> updated for ('cms01', 'cms01.ifca.es')
>>>> 2015-11-06 12:30:20.623 25915 INFO nova.compute.resource_tracker
>>>> [req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Compute_service record
>>>> updated for cms01:cms01.ifca.es
>>>> 2015-11-06 12:31:21.421 25915 INFO nova.compute.resource_tracker
>>>> [req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Auditing locally
>>>> available compute resources for node cms01.ifca.es
>>>> 2015-11-06 12:31:21.721 25915 INFO nova.compute.resource_tracker
>>>> [req-0a4b7821-1b11-4ff7-a78d-d7e2b7b5a001 - - - - -] Total usable vcpus:
>>>> 24, total allocated vcpus: 24
>>>>
>>>> .........................
>>>>
>>>> I can attach the full log if you want.
>>>>
>>>> 2015-11-06 13:48 GMT+01:00 Vasiliy Angapov <anga...@gmail.com>:
>>>>
>>>>> There must be something in /var/log/cinder/volume.log or
>>>>> /var/log/nova/nova-compute.log that points to the problem. Can you
>>>>> post it here?
>>>>>
>>>>> 2015-11-06 20:14 GMT+08:00 Iban Cabrillo <cabri...@ifca.unican.es>:
>>>>> > Hi Vasiliy,
>>>>> >   Thanks, but I still see the same error:
>>>>> >
>>>>> > cinder.conf (and of course I restarted the cinder-volume service):
>>>>> >
>>>>> > # default volume type to use (string value)
>>>>> >
>>>>> > [rbd-cephvolume]
>>>>> > rbd_user = cinder
>>>>> > rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxxx
>>>>> > volume_backend_name=rbd
>>>>> > volume_driver = cinder.volume.drivers.rbd.RBDDriver
>>>>> > rbd_pool = volumes
>>>>> > rbd_ceph_conf = /etc/ceph/ceph.conf
>>>>> > rbd_flatten_volume_from_snapshot = false
>>>>> > rbd_max_clone_depth = 5
>>>>> > rbd_store_chunk_size = 4
>>>>> > rados_connect_timeout = -1
>>>>> > glance_api_version = 2
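>>>>> >
>>>>> > (One more thing worth double-checking, only as a sketch: with a named backend
>>>>> > section such as [rbd-cephvolume], the [DEFAULT] section also has to enable it,
>>>>> > and a volume type keyed to volume_backend_name helps the scheduler pick it:)
>>>>> >
>>>>> > [DEFAULT]
>>>>> > enabled_backends = rbd-cephvolume
>>>>> >
>>>>> > # hypothetical commands to tie the existing 'rbd' volume type to this backend
>>>>> > # cinder type-create rbd
>>>>> > # cinder type-key rbd set volume_backend_name=rbd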
>>>>> >
>>>>> >
>>>>> >   xen be: qdisk-51760: error: Could not open
>>>>> > 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or directory
>>>>> > xen be: qdisk-51760: initialise() failed
>>>>> >   [the same error / initialise() failed pair repeats five more times]
>>>>> >
>>>>> > Regards, I
>>>>> >
>>>>> > 2015-11-06 13:00 GMT+01:00 Vasiliy Angapov <anga...@gmail.com>:
>>>>> >>
>>>>> >> At cinder.conf you should place this options:
>>>>> >>
>>>>> >> rbd_user = cinder
>>>>> >> rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxxx
>>>>> >>
>>>>> >> to [rbd-cephvolume] section instead of DEFAULT.
>>>>> >>
>>>>> >> 2015-11-06 19:45 GMT+08:00 Iban Cabrillo <cabri...@ifca.unican.es>:
>>>>> >> > Hi,
>>>>> >> >   One more step debugging this issue (the hypervisor/nova-compute node is
>>>>> >> > Xen 4.4.2):
>>>>> >> >
>>>>> >> >   I think the problem is that libvirt is not getting the correct user or
>>>>> >> > credentials to access the pool; in the instance's qemu log I see:
>>>>> >> >
>>>>> >> > xen be: qdisk-51760: error: Could not open
>>>>> >> > 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or directory
>>>>> >> > xen be: qdisk-51760: initialise() failed
>>>>> >> > xen be: qdisk-51760: error: Could not open
>>>>> >> > 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or directory
>>>>> >> > xen be: qdisk-51760: initialise() failed
>>>>> >> > xen be: qdisk-51760: error: Could not open
>>>>> >> > 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or directory
>>>>> >> >
>>>>> >> > But listing the volumes pool as the cinder user works:
>>>>> >> >
>>>>> >> > rbd ls -p volumes --id cinder
>>>>> >> > test
>>>>> >> > volume-4d26bb31-91e8-4646-8010-82127b775c8e
>>>>> >> > volume-5e2ab5c2-4710-4c28-9755-b5bc4ff6a52a
>>>>> >> > volume-7da08f12-fb0f-4269-931a-d528c1507fee
>>>>> >> >
>>>>> >> > Running:
>>>>> >> > qemu-img info -f rbd rbd:volumes/test
>>>>> >> > does not work, but passing the cinder user and the ceph.conf file
>>>>> >> > explicitly works fine:
>>>>> >> >
>>>>> >> > qemu-img info -f rbd rbd:volumes/test:id=cinder:conf=/etc/ceph/ceph.conf
>>>>> >> >
>>>>> >> > I think nova.conf is set correctly (the [libvirt] section):
>>>>> >> > images_rbd_pool = volumes
>>>>> >> > images_rbd_ceph_conf = /etc/ceph/ceph.conf
>>>>> >> > hw_disk_discard=unmap
>>>>> >> > rbd_user = cinder
>>>>> >> > rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-XXXXXXXXXXXX
>>>>> >> >
>>>>> >> > And looking at libvirt:
>>>>> >> >
>>>>> >> > # virsh secret-list
>>>>> >> > setlocale: No such file or directory
>>>>> >> >  UUID                                  Usage
>>>>> >> >
>>>>> >> >
>>>>> --------------------------------------------------------------------------------
>>>>> >> >  67a6d4a1-e53a-42c7-9bc9-XXXXXXXXXXXX  ceph client.cinder secret
>>>>> >> >
>>>>> >> >
>>>>> >> > virsh secret-get-value 67a6d4a1-e53a-42c7-9bc9-XXXXXXXXXXXX
>>>>> >> > setlocale: No such file or directory
>>>>> >> > AQAonAdWS3iMJxxxxxxj9iErv001a0k+vyFdUg==
>>>>> >> > cat /etc/ceph/ceph.client.cinder.keyring
>>>>> >> > [client.cinder]
>>>>> >> > key = AQAonAdWS3iMJxxxxxxj9iErv001a0k+vyFdUg==
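>>>>> >> >
>>>>> >> > One more check worth doing, as a sketch (the binary path is a guess for a Xen
>>>>> >> > 4.4 install on Ubuntu): the qemu behind the Xen qdisk backend has to be linked
>>>>> >> > against librbd, otherwise it can only treat 'volumes/volume-...' as a local file:
>>>>> >> >
>>>>> >> > ldd /usr/lib/xen-4.4/bin/qemu-system-i386 | grep -i rbd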
>>>>> >> >
>>>>> >> >
>>>>> >> > Any idea will be welcomed.
>>>>> >> > regards, I
>>>>> >> >
>>>>> >> > 2015-11-04 10:51 GMT+01:00 Iban Cabrillo <cabri...@ifca.unican.es
>>>>> >:
>>>>> >> >>
>>>>> >> >> Dear Cephers,
>>>>> >> >>
>>>>> >> >>    I still can't attach volumes to my cloud machines; the ceph version is
>>>>> >> >> 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43) and OpenStack is Juno.
>>>>> >> >>
>>>>> >> >>    Nova+cinder are able to create volumes on Ceph
>>>>> >> >> cephvolume:~ # rados ls --pool volumes
>>>>> >> >> rbd_header.1f7784a9e1c2e
>>>>> >> >> rbd_id.volume-5e2ab5c2-4710-4c28-9755-b5bc4ff6a52a
>>>>> >> >> rbd_directory
>>>>> >> >> rbd_id.volume-7da08f12-fb0f-4269-931a-d528c1507fee
>>>>> >> >> rbd_header.23d5e33b4c15c
>>>>> >> >> rbd_id.volume-4d26bb31-91e8-4646-8010-82127b775c8e
>>>>> >> >> rbd_header.20407190ce77f
>>>>> >> >>
>>>>> >> >> cloud:~ # cinder list
>>>>> >> >> +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
>>>>> >> >> |                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
>>>>> >> >> +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
>>>>> >> >> | 4d26bb31-91e8-4646-8010-82127b775c8e | in-use |     None     |  2   |     rbd     |  false   | 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb |
>>>>> >> >> +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
>>>>> >> >>
>>>>> >> >>
>>>>> >> >>    nova:~ # nova volume-attach 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb 4d26bb31-91e8-4646-8010-82127b775c8e auto
>>>>> >> >> +----------+--------------------------------------+
>>>>> >> >> | Property | Value                                |
>>>>> >> >> +----------+--------------------------------------+
>>>>> >> >> | device   | /dev/xvdd                            |
>>>>> >> >> | id       | 4d26bb31-91e8-4646-8010-82127b775c8e |
>>>>> >> >> | serverId | 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb |
>>>>> >> >> | volumeId | 4d26bb31-91e8-4646-8010-82127b775c8e |
>>>>> >> >> +----------+--------------------------------------+
>>>>> >> >>
>>>>> >> >> From the nova-compute node (Ubuntu 14.04 LTS) I can see the attach/detach requests:
>>>>> >> >> cloud01:~ # dpkg -l | grep ceph
>>>>> >> >> ii  ceph-common    0.94.5-1trusty  amd64  common utilities to mount and interact with a ceph storage cluster
>>>>> >> >> ii  libcephfs1     0.94.5-1trusty  amd64  Ceph distributed file system client library
>>>>> >> >> ii  python-cephfs  0.94.5-1trusty  amd64  Python libraries for the Ceph libcephfs library
>>>>> >> >> ii  librbd1        0.94.5-1trusty  amd64  RADOS block device client library
>>>>> >> >> ii  python-rbd     0.94.5-1trusty  amd64  Python libraries for the Ceph librbd library
>>>>> >> >>
>>>>> >> >> In cinder.conf:
>>>>> >> >>
>>>>> >> >>  rbd_user = cinder
>>>>> >> >> rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxxx
>>>>> >> >>
>>>>> >> >> [rbd-cephvolume]
>>>>> >> >> volume_backend_name=rbd
>>>>> >> >> volume_driver = cinder.volume.drivers.rbd.RBDDriver
>>>>> >> >> rbd_pool = volumes
>>>>> >> >> rbd_ceph_conf = /etc/ceph/ceph.conf
>>>>> >> >> rbd_flatten_volume_from_snapshot = false
>>>>> >> >> rbd_max_clone_depth = 5
>>>>> >> >> rbd_store_chunk_size = 4
>>>>> >> >> rados_connect_timeout = -1
>>>>> >> >> glance_api_version = 2
>>>>> >> >>
>>>>> >> >> in nova.conf
>>>>> >> >> rbd_user=cinder
>>>>> >> >>
>>>>> >> >> # The libvirt UUID of the secret for the rbd_uservolumes
>>>>> >> >> # (string value)
>>>>> >> >> rbd_secret_uuid=67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxxx
>>>>> >> >>
>>>>> >> >> images_rbd_pool=volumes
>>>>> >> >>
>>>>> >> >> # Path to the ceph configuration file to use (string value)
>>>>> >> >> images_rbd_ceph_conf=/etc/ceph/ceph.conf
>>>>> >> >>
>>>>> >> >> ls -la /etc/libvirt/secrets
>>>>> >> >> total 16
>>>>> >> >> drwx------ 2 root root 4096 Nov  4 10:28 .
>>>>> >> >> drwxr-xr-x 7 root root 4096 Oct 22 13:15 ..
>>>>> >> >> -rw------- 1 root root   40 Nov  4 10:28
>>>>> >> >> 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxx.base64
>>>>> >> >> -rw------- 1 root root  170 Nov  4 10:25
>>>>> >> >> 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxx.xml
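>>>>> >> >>
>>>>> >> >> (For reference, a minimal sketch of how that secret is normally defined on the
>>>>> >> >> compute node, in case it has to be recreated; secret.xml is a hypothetical file
>>>>> >> >> carrying the same UUID:)
>>>>> >> >>
>>>>> >> >> virsh secret-define --file secret.xml
>>>>> >> >> virsh secret-set-value --secret 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxxx --base64 $(ceph auth get-key client.cinder)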
>>>>> >> >>
>>>>> >> >>
>>>>> >> >>
>>>>> >> >> 2015-11-04 10:39:42.573 11653 INFO nova.compute.manager
>>>>> >> >> [req-8b2a9793-4b39-4cb0-b291-e492c350387e
>>>>> >> >> b7aadbb4a85745feb498b74e437129cc
>>>>> >> >> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] [instance:
>>>>> >> >> 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb] Detach volume
>>>>> >> >> 4d26bb31-91e8-4646-8010-82127b775c8e from mountpoint /dev/xvdd
>>>>> >> >> 2015-11-04 10:40:43.266 11653 INFO nova.compute.manager
>>>>> >> >> [req-35218de0-3f26-496b-aad9-5c839143da17
>>>>> >> >> b7aadbb4a85745feb498b74e437129cc
>>>>> >> >> ce2dd2951bd24c1ea3b43c3b3716f604 - - -] [instance:
>>>>> >> >> 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb] Attaching volume
>>>>> >> >> 4d26bb31-91e8-4646-8010-82127b775c8e to /dev/xvdd
>>>>> >> >>
>>>>> >> >> but on the cloud machine (SL6) the volume never shows up (xvdd):
>>>>> >> >> [root@cloud5 ~]# cat /proc/partitions
>>>>> >> >> major minor  #blocks  name
>>>>> >> >>
>>>>> >> >>  202        0   20971520 xvda
>>>>> >> >>  202       16 209715200 xvdb
>>>>> >> >>  202       32   10485760 xvdc
>>>>> >> >>
>>>>> >> >> Thanks in advance, I
>>>>> >> >>
>>>>> >> >> 2015-11-03 11:18 GMT+01:00 Iban Cabrillo <
>>>>> cabri...@ifca.unican.es>:
>>>>> >> >>>
>>>>> >> >>> Hi all,
>>>>> >> >>>     During the last week I have been trying to hook up our pre-existing ceph
>>>>> >> >>> cluster with our OpenStack instance.
>>>>> >> >>>     The ceph-cinder integration was easy (or at least I think so!).
>>>>> >> >>>     There is a single pool ('volumes') used to attach block storage to our
>>>>> >> >>> cloud machines.
>>>>> >> >>>
>>>>> >> >>>     The client.cinder user has permission on this pool (following the
>>>>> >> >>> guides):
>>>>> >> >>>     ...............
>>>>> >> >>>     client.cinder
>>>>> >> >>> key: AQAonXXXXXXXRAAPIAj9iErv001a0k+vyFdUg==
>>>>> >> >>> caps: [mon] allow r
>>>>> >> >>> caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes
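>>>>> >> >>>
>>>>> >> >>> (Those caps correspond to an "auth caps" call like the following, shown only as
>>>>> >> >>> a sketch of how they would be set or corrected:)
>>>>> >> >>>
>>>>> >> >>> ceph auth caps client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes'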
>>>>> >> >>>
>>>>> >> >>>    ceph.conf file seems to be OK:
>>>>> >> >>>
>>>>> >> >>> [global]
>>>>> >> >>> fsid = 6f5a65a7-316c-4825-afcb-428608941dd1
>>>>> >> >>> mon_initial_members = cephadm, cephmon02, cephmon03
>>>>> >> >>> mon_host = 10.10.3.1,10.10.3.2,10.10.3.3
>>>>> >> >>> auth_cluster_required = cephx
>>>>> >> >>> auth_service_required = cephx
>>>>> >> >>> auth_client_required = cephx
>>>>> >> >>> filestore_xattr_use_omap = true
>>>>> >> >>> osd_pool_default_size = 2
>>>>> >> >>> public_network = 10.10.0.0/16
>>>>> >> >>> cluster_network = 192.168.254.0/27
>>>>> >> >>>
>>>>> >> >>> [osd]
>>>>> >> >>> osd_journal_size = 20000
>>>>> >> >>>
>>>>> >> >>> [client.cinder]
>>>>> >> >>> keyring = /etc/ceph/ceph.client.cinder.keyring
>>>>> >> >>>
>>>>> >> >>> [client]
>>>>> >> >>> rbd cache = true
>>>>> >> >>> rbd cache writethrough until flush = true
>>>>> >> >>> admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
>>>>> >> >>>
>>>>> >> >>>
>>>>> >> >>> The trouble seems to be that the blocks are created using client.admin
>>>>> >> >>> instead of client.cinder.
>>>>> >> >>>
>>>>> >> >>> From the cinder machine:
>>>>> >> >>>
>>>>> >> >>> cinder:~ # rados ls --pool volumes
>>>>> >> >>> rbd_id.volume-5e2ab5c2-4710-4c28-9755-b5bc4ff6a52a
>>>>> >> >>> rbd_directory
>>>>> >> >>> rbd_id.volume-7da08f12-fb0f-4269-931a-d528c1507fee
>>>>> >> >>> rbd_header.23d5e33b4c15c
>>>>> >> >>> rbd_header.20407190ce77f
>>>>> >> >>>
>>>>> >> >>> But if I try to list them using the cinder client:
>>>>> >> >>>
>>>>> >> >>>
>>>>> >> >>>   cinder:~ #rados ls --pool volumes --secret client.cinder
>>>>> >> >>>   "empty answer"
>>>>> >> >>>
>>>>> >> >>> cinder:~ # ls -la /etc/ceph
>>>>> >> >>> total 24
>>>>> >> >>> drwxr-xr-x   2 root   root   4096 nov  3 10:17 .
>>>>> >> >>> drwxr-xr-x 108 root   root   4096 oct 29 09:52 ..
>>>>> >> >>> -rw-------   1 root   root     63 nov  3 10:17
>>>>> >> >>> ceph.client.admin.keyring
>>>>> >> >>> -rw-r--r--   1 cinder cinder   67 oct 28 13:44
>>>>> >> >>> ceph.client.cinder.keyring
>>>>> >> >>> -rw-r--r--   1 root   root    454 oct  1 13:56 ceph.conf
>>>>> >> >>> -rw-r--r--   1 root   root     73 sep 27 09:36 ceph.mon.keyring
>>>>> >> >>>
>>>>> >> >>>
>>>>> >> >>> From a client (I assumed this machine only needs the cinder
>>>>> >> >>> key...):
>>>>> >> >>>
>>>>> >> >>> cloud28:~ # ls -la /etc/ceph/
>>>>> >> >>> total 28
>>>>> >> >>> drwx------   2 root root  4096 nov  3 11:01 .
>>>>> >> >>> drwxr-xr-x 116 root root 12288 oct 30 14:37 ..
>>>>> >> >>> -rw-r--r--   1 nova nova    67 oct 28 11:43
>>>>> ceph.client.cinder.keyring
>>>>> >> >>> -rw-r--r--   1 root root   588 nov  3 10:59 ceph.conf
>>>>> >> >>> -rw-r--r--   1 root root    92 oct 26 16:59 rbdmap
>>>>> >> >>>
>>>>> >> >>> cloud28:~ # rbd -p volumes ls
>>>>> >> >>> 2015-11-03 11:01:58.782795 7fc6c714b840 -1 monclient(hunting):
>>>>> ERROR:
>>>>> >> >>> missing keyring, cannot use cephx for authentication
>>>>> >> >>> 2015-11-03 11:01:58.782800 7fc6c714b840  0 librados:
>>>>> client.admin
>>>>> >> >>> initialization error (2) No such file or directory
>>>>> >> >>> rbd: couldn't connect to the cluster!
>>>>> >> >>>
>>>>> >> >>> Any help will be welcome.
>>>>> >> >>>
>>>>> >> >>
>>>>> >> >>
>>>>> >> >>
>>>>> >> >
>>>>> >> >
>>>>> >> >
>>>>> >> >
>>>>> >> >
>>>>> >
>>>>> >
>>>>> >
>>>>> >
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>


-- 
############################################################################
Iban Cabrillo Bartolome
Instituto de Fisica de Cantabria (IFCA)
Santander, Spain
Tel: +34942200969
PGP PUBLIC KEY:
http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
############################################################################
Bertrand Russell:
*"El problema con el mundo es que los estúpidos están seguros de todo y los
inteligentes están llenos de dudas*"
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
