Thanks, Josh.  I am able to boot from my RBD Cinder volumes now.

Thanks,
Andy

On Oct 21, 2013, at 1:38 PM, Josh Durgin <josh.dur...@inktank.com> wrote:

> On 10/21/2013 10:35 AM, Andrew Richards wrote:
>> Thanks for the response Josh!
>> 
>> If the Ceph CLI tool still needs to be there for Cinder in Havana, then
>> am I correct in assuming that I still also need to export
>> "CEPH_ARGS='--id volumes'" in my cinder init script for the sake of
>> cephx like I had to do in Grizzly?
> 
> No, that's no longer necessary.
> 
> Josh
> 
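A minimal sketch of the Havana-style cinder.conf settings that take over from the CEPH_ARGS export, using the pool and cephx user names from the ceph.com guide (adjust the names to your deployment; the secret UUID placeholder is whatever you registered with libvirt):

    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = volumes
    rbd_secret_uuid = <uuid-of-the-libvirt-secret-for-client.volumes>
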
>> Thanks,
>> Andy
>> 
>> On Oct 21, 2013, at 12:26 PM, Josh Durgin <josh.dur...@inktank.com> wrote:
>> 
>>> On 10/21/2013 09:03 AM, Andrew Richards wrote:
>>>> Hi Everybody,
>>>> 
>>>> I'm attempting to get Ceph working on CentOS 6.4 running RDO Havana for
>>>> Cinder volume storage and boot-from-volume, and I keep bumping into
>>>> very unhelpful errors on my nova-compute test node and my cinder
>>>> controller node.
>>>> 
>>>> Here is what I see on my cinder-volume controller (Node #1) when I try
>>>> to attach an RBD-backed Cinder volume to a Nova VM using either the GUI
>>>> or nova volume-attach (/var/log/cinder/volume.log):
>>>> 
>>>> 2013-10-20 18:21:05.880 13668 ERROR cinder.openstack.common.rpc.amqp [req-bd62cb07-42e7-414a-86dc-f26f7a569de6 9bfee22cd15b4dc0a2e203d7c151edbc 8431635821f84285afdd0f5faf1ce1aa] Exception during message handling
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp Traceback (most recent call last):
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 441, in _process_data
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp     **args)
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/dispatcher.py", line 148, in dispatch
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp     return getattr(proxyobj, method)(ctxt, **kwargs)
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 808, in wrapper
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp     return func(self, *args, **kwargs)
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 624, in initialize_connection
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp     conn_info = self.driver.initialize_connection(volume, connector)
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/rbd.py", line 665, in initialize_connection
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp     hosts, ports = self._get_mon_addrs()
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/rbd.py", line 312, in _get_mon_addrs
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp     out, _ = self._execute(*args)
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 142, in execute
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp     return processutils.execute(*cmd, **kwargs)
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py", line 158, in execute
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp     shell=shell)
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/eventlet/green/subprocess.py", line 25, in __init__
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp     subprocess_orig.Popen.__init__(self, args, 0, *argss, **kwds)
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/subprocess.py", line 642, in __init__
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp     errread, errwrite)
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/subprocess.py", line 1234, in _execute_child
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp     raise child_exception
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp OSError: [Errno 2] No such file or directory
>>>> 2013-10-20 18:21:05.880 13668 TRACE cinder.openstack.common.rpc.amqp
>>>> 2013-10-20 18:21:05.883 13668 ERROR cinder.openstack.common.rpc.common [req-bd62cb07-42e7-414a-86dc-f26f7a569de6 9bfee22cd15b4dc0a2e203d7c151edbc 8431635821f84285afdd0f5faf1ce1aa] Returning exception [Errno 2] No such file or directory to caller
>>>> 
>>>> 
>>>> Here is what I see on my nova-compute node (Node #2) when I try to boot
>>>> from volume (/var/log/nova/compute.log):
>>>> 
>>>> ERROR nova.compute.manager [req-ced59268-4766-4f57-9cdb-4ba451b0faaa 9bfee22cd15b4dc0a2e203d7c151edbc 8431635821f84285afdd0f5faf1ce1aa] [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0] Error: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-44557bfa-6777-41a6-8183-e08dedf0611b)
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0] Traceback (most recent call last):
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1028, in _build_instance
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]     context, instance, bdms)
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1393, in _prep_block_device
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]     instance=instance)
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1371, in _prep_block_device
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]     self._await_block_device_map_created) +
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]   File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 283, in attach_block_devices
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]     block_device_mapping)
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]   File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 170, in attach
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]     connector)
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]   File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 176, in wrapper
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]     res = method(self, ctx, volume_id, *args, **kwargs)
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]   File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 274, in initialize_connection
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]     connector)
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]   File "/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 306, in initialize_connection
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]     {'connector': connector})[1]['connection_info']
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]   File "/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 237, in _action
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]     return self.api.client.post(url, body=body)
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]   File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 210, in post
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]     return self._cs_request(url, 'POST', **kwargs)
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]   File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 174, in _cs_request
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]     **kwargs)
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]   File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 157, in request
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]     raise exceptions.from_response(resp, body)
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0] ClientException: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-44557bfa-6777-41a6-8183-e08dedf0611b)
>>>> 2013-10-17 15:01:45.060 18546 TRACE nova.compute.manager [instance: c80a053f-b84c-401c-8e29-022d4c6f56a0]
>>>> 
>>>> 
>>>> More info on my setup:
>>>> 
>>>> * I'm running the most recent release of the Dumpling libraries (0.67.4)
>>>>   and python-ceph, but I do not have the ceph package itself installed,
>>>>   as it should not be needed on Havana
>>> 
>>> This is where the error is coming from - although most of the rbd
>>> driver was converted to use librbd, there are a couple of things that
>>> still rely on the CLI tools, since they had some functionality that
>>> wasn't present in librados in cuttlefish or bobtail.
>>> 
>>> Specifically, the attach is failing because cinder is trying to use
>>> the 'ceph' command to get a list of all monitors to pass to nova.
>>> Installing ceph-common on the node running cinder-volume should fix it.
>>> 
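A quick way to verify that on the cinder-volume node, assuming the 'volumes' cephx user from the guide and its keyring in /etc/ceph (the package name may differ between repos):

    yum install ceph-common    # or whichever package provides /usr/bin/ceph in your repo
    # roughly the command the rbd driver shells out to during initialize_connection
    ceph mon dump --format=json --id volumes
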
>>>> * cephx is in use, and I set up pools and keyrings per the guide at
>>>> http://ceph.com/docs/next/rbd/rbd-openstack/
>>> 
>>> These aren't updated for Havana yet, but hopefully your email helps in
>>> the interim. This is the main new requirement - the librbd python bindings
>>> and CLI tools are needed for cinder-volume.
>>> 
>>> Josh
>>> 
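Both requirements are easy to sanity-check on the cinder-volume node, for example:

    python -c "import rados, rbd; print 'python bindings OK'"
    which ceph rbd
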
>>>> * I'm using the ceph-extras repo to install the backported QEMU
>>>>   packages as described at
>>>> http://openstack.redhat.com/Using_Ceph_for_Block_Storage_with_RDO
>>>> * I'm also using Neutron+OVS and thus edited my qemu.conf according to
>>>>   this libvirt wiki page (a sketch of that edit follows this list):
>>>> http://wiki.libvirt.org/page/Guest_won't_start_-_warning:_could_not_open_/dev/net/tun_('generic_ethernet'_interface)
>>>> * I am presently not configuring Nova to put its ephemeral disk image
>>>>   on RBD (new option in Havana)
>>>> 
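The edit that wiki page describes is roughly the following in /etc/libvirt/qemu.conf (a sketch from memory; defer to the page itself for the authoritative settings):

    clear_emulator_capabilities = 0
    user = "root"
    group = "root"
    # the default device list plus the tun device needed for 'generic ethernet' interfaces
    cgroup_device_acl = [
        "/dev/null", "/dev/full", "/dev/zero",
        "/dev/random", "/dev/urandom",
        "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
        "/dev/rtc", "/dev/hpet", "/dev/net/tun"
    ]
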
>>>> 
>>>> Things I've been able to do so far:
>>>> 
>>>> * I stored an image to Glance backed by RBD
>>>> * I used that image to create a Cinder volume backed by RBD
>>>> * I instantiated a working ephemeral VM with Nova based on the image
>>>>   from Glance backed by RBD
>>>> * I manually created a libvirt VM with virsh on the same compute node
>>>>   and attached a volume from the Cinder RBD pool to it
>>>> * I created a VM with Nova (ephemeral local boot) to which I was then
>>>>   able to attach an RBD-backed Cinder volume using virsh
>>>> * I can use qemu-img to create volumes in the cinder RBD pool, but
>>>>   only if I have the client.admin keyring installed on the compute node
>>>>   (see the sketch after this list)
>>>> 
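The sketch mentioned above: qemu's rbd URI accepts id= and conf= options, so something like the following should work with only the client.volumes keyring present ('test-vol' is just a placeholder image name):

    qemu-img create -f raw rbd:volumes/test-vol:id=volumes:conf=/etc/ceph/ceph.conf 1G
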
>>>> 
>>>> The error traced above is what happens every time I try to boot from
>>>> volume using that RBD-backed Cinder volume spawned from the RBD-backed
>>>> Glance image.  The things that did work led me to believe that QEMU was
>>>> the problem, so I tried the following:
>>>> 
>>>> * I changed the user and group qemu runs as from root to nova and from
>>>>   nova to qemu to see if their permissions had any effect; no change
>>>> * I tried the above tests with matching permissions on the contents of
>>>>   /etc/ceph/ (ceph.conf and the keyrings for admin, cinder, and
>>>>   glance); no change
>>>> 
>>>> 
>>>> It seems like Nova is somehow failing to learn from Cinder that Cinder
>>>> is using RBD as its backend, but I can't understand why.  All my configs
>>>> align with every piece of documentation I've been able to find for
>>>> making OpenStack work with Ceph.  Has anyone done what I'm trying to do
>>>> on CentOS 6 or even on some version of Fedora?  I am cross-posting this
>>>> to the OpenStack list as well.
>>>> 
>>>> Thanks for your time,
>>>> 
>>>> Andy
>>> 
>> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
