Hi, I am trying to deploy the cinder-backup service; c-vol uses multiple backends. The configuration is as follows:
...
enabled_backends = ceph1, ceph2
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = admin
backup_ceph_chunk_size = 134217728
backup_ceph_pool = cinder-backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
...

[ceph1]
rbd_ceph_conf = /etc/ceph/server-31.conf
backend_host = cinder
rbd_user = admin
volume_backend_name = ceph1
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = cinder-volumes
rbd_secret_uuid = 035adbab-a410-4ec6-a3f1-d1eaaac4db6a
rbd_store_chunk_size = 4
rbd_cluster_name = server-31

[ceph2]
rbd_ceph_conf = /etc/ceph/server-32.conf
backend_host = cinder
rbd_user = admin
volume_backend_name = ceph2
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = cinder-volumes
rbd_secret_uuid = 7058cdfc-8297-4c06-b965-0604cc7991fc
rbd_store_chunk_size = 4
rbd_cluster_name = server-32

server-31 and server-32 are two different Ceph clusters. All cinder services look OK:

cinder service-list
+------------------+-----------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |       Host      | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-----------------+------+---------+-------+----------------------------+-----------------+
|  cinder-backup   |    server-31    | nova | enabled |   up  | 2017-02-07T02:13:51.000000 |        -        |
| cinder-scheduler |    server-33    | nova | enabled |   up  | 2017-02-07T02:13:52.000000 |        -        |
|  cinder-volume   |   cinder@ceph1  | nova | enabled |   up  | 2017-02-07T02:13:43.000000 |        -        |
|  cinder-volume   |   cinder@ceph2  | nova | enabled |   up  | 2017-02-07T02:13:43.000000 |        -        |
+------------------+-----------------+------+---------+-------+----------------------------+-----------------+

First, I create two volumes on different backends:

cinder create --volume-type ceph1 --name ceph1 1
cinder create --volume-type ceph2 --name ceph2 2

Then I try to create a backup of each:

cinder backup-create ceph1
cinder backup-create ceph2

The backup of ceph1 succeeds, but the backup of ceph2 always fails, and the log shows an ImageNotFound error:

2017-02-06 15:08:21.185 121910 ERROR os_brick.initiator.linuxrbd [req-8abc2a96-1152-4906-8dfa-9bdb8f7001cd c12f90f5257a446682aa7cd11b2d1a97 78b0cf3fa582430ba9be966c7af441bc - - -] error opening rbd image volume-380ba471-b7f0-4e8f-a55a-5e0c39fa1349
2017-02-06 15:08:21.185 121910 ERROR os_brick.initiator.linuxrbd Traceback (most recent call last):
2017-02-06 15:08:21.185 121910 ERROR os_brick.initiator.linuxrbd   File "/usr/lib/python2.7/site-packages/os_brick/initiator/linuxrbd.py", line 98, in __init__
2017-02-06 15:08:21.185 121910 ERROR os_brick.initiator.linuxrbd     read_only=read_only)
2017-02-06 15:08:21.185 121910 ERROR os_brick.initiator.linuxrbd   File "rbd.pyx", line 1042, in rbd.Image.__init__ (rbd.c:8581)
2017-02-06 15:08:21.185 121910 ERROR os_brick.initiator.linuxrbd ImageNotFound: error opening image volume-380ba471-b7f0-4e8f-a55a-5e0c39fa1349 at snapshot None
2017-02-06 15:08:21.185 121910 ERROR os_brick.initiator.linuxrbd

It seems that the backup service connects to the wrong Ceph cluster when it opens the volume image. Does the backup service support multiple backends, or is there something wrong in my configuration? Thank you very much!
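PS: for reference, the volume types used above are tied to the backends through volume_backend_name. They were created along these lines (a sketch from memory; the exact commands may have differed):

    # map each volume type to the matching backend section in cinder.conf
    cinder type-create ceph1
    cinder type-key ceph1 set volume_backend_name=ceph1
    cinder type-create ceph2
    cinder type-key ceph2 set volume_backend_name=ceph2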
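One way I can think of to confirm which cluster actually holds the failing image (a sketch using the volume UUID from the log above; it assumes the client keyrings are resolvable from those conf files):

    # against the cluster backup_ceph_conf points at -- expected to fail
    # if cinder-backup is simply looking at the wrong cluster
    rbd -c /etc/ceph/ceph.conf --id admin -p cinder-volumes info volume-380ba471-b7f0-4e8f-a55a-5e0c39fa1349

    # against the ceph2 cluster (server-32) that owns the volume -- expected to succeed
    rbd -c /etc/ceph/server-32.conf --id admin -p cinder-volumes info volume-380ba471-b7f0-4e8f-a55a-5e0c39fa1349

If the first command fails with ImageNotFound while the second succeeds, that would match my suspicion that the backup service only ever talks to the single cluster named in backup_ceph_conf.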
