Hi,
has it worked for any other Glance image? The snapshot shouldn't make
any difference; I just tried the same thing in a lab cluster. Have you
checked on the client side (OpenStack) for anything in dmesg etc.? Can
you query any information from that image? For example:
rbd info images_meta/image_name
rbd status images_meta/image_name
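If those commands hang as well, it might help to rerun one of them with
client-side debug logging enabled (the debug levels below are only a
suggestion, adjust as needed):
rbd info images_meta/image_name --debug-rbd=20 --debug-ms=1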
Is the Ceph cluster healthy? Maybe you have inactive PGs on the Glance pool?
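You could check with something like this (pool name taken from your mail):
ceph -s
ceph health detail
ceph pg dump_stuck inactive
ceph osd pool stats images_data_hdd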
Quoting "Yuta Kambe (Fujitsu)" <yuta.ka...@fujitsu.com>:
Hi everyone.
I am trying RBD image live-migration, but it is not working, and I
would like some advice.
https://docs.ceph.com/en/latest/rbd/rbd-live-migration/
I use Ceph as a backend for OpenStack Glance.
I tried to migrate the Glance images from their current Ceph pool to a new pool.
Source pools:
- images_meta: metadata pool, replicated
- images_data: data pool, erasure-coded
Target pools:
- images_meta: metadata pool, replicated (same as the source)
- images_data_hdd: data pool, erasure-coded
I executed the following command, but it hung and never returned:
rbd migration prepare images_meta/image_name images_meta/image_name
--data-pool images_data_hdd
I checked the logs in /var/log/messages and /var/log/ceph, but no
useful information was available.
I would like some advice on this.
- Are there any other logs I should check?
- Is there a case where the rbd migration command cannot be executed?
The following is supplemental information.
- ceph version 17.2.8
- Migrating an OpenStack Nova image succeeded with the same pool
configuration and command.
- I don't know if it is related, but the Glance image has a snapshot,
and unprotecting that snapshot is also unresponsive:
rbd snap unprotect images_meta/image_name@snap
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io