Hi,

Are you exporting the rbd image while a VM is running on top of it?
As far as I know, exporting a live image does not give you a consistent copy.

You should not export the image itself, but only a snapshot of it:
- create a snapshot of the image
- export the snapshot (rbd export pool/image@snap - | ..)
- drop the snapshot
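The three steps above can be sketched as a single pipeline. This is only a sketch: the conf paths and image name are taken from Matt's example command below, and the snapshot name is arbitrary.

```shell
# Assumed names: conf paths and image from the example in this thread,
# snapshot name chosen here for illustration.
SRC_CONF=/etc/ceph/cluster1.conf
DST_CONF=/etc/ceph/cluster2.conf
IMG=pool/testvm.boot
SNAP=migrate-snap

# 1. Create a snapshot to freeze a point-in-time view of the image
rbd -c "$SRC_CONF" snap create "$IMG@$SNAP"

# 2. Export the snapshot (not the live image) and stream it into
#    the destination cluster
rbd -c "$SRC_CONF" export "$IMG@$SNAP" - | \
    rbd -c "$DST_CONF" import - "$IMG"

# 3. Drop the snapshot on the source once the import has succeeded
rbd -c "$SRC_CONF" snap rm "$IMG@$SNAP"
```

The snapshot guarantees the exported data is at least crash-consistent; if the guest can be quiesced (e.g. fsfreeze) before the snapshot, the copy will be cleaner still.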

Regards,


On 3/10/20 8:31 PM, Matt Dunavant wrote:
> Hello,
> 
> I think I've been running into an rbd export/import bug and wanted to see if 
> anybody else had any experience.
> 
> We're using rbd images for VM drives both with and without custom stripe 
> sizes. When we try to export/import the drive to another ceph cluster, the VM 
> always comes up in a broken state it can't recover from. This happens both 
> when doing this export/import through stdin/stdout and when using a middle 
> machine as a temp space. I remember doing this a few times in previous 
> versions without error, so I'm not sure if this is a regression or I'm doing 
> something different. I'm still testing this to try and track down where the 
> issue is but wanted to post this here to see if anybody else has any 
> experience. 
> 
> Example command: rbd -c /etc/ceph/cluster1.conf export pool/testvm.boot - | 
> rbd -c /etc/ceph/cluster2.conf import - pool/testvm.boot
> 
> Current cluster is on 14.2.8 and using Ubuntu 18.04 w/ 5.3.0-40-generic. 
> 
> Let me know if I can provide any more details to help track this down.
> 
> Thanks,
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
> 