Hello,

I think I've been running into an rbd export/import bug and wanted to see if
anybody else has run into it.

We're using rbd images for VM drives, both with and without custom stripe
sizes. When we export/import a drive to another Ceph cluster, the imported VM
always comes up in a broken state it can't recover from. This happens both
when piping the export/import directly through stdin/stdout and when using a
middle machine as temp space. I remember doing this a few times in previous
versions without issue, so I'm not sure whether this is a regression or I'm
doing something differently. I'm still testing to track down where the problem
is, but wanted to post here in case anybody else has seen the same thing.
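
Just to illustrate the striping side of it, the custom-striped images are
created along these lines (the stripe unit/count values below are made up for
the example, not necessarily what we run in production):

  # create an image with explicit striping instead of the defaults
  rbd -c /etc/ceph/cluster1.conf create --size 100G \
      --stripe-unit 64K --stripe-count 16 pool/testvm.boot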

Example command: rbd -c /etc/ceph/cluster1.conf export pool/testvm.boot - | rbd 
-c /etc/ceph/cluster2.conf import - pool/testvm.boot
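
For reference, the file-based variant looks roughly like this (the temp path
is just a placeholder for the staging location we actually use):

  # export from the source cluster to a file on the intermediate host
  rbd -c /etc/ceph/cluster1.conf export pool/testvm.boot /tmp/testvm.boot.img
  # import that file into the destination cluster
  rbd -c /etc/ceph/cluster2.conf import /tmp/testvm.boot.img pool/testvm.boot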

The current cluster is on Ceph 14.2.8 (Nautilus), running Ubuntu 18.04 with
kernel 5.3.0-40-generic.

Let me know if I can provide any more details to help track this down.

Thanks,