Lars, I just got done doing this after generating about a dozen CephFS subtrees 
for different Kubernetes clients. 

tl;dr: there is no way to move files from one storage format to another (i.e. CephFS 
-> RBD) without copying them.

If you are doing the same thing, https://github.com/kubernetes/enhancements/pull/643 
may be relevant to you; it's worth checking whether it covers your use case.

In any event, what I ended up doing was letting Kubernetes create the new PV 
with the RBD provisioner, then using find piped to cpio to move the file 
subtree. In a non-Kubernetes environment, one would simply create the 
destination RBD as usual. It should be most performant to do this on a monitor 
node.
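
Roughly, the copy step looks like this (the mount points are just placeholders 
for wherever the old CephFS subtree and the new RBD volume happen to be mounted; 
this assumes GNU find/cpio):

    # run from the root of the source subtree; -p is cpio's pass-through
    # (copy) mode, -d creates directories, -m preserves mtimes, and
    # -print0 / -0 keep odd filenames from breaking the pipeline
    cd /mnt/cephfs/old-subtree
    find . -depth -print0 | cpio -0 -pdmv /mnt/rbd-new

Run it as root so ownership and permissions come across intact.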

cpio ensures you don’t lose metadata. It’s been fine for me, but if you have 
special xattrs that the clients of the files need, be sure to test that those 
are copied over. It’s very difficult to move that metadata after the fact, and 
even harder to deal with a situation where the destination volume has already 
gone live and some files on it are both newer versions and missing their 
metadata.
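
If xattrs do matter, a quick spot check with getfattr (from the attr package) 
on a few sample files before and after the copy will tell you whether they made 
it (paths below are again just placeholders):

    # dump all extended attributes, not just the user.* namespace
    getfattr -d -m - /mnt/cephfs/old-subtree/some/file
    getfattr -d -m - /mnt/rbd-new/some/file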

Brian

> On May 15, 2019, at 6:05 AM, Lars Täuber <taeu...@bbaw.de> wrote:
> 
> Hi,
> 
> is there a way to migrate a cephfs to a new data pool like it is for rbd on 
> nautilus?
> https://ceph.com/geen-categorie/ceph-pool-migration/
> 
> Thanks
> Lars
