> On 10 June 2016 at 0:04, Brian Kroth <bpkr...@gmail.com> wrote:
> 
> 
> I'd considered a similar migration path in the past (slowly rotate 
> updated OSDs into the pool and old ones out), but after watching some 
> of the bugs and discussions around Ceph cache tiering and the like 
> between Giant and Hammer/Jewel, I was leaning more towards the 
> rbd -c oldcluster.conf export | rbd -c newcluster.conf import route.  
> That approach gives you time to test a completely independent setup 
> for a while, do an RBD image format conversion along the way, and 
> whatever else you need to do.  You could even fail back (probably with 
> some data loss) if necessary.  In theory this could also be done with 
> minimal downtime using the snapshot diff syncing process [1], no?
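> 
> Roughly, that snapshot diff syncing process [1] would boil down to 
> something like the sketch below (untested; the image name rbd/vm1, the 
> snapshot names, and the old.conf/new.conf paths are just placeholders):
> 
> # Initial full copy via a snapshot (switching to format 2 per [1]):
> $ rbd -c old.conf snap create rbd/vm1@base
> $ rbd -c old.conf export rbd/vm1@base - \
>     | rbd -c new.conf import --image-format 2 - rbd/vm1
> $ rbd -c new.conf snap create rbd/vm1@base
> 
> # Later, with the VM stopped, ship only the changes made since @base:
> $ rbd -c old.conf snap create rbd/vm1@final
> $ rbd -c old.conf export-diff --from-snap base rbd/vm1@final - \
>     | rbd -c new.conf import-diff - rbd/vm1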
> 
> Anyways, anyone have any operational experience with the rbd export | 
> rbd import method between clusters to share?
> 

I did this when I had to merge two Ceph clusters into one. In that 
case some 25TB RBD images were involved.

It worked, but the copy just took a long time, that's all.

Wido

> Thanks,
> Brian
> 
> [1] <http://ceph.com/planet/convert-rbd-to-format-v2/>
> 
> Michael Kuriger <mk7...@yp.com> 2016-06-09 16:44:
> >This is how I did it.  I upgraded my old cluster first (live, one by 
> >one).  Then I added my new OSD servers to the running cluster.  Once 
> >they were all added, I set the weight to 0 on all my original OSDs.  
> >This causes a lot of I/O, but all the data gets migrated to the new 
> >servers.  Then you can remove the old OSD servers from the cluster.
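> >
> >As a rough sketch of that draining step (assuming the weight in 
> >question is the CRUSH weight, and osd.0 through osd.11 are just 
> >example IDs):
> >
> ># Drain the original OSDs by setting their CRUSH weight to 0:
> >$ for id in $(seq 0 11); do ceph osd crush reweight osd.$id 0; done
> >
> ># Once the cluster is back to HEALTH_OK the old OSDs hold no data and
> ># can be stopped and removed (ceph osd out / ceph osd crush remove /
> ># ceph auth del / ceph osd rm).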
> >
> >
> >
> > 
> >Michael Kuriger
> >Sr. Unix Systems Engineer
> >
> >-----Original Message-----
> >From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
> >Wido den Hollander
> >Sent: Thursday, June 09, 2016 12:47 AM
> >To: Marek Dohojda; ceph-users@lists.ceph.com
> >Subject: Re: [ceph-users] Migrating from one Ceph cluster to another
> >
> >
> >> On 8 June 2016 at 22:49, Marek Dohojda 
> >> <mdoho...@altitudedigital.com> wrote:
> >>
> >>
> >> I have a Ceph cluster (Hammer) and I just built a new cluster
> >> (Infernalis).  The cluster holds KVM-based VM images.
> >>
> >> What I would like to do is move all the data from one Ceph cluster to
> >> another.  However, the only way I could find from my Google searches
> >> would be to export each image to local disk, copy it across to the
> >> new cluster, and import it there.
> >>
> >> I am hoping there is a way to just sync the data (and I do realize
> >> that the KVM guests will have to be down for the full migration) from
> >> one cluster to another.
> >>
> >
> >You can do this with the rbd command using export and import.
> >
> >Something like:
> >
> >$ rbd export image1 - | rbd import - image1
> >
> >Here each rbd command connects to a different Ceph cluster, e.g. via 
> >-c with that cluster's ceph.conf.  See --help on how to do that.
> >
> >You can run this in a loop with the output of 'rbd ls'.
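> >
> >A rough sketch of such a loop (untested; the pool name 'rbd' and the
> >old.conf/new.conf paths are just placeholders):
> >
> >$ for img in $(rbd -c old.conf ls rbd); do
> >      rbd -c old.conf export rbd/$img - \
> >          | rbd -c new.conf import - rbd/$img
> >  done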
> >
> >But that's about the only way.
> >
> >Wido
> >
> >> Thank you
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
