[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Kees Meijs
Thanks all, I'm going to investigate rbd-mirror further. K. On 14-05-2020 09:30, Anthony D'Atri wrote: > It’s entirely possible — and documented — to mirror individual images. Your > proposal to use snapshots is reinventing the wheel, but with less efficiency. > > https://docs.ceph.com/docs/nautilus/rbd/rbd-mirroring/#image-configuration
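Not from the original message, but a rough sketch of the cross-cluster plumbing that investigating rbd-mirror usually starts with on pre-Octopus releases: one-way mirroring pulled by the destination cluster. The pool name "volumes", the peer cluster name "remote", and the user "client.remote" are placeholders, and this assumes the source cluster's config and keyring have already been copied to the destination host.

  # on the destination cluster (the side that runs the rbd-mirror daemon)
  rbd mirror pool enable volumes image                    # mirror only explicitly enabled images, not the whole pool
  rbd mirror pool peer add volumes client.remote@remote   # register the source cluster as a peer
  # run a mirror daemon against the destination cluster, e.g.:
  systemctl enable --now ceph-rbd-mirror@admin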

[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Zhenshi Zhou
rbd-mirror can work on a single image in the pool. I did a test on an image copy from 13.2 to 14.2; however, new data in the source image didn't copy to the destination image. I'm not sure if this is normal. Kees Meijs wrote on Thursday, May 14, 2020 at 3:24 PM: > I need to mirror single RBDs while rbd-mirror:
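Not part of the message above, but one way to check whether new writes are being replayed continuously rather than only copied once. Pool and image names are placeholders; this assumes journal-based mirroring with an rbd-mirror daemon running against the destination cluster.

  # on the destination cluster
  rbd mirror image status volumes/vm-disk     # "up+replaying" means ongoing replication, not just a one-off sync
  rbd mirror pool status volumes --verbose    # per-image summary for the whole pool
  # if an image looks stuck or inconsistent, force a resync from the destination side
  rbd mirror image resync volumes/vm-disk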

[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Konstantin Shalygin
On 5/14/20 1:27 PM, Kees Meijs wrote: Thank you very much. That's a good question. The implementations of OpenStack and Ceph and "the other" OpenStack and Ceph are, apart from networking, completely separate. Actually I was thinking you perform OpenStack and Ceph upgrade, not migration to oth

[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Anthony D'Atri
It’s entirely possible — and documented — to mirror individual images. Your proposal to use snapshots is reinventing the wheel, but with less efficiency. https://docs.ceph.com/docs/nautilus/rbd/rbd-mirroring/#image-configuration ISTR that in Octopus the need for RBD journals is gone, but am no
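Not part of the original post: a minimal sketch of the per-image configuration the linked Nautilus documentation describes, with placeholder pool and image names. Journal-based mirroring (the only mode before Octopus) needs the journaling feature on every mirrored image.

  # on the source cluster, once per image that should be mirrored
  rbd mirror pool enable volumes image            # "image" mode: only explicitly enabled images are mirrored
  rbd feature enable volumes/vm-disk journaling   # journaling requires exclusive-lock, which is on by default for recent images
  rbd mirror image enable volumes/vm-disk         # start journal-based mirroring for this image

The pool-level mode generally needs to be enabled on both peer clusters as well.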

[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Eugen Block
You can also mirror on a per-image basis. Quoting Kees Meijs: I need to mirror single RBDs, while for rbd-mirror, "mirroring is configured on a per-pool basis" (according to the documentation). On 14-05-2020 09:13, Anthony D'Atri wrote: So?

[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Kees Meijs
I need to mirror single RBDs, while for rbd-mirror, "mirroring is configured on a per-pool basis" (according to the documentation). On 14-05-2020 09:13, Anthony D'Atri wrote: > So?

[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Anthony D'Atri
So? > > Hi Anthony, > > Thanks as well. > > Well, it's a one-time job. > > K. > > On 14-05-2020 09:10, Anthony D'Atri wrote: >> Why not use rbd-mirror to handle the volumes?

[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Kees Meijs
Hi Anthony, Thanks as well. Well, it's a one-time job. K. On 14-05-2020 09:10, Anthony D'Atri wrote: > Why not use rbd-mirror to handle the volumes?

[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Anthony D'Atri
Why not use rbd-mirror to handle the volumes? > On May 13, 2020, at 11:27 PM, Kees Meijs wrote: > > Hi Konstantin, > > Thank you very much. That's a good question. > > The implementations of OpenStack and Ceph and "the other" OpenStack and > Ceph are, apart from networking, completely separate

[ceph-users] Re: Migrating clusters (and versions)

2020-05-13 Thread Kees Meijs
Hi Konstantin, Thank you very much. That's a good question. The implementations of OpenStack and Ceph and "the other" OpenStack and Ceph are, apart from networking, completely separate. In terms of OpenStack I can recreate the compute instances and storage volumes but obviously need to copy the

[ceph-users] Re: Migrating clusters (and versions)

2020-05-13 Thread Konstantin Shalygin
On 5/8/20 2:32 AM, Kees Meijs wrote: I'm in the middle of an OpenStack migration (obviously Ceph backed) and stumble into some huge virtual machines. To ensure downtime is kept to a minimum, I'm thinking of using Ceph's snapshot features using rbd export-diff and import-diff. However, is it saf