On Wed, Aug 21, 2019 at 9:33 PM Zaharo Bai (白战豪)-云数据中心集团
<baizhanha...@inspur.com> wrote:
>
> I have tested and walked through the current migration process. If I read and
> write the new image during the migration and then call migration_abort, the
> newly written data is lost. Do we have a solution to this problem?
That's a good point that we hadn't considered. I've opened a tracker
ticket for the issue [1].
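
For anyone following along, the built-in live-migration flow (and the
abort step where the data-loss issue above surfaces) looks roughly like
the following with the `rbd` CLI; the pool and image names here are
placeholders:

```shell
# Prepare: link the target image to the source image. Clients must
# re-open the image so that I/O is forwarded during the migration.
rbd migration prepare sourcepool/srcimage targetpool/dstimage

# Execute: deep-copy the block data and snapshots to the target,
# preserving sparseness.
rbd migration execute targetpool/dstimage

# Commit: finalize the migration and remove the source image ...
rbd migration commit targetpool/dstimage

# ... or abort: roll back to the source image. Writes issued to the
# target image while the migration was in progress are lost here,
# which is the problem tracked in [1].
rbd migration abort targetpool/dstimage
```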

>
> -----Original Message-----
> From: Jason Dillaman [mailto:jdill...@redhat.com]
> Sent: August 22, 2019 8:38 AM
> To: Zaharo Bai (白战豪)-云数据中心集团 <baizhanha...@inspur.com>
> Cc: ceph-users <ceph-us...@ceph.com>
> Subject: Re: About image migration
>
> On Wed, Aug 21, 2019 at 8:35 PM Zaharo Bai (白战豪)-云数据中心集团
>
> <baizhanha...@inspur.com> wrote:
> >
> > So, what is the current usage scenario for online migration? My
> > understanding is that the community's current "online" migration is not
> > truly online: the upper layer (iSCSI target, OpenStack, etc.) is still
> > required to perform operations such as switching and data maintenance.
> > Is there a way to achieve fully online migration at the RBD layer, so
> > that the upper application is unaware and only needs to call librbd's
> > CLI or API? Or is the current approach necessary, because full
> > transparency would inevitably change the architecture of Ceph?
>
> If RBD is used under a higher-level layer, the upper layers need to know
> about the migration so they can update their internal data structures to
> point to the correct (new) image. The only case where this really isn't
> necessary is when live-migrating an image "in-place" (i.e. you keep it in
> the same pool with the same name).
>
> > -----Original Message-----
> > From: Jason Dillaman [mailto:jdill...@redhat.com]
> > Sent: August 21, 2019 8:44 PM
> > To: Zaharo Bai (白战豪)-云数据中心集团 <baizhanha...@inspur.com>
> > Cc: ceph-users <ceph-us...@ceph.com>
> > Subject: Re: About image migration
> >
> > On Tue, Aug 20, 2019 at 10:04 PM Zaharo Bai (白战豪)-云数据中心集团
> > <baizhanha...@inspur.com> wrote:
> > >
> > > Hi jason:
> > >
> > >          I have a question I would like to ask you, Is the current image 
> > > migration and openstack adapted? according to my understanding, 
> > > openstack’s previous live-migration logic is implemented in cinder, just 
> > > call librbd rbd_read/write API to do Data migration.
> >
> > I believe the existing Cinder volume block live-migration is just a
> > wrapper around QEMU's block live-migration functionality, so indeed it
> > would just be a series of RBD read/write API calls between two volumes.
> > The built-in RBD live-migration is similar, but it also copies snapshots
> > and preserves image sparseness during the migration process. Because
> > Cinder uses the QEMU block live-migration functionality, it's not tied
> > into RBD's live-migration.
> >
> > >
> > >
> > >
> > > Best wishes
> > >
> > > Zaharo
> >
> >
> >
> > --
> > Jason
>
>
>
> --
> Jason

[1] https://tracker.ceph.com/issues/41394

-- 
Jason
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io