> -----Original Message-----
> From: Ilya Dryomov [mailto:idryo...@gmail.com]
> Sent: 22 August 2016 15:00
> To: Jason Dillaman <dilla...@redhat.com>
> Cc: Nick Fisk <n...@fisk.me.uk>; ceph-users <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] RBD Watch Notify for snapshots
> 
> On Fri, Jul 8, 2016 at 5:02 AM, Jason Dillaman <jdill...@redhat.com> wrote:
> > librbd pseudo-automatically handles this by flushing the cache to the
> > snapshot when a new snapshot is created, but I don't think krbd does
> > the same. If it doesn't, it would probably be a nice addition to the
> > block driver to support the general case.
> >
> > Barring that (or if you want to involve something like fsfreeze), I
> > think the answer depends on how much you are willing to write some
> > custom C/C++ code (I don't think the rados python library exposes
> > watch/notify APIs). A daemon could register a watch on a custom
> > per-host/image/etc object which would sync the disk when a
> > notification is received. Prior to creating a snapshot, you would need
> > to send a notification to this object to alert the daemon to 
> > sync/fsfreeze/etc.
> 
> If there is a filesystem on top of /dev/rbdX, which isn't suspended, how 
> would the krbd driver flushing the page cache help?  In order for
> the block device level snapshot to be consistent, the filesystem needs to be 
> quiesced - fsfreeze or something resembling it is the only
> answer here.

I'm guessing that whatever your virtualisation/backup software is, it 
communicates with the qemu guest agent to call fsfreeze. That's assuming librbd 
is being used with qemu in this scenario. The question is whether the storage 
layer should be able to initiate this, or whether it is best left to the 
hypervisor/backup software.
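
If the storage layer were to drive it, I imagine the watch/notify daemon Jason 
describes above would look very roughly like the untested sketch below. The 
pool name "rbd", the hook object name "snap-hook-object", the mount point and 
the fsfreeze command are all placeholders I've picked, and the hook object 
would need to be created once beforehand (e.g. 
"rados -p rbd put snap-hook-object /dev/null"):

#include <rados/librados.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static rados_ioctx_t ioctx;

/* Called whenever someone notifies the hook object. */
static void watch_cb(void *arg, uint64_t notify_id, uint64_t cookie,
                     uint64_t notifier_id, void *data, size_t data_len)
{
    /* Quiesce the filesystem sitting on top of the RBD device.
     * Mount point and command are placeholders. */
    if (system("fsfreeze -f /mnt/rbd0") != 0)
        fprintf(stderr, "fsfreeze failed\n");

    /* Ack so the notifier (the snapshot script) stops waiting. */
    rados_notify_ack(ioctx, "snap-hook-object", notify_id, cookie, NULL, 0);
}

static void watch_errcb(void *arg, uint64_t cookie, int err)
{
    fprintf(stderr, "watch error, cookie %llu: %d\n",
            (unsigned long long)cookie, err);
}

int main(void)
{
    rados_t cluster;
    uint64_t cookie;

    rados_create(&cluster, NULL);
    rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
    rados_connect(cluster);
    rados_ioctx_create(cluster, "rbd", &ioctx);   /* pool name is a placeholder */

    /* Register a watch on the per-host hook object. */
    rados_watch2(ioctx, "snap-hook-object", &cookie,
                 watch_cb, watch_errcb, NULL);

    pause();   /* sit here and service notifies until killed */

    rados_unwatch2(ioctx, cookie);
    rados_ioctx_destroy(ioctx);
    rados_shutdown(cluster);
    return 0;
}

A real version would also need a second "thaw" notification after the snapshot 
is taken, so the daemon can run fsfreeze -u again.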

But I agree, there is a difference between the block device being consistent 
in the sense of having its caches flushed and its contents actually being 
consistent. I guess you also have another layer, the applications: they 
potentially also need to be informed that a snapshot is about to be taken so 
they can flush any application buffers.
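
On the other side, the snapshot/backup script would send the notification just 
before "rbd snap create", something like the untested snippet below (same 
placeholder names as above). rados_notify2() blocks until every watcher has 
acked or the timeout expires, which is what guarantees the freeze (and any 
application flush) has happened before the snapshot is created:

#include <rados/librados.h>
#include <stdio.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t ioctx;
    char *reply = NULL;
    size_t reply_len = 0;

    rados_create(&cluster, NULL);
    rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
    rados_connect(cluster);
    rados_ioctx_create(cluster, "rbd", &ioctx);   /* pool name is a placeholder */

    /* Waits for every watcher to ack, or for the 10s timeout to expire,
     * so the freeze has already happened when this returns success. */
    int r = rados_notify2(ioctx, "snap-hook-object", "pre-snap", 8,
                          10000, &reply, &reply_len);
    if (r < 0)
        fprintf(stderr, "notify failed: %d\n", r);

    rados_buffer_free(reply);
    rados_ioctx_destroy(ioctx);
    rados_shutdown(cluster);
    return r < 0 ? 1 : 0;
}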

> 
> Thanks,
> 
>                 Ilya

