If Ceph snapshots work like VM snapshots (and I don't have any reason to
believe otherwise), the snapshot will never grow larger than the size of
the base image. If the same blocks are rewritten, they are simply
overwritten in the snapshot layer and don't take any extra space. A
snapshot functions differently from a log (the way a database log does):
no matter how many times a block is rewritten, read access should cost
roughly the same. Only when you increase the number of snapshots on a
single image is there potential for increased read latency, since a read
may have to probe each snapshot layer in turn before reaching the base.
I believe there is a Ceph blueprint to implement a snapshot bitmap,
which would make such lookups very inexpensive since it would all be
done efficiently in memory.
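
To make the read-path point concrete, here is a toy model in plain
Python. None of this is Ceph code; the class and method names are mine,
and it only sketches the chain-walk vs. in-memory-index trade-off
described above.

# Toy model of COW-into-snapshot storage. Illustrative only; this is
# not how Ceph/RADOS stores snapshots internally.

class SnapshotChain:
    def __init__(self, base):
        self.base = base    # block -> data for the base image
        self.diffs = []     # one diff layer per snapshot, newest last

    def snapshot(self):
        # taking a snapshot just opens a new, empty diff layer
        self.diffs.append({})

    def write(self, block, data):
        # writes land in the newest layer; rewriting a block overwrites
        # it in place, so a layer can never outgrow the base image
        if not self.diffs:
            self.base[block] = data
        else:
            self.diffs[-1][block] = data

    def read(self, block):
        # worst case: one probe per snapshot before falling back to the
        # base, so latency potential grows with the number of snapshots
        for diff in reversed(self.diffs):
            if block in diff:
                return diff[block]
        return self.base[block]

    def build_index(self):
        # the "snapshot bitmap" idea: precompute which layer owns each
        # block, so a read becomes a single in-memory lookup no matter
        # how many snapshots exist
        index = {b: self.base for b in self.base}
        for diff in self.diffs:
            for b in diff:
                index[b] = diff
        return index

chain = SnapshotChain({0: "a", 1: "b"})
chain.snapshot()
chain.write(0, "a2")
chain.write(0, "a3")          # rewrite: same slot, no extra space
assert chain.read(0) == "a3" and chain.read(1) == "b"
index = chain.build_index()
assert index[0][0] == "a3"    # one dict hit instead of a chain walk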

There is some concern that large snapshot files (and thin or sparse
provisioning) increase fragmentation, but if you are running a large VM
environment, there is no such thing as sequential access on your storage
systems anyway.
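
As for the discard-vs-merge costs in the COW description quoted below,
a small extension of the same toy model shows the asymmetry (again,
these helper names are mine, not a Ceph API):

def discard_newest(chain):
    # discarding the child is O(1): just throw the diff layer away
    chain.diffs.pop()

def merge_oldest(chain):
    # keeping the changes means copying every diff block into the base;
    # the cost is proportional to how much was written after the
    # snapshot was taken, which is why a long-lived snapshot gets
    # expensive to fold back in
    oldest = chain.diffs.pop(0)
    chain.base.update(oldest)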

On Fri, Dec 26, 2014 at 6:33 AM, Lindsay Mathieson
<lindsay.mathie...@gmail.com> wrote:

> On Tue, 16 Dec 2014 11:50:37 AM Robert LeBlanc wrote:
> > COW into the snapshot (like VMware, Ceph, etc.):
> > When a write is committed, the changes are committed to a diff file
> > and the base file is left untouched. This has only a single write
> > penalty, and if you want to discard the child, that is fast: you
> > just delete the diff file. The negative side effect is that reads
> > may have to query each diff file before being satisfied, and if you
> > want to delete the snapshot but keep the changes (merge the snapshot
> > into the base), then you have to copy all the diff blocks into the
> > base image.
>
>
> Sorry to revive an old thread ...
>
> Does this mean that with Ceph snapshots, if you leave them hanging
> around, the snapshot file will get larger and larger as writes are
> made, and reads will slow down?
>
> So not a good idea to leave snapshots of a VM undeleted for long?
> --
> Lindsay