Ceph 12.2.2

So I have a snapshot of an image:

$ rbd du ssd-volumes/volume-4fed3f31-0802-4850-91bc-17e6da05697d
NAME                                                                                       PROVISIONED   USED
volume-4fed3f31-0802-4850-91bc-17e6da05697d@snapshot-d2145e21-99a7-4e2e-9138-ab3e975f8113       20480M  8088M
volume-4fed3f31-0802-4850-91bc-17e6da05697d                                                      20480M  1152M
<TOTAL>                                                                                          20480M  9240M

I create a clone, and then flatten it.
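
For reference, the clone and flatten steps look roughly like this at the rbd level (in my setup Cinder issues the equivalent calls, and the snapshot is already protected):

$ rbd clone ssd-volumes/volume-4fed3f31-0802-4850-91bc-17e6da05697d@snapshot-d2145e21-99a7-4e2e-9138-ab3e975f8113 \
      ssd-volumes/volume-0578b38a-8db2-481f-ad43-65d21b09c89b
$ rbd flatten ssd-volumes/volume-0578b38a-8db2-481f-ad43-65d21b09c89b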

Here's the clone before the flatten:
$ rbd info ssd-volumes/volume-0578b38a-8db2-481f-ad43-65d21b09c89b
rbd image 'volume-0578b38a-8db2-481f-ad43-65d21b09c89b':
        size 20480 MB in 5120 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.33717c18c5a7e1
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:
        parent: ssd-volumes/volume-4fed3f31-0802-4850-91bc-17e6da05697d@snapshot-d2145e21-99a7-4e2e-9138-ab3e975f8113
        overlap: 20480 MB

$ rbd du ssd-volumes/volume-0578b38a-8db2-481f-ad43-65d21b09c89b
NAME                                        PROVISIONED USED
volume-0578b38a-8db2-481f-ad43-65d21b09c89b      20480M    0

After the flatten:
$ rbd info ssd-volumes/volume-0578b38a-8db2-481f-ad43-65d21b09c89b
rbd image 'volume-0578b38a-8db2-481f-ad43-65d21b09c89b':
        size 20480 MB in 5120 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.33717c18c5a7e1
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:

$ rbd du ssd-volumes/volume-0578b38a-8db2-481f-ad43-65d21b09c89b
NAME                                        PROVISIONED   USED
volume-0578b38a-8db2-481f-ad43-65d21b09c89b      20480M 20480M

The clone is now using 100% of its provisioned space.
I was under the impression that flattened clones should be sparse. Am I wrong
or missing something?
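
For what it's worth, since rbd du relies on object-map/fast-diff, the usage
can be cross-checked by counting the clone's backing data objects directly via
its block_name_prefix (pool and prefix as in the rbd info output above):

$ rados -p ssd-volumes ls | grep rbd_data.33717c18c5a7e1 | wc -l

Before the flatten this comes back as (nearly) zero; if rbd du is right, after
the flatten it should come back as the full 5120 objects.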

I don't think this is relevant, but snapshot and clone creation are managed
by Cinder in the OpenStack Newton release.