You can also mount the RBD with the discard option. It works the same way as 
mounting an SSD with discard to free up space when you delete things. I use the 
discard option on my ext4 RBDs on Ubuntu and it frees up the used Ceph space 
immediately.
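
For illustration, a minimal sketch of the above; the device name and mount point are placeholders and assume the image is already mapped with `rbd map`:

```shell
# Mount the mapped RBD with online discard enabled (ext4).
sudo mount -o discard /dev/rbd0 /mnt/rbd

# Or make it persistent via an /etc/fstab entry:
# /dev/rbd0   /mnt/rbd   ext4   defaults,discard   0 0

# Without the discard mount option you can still reclaim space
# periodically with fstrim instead:
sudo fstrim -v /mnt/rbd
```

Note that continuous discard can add latency on every delete; a periodic fstrim (e.g. from cron) is a common alternative.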


On May 19, 2016, at 12:30 PM, Albert Archer 
<albertarche...@gmail.com> wrote:

Thank you for your great support.

Best Regards
Albert

On Thu, May 19, 2016 at 10:41 PM, Udo Lembke 
<ulem...@polarzone.de> wrote:
Hi Albert,
To free unused space you must enable trim (or run an fstrim) in the VM, and 
everything in the storage chain must support it.
The normal virtio driver doesn't support trim, but if you use SCSI disks with 
the virtio-scsi driver you can use it.
It works well, but it needs some time for huge filesystems.
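
A sketch of what this looks like in practice; the pool/image names and guest path are placeholders, and the QEMU flags assume a raw RBD-backed drive:

```shell
# Inside the guest: discard unused blocks on the root filesystem.
# -v reports how many bytes were trimmed.
sudo fstrim -v /

# For the trim to reach Ceph, the virtual disk must be attached via
# virtio-scsi with discard enabled. A hypothetical QEMU invocation:
qemu-system-x86_64 \
  -device virtio-scsi-pci,id=scsi0 \
  -drive file=rbd:rbd/myimage,if=none,id=drive0,format=raw,discard=unmap \
  -device scsi-hd,drive=drive0,bus=scsi0.0
```

The key piece is `discard=unmap` on the drive plus the scsi-hd device on a virtio-scsi controller; a plain virtio-blk device would silently drop the discard requests.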

Udo


On 19.05.2016 19:58, Albert Archer wrote:
Hello All.
I am a newbie in Ceph, and I use the Jewel release for testing purposes.
Everything seems OK: HEALTH_OK, and all OSDs are up and in.
I created some RBD images (rbd create ...) and mapped them to an Ubuntu
host.
I can read and write data to my volume, but when I delete some content
from the volume (e.g. some huge files), the used capacity of the cluster
is not freed, and none of the objects were cleaned.
What is the problem?

Regards
Albert



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



