On Wed, 16 Apr 2014 13:12:15 -0500 John-Paul Robinson wrote:

> So having learned some about fstrim, I ran it on an SSD backed file
> system and it reported space freed. I ran it on an RBD backed file
> system and was told it's not implemented. 
> 
> This is consistent with the test for FITRIM. 
> 
> $ cat /sys/block/rbd3/queue/discard_max_bytes
> 0
> 
This looks like you're using the kernelspace RBD interface.

And very sadly, trim/discard is not implemented in it, which is a bummer
for anybody running, for example, an HA NFS server with RBD as the backing
storage. Even sadder is the fact that this was last brought up a year or
more ago.

Only the userspace (librbd) interface supports this; however, the client
(KVM being the prime example) of course needs to use an emulated disk
interface that ALSO supports it. The standard virtio-blk does not, while
the very slow IDE emulation does, as does the speedier virtio-scsi
(which isn't configurable with Ganeti, for example).

Regards,

Christian

> On my SSD backed device I get:
> 
> $ cat /sys/block/sda/queue/discard_max_bytes
> 2147450880
> 
> Is this just not needed by RBD or is cleanup handled in a different way?
> 
> I'm wondering what will happen to a thin-provisioned RBD image over time
> on a file system with lots of file create/delete activity.  Will the
> storage in the ceph pool stay allocated to this application (the file
> system) in that case?
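
Without working discard, yes - the file system will happily reuse its own
free space, but RADOS objects that were allocated once stay allocated.
You can watch the provisioned size with rbd diff, which lists the
allocated extents (a sketch, the image name is made up):

$ rbd diff rbd/myimage | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'

That number only ever grows under create/delete churn, even when df
inside the file system shows plenty of free space.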
> 
> Thanks for any additional insights.
> 
> ~jpr
> 
> On 04/15/2014 04:16 PM, John-Paul Robinson wrote:
> > Thanks for the insight.
> >
> > Based on that, I found the fstrim command for XFS file systems.
> >
> > http://xfs.org/index.php/FITRIM/discard
> >
> > Has anyone had experience using this command with RBD image backends?
> >
> > ~jpr
> >
> > On 04/15/2014 02:00 PM, Kyle Bader wrote:
> >>> I'm assuming Ceph/RBD doesn't have any direct awareness of this since
> >>> the file system doesn't traditionally have a "give back blocks"
> >>> operation to the block device.  Is there anything special RBD does in
> >>> this case that communicates the release of the Ceph storage back to
> >>> the pool?
> >> VMs running a 3.2+ kernel (iirc) can "give back blocks" by issuing
> >> TRIM.
> >>
> >> http://wiki.qemu.org/Features/QED/Trim
> >>
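
On the guest side, discards can be issued continuously via the mount
option, or in batches with fstrim (a sketch, the device and mount point
are examples):

$ mount -o discard /dev/sdb1 /mnt/data
$ fstrim -v /mnt/data

Either way, both the emulated disk and the backing storage have to
support it, as noted above.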


-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Global OnLine Japan/Fusion Communications
http://www.gol.com/