On Thu, 17 Apr 2014 08:14:04 -0500 John-Paul Robinson wrote:
> So in the meantime, are there any common work-arounds?
>
> I'm assuming that monitoring the imageused/imagesize ratio and, if it's
> greater than some tolerance, creating a new image and moving the file
> system content over is an effective, if crude, approach. I'm not clear
> on how to measure the amount of storage an image actually consumes.
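One way to estimate how much storage an image actually consumes is to sum the extent lengths that `rbd diff` reports and compare the total to the provisioned size from `rbd info`. A minimal sketch, assuming the plain `rbd diff` output format of "offset length type" per line; the image name and the canned sample output below are stand-ins, since no cluster is available here:

```shell
# Stand-in for "rbd diff rbd/myimage"; against a real cluster,
# pipe the actual command instead of this sample function.
rbd_diff_sample() {
  cat <<'EOF'
0 4194304 data
8388608 4194304 data
EOF
}

# Sum the second column (extent length in bytes) to get allocated bytes.
used=$(rbd_diff_sample | awk '{sum += $2} END {print sum}')

# Provisioned image size, e.g. as reported by "rbd info myimage" (1 GB here).
size=$((1024 * 1024 * 1024))

# Ratio of allocated bytes to provisioned size.
awk -v u="$used" -v s="$size" 'BEGIN {printf "used/size = %.3f\n", u/s}'
```

Comparing that ratio against a threshold is then a one-line cron job away.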
Sent: Wednesday, April 16, 2014 5:36 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] question on harvesting freed space
On Wed, 16 Apr 2014 13:12:15 -0500 John-Paul Robinson wrote:
> So having learned some about fstrim, I ran it on an SSD backed file
> system and it reported space freed. I ran it on an RBD backed file
> system and was told it's not implemented.
>
> This is consistent with the test for FITRIM.
>
So having learned some about fstrim, I ran it on an SSD backed file
system and it reported space freed. I ran it on an RBD backed file
system and was told it's not implemented.
This is consistent with the test for FITRIM.
$ cat /sys/block/rbd3/queue/discard_max_bytes
0
On my SSD backed device the same file reports a non-zero value.
Thanks for the insight.
Based on that I found the fstrim command for xfs file systems.
http://xfs.org/index.php/FITRIM/discard
Anyone had experience using this command with RBD image backends?
~jpr
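For what it's worth, a guarded way to try fstrim against an RBD-backed mount is to check the same discard_max_bytes knob first, since a value of 0 is exactly the case where fstrim reports "not implemented". The mount point and device name below are hypothetical:

```shell
# Hypothetical mount point and mapped rbd device; adjust for your setup.
mnt=/mnt/rbdfs
dev=rbd3

max="/sys/block/$dev/queue/discard_max_bytes"
if [ -r "$max" ] && [ "$(cat "$max")" -gt 0 ]; then
  # The device advertises discard, so the FITRIM ioctl should succeed.
  fstrim -v "$mnt"
else
  echo "skipping fstrim: $dev does not advertise discard"
fi
```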
On 04/15/2014 02:00 PM, Kyle Bader wrote:
>> I'm assuming Ceph/RBD doesn't have any direct awareness of this since
>> the file system doesn't traditionally have a "give back blocks"
>> operation to the block device. Is there anything special RBD does in
>> this case that communicates the release of the Ceph storage back to the
>> pool?
>
> VMs ru
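The reply is cut off here, but since it concerns VMs it is worth noting that a guest's TRIM requests only reach librbd if the emulated disk passes them through. A hypothetical qemu invocation (pool and image names invented, exact flags depend on your qemu version) that exposes discard for an RBD-backed drive via virtio-scsi might look like:

```shell
# Sketch only: discard=unmap lets guest TRIM/FITRIM reach librbd, which can
# then free the underlying RADOS objects. virtio-scsi is used because it
# forwards discard requests to the backing drive.
qemu-system-x86_64 \
  -drive file=rbd:rbd/myimage,format=raw,if=none,id=drive0,discard=unmap \
  -device virtio-scsi-pci,id=scsi0 \
  -device scsi-hd,bus=scsi0.0,drive=drive0 \
  -m 2048 -enable-kvm
```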
Hi,
If I have a 1GB RBD image and format it with, say, xfs or ext4, then I
basically have a thin provisioned disk. It takes up only as much space
from the Ceph pool as is needed to hold the data structures of the empty
file system.
If I add files to my file system and then remove them, how does Ceph
reclaim the freed space?
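The thin-provisioning behaviour described here can be demonstrated in miniature with a sparse file, which, like an RBD image, has a large apparent size but only consumes storage where data has actually been written (the file name is arbitrary):

```shell
img=sparse.img     # arbitrary file name for the demo

# "1GB image": sets the apparent size without allocating any blocks.
truncate -s 1G "$img"

# Write a single 4 KiB block of real data at the start of the file.
dd if=/dev/urandom of="$img" bs=4096 count=1 conv=notrunc status=none

# Apparent size vs. bytes actually allocated on disk (st_blocks * 512).
echo "apparent:  $(stat -c %s "$img")"
echo "allocated: $(( $(stat -c %b "$img") * 512 ))"
rm -f "$img"
```

The "apparent" size stays at 1 GiB while the allocated bytes remain tiny, which is the same accounting gap the thread is asking how to close after deletions.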