[ceph-users] Re: Disk consume for CephFS

2020-09-17 Thread fotofors
Yes, I know this option isn't safe; however, in my current situation I can't increase it. I probably have some files under 4K, but when I cleaned up the zero-length files I didn't see any change in the statistics. My current `ceph df detail` output is below: # ceph df detail --- RAW STORAGE --- CLASS SIZE AVA
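To verify whether small files are actually a factor, they can be counted directly on the mount. A minimal sketch, assuming the CephFS filesystem is mounted at the hypothetical path /mnt/cephfs:

```shell
# Count regular files smaller than 4 KiB under the CephFS mount
# (/mnt/cephfs is an assumed mount point; adjust to your setup).
find /mnt/cephfs -type f -size -4k | wc -l
```

With GNU find, `-size -4k` matches files whose size, rounded up to 1 KiB units, is strictly less than 4 KiB, which is exactly the population affected by a 4K+ allocation unit.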

[ceph-users] Re: Disk consume for CephFS

2020-09-15 Thread Stefan Kooman
On 2020-09-15 02:09, Nathan Fish wrote:
> What about hardlinks, are there any of those? Are there lots of
> directories or tiny (<4k) files?

The default allocation size of bluestore depends on the disk type. We have this in our config: # 4096 B instead of 16K (SSD) / 64K (HDD) to avoid large ove
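The truncated config comment above likely refers to BlueStore's minimum allocation size. A sketch of what such a ceph.conf fragment might look like, using the upstream option names (note that these only take effect for OSDs created after the change; existing OSDs keep the allocation size they were built with):

```ini
[osd]
# 4096 B instead of 16K (SSD) / 64K (HDD) to avoid large overhead
# for small files; applies only to newly created OSDs.
bluestore_min_alloc_size_ssd = 4096
bluestore_min_alloc_size_hdd = 4096
```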

[ceph-users] Re: Disk consume for CephFS

2020-09-14 Thread tri
I suggest trying the rsync --sparse option. Typically, qcow2 files (which tend to be large) are sparse files. Without the sparse option, the files are expanded at the destination. September 14, 2020 6:15 PM, fotof...@gmail.com wrote: > Hello. > > I'm using the Nautilus Ceph version for some huge folder
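A minimal local illustration of the effect being described (paths are examples only): a sparse file has a large apparent size but consumes almost no blocks, and without hole re-creation a copy consumes the full apparent size at the destination.

```shell
# Create a 100 MiB sparse file: apparent size 100M, near-zero allocated blocks.
truncate -s 100M /tmp/sparse.img
ls -l /tmp/sparse.img   # apparent size: 104857600 bytes
du -k /tmp/sparse.img   # allocated size: close to 0 KiB

# Then sync while re-creating holes at the destination, e.g.:
#   rsync --sparse /tmp/sparse.img /mnt/cephfs/backup/
# (destination path is hypothetical; without --sparse, the copy
# would allocate the full 100 MiB on the target filesystem.)
```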

[ceph-users] Re: Disk consume for CephFS

2020-09-14 Thread Nathan Fish
What about hardlinks, are there any of those? Are there lots of directories or tiny (<4k) files? Also, size=2 is not very safe. You want size=3, min_size=2 if you are doing replication.

On Mon, Sep 14, 2020 at 6:15 PM wrote:
> Hello.
>
> I'm using the Nautilus Ceph version for some huge folder
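The replication advice above corresponds to standard Ceph pool settings; a sketch using an assumed pool name of cephfs_data:

```shell
# Keep 3 replicas of each object, and serve I/O only while
# at least 2 replicas are available.
ceph osd pool set cephfs_data size 3
ceph osd pool set cephfs_data min_size 2
```

With size=2 and min_size=1, a single OSD failure leaves only one copy accepting writes, which risks data loss on a second failure; size=3/min_size=2 avoids that window.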