Hi Igor,
you're right indeed, the db volume is 100G in size for the hdd osds.
Knowing this, the actual raw use is 783G - 7x100G = 83G, which is pretty close
to the sum of the files in the HDD pools times the pool replication size,
roughly 25G x 3 = 75G.
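For completeness, here is that reconciliation spelled out as a quick sanity
check (the numbers are the ones quoted in this thread, and the 3 is the pool
replication size):

    # sanity check of the reconciliation above, numbers taken from this thread
    raw_used_reported = 783          # G, hdd class RAW USED as shown by "ceph df"
    db_volume_per_osd = 100          # G, counted as used for every hdd OSD
    hdd_osds = 7
    actual_raw_use = raw_used_reported - hdd_osds * db_volume_per_osd  # 783 - 700 = 83 G
    file_data = 25                   # G of file data in the HDD pools
    replication = 3                  # pool size
    expected_raw = file_data * replication                             # 75 G
    print(actual_raw_use, expected_raw)                                # 83 vs 75, reasonably close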
Thanks a lot for your explanation of this tiny but
Hi Georg,
I suspect your db device size is around 100 GiB? And the actual total
hdd class size is rather 700 GiB (100 GiB * 7 osds) less than the reported
19 TiB.
Is the above correct? If so, then the high raw size(s) are caused by the osd
stats reporting design - it unconditionally counts the full db volume as used.
Glad to see I am not the only one with unexpectedly increased disk usage. I
have had a case for a few months now where the reported size on disk is 10 times
higher than it should be. Unfortunately there is no solution so far. Therefore, I am
very curious whether the min alloc size will solve your problem.
Hi!
>>> Thank you!
>>> The output of both commands is below.
>>> I still don't understand why there is 21T of used data (because 5.5T*3 =
>>> 16.5T != 21T) and why there seems to be only 4.5T MAX AVAIL, while the
>>> osd output says we have 25T free space.
>>
>> As I know MAX AVAIL is calculated wi
On 6.12.19 17:01, Aleksey Gutikov wrote:
On 6.12.19 14:57, Jochen Schulz wrote:
Hi!
Thank you!
The output of both commands is below.
I still don't understand why there is 21T of used data (because 5.5T*3 =
16.5T != 21T) and why there seems to be only 4.5T MAX AVAIL, while the
osd output says we have 25T free space.
On 6.12.19 14:57, Jochen Schulz wrote:
Hi!
Thank you!
The output of both commands is below.
I still don't understand why there is 21T of used data (because 5.5T*3 =
16.5T != 21T) and why there seems to be only 4.5T MAX AVAIL, while the
osd output says we have 25T free space.
As I know MAX AVAIL
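Aleksey's explanation is cut off above, but the gist, as far as I understand
it (a rough sketch of the idea, not the exact formula ceph uses), is that
MAX AVAIL is not "total free space divided by 3". It estimates how much more
data the pool can take before the fullest OSD hits the full ratio, and then
divides that by the pool size, so a single unbalanced OSD can drag MAX AVAIL
far below what the raw free space suggests:

    # rough illustration of why MAX AVAIL can be much smaller than free_space / replication
    def rough_max_avail(osd_size, osd_used, replication, full_ratio=0.95):
        total = sum(osd_size)
        # how much raw data can still be written before each OSD reaches the full
        # ratio, assuming new data lands on OSDs in proportion to their size
        limits = [(full_ratio * size - used) * total / size
                  for size, used in zip(osd_size, osd_used)]
        return min(limits) / replication        # the fullest OSD is the bottleneck

    # three 10T OSDs, one of them already 8T full:
    print(rough_max_avail([10, 10, 10], [8, 2, 2], replication=3))
    # ~1.5 (T), although total free space is 18T, i.e. 6T at 3x replication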
Home directories probably mean lots of small objects. The default minimum
allocation size of BlueStore on HDD is 64 KiB, so there is a lot of overhead
for everything smaller;
Details: google "bluestore min alloc size"; it can only be changed during OSD
creation
Paul
--
Paul Emmerich
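To make Paul's point concrete (a quick illustration, assuming the default
64 KiB bluestore_min_alloc_size_hdd he mentions and 3x replication; the file
sizes are made up): every object is rounded up to a multiple of the
allocation unit, so home directories full of small files can inflate raw
usage dramatically.

    import math

    MIN_ALLOC = 64 * 1024                    # default bluestore_min_alloc_size_hdd
    def allocated(object_bytes, min_alloc=MIN_ALLOC):
        # BlueStore rounds each object up to a multiple of the allocation unit
        return max(1, math.ceil(object_bytes / min_alloc)) * min_alloc

    # a 4 KiB file still occupies 64 KiB per replica, i.e. 192 KiB raw at size=3
    print(allocated(4 * 1024) * 3)                       # 196608 bytes
    # a million such files: ~3.8 GiB of data becomes ~183 GiB of raw usage
    print(allocated(4 * 1024) * 3 * 10**6 / 2**30)       # ~183.1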
Hi!
Thank you!
The output of both commands is below.
I still don't understand why there is 21T of used data (because 5.5T*3 =
16.5T != 21T) and why there seems to be only 4.5T MAX AVAIL, while the
osd output says we have 25T free space.
$ sudo ceph df
RAW STORAGE:
CLASS     SIZE     AVAIL
On 6.12.19 13:29, Jochen Schulz wrote:
Hi!
We have a ceph cluster with 42 OSDs in production as a server providing
mainly home directories of users. Ceph is 14.2.4 nautilus.
We have 3 pools: one images pool (for rbd images), a cephfs_metadata and a
cephfs_data pool.
Our raw data is about 5.6T. All po