Another space "leak" might be due BlueStore misbehavior that takes DB
partition(s) space into account when calculating total store size. And
all this space is immediately marked as used even for an empty store. So
if you have 3 OSD with 10 Gb DB device each you unconditionally get 30
Gb used sp
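For reference, a rough way to see that overhead per OSD (just a sketch; osd.0
is an example id, and the counter names assume a Luminous-era BlueStore build):

  # Per-OSD size/use breakdown; on a freshly created cluster the USE column
  # already reflects roughly the DB partition size for each OSD.
  ceph osd df

  # On the node hosting the OSD, inspect the BlueFS (DB) accounting directly;
  # look for db_total_bytes / db_used_bytes in the "bluefs" section.
  ceph daemon osd.0 perf dump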
I didn't know about ceph df detail, that's quite useful, thanks.
I was thinking that the problem had to do with some sort of internal
fragmentation, because the filesystem in question does have millions of files
(2.9 M or thereabouts). However, even if 4k is lost for each file, that only
amounts to roughly 12 GB.
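The back-of-the-envelope arithmetic behind that, for anyone following along:

  # worst case: one whole 4 KiB allocation unit wasted per file
  echo $((2900000 * 4096)) bytes     # 11878400000, i.e. about 12 GB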
Could you be running into block size (minimum allocation unit)
overhead? Default bluestore block size is 4k for hdd and 64k for ssd.
This is exacerbated if you have tons of small files. I tend to see this when
the sum of raw used across the pools in "ceph df detail" is less than the
global raw bytes used.
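A quick way to eyeball that comparison (the column names below are from
Luminous-era output and may differ a bit on other releases):

  ceph df detail
  # GLOBAL section: SIZE, AVAIL, RAW USED, %RAW USED
  # POOLS section:  per-pool USED, %USED, OBJECTS, ...
  # If the per-pool USED (times the replication factor) summed over all pools
  # stays well below GLOBAL RAW USED, the gap is mostly allocation-unit and
  # DB/journal overhead rather than actual data.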
Each OSD lives on a separate HDD in bluestore with the journals on 2GB
partitions on a shared SSD.
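In case it helps, one way to confirm where each OSD's data and DB devices
actually live (osd id 0 is just an example; the metadata key names below are
what I'd expect from a Luminous-era BlueStore OSD and might differ):

  # Shows, among other things, bluestore_bdev_partition_path and
  # bluefs_db_partition_path for the given OSD.
  ceph osd metadata 0

  # On the OSD host, if the OSDs were deployed with ceph-volume, this lists
  # which devices back each OSD.
  ceph-volume lvm list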
On 16/02/18 21:08, Gregory Farnum wrote:
What does the cluster deployment look like? Usually this happens when
you’re sharing disks with the OS, or have co-located file journals or
something.
On Fri, Feb 16, 2018 at 4:02 AM Flemming Frandsen <
flemming.frand...@stibosystems.com> wrote:
I'm trying out cephfs and I'm in the process of copying over some
real-world data to see what happens.
I have created a number of cephfs file systems; the only one I've started
working on is the one named jenkins, which lives in fs_jenkins_data and
fs_jenkins
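For anyone reproducing this, a cephfs file system of that shape would
typically be created along these lines (a sketch only: the pg counts and the
metadata pool name are placeholders, not taken from the original post):

  # hypothetical pg counts; fs_jenkins_metadata is a guess at the metadata
  # pool name, only fs_jenkins_data appears in the message above
  ceph osd pool create fs_jenkins_data 64
  ceph osd pool create fs_jenkins_metadata 64
  ceph fs new jenkins fs_jenkins_metadata fs_jenkins_data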