Maybe someone can shed new light on this:
1. Only the SSD-cache OSDs are affected by this issue
2. Total cache OSD count is 12 x 60 GiB; the backend filesystem is ext4
3. I have created 2 cache tier pools with replica size=3 on those OSDs,
both with pg_num: 400, pgp_num: 400 (see the sketch after this list)
4. There was a crush ruleset:
superuser@ad
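For reference, a cache tier with those parameters would typically be built
along these lines; the pool names, the backing EC pool and the ruleset id
below are placeholders, not the real ones from my setup:

  # "ecpool" = backing erasure-coded pool, "cache" = cache tier pool (placeholder names)
  ceph osd pool create cache 400 400          # pg_num / pgp_num
  ceph osd pool set cache size 3              # replica size = 3
  ceph osd pool set cache crush_ruleset 1     # rule mapping to the SSD OSDs (example id)
  ceph osd tier add ecpool cache
  ceph osd tier cache-mode cache writeback
  ceph osd tier set-overlay ecpool cache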
On Tue, 24 Mar 2015 09:41:04 +0300 Kamil Kuramshin wrote:
> Yes, I read it and do not understand what you mean when you say *verify
> this*. All 3335808 inodes are definitely files and directories created by
> the ceph OSD process:
>
What I mean is how/why did Ceph create 3+ million files, where in the t
Yes, I read it and do not understand what you mean when you say *verify this*.
All 3335808 inodes are definitely files and directories created by the ceph
OSD process:
tune2fs 1.42.5 (29-Jul-2012)
Filesystem volume name:
Last mounted on: /var/lib/ceph/tmp/mnt.05NAJ3
Filesystem UUID: e4dcc
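One way to verify where those inodes actually go on an affected OSD (the OSD
id and paths are just examples; PG directories are named <pgid>_head under
current/ with FileStore):

  df -i /var/lib/ceph/osd/ceph-2                  # overall inode usage on the OSD
  cd /var/lib/ceph/osd/ceph-2/current
  # count files per PG directory, biggest consumers first
  for d in *_head; do echo "$(find "$d" -type f | wc -l) $d"; done | sort -rn | head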
Yes, I understand that.
The initial purpose of my first email was just advice for newcomers. My
fault was that I selected ext4 as the backend filesystem for the SSD disks.
But I did not foresee that the inode count could reach its limit before the
free space :)
And maybe there should be some sort of warning n
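Something like that can at least be scripted by hand; a minimal sketch of an
inode-usage check (the threshold and mount point are arbitrary examples):

  usage=$(df -Pi /var/lib/ceph/osd/ceph-2 | awk 'NR==2 {gsub("%","",$5); print $5}')
  [ "$usage" -ge 80 ] && echo "WARNING: inode usage at ${usage}% on ceph-2"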
You could fix this by changing your block size when formatting the
mount point with the mkfs -b option. I had this same issue when dealing
with the filesystem using glusterfs, and the solution is to either use a
filesystem that allocates inodes automatically or change the block size
when you build the filesystem.
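For ext4 specifically, the inode count is fixed at mkfs time, so besides the
block size the bytes-per-inode ratio can be lowered; a sketch with placeholder
device names:

  mkfs.ext4 -i 4096 /dev/sdX1          # one inode per 4 KiB of space
  mkfs.ext4 -N 15000000 /dev/sdX1      # or set an absolute inode count
  mkfs.xfs /dev/sdX1                   # XFS allocates inodes dynamically instead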
In my case it was a cache pool for an EC pool serving RBD images, the
object size is 4 MB, and the client was a kernel-rbd client.
Each SSD is a 60 GB disk, 2 disks per node, 6 nodes in total = 12 OSDs
in total.
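Back-of-envelope with those numbers: a 60 GiB OSD completely filled with
4 MiB objects holds only on the order of fifteen thousand data objects,
nowhere near 3.3 million inodes:

  echo $(( 60 * 1024 / 4 ))    # = 15360 objects per 60 GiB OSD at 4 MiB each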
On 23.03.2015 12:00, Christian Balzer wrote:
Hello,
This is rather confusing, as cache-tiers are just normal OSDs/pools and
thus should have Ceph objects of around 4MB in size by default.
This matches what I see with Ext4 here (normal OSD, not a cache
tier):
---
size:
/dev/sde1 2.7T 204G 2.4T 8% /var/lib/ceph/osd/ceph-0
ino
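To cross-check that against what Ceph itself thinks it is storing, the
per-pool object counts can be pulled directly (pool names will be whatever
the cache pools are actually called); the object count should make it obvious
whether anything close to 3 million objects exists:

  ceph df detail     # per-pool object counts, among other things
  rados df           # per-pool object counts as well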