[ceph-users] qd=1 bs=4k tuning on a toy cluster

2022-01-16 Thread Tyler Stachecki
I'm curious to hear if anyone has looked into kernel scheduling tweaks or changes to improve qd=1/bs=4k performance (while we patiently wait for Seastar!). Using this toy cluster: 3x OSD nodes, each an Atom C3758 (8 cores, 2.2 GHz) with 1x Intel S4500 SSD; Debian Bullseye with Linux 5.15.14 and
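For reference, a minimal fio invocation for this kind of qd=1/bs=4k test might look like the sketch below; /dev/rbd0 is a hypothetical mapped RBD image, and the run overwrites its contents:

    # single-job 4k random writes at queue depth 1, bypassing the page cache
    fio --name=qd1-4k --filename=/dev/rbd0 --rw=randwrite --bs=4k \
        --iodepth=1 --numjobs=1 --direct=1 --ioengine=libaio \
        --time_based --runtime=60

At qd=1 the result is dominated by per-operation latency (network round trips plus the OSD write path), which is why CPU clock speed and scheduler wake-up behaviour tend to matter more than raw SSD throughput.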

[ceph-users] Re: cephfs: [ERR] loaded dup inode

2022-01-16 Thread Patrick Donnelly
Hi Dan,

On Fri, Jan 14, 2022 at 6:32 AM Dan van der Ster wrote:
> We had this long ago, related to a user generating lots of hard links.
> Snapshots will have a similar effect.
> (In these cases, if a user deletes the original file, the file goes
> into stray until it is "reintegrated".)
>
> If yo
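A rough way to watch this while reproducing it is to read the MDS perf counters; a sketch, assuming an MDS daemon named mds.a and run on the host where that daemon lives:

    # stray counters (e.g. num_strays, strays_reintegrated) appear in the mds_cache section
    ceph daemon mds.a perf dump | grep -i stray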

[ceph-users] Re: Direct disk/Ceph performance

2022-01-16 Thread Marc
> We have tested the direct random write for the disk (without Ceph) and it is
> 200 MB/s. Wonder why we got 80 MB/s from Ceph.

I don't think so; random writes to an HDD do not result in 200 MB/s. This is what I get out of a 7k SAS drive:

randwrite-4k-seq: (groupid=1, jobs=1): err= 0: pid=10989: Sun Sep 13

[ceph-users] Re: Direct disk/Ceph performance

2022-01-16 Thread Kai Börnert
Hi,

To have a fair test you need to replicate the power-loss scenarios that Ceph covers and that you currently are not: no memory caches in the OS or on the disk are allowed to be used. Ceph has to ensure that an object written is actually written, even if a node of your cluster explodes right at
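To get closer to what Ceph guarantees, the raw-disk test has to bypass the page cache and force every write to stable storage. A sketch of such a run, assuming /dev/sdX is a scratch device whose contents may be destroyed:

    # 4k random writes with O_DIRECT and an fsync after every write
    fio --name=sync-randwrite --filename=/dev/sdX --rw=randwrite --bs=4k \
        --iodepth=1 --direct=1 --fsync=1 --ioengine=libaio \
        --time_based --runtime=60

The --fsync=1 part is what usually collapses the numbers: a drive without power-loss protection can no longer acknowledge writes out of its volatile cache.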

[ceph-users] Re: Direct disk/Ceph performance

2022-01-16 Thread Behzad Khoshbakhti
Hi Marc,

Thanks for your prompt response. We have tested the direct random write for the disk (without Ceph) and it is 200 MB/s. Wonder why we got 80 MB/s from Ceph. Your help is much appreciated.

Regards,
Behzad

On Sun, Jan 16, 2022 at 11:56 AM Marc wrote:
>
> > Detailed (somehow) problem de

[ceph-users] Re: Direct disk/Ceph performance

2022-01-16 Thread Marc
> Detailed (somehow) problem description:
> Disk size: 1.2 TB
> Ceph version: Pacific
> Block size: 4 MB
> Operation: Sequential write
> Replication factor: 1
> Direct disk performance: 245 MB/s
> Ceph-controlled disk performance: 80 MB/s

You are comparing sequential I/O against random. You shou
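An apples-to-apples Ceph-side number for the quoted parameters (4 MB blocks, sequential write) could be taken with rados bench against a test pool; the pool name testpool is made up for the example, and -t 1 keeps a single write in flight to mirror a single sequential stream:

    rados bench -p testpool 30 write -b 4194304 -t 1 --no-cleanup   # 4 MiB objects, 30 s
    rados -p testpool cleanup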

[ceph-users] Re: Direct disk/Ceph performance

2022-01-16 Thread Behzad Khoshbakhti
And here is the disk information that we base our testing on:
HPE EG1200FDJYT 1.2TB 10kRPM 2.5in SAS-6G Enterprise

On Sun, Jan 16, 2022 at 11:23 AM Behzad Khoshbakhti wrote:
> Hi all,
>
> We are curious about single-disk performance: we experience
> performance degradation when the disk is

[ceph-users] dashboard fails with error code 500 on a particular file system

2022-01-16 Thread E Taka
Dashboard → Filesystems → (filesystem name) → Directories fails on a particular file system with error "500 - Internal Server Error". The log shows:

Jan 16 11:22:18 ceph00 bash[96786]: File "/usr/share/ceph/mgr/dashboard/services/cephfs.py", line 57, in opendir
Jan 16 11:22:18 ceph00 bash[96
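If it helps with debugging, the dashboard can be switched into debug mode so that the 500 response carries the full Python traceback instead of the truncated journal line; a sketch (remember to turn it off again afterwards):

    ceph dashboard debug enable
    # reproduce the error in the UI, read the traceback, then:
    ceph dashboard debug disable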

[ceph-users] Direct disk/Ceph performance

2022-01-16 Thread Behzad Khoshbakhti
Hi all,

We are curious about single-disk performance: we experience performance degradation when the disk is controlled via Ceph.

Problem description: we are curious about the Ceph write performance, and we have found that when we request data to be written via Ceph, it is not using full
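For what it's worth, a raw-disk baseline matching the parameters quoted earlier in the thread (4 MB blocks, sequential write, direct I/O, single stream) could be taken with fio as sketched below; /dev/sdX is a placeholder and the run destroys data on the device:

    fio --name=seq-4m-baseline --filename=/dev/sdX --rw=write --bs=4M \
        --iodepth=1 --direct=1 --ioengine=libaio --time_based --runtime=60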