I recently opened a thread here asking what can generally be accepted as
'Ceph overhead' when using the file system. I wonder whether the
performance loss I see on a CephFS pool with 1x replication, compared to
native disk performance, is really this large: 5.6x to 2x slower than the
native disk, depending on the workload.
 
 
  
                       CephFS ssd rep. 3       CephFS ssd rep. 1       Samsung MZK7KM480 480GB
                       lat     iops    bw      lat     iops    bw      lat     iops    bw
 4k    r ran.  (kB/s)  2.78    1781    7297    0.54    1809    7412    0.09    10.2k   41600
 4k    w ran.  (kB/s)  1.42    700     2871    0.8     1238    5071    0.05    17.9k   73200
 4k    r seq.  (MB/s)  0.29    3314    13.6    0.29    3325    13.6    0.05    18k     77.6
 4k    w seq.  (MB/s)  0.04    889     3.64    0.56    1761    7.21    0.05    18.3k   75.1
 1024k r ran.  (MB/s)  4.3     231     243     4.27    233     245     2.06    482     506
 1024k w ran.  (MB/s)  0.08    132     139     4.34    229     241     2.16    460     483
 1024k r seq.  (MB/s)  4.23    235     247     4.21    236     248     1.98    502     527
 1024k w seq.  (MB/s)  6.99    142     150     4.34    229     241     2.13    466     489

(lat = latency, iops = I/Os per second, bw = bandwidth in the unit given
per row; 4k/1024k block size, r/w = read/write, ran./seq. = random/sequential)


(4 nodes, CentOS 7, Luminous)
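
The benchmark tool behind the table is not named here, so taking the IOPS
columns at face value, the quoted slowdown range falls straight out of the
read rows; a small Python sketch of that arithmetic:

# IOPS for the read workloads, copied from the table above
# (CephFS pool with 1x replication vs. the bare Samsung SSD).
ssd    = {"4k r ran.": 10200, "4k r seq.": 18000,
          "1024k r ran.": 482, "1024k r seq.": 502}
cephfs = {"4k r ran.": 1809,  "4k r seq.": 3325,
          "1024k r ran.": 233, "1024k r seq.": 236}

for workload, ssd_iops in ssd.items():
    slowdown = ssd_iops / cephfs[workload]
    print(f"{workload:13} CephFS rep. 1 is {slowdown:.1f}x slower")

# 4k reads come out around 5.4-5.6x slower and 1024k reads around 2.1x
# slower, which is the "5.6x to 2x" range quoted at the top.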

  _____  

From: Maged Mokhtar [mailto:mmokh...@petasan.org] 
Sent: 15 January 2019 22:55
To: Ketil Froyn; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Recommendations for sharing a file system to a
heterogeneous client network?






Hi Ketil,


I have not tested creation/deletion, but the read/write performance was
much better than in the link you posted. Using a CTDB setup based on
Robert's presentation, we were getting 800 MB/s write performance at
queue depth = 1 and 2.2 GB/s at queue depth = 32 from a single CTDB/Samba
gateway. For the QD=32 test we used 2 Windows clients against the same
gateway (to avoid limitations on the Windows side). Tests were done using
the Microsoft diskspd tool with 4M blocks and caching off. The gateway had
2x 40G NICs: one for the Windows network, the other for the CephFS client,
each doing 20 Gbps (50% utilization). The CPU was 24 cores running at 85%
utilization, taken by the smbd process. We used Ubuntu 16.04 CTDB/Samba
with a SUSE SLE15 kernel for the kernel client. Ceph was Luminous 12.2.7.
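
As a rough back-of-the-envelope check (my own arithmetic, assuming decimal
units so 1 GB/s = 8 Gbit/s), the 2.2 GB/s client throughput lines up with
the per-NIC utilisation quoted above:

# 2.2 GB/s aggregate SMB write stream, forwarded once per NIC:
# Windows network -> smbd -> CephFS network.
payload_gbit = 2.2 * 8          # GB/s -> Gbit/s, decimal units assumed
nic_speed_gbit = 40             # each side is a single 40G port

print(f"{payload_gbit:.1f} Gbit/s per NIC, "
      f"{payload_gbit / nic_speed_gbit:.0%} of a 40G link")
# ~17.6 Gbit/s, ~44% -- roughly the ~20 Gbps (50%) utilisation observed,
# with the remainder plausibly protocol overhead on top of the raw payload.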


Maged





On 15/01/2019 22:04, Ketil Froyn wrote:


   Robert,

   Thanks, this is really interesting. Do you also have any details on
   how a solution like this performs? I've been reading a thread about
   samba/cephfs performance, and the stats aren't great - especially
   when creating/deleting many files - but being a rookie, I'm not 100%
   clear on the hardware differences being benchmarked in the mentioned
   test.

   http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-May/026841.html
   

   Regards, Ketil

   On Tue, Jan 15, 2019, 16:38 Robert Sander
   <r.san...@heinlein-support.de> wrote:
   

      Hi Ketil,
      
      use Samba/CIFS with multiple gateway machines clustered with CTDB.
      CephFS can be mounted with Posix ACL support.
      
      Slides from my last Ceph day talk are available here:
      https://www.slideshare.net/Inktank_Ceph/ceph-day-berlin-unlimited-fileserver-with-samba-ctdb-and-cephfs
      
      Regards
      -- 
      Robert Sander
      Heinlein Support GmbH
      Schwedter Str. 8/9b, 10119 Berlin
      
      https://www.heinlein-support.de
      
      Tel: 030 / 405051-43
      Fax: 030 / 405051-19
      
      District Court (Amtsgericht) Berlin-Charlottenburg - HRB 93818 B
      Managing Director: Peer Heinlein - Registered office: Berlin
      
      


    

-- 
Maged Mokhtar
CEO PetaSAN
4 Emad El Deen Kamel
Cairo 11371, Egypt
www.petasan.org
+201006979931
skype: maged.mokhtar


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
