Thanks for the reply. We are now running Ceph 0.80.1 Firefly; are these options
available in that version?
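For reference, I assume we can check which scrub options a running OSD actually
recognises via the admin socket, roughly like this (osd.0 is just an example
daemon, and the exact option names may differ in 0.80.1):

  # list the scrub-related options a running OSD knows about
  ceph daemon osd.0 config show | grep scrub

  # adjust one of them at runtime on all OSDs, if it is supported
  ceph tell osd.* injectargs '--osd_max_scrubs 1'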

 

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Mateusz Skała
Sent: Tuesday, October 28, 2014 9:27 AM
To: ceph-us...@ceph.com
Subject: [ceph-users] Scrub proces, IO performance

 

Hello,

We are using Ceph as a storage backend for KVM, hosting MS Windows RDP, Linux
web applications with MySQL databases, and file sharing from Linux. When a
scrub or deep-scrub process is active, RDP sessions freeze for a few seconds
and the web applications show high response latency.

For now we have disabled scrubbing and deep-scrubbing between 6AM and 10PM, so
scrubs only run at night when the majority of users are not working, but the
user experience is still poor, as described above. We are considering disabling
the scrubbing process altogether. Will the new version 0.87, which addresses
scrub priority (according to http://tracker.ceph.com/issues/6278), solve our
problem? Can we switch off scrubbing entirely? How can we change our
configuration to lower the performance impact of scrubbing? Can changing the
block size lower the scrubbing impact or increase performance?
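So far the only knob we are sure about is the global on/off switch; a minimal
sketch of that (these flags should exist in Firefly, but note that data is not
verified against replicas while they are set):

  # stop all scheduled scrubbing cluster-wide
  ceph osd set noscrub
  ceph osd set nodeep-scrub

  # re-enable scrubbing later
  ceph osd unset noscrub
  ceph osd unset nodeep-scrub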

 

Our Ceph cluster configuration:

 

* we are using ~216 RBD disks for KVM VMs

* ~11TB used, 3.593TB data, replica count 3

* we have 5 mons, 32 OSDs

* 3 pools / 4096 PGs (only one, rbd, in use)

* 6 nodes (5 OSD+mon, 1 OSD only) in two racks

* 1 SATA disk for the system, 1 SSD disk for the journal, and 4 or 6 SATA disks
for OSDs

* 2 networks (cluster + public) on 2x 1Gbps NICs on all nodes

* 2x 10Gbps links between racks

* without scrub: max 45 IOPS

* when scrub is running: 120 - 180 IOPS (measured as sketched below)
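The IOPS numbers above come from the OSD data disks; we measure them roughly
like this on an OSD node (iostat is from the sysstat package, and the device
names and the pgs_brief dump are only examples):

  # per-device IOPS and utilisation, refreshed every 5 seconds
  iostat -x 5 /dev/sdb /dev/sdc

  # count PGs that are currently scrubbing or deep-scrubbing
  ceph pg dump pgs_brief 2>/dev/null | grep -c scrub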

 

 

ceph.conf:

 

mon initial members = ceph35, ceph30, ceph20, ceph15, ceph10

mon host = 10.20.8.35, 10.20.8.30, 10.20.8.20, 10.20.8.15, 10.20.8.10

 

public network = 10.20.8.0/22

cluster network = 10.20.4.0/22

 

filestore xattr use omap = true

filestore max sync interval = 15

 

osd journal size = 10240

osd pool default size = 3

osd pool default min size = 1

osd pool default pg num = 2048

osd pool default pgp num = 2048

osd crush chooseleaf type = 1

osd recovery max active = 1

osd recovery op priority = 1

osd max backfills = 1

 

auth cluster required = cephx

auth service required = cephx

auth client required = cephx

 

rbd default format = 2
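
The scrub-related settings we are thinking about adding (presumably under [osd]
or the global section) would look roughly like this; the values are
placeholders and we are not certain every option is honoured in 0.80.1:

  # at most one scrub per OSD at a time (this is also the default)
  osd max scrubs = 1
  # skip scheduled scrubs while the node load average is above this value
  osd scrub load threshold = 0.5
  # stretch the deep-scrub cycle from the default 7 days to 14 days (seconds)
  osd deep scrub interval = 1209600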

 

Regards,

Mateusz

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
