Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-18 Thread Gencer W. Genç
I have 3 pools: 0 rbd, 1 cephfs_data, 2 cephfs_metadata. cephfs_data has a pg_num of 1024; the total PG count is 2113.

POOL_NAME   USED  OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR
cephfs_data 4000M 1000    0      2000   0
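For reference, a minimal sketch of how these numbers can be checked from the CLI (standard ceph/rados commands; pool name as above):

    # per-pool usage and op counters (the table above comes from this)
    rados df

    # pg_num of the CephFS data pool
    ceph osd pool get cephfs_data pg_num

    # overall cluster status, including the total PG count
    ceph -s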

Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-18 Thread Gencer W. Genç
Patrick, I did timing tests. Rsync is not a tool I should trust for a speed test. I simply do "cp" and extra write tests against the Ceph cluster, and it is very fast indeed. Rsync copies a 1GB file slowly, taking 5-7 seconds to complete. Cp does it in 0.901s. (Not even 1 second
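A hedged sketch of a cache-insensitive write test, assuming the CephFS mount is at /mnt/cephfs (the path is a placeholder):

    # 1 GiB sequential write with direct I/O, so the result reflects the
    # cluster rather than the local page cache
    dd if=/dev/zero of=/mnt/cephfs/testfile bs=4M count=256 oflag=direct

    # time a plain copy for comparison; drop caches first so the read
    # side is not served from memory
    sync; echo 3 > /proc/sys/vm/drop_caches
    time cp /tmp/1gb.bin /mnt/cephfs/

Rsync also adds its own bookkeeping overhead that cp does not have, which may explain part of the gap.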

Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-18 Thread Gencer W. Genç
>> Are you sure? Your config didn't show this.

Yes. I have a dedicated 10GbE network between the Ceph nodes. Each Ceph node has a separate network with a 10GbE network card. Do I have to set anything in the config for 10GbE?

>> What kind of devices are they? Did you do the journal test?

Th
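On the 10GbE question: Ceph has no 10GbE-specific option; it simply uses whatever networks are declared in ceph.conf. A hedged sketch (the subnets are placeholders):

    [global]
    # client <-> daemon traffic
    public network = 10.0.0.0/24
    # OSD replication/recovery traffic, if a second NIC is available
    cluster network = 10.0.1.0/24

Raw link bandwidth can be verified independently of Ceph with iperf3:

    iperf3 -s              # on one node
    iperf3 -c <node-ip>    # on another node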

Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-18 Thread Gencer W. Genç
than filestore though if you use a large block size.

At the moment it looks good, but can you explain a bit more about block size? (Or a reference page would also work.)

Gencer.

-----Original Message-----
From: Peter Maloney [mailto:peter.malo...@brockmann-consult.de]
Sent: Tuesday, July 18, 2017 5
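On block size: the point is that the same amount of data written in larger chunks needs far fewer operations. A hedged illustration (the path is a placeholder):

    # 1 GiB written two ways; the 4k run issues 1024x as many write ops
    dd if=/dev/zero of=/mnt/cephfs/bs-test bs=4M count=256    oflag=direct
    dd if=/dev/zero of=/mnt/cephfs/bs-test bs=4k count=262144 oflag=direct

With per-op overhead (network round trips, journaling, allocation), the large-block run should land much closer to the hardware's sequential throughput.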

Re: [ceph-users] pgs not deep-scrubbed for 86400

2017-07-19 Thread Gencer W. Genç
I have exactly this issue (or not?) at the moment. Mine says "906 pgs not scrubbed for 86400", but the count is decrementing slowly (very slowly). I cannot find any documentation for the exact phrase "pgs not scrubbed for" on the web, only this thread. The log looks like this: 2017-07-19 15:05:10.125041 [INF] 3.5
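86400 seconds is one day, which matches the default osd scrub min interval, so the warning is presumably counting PGs whose last scrub is older than that; the count falls as the scrub scheduler works through the backlog. A hedged sketch for inspecting and nudging scrub state (pg 3.5 taken from the log line above):

    # per-PG last_scrub_stamp / last_deep_scrub_stamp timestamps
    ceph pg dump --format json-pretty | less

    # manually kick a scrub (or deep scrub) on a specific PG
    ceph pg scrub 3.5
    ceph pg deep-scrub 3.5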