Interesting. I have all the inodes in cache on my nodes, so I expect the 
bottleneck to be filesystem metadata -> journal writes. Unless something else 
is going on here ;-)
Jan

> On 10 Nov 2015, at 13:19, Nick Fisk <n...@fisk.me.uk> wrote:
> 
> I’m looking at iostat and most of the IO is read, so I think it would still 
> take a while if it were still single-threaded.
>  
> Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sda               0.00     0.50    0.00    5.50     0.00    22.25     8.09     0.00    0.00    0.00    0.00   0.00   0.00
> sdb               0.00     0.50    0.00    5.50     0.00    22.25     8.09     0.00    0.00    0.00    0.00   0.00   0.00
> sdc               0.00   356.00  498.50    3.00  1994.00  1436.00    13.68     1.24    2.48    2.38   18.00   1.94  97.20
> sdd               0.50     0.00  324.50    0.00  1484.00     0.00     9.15     0.97    2.98    2.98    0.00   2.98  96.80
> sde               0.00     0.00  300.50    0.00  1588.00     0.00    10.57     0.98    3.25    3.25    0.00   3.25  97.80
> sdf               0.00    13.00  197.00   95.50  1086.00  1200.00    15.63   121.41  685.70    4.98 2089.91   3.42 100.00
> md1               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
> md0               0.00     0.00    0.00    5.50     0.00    22.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00
> sdg               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
> sdm               0.00     0.00  262.00    0.00  1430.00     0.00    10.92     0.99    3.78    3.78    0.00   3.76  98.60
> sdi               0.00   113.00  141.00  337.00   764.00  3340.00    17.17    98.93  191.24    3.65  269.73   2.06  98.40
> sdk               1.00    42.50  378.50   74.50  2004.00   692.00    11.90   145.21  278.94    2.68 1682.44   2.21 100.00
> sdn               0.00     0.00  250.50    0.00  1346.00     0.00    10.75     0.97    3.90    3.90    0.00   3.88  97.20
> sdj               0.00    67.50   94.00  287.50   466.00  2952.00    17.92   144.55  589.07    5.43  779.90   2.62 100.00
> sdh               0.00    85.50  158.00  176.00   852.00  2120.00    17.80   144.49  500.04    5.05  944.40   2.99 100.00
> sdl               0.00     0.00  173.00    9.50   956.00   300.00    13.76     2.85   15.64    5.73  196.00   5.41  98.80
>  
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jan Schermer
> Sent: 10 November 2015 12:07
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Chown in Parallel
>  
> I would just disable barriers and re-enable them afterwards (+ sync); it 
> should be a breeze then.
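>  
> Something along these lines should do it, assuming XFS-backed OSDs (ext4 
> would take barrier=0 / barrier=1 instead) and a hypothetical mount point of 
> /var/lib/ceph/osd/ceph-0:
>  
> # drop write barriers before the chown run
> mount -o remount,nobarrier /var/lib/ceph/osd/ceph-0
> # ... run the chown ...
> # flush everything to disk and restore barriers afterwards
> sync
> mount -o remount,barrier /var/lib/ceph/osd/ceph-0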
>  
> Jan
>  
> On 10 Nov 2015, at 12:58, Nick Fisk <n...@fisk.me.uk> wrote:
>  
> I’m currently upgrading to Infernalis and the chown stage is taking a long 
> time on my OSD nodes. I’ve come up with this little one-liner to run the 
> chowns in parallel:
>  
> find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -print | xargs -P12 -n1 chown -R ceph:ceph
>  
> NOTE: You still need to make sure the other directories in the /var/lib/ceph 
> folder are updated separately, but this should speed up the process for 
> machines with a larger number of disks.
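>  
> A rough sketch for those remaining directories, assuming the usual layout 
> (mon, bootstrap-*, tmp and so on directly under /var/lib/ceph; the exact 
> names will vary per node):
>  
> find /var/lib/ceph -maxdepth 1 -mindepth 1 ! -name osd -print | xargs -n1 chown -R ceph:ceph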
>  
> Nick
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
