Hi Nathan, thanks for your quick reply!
The command 'ceph status' outputs warnings, including about ten clients failing to
respond to cache pressure.
In addition, on the MDS node, 'iostat -x 1' shows the MDS drive I/O usage over five
seconds as follows:
Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00 6400.00    0.00 49992.00     0.00    15.62     0.96    0.15    0.15    0.00   0.08  51.60

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.80    0.00    1.09    0.65    0.00   94.47

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00 2098.00    0.00 16372.00     0.00    15.61     0.28    0.14    0.14    0.00   0.11  23.60

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.93    0.00    1.28    1.40    0.00   93.39

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00 4488.00    0.00 35056.00     0.00    15.62     0.60    0.13    0.13    0.00   0.10  42.80

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.96    0.00    0.86    1.15    0.00   94.03

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00 3666.00    6.00 28768.00    28.00    15.68     0.50    0.14    0.14    0.67   0.10  35.60

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.72    0.00    0.27    0.04    0.00   96.97

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00   14.00   14.00   108.00    80.00    13.43     0.01    0.29    0.29    0.29   0.29   0.80
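For reference, a small sketch (a hypothetical helper, not part of any Ceph tooling) that averages the `r/s` and `rkB/s` columns over the `sda` lines of the `iostat -x` samples above:

```python
# Hypothetical helper: average read IOPS (r/s) and read throughput (rkB/s)
# across the "sda" device lines of the iostat -x samples shown above.

SAMPLES = """\
sda 0.00 0.00 6400.00 0.00 49992.00 0.00 15.62 0.96 0.15 0.15 0.00 0.08 51.60
sda 0.00 0.00 2098.00 0.00 16372.00 0.00 15.61 0.28 0.14 0.14 0.00 0.11 23.60
sda 0.00 0.00 4488.00 0.00 35056.00 0.00 15.62 0.60 0.13 0.13 0.00 0.10 42.80
sda 0.00 0.00 3666.00 6.00 28768.00 28.00 15.68 0.50 0.14 0.14 0.67 0.10 35.60
sda 0.00 0.00 14.00 14.00 108.00 80.00 13.43 0.01 0.29 0.29 0.29 0.29 0.80
"""

def avg_read_stats(text: str):
    """Return (mean r/s, mean rkB/s) over all device lines."""
    rps, rkbs = [], []
    for line in text.strip().splitlines():
        fields = line.split()
        # iostat -x column order: device rrqm/s wrqm/s r/s w/s rkB/s ...
        rps.append(float(fields[3]))
        rkbs.append(float(fields[5]))
    return sum(rps) / len(rps), sum(rkbs) / len(rkbs)

mean_rps, mean_rkb = avg_read_stats(SAMPLES)
print(f"mean r/s = {mean_rps:.1f}, mean rkB/s = {mean_rkb:.1f}")
# → mean r/s = 3333.2, mean rkB/s = 26059.2
```

The read bursts average roughly 8 KB per request (rkB/s divided by r/s), which would be consistent with many small metadata reads rather than bulk data transfer, though that is only an inference from these five samples.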


Any clues?


Brs


renjianxinlover
renjianxinlo...@163.com
On 12/27/2019 00:04, Nathan Fish <lordci...@gmail.com> wrote:
I would start by viewing "ceph status", drive IO with: "iostat -x 1 
/dev/sd{a..z}" and the CPU/RAM usage of the active MDS. If "ceph status" warns 
that the MDS cache is oversized, that may be an easy fix.
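The cache check Nathan mentions could be sketched roughly as follows. This assumes a Mimic-or-later cluster with the centralized `ceph config` database; `mds.a` is a placeholder daemon name, and the 4 GiB value is only an example, not a recommendation:

```shell
# Show the configured MDS cache memory limit (default is 1 GiB on recent releases).
ceph config get mds mds_cache_memory_limit

# Inspect live cache usage on the active MDS (run on the MDS host,
# substituting the real daemon name for mds.a).
ceph daemon mds.a cache status

# If the cache is consistently over its limit, raising it may help,
# e.g. to 4 GiB (example value only; size it to the host's available RAM):
ceph config set mds mds_cache_memory_limit 4294967296
```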



On Thu, Dec 26, 2019 at 7:33 AM renjianxinlover <renjianxinlo...@163.com> wrote:

Hello,
       Recently, after deleting some FS data in a small-scale Ceph cluster,
some clients' I/O performance became bad, especially latency. For example, opening
a tiny text file with vim can take nearly twenty seconds. I am not clear
about how to diagnose the cause; could anyone give some guidance?


Brs
renjianxinlover
renjianxinlo...@163.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
