Quoting renjianxinlover (renjianxinlo...@163.com):
> Hi, Stefan,
> Could you please provide further guidance?
https://docs.ceph.com/docs/master/cephfs/troubleshooting/#slow-requests-mds
Do a "dump ops in flight" to see what's going on on the MDS.
Hi, Stefan,
Could you please provide further guidance?
Brs
renjianxinlover
renjianxinlo...@163.com
On 12/28/2019 21:44, renjianxinlover wrote:
Sorry, what I said before was fuzzy.
Currently, my MDS is running on a node shared with certain OSDs, where an SSD drive
serves as the cache device.
On 12/28/2019 15:49, Stefan Kooman wrote:
Quoting renjianxinlover (renjianxinlo...@163.com):
> Hi, Nathan, thanks for your quick reply!
> Command 'ceph status' outputs a warning including about ten clients failing to
> respond to cache pressure;
> in addition, on the MDS node, 'iostat -x 1' shows the drive IO usage of the MDS
> within five seconds as follows:
> Device: rrqm/s wrqm/s r/s w/s rkB/s ...
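As an illustrative aside on the "failing to respond to cache pressure" warning,
a couple of hedged starting points ("mds.foo" is again just a placeholder):

  ceph health detail               # lists which client sessions trigger the warning
  ceph daemon mds.foo session ls   # per-client sessions, including how many caps each holds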
I would start by viewing "ceph status", drive IO with: "iostat -x 1
/dev/sd{a..z}" and the CPU/RAM usage of the active MDS. If "ceph status"
warns that the MDS cache is oversized, that may be an easy fix.
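If the cache does turn out to be oversized, a minimal sketch of checking and
adjusting the limit (the 8 GiB value is only an example; "mds.foo" is a
placeholder for the active MDS name):

  ceph daemon mds.foo cache status                        # current cache usage
  ceph daemon mds.foo config get mds_cache_memory_limit   # configured limit
  ceph config set mds mds_cache_memory_limit 8589934592   # e.g. raise the limit to 8 GiB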
On Thu, Dec 26, 2019 at 7:33 AM renjianxinlover wrote:
> hello,
> recently, after de