Re: [ceph-users] cephfs kernel client io performance decreases extremely

2019-12-31 Thread ste...@bit.nl
Quoting renjianxinlover (renjianxinlo...@163.com): > hi, Stefan > could you please provide further guidance? https://docs.ceph.com/docs/master/cephfs/troubleshooting/#slow-requests-mds Do a "dump ops in flight" to see what's going on on the MDS. https://docs.ceph.com/docs/master/cephfs/trou
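
For reference, a minimal sketch of pulling those in-flight ops from the MDS admin socket; the daemon name mds.a is a placeholder, substitute the active MDS shown by 'ceph fs status':

    ceph daemon mds.a dump_ops_in_flight    # ops the MDS is currently processing
    ceph daemon mds.a dump_historic_ops     # recently completed (slow) ops

Both commands talk to the local admin socket, so they have to be run on the node hosting the active MDS.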

Re: [ceph-users] cephfs kernel client io performance decreases extremely

2019-12-29 Thread renjianxinlover
hi, Stefan, could you please provide further guidance? Brs | | renjianxinlover | | renjianxinlo...@163.com | Signature customized by NetEase Mail Master On 12/28/2019 21:44, renjianxinlover wrote: Sorry, what I said was fuzzy before. Currently, my MDS is running with certain OSDs on the same node, in which an SSD drive serves as ca

Re: [ceph-users] cephfs kernel client io performance decreases extremely

2019-12-28 Thread renjianxinlover
Sorry, what I said was fuzzy before. Currently, my MDS is running with certain OSDs on the same node, in which an SSD drive serves as the cache device. | | renjianxinlover | | renjianxinlo...@163.com | Signature customized by NetEase Mail Master On 12/28/2019 15:49, Stefan Kooman wrote: Quoting renjianxinlover (renjianxinlo...@163.com): HI
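
A quick sketch of how that colocation could be confirmed from the cluster itself; output layout varies by release, and osd.0 is only a placeholder:

    ceph fs status           # which host runs the active MDS
    ceph osd tree            # OSD-to-host mapping
    ceph osd metadata 0      # backing devices of a given OSD, e.g. the SSD used for cache

This only verifies daemon and device placement; it does not change anything.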

Re: [ceph-users] cephfs kernel client io performance decreases extremely

2019-12-27 Thread Stefan Kooman
Quoting renjianxinlover (renjianxinlo...@163.com): > HI, Nathan, thanks for your quick reply! > command 'ceph status' outputs a warning including about ten clients failing to > respond to cache pressure; > in addition, on the MDS node, 'iostat -x 1' shows drive IO usage of the MDS within > five seconds as f
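
A hedged sketch of digging into that cache-pressure warning on the MDS side; mds.a is a placeholder for the active daemon:

    ceph health detail               # names the clients failing to respond to cache pressure
    ceph daemon mds.a session ls     # per-client session and cap counts
    ceph daemon mds.a cache status   # current MDS cache usage

Clients holding a large number of caps without releasing them under pressure are the usual suspects.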

Re: [ceph-users] cephfs kernel client io performance decreases extremely

2019-12-27 Thread renjianxinlover
HI, Nathan, thanks for your quick reply! command 'ceph status' outputs a warning including about ten clients failing to respond to cache pressure; in addition, on the MDS node, 'iostat -x 1' shows drive IO usage of the MDS within five seconds as follows, Device: rrqm/s wrqm/s r/s w/s rkB
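
For completeness, the drive-level view referenced above can be reproduced with something like the following; the device names are placeholders:

    iostat -x 1 /dev/sda /dev/sdb    # extended per-device stats, refreshed every second

The %util and await columns for the SSD serving as cache are the ones worth watching here.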

Re: [ceph-users] cephfs kernel client io performance decreases extremely

2019-12-26 Thread Nathan Fish
I would start by viewing "ceph status", drive IO with "iostat -x 1 /dev/sd{a..z}", and the CPU/RAM usage of the active MDS. If "ceph status" warns that the MDS cache is oversized, that may be an easy fix. On Thu, Dec 26, 2019 at 7:33 AM renjianxinlover wrote: > hello, > recently, after de
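
If the MDS cache does turn out to be oversized, a commonly suggested adjustment is raising mds_cache_memory_limit; a minimal sketch, assuming the active daemon is mds.a and treating the 4 GiB value as an example to be tuned to the node's RAM:

    ceph daemon mds.a config get mds_cache_memory_limit     # current limit
    ceph config set mds mds_cache_memory_limit 4294967296   # raise to 4 GiB (example value)

The second form needs a Mimic-or-newer cluster with the central config store; on older releases the same option can be set in ceph.conf instead.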