---------- Forwarded message ---------
From: opengers
Date: Tue, Jun 22, 2021 at 11:12 AM
Subject: Re: [ceph-users] In "ceph health detail", what's the diff between
MDS_SLOW_METADATA_IO and MDS_SLOW_REQUEST?
To: Patrick Donnelly
Thanks for the answer; I still have some confusion.
$ ceph health detail
HEALTH_WARN 1 MDSs report slow metadata IOs; 1 MDSs report slow requests
MDS_SLOW_METADATA_IO 1 MDSs report slow metadata IOs
    mds.fs-01(mds.0): 3 slow metadata IOs are blocked > 30 secs, oldest blocked for 51123 secs
MDS_SLOW_REQUEST 1 MDSs report slow requests
ceph: 14.2.x
kernel: 4.15
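For reference, the two warnings come from different places: MDS_SLOW_METADATA_IO counts the MDS's own OSD operations (journal/metadata writes) that are blocked, while MDS_SLOW_REQUEST counts client requests outstanding in the MDS. A minimal sketch of pulling these checks out of the machine-readable health output (the sample JSON below only mimics the shape of `ceph health detail -f json`; it is not from a live cluster):

```python
import json

# Illustrative sample mimicking the shape of `ceph health detail -f json`
# output (the real output carries more fields); the messages are taken
# from the health summary above, not from a live cluster.
sample = json.loads("""
{
  "status": "HEALTH_WARN",
  "checks": {
    "MDS_SLOW_METADATA_IO": {
      "severity": "HEALTH_WARN",
      "summary": {"message": "1 MDSs report slow metadata IOs"}
    },
    "MDS_SLOW_REQUEST": {
      "severity": "HEALTH_WARN",
      "summary": {"message": "1 MDSs report slow requests"}
    }
  }
}
""")

def mds_warnings(health):
    """Return the MDS-related health check names and their summary messages."""
    return {name: check["summary"]["message"]
            for name, check in health["checks"].items()
            if name.startswith("MDS_")}

for name, msg in sorted(mds_warnings(sample).items()):
    print(f"{name}: {msg}")
```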
In CephFS, because of cache consistency, when one client is doing buffered IO
on a file, another client reading or writing the same file will hang.
It seems that lazyio can solve this problem: lazyio allows multiple clients to
do buffered IO on the same file at the same time (relaxing the usual
consistency guarantees).
In other words, I want to figure out when the measurement of "total_time"
starts and when it ends.
opengers wrote on Thu, Dec 24, 2020 at 11:14 AM:
> Hello everyone, I enabled the rgw ops log by setting "rgw_enable_ops_log =
> true". There is a "total_time" field in the rgw ops log,
> but I want to figure out whether "total_time" includes the period of time
> when rgw returns the response to the client?
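A small sketch of how one might mine "total_time" out of the ops log once it is enabled. The records below are fabricated for the example (only the field names "operation", "http_status", "total_time" follow the JSON ops-log format), and the unit of "total_time" varies by Ceph release (commonly milliseconds), so check your version:

```python
import json
from statistics import mean

# Fabricated rgw ops-log lines; field names follow the JSON ops-log
# format, values are made up for illustration.
log_lines = [
    '{"operation": "put_obj", "http_status": "200", "total_time": 42}',
    '{"operation": "put_obj", "http_status": "200", "total_time": 58}',
    '{"operation": "get_obj", "http_status": "200", "total_time": 7}',
]

def mean_time_by_op(lines):
    """Group parsed ops-log records by operation and report mean total_time."""
    by_op = {}
    for line in lines:
        rec = json.loads(line)
        by_op.setdefault(rec["operation"], []).append(rec["total_time"])
    return {op: mean(times) for op, times in by_op.items()}

print(mean_time_by_op(log_lines))  # {'put_obj': 50, 'get_obj': 7}
```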
Thanks for your explanation; it is useful to me.
Aleksey Gutikov wrote on Tue, Dec 3, 2019 at 9:14 PM:
>
> > According to my understanding, an OSD's heartbeat partners only come
> > from those OSDs that serve the same PGs.
> Hello,
>
> That was my initial assumption too.
> But according to my experience set of hear…
Hello:
According to my understanding, an OSD's heartbeat partners only come from
those OSDs that serve the same PGs.
See below (# ceph osd tree): osd.10 and osd.0-6 cannot serve the same PG,
because osd.10 and osd.0-6 are under different root trees, and PGs in my
cluster do not map across root trees (# cep
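The assumption above (heartbeat partners drawn only from OSDs sharing a PG) can be sketched as a small simulation. The PG-to-OSD mapping below is made up to mirror the osd tree described: osd.0-6 under one root, osd.10 under another. Note that real Ceph peer selection also adds extra candidates beyond PG peers (which is what the reply is hinting at), so this is only the "shared-PG" part of the picture:

```python
from collections import defaultdict

# Hypothetical PG -> acting-set mapping for two CRUSH roots that never
# share a PG (osd.0-6 in one root, osd.10 in another).
pg_to_osds = {
    "1.0": [0, 1, 2],
    "1.1": [3, 4, 5],
    "1.2": [1, 5, 6],
    "2.0": [10],   # pool whose CRUSH rule targets the other root
}

def peers_from_shared_pgs(pg_map):
    """Heartbeat-peer candidates if peers were *only* OSDs sharing a PG."""
    peers = defaultdict(set)
    for osds in pg_map.values():
        for osd in osds:
            peers[osd].update(x for x in osds if x != osd)
    return dict(peers)

peers = peers_from_shared_pgs(pg_to_osds)
print(sorted(peers[1]))      # osd.1 shares PGs with osd.0, 2, 5, 6
print(peers.get(10, set()))  # osd.10 shares no PG, so no peers this way
```

Under this model osd.10 would end up with no heartbeat peers at all, which is why Ceph also picks peers outside the shared-PG set.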