Sure, will try with "ceph osd crush reweight 0.0" and update the status.
Thanks
Swami
On Fri, Dec 2, 2016 at 8:15 PM, David Turner wrote:
> If you want to reweight only once when you have a failed disk that data is
> being balanced off of, set the crush weight for that OSD to 0.0. Then when you
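A minimal sketch of the command being discussed; "osd.12" is a hypothetical
placeholder id, not one from this thread:

    # Zero the CRUSH weight so data is balanced off the failed OSD once;
    # removing the OSD from the CRUSH map later moves no further data:
    ceph osd crush reweight osd.12 0.0

Note this is distinct from "ceph osd reweight", which only sets the temporary
0-1 override weight and would trigger a second rebalance when the OSD is
finally removed.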
Hi Sage,
I think we could refactor the I/O priority strategy at the same time,
based on the considerations below?
2016-12-03 17:21 GMT+08:00 Ning Yao:
> Hi, all
>
> Currently, we can modify osd_client_op_priority to assign different
> priorities to different clients' ops; for example, we can assign high
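For context, a sketch of the existing knobs; the values shown are, as far
as I know, the current defaults rather than anything proposed here:

    # Inject at runtime on all OSDs:
    ceph tell osd.* injectargs '--osd_client_op_priority 63'
    ceph tell osd.* injectargs '--osd_recovery_op_priority 3'

Higher values receive a larger share of the OSD op queue, which is why
client ops currently outrank recovery ops.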
Thank you Abhishek, I will take a look at Realm soon. BTW, what's your
take on the multi-tenancy solution combined with nginx rules?
AFAIK, Ceph's multi-tenancy feature seems like a replacement for manually
adding a prefix to user/bucket names. It only avoids name conflicts across
different tenants, but l
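For anyone following along, a sketch of what the feature provides today;
the tenant and user names are hypothetical:

    # Two users with the same uid under different tenants do not collide,
    # and neither do their buckets:
    radosgw-admin user create --tenant alpha --uid tester --display-name "Tester A"
    radosgw-admin user create --tenant beta --uid tester --display-name "Tester B"

    # A tenant's bucket is then addressed as "tenant:bucket" in S3
    # requests, e.g. alpha:mybucket.

This is the implicit namespacing that manual name prefixes emulate.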
Hi, all:
I have run two small tests on our CephFS cluster:
time for i in {1..1}; do echo hello > file${i}; done && time rm * &&
time for i in {1..1}; do echo hello > file${i}; done && time rm *
Client A: uses the kernel client
Client B: uses the FUSE client
First I c
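In case anyone wants to reproduce this, a sketch of how the two clients
are typically mounted; the monitor address and mount point are
hypothetical, not taken from this thread:

    # Client A, kernel client:
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secret=...

    # Client B, FUSE client:
    ceph-fuse -m 10.0.0.1:6789 /mnt/cephfs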