Re: [ceph-users] node and its OSDs down...

2016-12-03 Thread M Ranga Swami Reddy
Sure, will try with "ceph osd crush reweight 0.0" and update the status. Thanks, Swami. On Fri, Dec 2, 2016 at 8:15 PM, David Turner wrote: > If you want to reweight only once when you have a failed disk that is > being balanced off of, set the crush weight for that osd to 0.0. Then when > you
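The advice in the thread can be sketched as the following commands. This is a hedged ops fragment, not a definitive procedure: the OSD id (`osd.12`) is a placeholder, and the commands require admin access to a running cluster, so they are shown for illustration only.

```shell
# Failed disk on osd.12: set its CRUSH weight to 0.0 so data is rebalanced
# off it exactly once. (Marking it "out" and later removing it from CRUSH
# would instead trigger two separate rebalances.)
ceph osd crush reweight osd.12 0.0

# After the rebalance completes, the OSD can be removed without moving
# data a second time, e.g.:
#   ceph osd crush remove osd.12
```

The distinction is between `ceph osd crush reweight` (changes the CRUSH weight, which drives placement) and `ceph osd reweight` (an override factor, which is what `ceph osd out` manipulates); the thread recommends the former for a disk that will be removed.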

Re: [ceph-users] Ceph QoS user stories

2016-12-03 Thread Ning Yao
Hi Sage, I think we can refactor the I/O priority strategy at the same time, based on the considerations below. 2016-12-03 17:21 GMT+08:00 Ning Yao : > Hi, all > > Currently, we can modify osd_client_op_priority to assign different > clients' ops different priorities; for example, we can assign high
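For context, the option discussed above is set per-OSD in ceph.conf. A minimal config fragment, assuming the Jewel-era defaults that were current at the time of this thread (priorities range from 1 to 63):

```ini
[osd]
# Priority of client I/O relative to recovery I/O in the OSD op queue.
# Higher values mean client ops are dequeued ahead of recovery ops.
osd_client_op_priority = 63
osd_recovery_op_priority = 3
```

As the thread notes, this is a single cluster-wide knob per op class; it cannot assign different priorities to different clients, which is what motivates the refactoring discussion.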

Re: [ceph-users] How to create two isolated rgw services in one ceph cluster?

2016-12-03 Thread piglei
Thank you Abhishek, I will take a look at Realm soon. BTW, what's your take on the multi-tenancy-combined-with-nginx-rules solution? AFAIK, Ceph's multi-tenancy feature seems like a replacement for manually adding a prefix to user/bucket names. It only avoids name conflicts across different tenants, but l
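The multi-tenancy behavior described above can be sketched with `radosgw-admin`. This is an illustrative fragment (the tenant and user names are hypothetical, and it requires a running RGW with an admin keyring), showing how tenants namespace bucket names without otherwise isolating the two services:

```shell
# Create one user in each of two tenants.
radosgw-admin user create --tenant=tenant_a --uid=alice \
    --display-name="Tenant A user"
radosgw-admin user create --tenant=tenant_b --uid=bob \
    --display-name="Tenant B user"

# Each tenant gets its own bucket namespace, so both users can own a
# bucket named "photos" (addressed as tenant_a/photos and tenant_b/photos)
# without a conflict.
```

Realms, by contrast, are part of the multisite configuration (`radosgw-admin realm create`, `zonegroup create`, `zone create`) and separate RGW instances at a deeper level than the shared-namespace-plus-prefix model.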

[ceph-users] Ceph Fuse Strange Behavior Very Strange

2016-12-03 Thread Winger Cheng
Hi, all: I have run two small tests on our CephFS cluster: time for i in {1..1}; do echo hello > file${i}; done && time rm * && time for i in {1..1}; do echo hello > file${i}; done && time rm * Client A: uses the kernel client. Client B: uses the fuse client. First I c
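The one-liner above can be run as a self-contained script. A minimal sketch, run in a scratch directory rather than a CephFS mount; note that the archive truncated the real loop bound ("{1..1}"), so the 1000 used here is a placeholder, not the original value:

```shell
#!/usr/bin/env bash
# Small-file create/delete timing loop, as in the thread.
# On CephFS this would be run once from a kernel-client mount and once
# from a ceph-fuse mount to compare metadata performance.
dir=$(mktemp -d)
cd "$dir" || exit 1

time for i in {1..1000}; do echo hello > "file${i}"; done
time rm -- file*

cd / && rmdir "$dir"
```

Running it from each client's mount point and comparing the `real` times is what the test in the thread measures: per-file metadata operation latency, where the fuse client typically pays an extra user-space round trip per operation.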