Hi ceph users:
I want to create a customized CRUSH rule for my EC pool (with replica size =
11) to distribute the replicas across 6 different racks.
I used the following rule at first:
step take default                  // root
step choose firstn 6 type rack     // 6 racks; I have exactly 6 racks
step chooseleaf i
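(A complete rule along these lines might look roughly like the sketch below.
The ruleset number, min/max size, the indep mode and the choice of 2 hosts per
rack are assumptions for illustration only; EC rules normally use indep rather
than firstn, and 6 racks x 2 hosts gives 12 candidate OSDs for the 11 chunks.)

    rule ec_6racks {
        ruleset 1
        type erasure
        min_size 3
        max_size 12
        step take default                  # root
        step choose indep 6 type rack      # one branch per rack, 6 racks total
        step chooseleaf indep 2 type host  # 2 hosts per rack -> up to 12 OSDs
        step emit
    }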
> will not
> just fix the bad mappings you're seeing, it will also change the mappings
> that succeeded with a lower value. Once you've set this parameter, it cannot
> be modified.
>
> Would you mind sharing the erasure code profile you plan to work with ?
>
> C
:02 PM, "Loic Dachary" wrote:
>
>
>On 09/09/2014 14:21, Lei Dong wrote:
>> Thanks Loic!
>>
>> Actually I've found that increasing choose_local_fallback_tries can
>> help (chooseleaf_tries does not help as much), but I'm afraid when
>> os
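For reference, the retry limits being discussed can also be raised inside the
rule itself with the set_choose_tries / set_chooseleaf_tries steps, and the
profile can be printed with ceph osd erasure-code-profile get. The profile
name and the numbers below are placeholders, not values from this thread:

    # print the erasure code profile in use (replace "myprofile")
    ceph osd erasure-code-profile get myprofile

    # and in the crush rule, raise the retry limits before the take step:
    step set_choose_tries 100
    step set_chooseleaf_tries 5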
According to my understanding, the weight of a host is the sum of all OSD
weights on that host. So if you reweight any OSD on the host, the host's
weight is updated accordingly.
Thanks
LeiDong
On 10/20/14, 7:11 AM, "Erik Logtenberg" wrote:
>Hi,
>
>Simple question: how do I reweight a host in crush? Three of my hosts
>have weight "1", which is different from the weight of their
>associated osds.
>
>If the weight of the host -should- be the sum of all osd weights on that
>host, then my question becomes: how do I make that so for the three
>hosts where this is currently not the case?
>
>Thanks,
>Erik
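For example (the osd id and weight here are only placeholders):

    # set the crush weight of one osd; the host bucket's weight is the sum
    # of its osds' weights, so it is updated automatically
    ceph osd crush reweight osd.12 1.0

    # verify the new host weight
    ceph osd tree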
I think you should send the data (uid & display-name) as arguments. I
successfully created a user via the admin ops API without any problems.
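Roughly, the create-user call looks like the request below (the host, uid and
display name are placeholders, and the request still has to be signed with an
admin user's S3-style keys):

    PUT /admin/user?format=json&uid=johndoe&display-name=John%20Doe HTTP/1.1
    Host: rgw.example.com
    Authorization: AWS {access-key}:{signature}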
Thanks,
Leidong
On Monday, November 17, 2014 2:39 PM, Wido den Hollander
wrote:
On 16-11-14 07:05, Yehuda Sadeh wrote:
> On Sat, Nov 15, 2014 at 6:20 AM, W
I tried this on firefly, not dumpling. Sorry for the confusion.
Thanks,
Leidong
On Monday, November 17, 2014 2:52 PM, Wido den Hollander
wrote:
On 17-11-14 07:44, Lei Dong wrote:
> I think you should send the data (uid & display-name) as arguments. I
> successfully creat
We've encountered this problem a lot. As far as I know, the best practice is
to make the distribution of PGs across OSDs as even as you can after you
create the pool and before you write any data.
1. The disk utilization = (PGs per OSD) * (files per PG). Ceph is good at
making (files per PG
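To see how even the distribution actually is, you can count PGs per OSD with
something like the snippet below (the field position in the pgs_brief output
differs between releases, so treat it as a starting point rather than an
exact recipe):

    # count how many PGs map to each osd, based on the "up" set (3rd column)
    ceph pg dump pgs_brief 2>/dev/null | grep '^[0-9]' | \
      awk '{gsub(/[][]/, "", $3); n = split($3, osds, ",");
            for (i = 1; i <= n; i++) count[osds[i]]++}
           END {for (o in count) printf "osd.%s: %d PGs\n", o, count[o]}'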