From: c...@mknet.nl [mailto:c...@mknet.nl]
Sent: Tuesday, July 21, 2020 11:49 AM
To: ceph-users@ceph.io
Cc: Dominic Hilsbos
Subject: RE: [ceph-users] Re: osd out vs crush reweight]
Hi Dominic,
I must say that I inherited this cluster and did not develop the crush
rule used. The rule reads:
"r
> ould need to weigh in.
>
> I am somewhat curious though; you define racks, and even rooms in your
> tree, but your failure domain is set to host. Is that intentional?
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director - Information Technology
> Perform Air International, Inc.
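(For reference on the failure-domain question above: in a decompiled crushmap, the failure domain is the bucket type named in the rule's chooseleaf step. A minimal replicated rule with a host failure domain might look like the sketch below; the rule name and id are placeholders, not this cluster's actual rule. Changing "type host" to "type rack" would make CRUSH place each replica in a different rack instead of merely a different host.)

    rule replicated_host {                      # hypothetical example, not this cluster's rule
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default                       # start at the root of the CRUSH tree
        step chooseleaf firstn 0 type host      # failure domain: one replica per host
        step emit
    }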
From: c...@mknet.nl [mailto:c...@mknet.nl]
Sent: Tuesday, July 21, 2020 10:14 AM
To: ceph-users@ceph.io
Cc: Dominic Hilsbos
Subject: Re: [ceph-users] Re: osd out vs crush reweight]
Dominic,
The crush rule dump and tree are attached (hope that works). All pools use
crush_rule 1.
Marcel
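(For anyone reading along: the two attachments can be reproduced with commands along these lines; the redirect filenames are just placeholders.)

    ceph osd crush rule dump > crush_rule_dump.json   # all CRUSH rules, as JSON
    ceph osd tree            > osd_tree.txt           # bucket hierarchy with weights and reweights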
> Marcel;
>
> Sorry, could also send
> Perform Air International, Inc.
> dhils...@performair.com
> www.PerformAir.com
>
>
>
> -----Original Message-----
> From: dhils...@performair.com [mailto:dhils...@performair.com]
> Sent: Tuesday, July 21, 2020 9:41 AM
> To: c...@mknet.nl; ceph-users@ceph.io
> Subject: [ceph-users] Re: osd out vs crush reweight]
From: dhils...@performair.com [mailto:dhils...@performair.com]
Sent: Tuesday, July 21, 2020 9:41 AM
To: c...@mknet.nl; ceph-users@ceph.io
Subject: [ceph-users] Re: osd out vs crush reweight]
Marcel;
Thank you for the information.
Could you send the output of:
ceph osd crush rule dump
Thank you,
Dominic L. Hilsbos, MBA
Director - Information Technology
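(For readers following the thread: the dump output is a JSON array with one object per rule, and the failure domain appears as the "type" of the chooseleaf step. An illustrative entry, with placeholder names and ids, might look like this:)

    [
        {
            "rule_id": 1,
            "rule_name": "replicated_host",
            "ruleset": 1,
            "type": 1,
            "min_size": 1,
            "max_size": 10,
            "steps": [
                { "op": "take", "item": -1, "item_name": "default" },
                { "op": "chooseleaf_firstn", "num": 0, "type": "host" },
                { "op": "emit" }
            ]
        }
    ]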
From: c...@mknet.nl [mailto:c...@mknet.nl]
Sent: Tuesday, July 21, 2020 9:38 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: osd out vs crush reweight]
Hi Dominic,
This cluster is running 14.2.8 (Nautilus).
There are 172 OSDs divided over 19 nodes.
There are currently 10 pools.
All pools have 3 replicas of the data.
There are 3968 PGs (the cluster is not yet fully in use; the number of
PGs is expected to grow).
Marcel
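(A quick sanity check on those numbers, as a worked example:

    3968 PGs x 3 replicas = 11904 PG instances
    11904 / 172 OSDs     ~= 69 PG instances per OSD on average

which still leaves headroom below the commonly cited target of roughly 100 PGs per OSD.)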
Marcel;
Short answer: yes, it might be expected behavior.
PG placement is highly dependent on the cluster layout and CRUSH rules. So...
Some clarifying questions:
What version of Ceph are you running?
How many nodes do you have?
How many pools do you have, and what are their failure domains?
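(To tie this back to the subject line: the two operations being compared change different weights, so it is not surprising that they trigger different data movement. A sketch, with a placeholder OSD id:)

    # Marks the OSD out: its override reweight effectively drops to 0,
    # while its CRUSH weight (and the host bucket weight) stays the same.
    ceph osd out osd.12

    # Sets the OSD's CRUSH weight to 0: the weights of the buckets above it
    # (host, rack, room) shrink as well, which can shift other PGs too.
    ceph osd crush reweight osd.12 0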