[ceph-users] Re: osd out vs crush reweight]

2020-07-21 Thread DHilsbos
lto:c...@mknet.nl] Sent: Tuesday, July 21, 2020 11:49 AM To: ceph-users@ceph.io Cc: Dominic Hilsbos Subject: RE: [ceph-users] Re: osd out vs crush reweight] Hi Dominic, I must say that I inherited this cluster and did not develop the crush rule used. The rule reads: "r

[ceph-users] Re: osd out vs crush reweight]

2020-07-21 Thread Marcel Kuiper
ould need to weigh in. > > I am somewhat curious though; you define racks, and even rooms in your > tree, but your failure domain is set to host. Is that intentional? > > Thank you, > > Dominic L. Hilsbos, MBA > Director - Information Technology > Perform Air Inte
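For context on the failure-domain question above: in a CRUSH rule the failure domain is set by the chooseleaf step, so a rule that spreads replicas across hosts differs from one that spreads them across racks only in that one line. A minimal sketch with illustrative rule names and ids, not the cluster's actual rule:

    # failure domain = host: each replica on a different host,
    # possibly within the same rack
    rule replicated_host {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }

    # failure domain = rack: each replica in a different rack
    rule replicated_rack {
        id 2
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type rack
        step emit
    }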

[ceph-users] Re: osd out vs crush reweight]

2020-07-21 Thread DHilsbos
et.nl] Sent: Tuesday, July 21, 2020 10:14 AM To: ceph-users@ceph.io Cc: Dominic Hilsbos Subject: Re: [ceph-users] Re: osd out vs crush reweight] Dominic, The crush rule dump and tree are attached (hope that works). All pools use crush_rule 1. Marcel > Marcel; > > Sorry, could you also send
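Which rule a given pool uses can also be confirmed per pool with the standard CLI; a minimal sketch, assuming a hypothetical pool named "rbd":

    ceph osd pool get rbd crush_rule
    # prints something along the lines of: crush_rule: replicated_rule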

[ceph-users] Re: osd out vs crush reweight]

2020-07-21 Thread Marcel Kuiper
ir International, Inc. > dhils...@performair.com > www.PerformAir.com > > > > -Original Message- > From: dhils...@performair.com [mailto:dhils...@performair.com] > Sent: Tuesday, July 21, 2020 9:41 AM > To: c...@mknet.nl; ceph-users@ceph.io > Subject: [ceph-users]

[ceph-users] Re: osd out vs crush reweight]

2020-07-21 Thread DHilsbos
...@performair.com] Sent: Tuesday, July 21, 2020 9:41 AM To: c...@mknet.nl; ceph-users@ceph.io Subject: [ceph-users] Re: osd out vs crush reweight] Marcel; Thank you for the information. Could you send the output of: ceph osd crush rule dump Thank you, Dominic L. Hilsbos, MBA Director - Information
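For readers following the thread: "ceph osd crush rule dump" prints all CRUSH rules as JSON. The output below is a generic illustration of its shape on Nautilus, not this cluster's actual rule 1:

    $ ceph osd crush rule dump
    [
        {
            "rule_id": 1,
            "rule_name": "replicated_rule",
            "type": 1,
            "min_size": 1,
            "max_size": 10,
            "steps": [
                { "op": "take", "item": -1, "item_name": "default" },
                { "op": "chooseleaf_firstn", "num": 0, "type": "host" },
                { "op": "emit" }
            ]
        }
    ]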

[ceph-users] Re: osd out vs crush reweight]

2020-07-21 Thread DHilsbos
[mailto:c...@mknet.nl] Sent: Tuesday, July 21, 2020 9:38 AM To: ceph-users@ceph.io Subject: [ceph-users] Re: osd out vs crush reweight] Hi Dominic, This cluster is running 14.2.8 (nautilus) There's 172 osds divided over 19 nodes. There are currently 10 pools. All pools have 3 replica'

[ceph-users] Re: osd out vs crush reweight]

2020-07-21 Thread Marcel Kuiper
Hi Dominic, This cluster is running 14.2.8 (nautilus). There's 172 osds divided over 19 nodes. There are currently 10 pools. All pools have 3 replicas of data. There are 3968 PGs (the cluster is not yet fully in use. The number of PGs is expected to grow) Marcel > Marcel; > > Short answer; yes

[ceph-users] Re: osd out vs crush reweight

2020-07-21 Thread DHilsbos
Marcel; Short answer; yes, it might be expected behavior. PG placement is highly dependent on the cluster layout and CRUSH rules. So... Some clarifying questions. What version of Ceph are you running? How many nodes do you have? How many pools do you have, and what are their failure domains?
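A short sketch of standard commands that answer these clarifying questions on a running cluster (names and counts will of course differ per cluster):

    ceph versions              # Ceph release(s) running on mons, mgrs and OSDs
    ceph osd tree              # hosts, racks/rooms and OSD CRUSH weights
    ceph osd pool ls detail    # per pool: size (replica count), crush_rule, pg_num
    ceph osd crush rule dump   # per rule: the chooseleaf step, i.e. the failure domain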