@Wido Den Hollander
Regarding the amount of PGs, I quote from the docs:
"If you have more than 50 OSDs, we recommend approximately 50-100 placement
groups per OSD to balance out resource usage, data durability and distribution."
(https://docs.ceph.com/docs/master/rados/operations/placement-gro
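As a rough sanity check of that guideline (the numbers below are made up, not
taken from this cluster): PGs per OSD is roughly the sum of pg_num times
replica size over all pools, divided by the OSD count.

ceph osd pool ls detail   # note each pool's "size" (replicas) and "pg_num"
# Example with hypothetical values: a single pool with pg_num 4096 and size 3
# spread over 150 OSDs gives 4096 * 3 / 150 ~ 82 PGs per OSD, i.e. inside the
# 50-100 band the docs recommend.

The PGS column in the listing below is the per-OSD count the cluster actually
ended up with.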
My full OSD list (also here as pastebin https://paste.ubuntu.com/p/XJ4Pjm92B5/ )
ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP   META   AVAIL   %USE  VAR  PGS STATUS
14 hdd   9.09470 1.0      9.1 TiB 6.9 TiB 6.8 TiB 71 KiB 18 GiB 2.2 TiB 75.34 1.04 69  up
19 hdd   9.09470
> How is that possible? I don't know how much more proof I need to present that
> there's a bug.
I also think there's a bug in the balancer plugin, as it seems to have
stopped for me as well. I'm on Luminous, though, so I'm not sure whether it
will be the same bug.
The balancer used to work flawlessly, giving
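A quick way to check whether the module is actually still active and producing
plans (these commands exist on Luminous and later; output details differ per
release):

ceph mgr module ls | grep balancer   # is the balancer module enabled at all?
ceph balancer status                 # active flag, mode, and any queued plans
ceph balancer eval                   # current distribution score (lower is better)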
Hi Anthony!
Mon, 9 Dec 2019 17:11:12 -0800
Anthony D'Atri ==> ceph-users:
> > How is that possible? I don't know how much more proof I need to present
> > that there's a bug.
>
> FWIW, your pastes are hard to read with all the ? in them. Pasting
> non-7-bit-ASCII?
I don't see much "?" in
> How is that possible? I don't know how much more proof I need to present that
> there's a bug.
FWIW, your pastes are hard to read with all the ? in them. Pasting
non-7-bit-ASCII?
> I increased PGs and see no difference.
From what pgp_num to what new value? Numbers that are not a power of 2
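Presumably the point about powers of two is PG splitting. A sketch for
checking the current values (the pool name "data" is hypothetical):

ceph osd pool get data pg_num
ceph osd pool get data pgp_num
# When pg_num is not a power of two, part of the PG space is split one more
# time than the rest, so some PGs cover twice the hash range (and hold roughly
# twice the data) of others, which skews per-OSD usage on its own.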
It's only getting worse after raising PGs now.
Anything between:
96 hdd 9.09470 1.0 9.1 TiB 4.9 TiB 4.9 TiB 97 KiB 13 GiB 4.2 TiB  53.62 0.76 54 up
and
89 hdd 9.09470 1.0 9.1 TiB 8.1 TiB 8.1 TiB 88 KiB 21 GiB 1001 GiB 89.25 1.27 87 up
How is that possible? I don't know how much more proof I need to present that
there's a bug.
@Wido Den Hollander
Still think this is acceptable?
51 hdd 9.09470 1.0 9.1 TiB 6.1 TiB 6.1 TiB 72 KiB  16 GiB 3.0 TiB 67.23 0.98 68 up
52 hdd 9.09470 1.0 9.1 TiB 6.7 TiB 6.7 TiB 3.5 MiB 18 GiB 2.4 TiB 73.99 1.08 75 up
53 hdd 9.09470 1.0 9.1 TiB 8.0 TiB 7.9 T
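One way to quantify the spread shown above (a rough sketch; the %USE column
position may differ between releases):

ceph osd df | tail -2               # TOTAL line plus the MIN/MAX VAR and STDDEV summary
ceph osd df | sort -n -k17 | tail   # rows ordered by %USE, fullest OSDs last
                                    # (column 17 assumes the layout pasted above)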
I never had these issues with Luminous, not once; since Nautilus this is a
constant headache. My issue is that I have OSDs that are over 85% full whilst
others are at 63%, and that every time I rebalance or add new disks Ceph
moves PGs onto near-full OSDs and almost causes pool failures.
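Not a fix for the imbalance itself, but when backfill keeps pushing OSDs
against the nearfull threshold it is worth checking (and, carefully,
temporarily raising) the ratios so a pool does not go read-only mid-move. A
sketch; the values are examples only:

ceph osd dump | grep -i ratio         # current full / backfillfull / nearfull ratios
ceph osd set-nearfull-ratio 0.87      # example temporary value
ceph osd set-backfillfull-ratio 0.91  # example temporary value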
On 12/7/19 3:39 PM, Philippe D'Anjou wrote:
> @Wido Den Hollander
>
> First of all the docs say: "In most cases, this distribution is
> “perfect,” which an equal number of PGs on each OSD (+/-1 PG, since they
> might not divide evenly)."
> Either this is just false information or very badly stated.
@Wido Den Hollander
First of all the docs say: "In most cases, this distribution is “perfect,”
which an equal number of PGs on each OSD (+/-1 PG, since they might not
divide evenly)." Either this is just false information or very badly stated.
I increased PGs and see no difference.
I pointed out
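For reference, raising PG counts on a pool looks roughly like this (the pool
name "data" and the target of 2048 are hypothetical; pick a power of two):

ceph osd pool set data pg_num 2048
ceph osd pool set data pgp_num 2048   # older releases need this explicitly;
                                      # Nautilus ramps pgp_num up on its own
ceph osd pool get data pg_num         # confirm the change is being applied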
On 12/7/19 1:42 PM, Philippe D'Anjou wrote:
> @Wido Den Hollander
>
> That doesn't explain why it's between 76 and 92 PGs; that's far from equal.
The balancer will balance the PGs so that all OSDs have an almost equal
data usage. It doesn't balance so that all OSDs have an equal number of PGs.
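If the PG counts themselves are the concern, the upmap balancer's allowed
deviation can be tightened. A sketch, assuming a Nautilus release where
mgr/balancer/upmap_max_deviation is expressed as a number of PGs (check your
version's default):

ceph config get mgr mgr/balancer/upmap_max_deviation
ceph config set mgr mgr/balancer/upmap_max_deviation 1   # aim for +/-1 PG per OSD
ceph balancer eval                                        # re-check the score afterwards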
@Wido Den Hollander
That doesn't explain why it's between 76 and 92 PGs; that's far from equal.
Raising to 100 PGs per OSD is old advice anyway; anything 60+ should be fine.
That is no excuse for the distribution failure in this case. I am expecting
more or less equal PGs per OSD.
On 12/7/19 11:42 AM, Philippe D'Anjou wrote:
> Hi,
> the docs say the upmap mode is trying to achieve perfect distribution, so
> as to have an equal number of PGs per OSD.
> This is what I got (v14.2.4):
>
> 0 ssd 3.49219 1.0 3.5 TiB 794 GiB 753 GiB 38 GiB 3.4 GiB 2.7 TiB 22.20 0.32 82 up
>
Hi,
the docs say the upmap mode is trying to achieve perfect distribution, so as
to have an equal number of PGs per OSD. This is what I got (v14.2.4):
0 ssd 3.49219 1.0 3.5 TiB 794 GiB 753 GiB 38 GiB 3.4 GiB 2.7 TiB 22.20 0.32 82 up
1 ssd 3.49219 1.0 3.5 TiB 800 GiB 751 GiB 45 GiB 3.7
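For reference, this is roughly how the upmap balancer is enabled and inspected
(the commands exist on Nautilus; whether the optimizer then does its job is
what this thread is about):

ceph osd set-require-min-compat-client luminous   # upmap needs Luminous+ clients
ceph balancer mode upmap
ceph balancer on
ceph balancer status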