This is what I am getting with the change from pg 8 to pg 16:
[@c01 ceph]# ceph osd df | egrep '^ID|^19|^20|^21|^30'
ID CLASS WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS
19 ssd   0.48000 1.0      447GiB 161GiB 286GiB 35.91 0.84 35
20 ssd   0.48000 1.0      447GiB 170GiB 277GiB 38.09 0.89 36
2
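For reference, one way to see which pools those PGs belong to is to group
the pgids on an OSD by their pool-id prefix. A rough sketch (the awk/cut
pipeline is only illustrative; the first output line is just the column
header):

ceph pg ls-by-osd 19 | awk '{print $1}' | cut -d. -f1 | sort | uniq -c
ceph osd pool ls detail    # map the pool ids back to pool names and pg_num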
On Sun, 6 Jan 2019 at 13:22, Marc Roos wrote:
>
> >If I understand the balancer correctly, it balances PGs, not data.
> >This worked perfectly fine in your case.
> >
> >I prefer a PG count of ~100 per OSD; you are at 30. Maybe it would
> >help to bump the PGs.
> >
> I am not sure if I should i
>If I understand the balancer correctly, it balances PGs, not data.
>This worked perfectly fine in your case.
>
>I prefer a PG count of ~100 per OSD; you are at 30. Maybe it would
>help to bump the PGs.
>
I can remember someone writing something smart about how to increase
your PGs. Let's say
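A minimal sketch of the usual pre-Nautilus sequence for this, assuming the
SSD pool is called 'rbd.ssd' (the pool name is hypothetical): raise pg_num
first, then pgp_num, in small steps so the resulting backfill stays
manageable:

ceph osd pool set rbd.ssd pg_num 16
ceph osd pool set rbd.ssd pgp_num 16
ceph -s    # let the cluster settle before the next increase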
On Sat, 5 Jan 2019, 13:38 Marc Roos wrote:
> I have straw2, balancer=on, crush-compat, and it gives a poor spread over
> my ssd drives (only 4), which are used by only 2 pools. One of these pools
> has pg_num 8. Should I increase this to 16 to get a better result, or
> will it never be any better?
>
> For now I
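For context, the balancer setup mentioned above comes down to a few mgr
commands (a sketch; the balancer module is available since Luminous):

ceph mgr module enable balancer
ceph balancer mode crush-compat
ceph balancer on
ceph balancer status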
If I understand the balancer correctly, it balances PGs, not data.
This worked perfectly fine in your case.
I prefer a PG count of ~100 per OSD; you are at 30. Maybe it would
help to bump the PGs.
Kevin
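As a rough check of that rule of thumb, assuming replicated pools with
size=3 on the 4 SSD OSDs (the size is an assumption, not stated in the
thread):

PGs per OSD ~= (sum over pools of pg_num * size) / number of OSDs
current: ~35 PGs per OSD * 4 OSDs  ~= 140 PG replicas in total
target:  100 PGs per OSD * 4 OSDs / size 3 ~= 133 total pg_num,
         i.e. the nearest power of two is 128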
On Sat, 5 Jan 2019 at 14:39, Marc Roos wrote:
>
>
> I have straw2, balancer=on, crush-co