[ceph-users] Re: Advice on enabling autoscaler

2022-02-09 Thread Maarten van Ingen
Hi, Thanks so far for the suggestions. We have enabled the balancer first to make sure the PG distribution is more optimal; after a few additions/replacements and data growth it was no longer optimal. We enabled upmap mode, as this was suggested to be better than the default setting. To limit simultaneous data
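
For reference, a minimal sketch of enabling the balancer in upmap mode and capping how much data may be remapped at once (the exact commands and the 0.01 ratio are my assumptions, not taken from the thread):

    # upmap requires all clients to speak at least luminous
    ceph osd set-require-min-compat-client luminous
    ceph balancer mode upmap
    ceph balancer on
    # optional: limit the fraction of PGs the mgr will mark misplaced at a time
    ceph config set mgr target_max_misplaced_ratio 0.01
    ceph balancer status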

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Mark Nelson
On 2/7/22 12:34 PM, Alexander E. Patrakov wrote: Mon, 7 Feb 2022 at 17:30, Robert Sander: And keep in mind that when PGs are increased you may also need to increase the number of OSDs, as one OSD should carry a max of around 200 PGs. But I do not know if that is still the case with current Ceph versions.
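
To check what a given cluster currently enforces, something like the following works (a sketch; these are the standard mon options, defaults vary by release):

    # hard cap above which new PG creation is refused (typically 250 in recent releases)
    ceph config get mon mon_max_pg_per_osd
    # the per-OSD PG count the autoscaler aims for (typically 100)
    ceph config get mon mon_target_pg_per_osd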

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Alexander E. Patrakov
Mon, 7 Feb 2022 at 17:30, Robert Sander: > And keep in mind that when PGs are increased you may also need to increase the number of OSDs, as one OSD should carry a max of around 200 PGs. But I do not know if that is still the case with current Ceph versions. This is just the default limit
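
If that default is too tight for a deliberate resize, it can be raised cluster-wide (a sketch only, not advice from the thread; the value 400 is an arbitrary example):

    ceph config set global mon_max_pg_per_osd 400
    # verify the effective value on the mons
    ceph config get mon mon_max_pg_per_osd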

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Dan van der Ster
> On 02/07/2022 1:51 PM Maarten van Ingen wrote: > One more thing -- how many PGs do you have per OSD right now for the nvme and hdd roots? > Can you share the output of `ceph osd df tree`? >> This is only 1347 lines of text, you sure you want that :-) On a summary for HDD we have b
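
A per-OSD PG summary can also be pulled from the JSON output instead of pasting the full tree (a sketch; assumes jq is available and that this Ceph release exposes the per-OSD "pgs" field in `ceph osd df -f json`):

    # min / max / average PG count across all OSDs
    ceph osd df -f json | jq '[.nodes[].pgs] | {min: min, max: max, avg: (add/length)}'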

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Maarten van Ingen
Hi Dan, > Hi, OK, you don't need to set 'warn' mode -- the autoscale status already has the info we need. One more thing -- how many PGs do you have per OSD right now for the nvme and hdd roots? Can you share the output of `ceph osd df tree`? >> This is only 1347 lines of text, you sure you want that :-)
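
A per-device-class view is much shorter than the full 1347-line tree; on releases that support class filtering, something like this gives just the HDD totals (a sketch, not a command used in the thread):

    # last two lines are the TOTAL row and the MIN/MAX VAR / STDDEV summary
    ceph osd df tree class hdd | tail -n 2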

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Maarten van Ingen
Hi Robert, On 07.02.22 at 13:15, Maarten van Ingen wrote: > As it's just a few pools affected, doing a manual increase would be an option for me as well, if recommended. > As you can see, one pool is basically lacking PGs while the others are mostly increasing due to the much higher target_bytes
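
For a single under-provisioned pool, the manual bump is just the following (a sketch; the pool name and 512 are placeholders, and since Nautilus pgp_num follows pg_num automatically while the actual splitting is throttled by the mgr):

    ceph osd pool set <pool> pg_num 512
    # watch the split and the resulting data movement
    ceph osd pool get <pool> pg_num
    ceph status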

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Dan van der Ster
Hi, OK, you don't need to set 'warn' mode -- the autoscale status already has the info we need. One more thing -- how many PGs do you have per OSD right now for the nvme and hdd roots? Can you share the output of `ceph osd df tree` ? Generally, the autoscaler is trying to increase your pools s
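
Roughly, the sizing the autoscaler aims for looks like the sketch below (my simplified reading of the pg_autoscaler heuristic, with made-up numbers; the real mgr module also weighs bias and target ratios, and only proposes a change when pg_num is off by about a factor of three):

    # suggested pg_num ~= capacity_ratio * num_osds * mon_target_pg_per_osd / pool_size,
    # rounded to a power of two. Example: ratio 0.027, 1000 OSDs, 100 PGs/OSD, size 3:
    awk 'BEGIN {
      raw = 0.027 * 1000 * 100 / 3;          # ~= 900
      p = 1; while (p * 2 <= raw) p *= 2;    # neighbouring powers of two: 512 and 1024
      print ((raw - p < 2 * p - raw) ? p : 2 * p)
    }'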

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Robert Sander
On 07.02.22 at 13:15, Maarten van Ingen wrote: As it's just a few pools affected, doing a manual increase would be an option for me as well, if recommended. As you can see, one pool is basically lacking PGs while the others are mostly increasing due to the much higher target_bytes compared to
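
Those expectations come from the per-pool target settings; setting them looks like this (a sketch; pool names and values are placeholders, and a pool should normally get either bytes or ratio, not both):

    # expected final size in bytes
    ceph osd pool set <pool> target_size_bytes 100T
    # or, as a fraction of the total (expected) capacity
    ceph osd pool set <other-pool> target_size_ratio 0.2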

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Maarten van Ingen
Hi Dan, here's the output. I removed pool names on purpose.

SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
19      100.0T       3.0   11098T        0.0270                                 1.0   256                 off
104.5G  1024G        3.0

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Dan van der Ster
Dear Maarten, For a cluster that size, I would not immediately enable the autoscaler but first enable it in "warn" mode to sanity check what it would plan to do: # ceph osd pool set <pool> pg_autoscale_mode warn Please share the output of "ceph osd pool autoscale-status" so we can help guide what you
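
To put every pool into warn mode in one go, a simple loop over the pool list works (a sketch built around the command above; the mode can of course also be set per pool):

    for pool in $(ceph osd pool ls); do
        ceph osd pool set "$pool" pg_autoscale_mode warn
    done
    # then review what the autoscaler would do
    ceph osd pool autoscale-status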