Hi,
Thanks so far for the suggestions.
We enabled the balancer first to make sure the PG distribution is more
optimal. After a few additions/replacements and data growth it was no longer
optimal. We then enabled upmap mode, as this was suggested to be better than
the default setting. To limit simultaneous data movement ...
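For reference, the setup described above usually boils down to a handful of
commands on a recent Ceph release; a rough sketch, with purely illustrative
throttling values:

  ceph balancer mode upmap      # use upmap instead of the default crush-compat
  ceph balancer on              # let the balancer run automatically
  ceph balancer status          # check current mode and progress

  # throttle how much data moves at once (example values only)
  ceph config set osd osd_max_backfills 1
  ceph config set osd osd_recovery_max_active 1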
On 2/7/22 12:34 PM, Alexander E. Patrakov wrote:
> Mon, 7 Feb 2022 at 17:30, Robert Sander:
> > And keep in mind that when PGs are increased that you also may need to
> > increase the number of OSDs as one OSD should carry a max of around 200
> > PGs. But I do not know if that is still the case with current Ceph versions.
Mon, 7 Feb 2022 at 17:30, Robert Sander:
> And keep in mind that when PGs are increased that you also may need to
> increase the number of OSDs as one OSD should carry a max of around 200
> PGs. But I do not know if that is still the case with current Ceph versions.
This is just the default limit ...
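The "around 200 PGs per OSD" figure presumably refers to the mon_max_pg_per_osd
limit; the default differs between releases, so checking the running cluster is
safer than relying on a rule of thumb. A quick way to do that, assuming a
release with the centralized config database:

  ceph config get mon mon_max_pg_per_osd   # the configured/default limit
  ceph osd df tree                         # the PGS column shows per-OSD PG counts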
> On 02/07/2022 1:51 PM Maarten van Ingen wrote:
> One more thing -- how many PGs do you have per OSD right now for the nvme and
> hdd roots?
> Can you share the output of `ceph osd df tree` ?
>
> >> This is only 1347 lines of text, you sure you want that :-) On a summary
> >> for HDD we have b...
Hi Dan,
Hi Robert,
On 07.02.22 at 13:15, Maarten van Ingen wrote:
> As it's just a few pools affected, doing a manual increase would be an
> option for me as well, if recommended.
>
> As you can see one pool is basically lacking PGs while the others are mostly
> increasing due to the much higher target_bytes compared to ...
Hi,
OK, you don't need to set 'warn' mode -- the autoscale status already has the
info we need.
One more thing -- how many PGs do you have per OSD right now for the nvme and
hdd roots?
Can you share the output of `ceph osd df tree` ?
Generally, the autoscaler is trying to increase your pools s...
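As an aside, one way to answer the PGs-per-OSD question without pasting the
full tree is to aggregate the JSON output of `ceph osd df`. This is only a
sketch: it assumes jq is available and that the JSON fields are named
device_class and pgs, and grouping by device class only approximates the
hdd/nvme roots:

  ceph osd df -f json | jq -r '.nodes
    | group_by(.device_class)[]
    | "\(.[0].device_class): \(([.[].pgs] | add / length)) avg PGs per OSD"'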
On 07.02.22 at 13:15, Maarten van Ingen wrote:
> As it's just a few pools affected, doing a manual increase would be an
> option for me as well, if recommended.
> As you can see one pool is basically lacking PGs while the others are mostly
> increasing due to the much higher target_bytes compared to ...
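If the manual route is taken, the increase is a per-pool pg_num change; a
minimal sketch with a placeholder pool name and target value:

  ceph osd pool set <pool> pg_num 512

On Nautilus and newer the cluster applies the change gradually (pgp_num
follows in the background), so the resulting data movement is spread out
rather than happening all at once.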
Hi Dan,
Here's the output. I removed pool names on purpose.
SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
19      100.0T       3.0   11098T        0.0270                                 1.0   256                 off
104.5G  1024G        3.0   ...
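The TARGET SIZE and TARGET RATIO columns come from per-pool hints such as the
100.0T above; the autoscaler plans pg_num from these expected sizes rather than
from current usage alone. Roughly, with a placeholder pool name:

  ceph osd pool set <pool> target_size_bytes 100T   # expected eventual size of the pool
  ceph osd pool set <pool> target_size_ratio 0.2    # or: expected share of total capacity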
Dear Maarten,
For a cluster that size, I would not immediately enable the autoscaler but
first enable it in "warn" mode to sanity check what it would plan to do:
# ceph osd pool set <pool> pg_autoscale_mode warn
Please share the output of "ceph osd pool autoscale-status" so we can help
guide what you ...
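Setting warn mode per pool as above is one option; the default for newly
created pools can also be changed, assuming a release with the centralized
config database:

  ceph config set global osd_pool_default_pg_autoscale_mode warn

Note this only affects pools created afterwards; existing pools still need the
per-pool command.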