Hi Mac
We've also tweaked
osd-recovery-max-single-start => 2
osd-recovery-sleep-hdd => 0.05
to speed things up.
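
For anyone following along, a minimal sketch of how those two overrides could be applied with "ceph config set" (underscore spellings of the same options; the values are the ones quoted above):

# start up to 2 recovery ops per OSD in one go
ceph config set osd osd_recovery_max_single_start 2
# shorten the per-op recovery sleep on HDD-backed OSDs (default is 0.1)
ceph config set osd osd_recovery_sleep_hdd 0.05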
On 2020-10-20 16:04, Mac Wynkoop wrote:
OK, so for interventions, I've pushed these configs out:

ceph config set mon.* target_max_misplaced_ratio 0.05 > 0.20
ceph config set osd.* osd_max_backfills 1 > 4
ceph config set osd.* osd_recovery_max_active 1 > 4

And also ran injectargs to push the changes to the OSDs hot. I'll monitor it for now.
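
The exact injectargs call isn't shown in the quote; a sketch of the usual pattern for pushing those same values to all running OSDs (values mirror the config changes above):

ceph tell osd.* injectargs '--osd_max_backfills 4 --osd_recovery_max_active 4'
# spot-check one daemon to confirm the running values
ceph config show osd.0 | grep -E 'osd_max_backfills|osd_recovery_max_active'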
The default for max misplaced objects is this (5%):

ceph-node1:~ # ceph config get mon target_max_misplaced_ratio
0.05

You can increase this for the splitting process, but I would recommend rolling it back as soon as the splitting has finished.
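
A sketch of that raise-then-revert cycle; the 0.20 mirrors the value quoted earlier in the thread, and the global scope is an assumption to cover both the mon and the mgr (the daemon that actually throttles the pgp_num stepping in Nautilus and later):

# allow up to 20% misplaced objects while the PGs split
ceph config set global target_max_misplaced_ratio 0.20
# once splitting has finished, drop the override so the 0.05 default applies again
ceph config rm global target_max_misplaced_ratio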
Quoting Lindsay Mathieson:
On 20/10/2020 11:38 pm, Mac Wynkoop wrote:
Autoscaler isn't on, so what part of Ceph is handling the increase of pgp_num?
I'd like to turn up the rate at which it splits the PGs, but if the
autoscaler isn't doing it, I have no clue what to adjust. Any ideas?
Normal recovery ops I imagine -
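
Two quick checks to confirm the autoscaler really is out of the picture (pool name is a placeholder; autoscale-status needs the pg_autoscaler mgr module enabled):

# per-pool autoscaler mode: off, warn or on
ceph osd pool get <pool-name> pg_autoscale_mode
# what the autoscaler would do across all pools
ceph osd pool autoscale-status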
>> ... object_hash rjenkins pg_num 2048 pgp_num 1024 pgp_num_target 2048
>> last_change 8458830 lfor 0/0/8445757 flags
>> hashpspool,ec_overwrites,nodelete,backfillfull stripe_width 24576 fast_read
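
That looks like "ceph osd pool ls detail" output, with pgp_num (1024) still catching up to pgp_num_target (2048). A simple way to watch the convergence (pool name is a placeholder):

ceph osd pool get <pool-name> pgp_num
# or keep an eye on the full line, including pgp_num_target
watch -n 30 "ceph osd pool ls detail | grep <pool-name>"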
> "ceph osd pool set hou-ec-1.rgw.buckets.data pgp_num 2048"
>
> it returns:
>
> "set pool 40 pgp_num to 2048"
>> ... you create the pool. See Create a Pool for details. Once you set
>> placement groups for a pool, you can increase the number of placement
>> groups (but you cannot decrease the number of placement groups). To
>> increase the number of placement groups, execute ...
> pg_num and pgp_num need to be the same, no?
>
> 3.5.1. Set the Number of PGs
>
> To set the number of placement groups in a pool, you must specify the
> number of ...
>> ... you increase the number of placement groups, you must also increase
>> the number of placement groups for placement (pgp_num) before your
>> cluster will rebalance. The pgp_num should be equal to the pg_num. To
>> increase the number of placement groups for placement, execute the
>> following:
>>
>> ceph osd pool set {pool-name} pgp_num {pgp_num}
>>
>> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html/storage_strategies_guide/placement_groups_pgs
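
Tying the doc quote back to the pool in this thread, a sketch of the whole split (pool name and the 2048 target are taken from the earlier messages; nothing here is a verbatim command from them):

# split the PGs
ceph osd pool set hou-ec-1.rgw.buckets.data pg_num 2048
# then raise pgp_num to match so data actually rebalances; on Nautilus and
# later, Ceph walks pgp_num up gradually, throttled by
# target_max_misplaced_ratio, which is why it can sit below pgp_num_target
ceph osd pool set hou-ec-1.rgw.buckets.data pgp_num 2048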
-----Original Message-----
To: norman
Cc: ceph-users
Subject: [ceph-users] Re: pool pgp_num not updated
Hi everyone,
I'm seeing a similar issue here. Any ideas on this?
Mac Wynkoop,
On Sun, Sep 6, 2020 at 11:09 PM norman wrote:
> Hi guys,
>
> When I update the pg_num of a pool, I found it did not work (no
> rebalance happened); does anyone know the reason? The pool's info:
>
> pool 21 'openstack-volumes-rs' replic...
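
For norman's pool, the usual first check is whether pgp_num ever followed pg_num; a sketch using the pool name from his snippet (the commands are standard, not taken from his message):

ceph osd pool get openstack-volumes-rs pg_num
ceph osd pool get openstack-volumes-rs pgp_num
# the detailed view also shows the *_target values on Nautilus and later
ceph osd pool ls detail | grep openstack-volumes-rs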