Responding to myself to follow up with what I found.

While going over the release notes for 14.2.3/14.2.4, I found that this was a 
known problem that had already been fixed.  Upgrading the cluster to 14.2.4 
resolved the issue.
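
For anyone who runs into the same thing, these are roughly the checks I'd use 
to confirm the upgrade took effect and that the splits actually started 
(illustrative only; the pool name is just the one from this cluster, and the 
exact output will vary):

# ceph versions
# ceph osd pool get default.rgw.buckets.data pg_num
# ceph osd dump | grep default.rgw.buckets.data

Once every daemon reports 14.2.4, pg_num/pgp_num should start climbing toward 
the pg_num_target/pgp_num_target values that were set earlier.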

Bryan

> On Oct 30, 2019, at 10:33 AM, Bryan Stillwell <bstillw...@godaddy.com> wrote:
> 
> This morning I noticed that on a new cluster the number of PGs for the 
> default.rgw.buckets.data pool was way too small (just 8 PGs), but when I try 
> to split the PGs, the cluster doesn't do anything:
> 
> # ceph osd pool set default.rgw.buckets.data pg_num 16
> set pool 13 pg_num to 16
> 
> It seems to set the pg_num_target/pgp_num_target to 16, but pg_num/pgp_num 
> never increase:
> 
> # ceph osd dump | grep default.rgw.buckets.data
> pool 13 'default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 
> object_hash rjenkins pg_num 8 pgp_num 8 pg_num_target 16 pgp_num_target 16 
> autoscale_mode warn last_change 43217 flags hashpspool,creating stripe_width 
> 0 compression_mode aggressive application rgw
> 
> The cluster has 295 OSDs, so I need to do a bit of splitting today.  Any 
> ideas why the splits aren't starting?
> 
> Thanks,
> Bryan
