So, my pool size has increased to the point where the autoscaler suggested an
increase of pg_num (from 100 to 512). Autoscaler mode is “on”, but no change
happens.
ceph osd pool ls detail reports:
…
pool 10 'rbd1' replicated size 1 min_size 1 crush_rule 0 object_hash rjenkins
pg_num 100 pgp_num …
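A quick way to cross-check what the autoscaler intends, as a sketch on
Nautilus (pool name taken from the listing above; output omitted here):
# ceph osd pool autoscale-status
# ceph osd pool get rbd1 pg_autoscale_mode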
Hi Burkhard,
I tried using the autoscaler; however, it did not give a suggestion to resize
pg_num. Since my pg_num is not a power of 2, I wanted to fix that first,
manually, only to realize that it didn’t work.
Because changing pg_num manually did not work, I am not convinced that the
autoscaler …
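If it helps, a minimal sketch of turning the autoscaler on for a pool on
Nautilus (the module is often enabled already; rbd1 is the pool from this
thread):
# ceph mgr module enable pg_autoscaler
# ceph osd pool set rbd1 pg_autoscale_mode on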
Hi,
On 9/12/19 5:16 AM, Kyriazis, George wrote:
> Ok, after all is settled, I tried changing pg_num again on my pool and
> it still didn’t work:
> # ceph osd pool get rbd1 pg_num
> pg_num: 100
> # ceph osd pool set rbd1 pg_num 128
> # ceph osd pool get rbd1 pg_num
> pg_num: 100
> # ceph osd require-osd-release nautilus …
Ok, after all is settled, I tried changing pg_num again on my pool and it still
didn’t work:
# ceph osd pool get rbd1 pg_num
pg_num: 100
# ceph osd pool set rbd1 pg_num 128
# ceph osd pool get rbd1 pg_num
pg_num: 100
# ceph osd require-osd-release nautilus
# ceph osd pool set rbd1 pg_num 128
# ceph …
No, it’s pg_num first, then pgp_num.
Found the problem, and still slowly working on fixing it.
I upgraded from mimic to nautilus, but forgot to restart the OSD daemons for 2
of the OSDs. “ceph osd tell osd.* version” told me which OSDs had a stale
version.
Then it was just a matter of restarting them.
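For reference, a sketch of the same check and fix, assuming a systemd-based
deployment (the OSD id below is a placeholder, not one from this thread):
# ceph versions            # which release each daemon currently reports
# ceph tell osd.* version  # per-OSD version
# systemctl restart ceph-osd@<id>   # restart each OSD still on mimic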
You don't have to increase pgp_num first?
On Wed, Sep 11, 2019 at 6:23 AM Kyriazis, George wrote:
> I have the same problem (nautilus installed), but the proposed command
> gave me an error:
>
> # ceph osd require-osd-release nautilus
> Error EPERM: not all up OSDs have CEPH_FEATURE_SERVER_NAUTILUS feature …
I have the same problem (nautilus installed), but the proposed command gave me
an error:
# ceph osd require-osd-release nautilus
Error EPERM: not all up OSDs have CEPH_FEATURE_SERVER_NAUTILUS feature
#
I created my cluster with mimic and then upgraded to nautilus.
What would be my next step?
On Mon, Jul 1, 2019 at 11:57 AM Brett Chancellor wrote:
> In Nautilus just pg_num is sufficient for both increases and decreases.
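For what it’s worth, the fix described elsewhere in this thread, condensed
into one sketch (the OSD id is a placeholder; rbd1 is the pool discussed
here):
# ceph versions                          # find OSDs still reporting mimic
# systemctl restart ceph-osd@<id>        # restart each stale OSD
# ceph osd require-osd-release nautilus  # should now succeed
# ceph osd pool set rbd1 pg_num 128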
Good to know, I haven't gotten to Nautilus yet.
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
In Nautilus just pg_num is sufficient for both increases and decreases.
On Mon, Jul 1, 2019 at 10:55 AM Robert LeBlanc wrote:
> I believe he needs to increase the pgp_num first, then pg_num.
>
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
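A minimal illustration of that on Nautilus (pool name from this thread; as I
understand the Nautilus behaviour, pgp_num is adjusted to follow pg_num
automatically, so no separate pgp_num step is needed):
# ceph osd pool set rbd1 pg_num 128
# ceph osd pool get rbd1 pg_num
# ceph osd pool get rbd1 pgp_num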
I believe he needs to increase the pgp_num first, then pg_num.
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Mon, Jul 1, 2019 at 7:21 AM Nathan Fish wrote:
> I ran into this recently. Try running "ceph osd require-osd-release
> nautilus".
I ran into this recently. Try running "ceph osd require-osd-release
nautilus". This drops backwards compatibility with pre-Nautilus OSDs and
allows changing these settings.
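A quick way to check whether that flag is already set (a sketch; the grep is
just for convenience):
# ceph osd dump | grep require_osd_release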
On Mon, Jul 1, 2019 at 4:24 AM Sylvain PORTIER wrote:
>
> Hi all,
>
> I am using ceph 14.2.1 (Nautilus)
>
> I am unable to increase the pg_num of a pool. …
Hi all,
I am using ceph 14.2.1 (Nautilus)
I am unable to increase the pg_num of a pool.
I have a pool named Backup, the current pg_num is 64: ceph osd pool get
Backup pg_num => result pg_num: 64
And when I try to increase it using the command
ceph osd pool set Backup pg_num 512 => result "…