I don't recall finding a definitive answer - though it was some time ago.
IIRC, it did work, but it made the pool fragile; I remember having to rebuild
the pools for my test rig soon after.  I don't quite recall the root cause,
though - it could have been newbie operator error on my part.  It may also
have had something to do with my cache pool settings; at the time I was doing
heavy benchmarking with a limited-size pool, so it's possible I filled the
cache pool with data while the pg_num change was going on, causing subtle
breakage (despite being explicitly warned NOT to do that).
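For what it's worth, if I were trying this again today I'd force the split
and then follow the warning text literally: scrub before letting new writes
pile up.  A rough sketch (assuming the pool name from the original post, and
that the cluster has the free space the warning asks for - this is my reading
of the error message, not a documented recipe):

  # ceph osd pool set cephfs_data_cache pg_num 256 --yes-i-really-mean-it
  # ceph osd pool set cephfs_data_cache pgp_num 256
  # ceph osd scrub all

Bumping pgp_num to match is the usual companion step, so the new PGs actually
get remapped; scrubbing everything is heavy-handed, but it matches what the
EPERM text seems to want.  I'd also hold off on any cache-heavy workload
until the cluster settles back to HEALTH_OK.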


--
Mike Shuey

On Sat, May 27, 2017 at 8:52 AM, Konstantin Shalygin <k0...@k0ste.ru> wrote:

>> # ceph osd pool set cephfs_data_cache pg_num 256
>>
>> Error EPERM: splits in cache pools must be followed by scrubs and
>> leave sufficient free space to avoid overfilling.  use
>> --yes-i-really-mean-it to force.
>>
>>
>> Is there something I need to do, before increasing PGs on a cache
>> pool?  Can this be (safely) done live?
>>
>
> Hello.
> Did you find an answer to this question? I can't find anything on Google
> about this warning.
>