Hi,
I enabled the pg_autoscaler on a specific pool, ssd.
I failed to increase pg_num / pgp_num on pool ssd to 1024:
root@ld3955:~# ceph osd pool autoscale-status
POOL             SIZE   TARGET SIZE  RATE  RAW CAPACITY  RATIO  TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
cephfs_metadata 395.8
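In case it is useful, here is a minimal sketch of the usual way around this, assuming the pool really is named ssd and it is the autoscaler that keeps overriding the manual change (pg_autoscale_mode with its on/off/warn values is the standard Nautilus knob):

  # take the pool out of autoscaler control ("warn" keeps a health warning instead)
  ceph osd pool set ssd pg_autoscale_mode off
  # the manual split should now stick
  ceph osd pool set ssd pg_num 1024
  ceph osd pool set ssd pgp_num 1024
  # verify
  ceph osd pool get ssd pg_num
  ceph osd pool autoscale-status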
The documentation says to size the DB at 4% of the data disk, i.e. 240 GB
for a 6 TB disk. Please give more explanation when your answer disagrees
with the documentation!
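For what it's worth, the 4% guideline is just 0.04 x 6 TB = 240 GB. If you want to compare that against what an OSD actually consumes before committing to a partition size, one way (assuming osd.0 here and access to its admin socket on that host) is roughly:

  ceph daemon osd.0 perf dump bluefs
  # compare db_used_bytes against db_total_bytes in the output

Those counters are the BlueFS view of the DB device, so treat them as an indication only.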
On Mon, 25 Nov 2019 at 11:00, Konstantin Shalygin wrote:
>
> I have a Ceph cluster which was designed for FileStore. Each host
It's mentioned here among other places
https://books.google.se/books?id=vuiLDwAAQBAJ&pg=PA79&lpg=PA79&dq=rocksdb+sizes+3+30+300+g&source=bl&ots=TlH4GR0E8P&sig=ACfU3U0QOJQZ05POZL9DQFBVwTapML81Ew&hl=en&sa=X&ved=2ahUKEwiPscq57YfmAhVkwosKHY1bB1YQ6AEwAnoECAoQAQ#v=onepage&q=rocksdb%20sizes%203%2030%20300
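For anyone skimming the thread, the 3/30/300 GB figures fall out of RocksDB's level sizing. Assuming the commonly cited defaults in Ceph's bluestore_rocksdb_options (roughly a 256 MB base level and a 10x multiplier between levels), the arithmetic is approximately:

  base level    ~0.25 GB
  next level    ~2.5 GB   (10x)
  next level    ~25 GB    (10x)
  next level    ~250 GB   (10x)
  usable sizes  ~0.25 + 2.5 = ~3 GB, + 25 = ~30 GB, + 250 = ~300 GB

BlueFS only keeps a level on the fast DB device if the whole level fits, so useful partition sizes cluster around those steps and capacity in between is largely wasted. The exact numbers depend on the rocksdb options in use, so treat them as approximate.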
Agree this needs to be tidied up in the docs. New users have little chance of
getting it right relying on the docs alone. It's been discussed at
length here several times in various threads, but it doesn't always seem
that we reach the same conclusion, so reading here doesn't guarantee
understanding this co
We encounter a strange behavior on our Mimic 13.2.6 cluster. At any
time, and without any load, some OSDs become unreachable from only
some hosts. It lasts 10 minutes and then the problem vanishes.
It's not always the same OSDs and the same hosts. There is no network
failure on any of the hosts (because onl
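Not an answer, but a sketch of what I would capture while one of these 10-minute windows is open, assuming you can reach both the reporting and the affected hosts; <peer-host> and the 9000-byte MTU behind the ping size are placeholders to adjust:

  ceph health detail                   # which OSDs are reported down and by whom
  ceph osd tree down                   # only the OSDs currently marked down
  grep -iE 'heartbeat_check|wrongly marked me down' /var/log/ceph/ceph-osd.*.log
  ping -c 3 -M do -s 8972 <peer-host>  # don't-fragment ping between OSD hosts, catches MTU mismatches on the cluster network

If the heartbeat failures always involve the same switch, NIC or bond member, that usually shows up here even when ordinary pings look fine.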