Hi Lars,
I've also seen interim space usage burst during my experiments: up to 2x
the max level size when the topmost RocksDB level is L3 (i.e. 25 GB
max). So I think 2x (which results in 60-64 GB for the DB) is a good
rule of thumb when your DB is expected to be small or medium sized. Not
sure this multiplier still holds for larger DB volumes, though.
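For what it's worth, a rough sketch of that arithmetic; the 256 MB level
base and the x10 multiplier are assumptions matching plain RocksDB
defaults, not values read from any particular cluster:

    # RocksDB per-level targets, assuming max_bytes_for_level_base ~= 256 MB
    # and max_bytes_for_level_multiplier = 10 (assumed defaults).
    base_gb = 0.25
    levels = [base_gb * 10 ** n for n in range(4)]  # [0.25, 2.5, 25.0, 250.0]
    resident = sum(levels[:3])                      # ~27.75 GB if L3 is the top level on fast storage
    print(levels, resident, 2 * resident)           # 2x headroom ~= 55 GB; plus WAL/L0 -> ~60-64 GB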
Hi,
On Tue, 26 Nov 2019 13:57:51, Simon Ironside wrote to ceph-users@lists.ceph.com:
> Mattia Belluco said back in May:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-May/035086.html
>
> "when RocksDB needs to compact a layer it rewrites it
> *before* deleting the old data; if you'd li
Agree this needs to be tidied up in the docs. New users have little
chance of getting it right relying on the docs alone. It's been
discussed at length here several times in various threads, but we don't
always seem to reach the same conclusion, so reading here doesn't
guarantee understanding this correctly either.
It's mentioned here, among other places:
https://books.google.se/books?id=vuiLDwAAQBAJ&pg=PA79&lpg=PA79&dq=rocksdb+sizes+3+30+300+g&source=bl&ots=TlH4GR0E8P&sig=ACfU3U0QOJQZ05POZL9DQFBVwTapML81Ew&hl=en&sa=X&ved=2ahUKEwiPscq57YfmAhVkwosKHY1bB1YQ6AEwAnoECAoQAQ#v=onepage&q=rocksdb%20sizes%203%2030%20300
The documentation says to size the DB to 4% of the data disk, i.e.
240 GB for a 6 TB disk. Please give more explanation when your answer
disagrees with the documentation!
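For reference, the arithmetic behind that figure, with nothing
cluster-specific assumed:

    data_disk_tb = 6
    db_fraction = 0.04                         # the 4% guideline from the docs
    print(data_disk_tb * 1000 * db_fraction)   # 240.0 GB of DB per 6 TB HDD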
On Mon, 25 Nov 2019 at 11:00, Konstantin Shalygin wrote:
>
> I have an Ceph cluster which was designed for file store. Each host
I have a Ceph cluster which was designed for FileStore. Each host has
5 write-intensive SSDs of 400 GB and 20 HDDs of 6 TB, so each HDD has
a 5 GB WAL on SSD.
If I want to put BlueStore on this cluster, I can only allocate ~75 GB
of WAL and DB on SSD for each HDD, which is far below the 4% limit.
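A rough sketch of that budget; the 4-HDDs-per-SSD split and the WAL
deduction are assumptions taken from the numbers above, and the original
message doesn't say where the remaining ~20 GB of headroom goes:

    ssds_per_host, ssd_gb = 5, 400
    hdds_per_host, wal_gb = 20, 5
    raw_per_hdd = ssds_per_host * ssd_gb / hdds_per_host  # 100 GB of SSD per HDD
    db_per_hdd = raw_per_hdd - wal_gb                     # 95 GB before partitioning/headroom -> ~75 GB usable
    four_pct_gb = 0.04 * 6 * 1000                         # 240 GB per the 4% guideline
    print(raw_per_hdd, db_per_hdd, four_pct_gb)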