With default memory settings, the general rule is 1 GB of RAM per 1 TB of OSD.
If you have a 4 TB OSD, you should plan for at least 4 GB of RAM for it.  That
was the recommendation for filestore OSDs, and it was somewhat more memory than
those OSDs actually needed.  From what I've seen, the rule fits bluestore
better and should still be observed.
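
To make that concrete, here is a rough way to turn the rule into numbers (a
sketch only; the OSD sizes below are hypothetical, not from this thread):

    # Python: steady-state RAM estimate for one host, ~1 GB RAM per 1 TB of OSD
    osd_sizes_tb = [4, 4, 4, 4]        # hypothetical: four 4 TB OSDs on one host
    baseline_gb = sum(osd_sizes_tb)    # 1 GB per TB -> 16 GB
    print(f"baseline RAM: {baseline_gb} GB")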

Please note that the memory usage in a HEALTH_OK cluster is not the same as
what the daemons will use during recovery.  I have seen deployments hit 4x
their normal memory usage during recovery.
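
For planning I would take that recovery headroom into account explicitly
(again just a sketch; the 4x factor is the worst case I have seen, not a
guarantee):

    # Python: add recovery headroom on top of the steady-state estimate
    RECOVERY_FACTOR = 4                         # worst case observed; an assumption
    recovery_gb = sum([4, 4, 4, 4]) * RECOVERY_FACTOR
    print(f"plan for up to {recovery_gb} GB during recovery")   # -> 64 GB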

On Thu, Mar 1, 2018 at 8:11 AM Stefan Kooman <ste...@bit.nl> wrote:

> Quoting Caspar Smit (caspars...@supernas.eu):
> > Stefan,
> >
> > How many OSDs and how much RAM are in each server?
>
> Currently 7 OSDs, 128 GB RAM. Max will be 10 OSDs in these servers. 12
> cores (at least one core per OSD).
>
> > bluestore_cache_size=6G will not mean each OSD is using max 6GB RAM,
> > right?
>
> Apparently. Sure, they will use more RAM than just the cache to function
> correctly. I figured 3 GB per OSD would be enough ...
>
> > Our bluestore hdd OSDs with bluestore_cache_size at 1G use ~4GB of total
> > RAM. The cache is only part of a bluestore OSD's total memory usage.
>
> A factor of 4 is quite high, isn't it? What is all this RAM used for
> besides the cache? RocksDB?
>
> So how should I size the amount of RAM in an OSD server for 10 bluestore
> SSDs in a replicated setup?
>
> Thanks,
>
> Stefan
>
> --
> | BIT BV  http://www.bit.nl/        Kamer van Koophandel 09090351
> | GPG: 0xD14839C6                   +31 318 648 688 / i...@bit.nl
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
