Patrick's answer supersedes what I said about RocksDB usage.  My experience
was more with general object storage, not the metadata handling inside the
MDS.  Thank you for sharing, Patrick.

On Mon, Feb 26, 2018 at 11:00 AM Patrick Donnelly <pdonn...@redhat.com>
wrote:

> On Sun, Feb 25, 2018 at 10:26 AM, Oliver Freyermuth
> <freyerm...@physik.uni-bonn.de> wrote:
> > Looking with:
> > ceph daemon osd.2 perf dump
> > I get:
> >     "bluefs": {
> >         "gift_bytes": 0,
> >         "reclaim_bytes": 0,
> >         "db_total_bytes": 84760592384,
> >         "db_used_bytes": 78920024064,
> >         "wal_total_bytes": 0,
> >         "wal_used_bytes": 0,
> >         "slow_total_bytes": 0,
> >         "slow_used_bytes": 0,
> > so it seems this is almost exclusively RocksDB usage.
> >
> > Is this expected?
>
> Yes. The directory entries are stored in the omap of the objects. This
> will be stored in the RocksDB backend of Bluestore.
>
> > Is there a recommendation on how much MDS storage is needed for a
> > CephFS with 450 TB?
>
> It seems in the above test you're using about 1KB per inode (file).
> Using that you can extrapolate how much space the metadata pool needs
> based on your file system usage. (If all you're doing is filling the
> file system with empty files, of course you're going to need an
> unusually large metadata pool.)
>
> --
> Patrick Donnelly
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>