Re: [ceph-users] Storage usage of CephFS-MDS

2018-02-26 Thread Oliver Freyermuth
On 26.02.2018 at 20:31, Gregory Farnum wrote: > On Mon, Feb 26, 2018 at 11:26 AM Oliver Freyermuth > <freyerm...@physik.uni-bonn.de> wrote: > > On 26.02.2018 at 20:09, Oliver Freyermuth wrote: > > On 26.02.2018 at 19:56, Gregory Farnum wrote: > >> > >> > >> On Mon, F

Re: [ceph-users] Storage usage of CephFS-MDS

2018-02-26 Thread Gregory Farnum
On Mon, Feb 26, 2018 at 11:26 AM Oliver Freyermuth <freyerm...@physik.uni-bonn.de> wrote: > On 26.02.2018 at 20:09, Oliver Freyermuth wrote: > > On 26.02.2018 at 19:56, Gregory Farnum wrote: > >> > >> > >> On Mon, Feb 26, 2018 at 8:25 AM Oliver Freyermuth < > freyerm...@physik.uni-bonn.de

Re: [ceph-users] Storage usage of CephFS-MDS

2018-02-26 Thread Oliver Freyermuth
On 26.02.2018 at 20:09, Oliver Freyermuth wrote: > On 26.02.2018 at 19:56, Gregory Farnum wrote: >> >> >> On Mon, Feb 26, 2018 at 8:25 AM Oliver Freyermuth >> <freyerm...@physik.uni-bonn.de> wrote: >> >> On 26.02.2018 at 16:59, Patrick Donnelly wrote: >> > On Sun, Feb 25, 2018 at

Re: [ceph-users] Storage usage of CephFS-MDS

2018-02-26 Thread Oliver Freyermuth
On 26.02.2018 at 19:56, Gregory Farnum wrote: > > > On Mon, Feb 26, 2018 at 8:25 AM Oliver Freyermuth > <freyerm...@physik.uni-bonn.de> wrote: > > On 26.02.2018 at 16:59, Patrick Donnelly wrote: > > On Sun, Feb 25, 2018 at 10:26 AM, Oliver Freyermuth > > <freyerm...@p

Re: [ceph-users] Storage usage of CephFS-MDS

2018-02-26 Thread Gregory Farnum
On Mon, Feb 26, 2018 at 8:25 AM Oliver Freyermuth <freyerm...@physik.uni-bonn.de> wrote: > On 26.02.2018 at 16:59, Patrick Donnelly wrote: > > On Sun, Feb 25, 2018 at 10:26 AM, Oliver Freyermuth > > wrote: > >> Looking with: > >> ceph daemon osd.2 perf dump > >> I get: > >> "bluefs": { > >>

Re: [ceph-users] Storage usage of CephFS-MDS

2018-02-26 Thread Patrick Donnelly
On Mon, Feb 26, 2018 at 7:59 AM, Patrick Donnelly wrote: > It seems in the above test you're using about 1KB per inode (file). > Using that you can extrapolate how much space the data pool needs s/data pool/metadata pool/ -- Patrick Donnelly
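Patrick's ~1 KB-per-inode observation makes the sizing arithmetic easy to sketch. The snippet below is a back-of-the-envelope estimate, not an official Ceph formula; the per-inode size and the replication factor are assumptions to adjust for your own cluster.

```python
# Back-of-the-envelope CephFS metadata pool sizing, based on the
# ~1 KB-per-inode figure from this thread. Both bytes_per_inode and
# the replication factor are assumptions, not Ceph constants.
def metadata_pool_estimate(num_files, bytes_per_inode=1024, replication=3):
    """Estimated raw bytes consumed by the metadata pool."""
    return num_files * bytes_per_inode * replication

# ~105 million files, as in the cluster discussed in this thread:
est = metadata_pool_estimate(105_000_000)
print(f"~{est / 1024**3:.0f} GiB raw")  # ~300 GiB raw
```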

Re: [ceph-users] Storage usage of CephFS-MDS

2018-02-26 Thread Oliver Freyermuth
On 26.02.2018 at 17:31, David Turner wrote: > That was a good way to check for the recovery sleep. Does your `ceph status` > show 128 PGs backfilling (or a number near that at least)? The PGs not > backfilling will say 'backfill+wait'. Yes: pgs: 37778254/593342240 objects degraded (6.
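For reference, the degraded percentage in that status line is plain arithmetic over the two object counts shown; a quick check using the figures quoted above:

```python
# Degraded percentage as reported by `ceph status`: degraded object
# copies over total object copies (counts from the status line above).
degraded, total = 37_778_254, 593_342_240
pct = 100.0 * degraded / total
print(f"{pct:.2f}% degraded")
```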

Re: [ceph-users] Storage usage of CephFS-MDS

2018-02-26 Thread David Turner
That was a good way to check for the recovery sleep. Does your `ceph status` show 128 PGs backfilling (or a number near that at least)? The PGs not backfilling will say 'backfill+wait'. On Mon, Feb 26, 2018 at 11:25 AM Oliver Freyermuth <freyerm...@physik.uni-bonn.de> wrote: > On 26.02.2018 at
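David's backfilling-vs-'backfill+wait' distinction can be checked mechanically from the JSON form of `ceph status` (`ceph status --format json`). A sketch, assuming the usual `pgmap`/`pgs_by_state` layout; the sample counts below are invented for illustration:

```python
import json

# Invented sample in the shape of `ceph status --format json` output.
sample = json.dumps({
    "pgmap": {"pgs_by_state": [
        {"state_name": "active+clean", "count": 1920},
        {"state_name": "active+remapped+backfilling", "count": 128},
        {"state_name": "active+remapped+backfill_wait", "count": 512},
    ]}
})

def backfill_summary(status_json):
    """Count PGs actively backfilling vs. waiting to backfill."""
    states = json.loads(status_json)["pgmap"]["pgs_by_state"]
    active = sum(s["count"] for s in states
                 if "backfilling" in s["state_name"])
    waiting = sum(s["count"] for s in states
                  if "backfill_wait" in s["state_name"])
    return active, waiting

print(backfill_summary(sample))  # (128, 512)
```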

Re: [ceph-users] Storage usage of CephFS-MDS

2018-02-26 Thread Oliver Freyermuth
On 26.02.2018 at 16:59, Patrick Donnelly wrote: > On Sun, Feb 25, 2018 at 10:26 AM, Oliver Freyermuth > wrote: >> Looking with: >> ceph daemon osd.2 perf dump >> I get: >> "bluefs": { >> "gift_bytes": 0, >> "reclaim_bytes": 0, >> "db_total_bytes": 84760592384, >>

Re: [ceph-users] Storage usage of CephFS-MDS

2018-02-26 Thread David Turner
Patrick's answer supersedes what I said about RocksDB usage. My knowledge was more general for actually storing objects, not the metadata inside of MDS. Thank you for sharing, Patrick. On Mon, Feb 26, 2018 at 11:00 AM Patrick Donnelly wrote: > On Sun, Feb 25, 2018 at 10:26 AM, Oliver Freyermuth

Re: [ceph-users] Storage usage of CephFS-MDS

2018-02-26 Thread Patrick Donnelly
On Sun, Feb 25, 2018 at 10:26 AM, Oliver Freyermuth wrote: > Looking with: > ceph daemon osd.2 perf dump > I get: > "bluefs": { > "gift_bytes": 0, > "reclaim_bytes": 0, > "db_total_bytes": 84760592384, > "db_used_bytes": 78920024064, > "wal_total_bytes":
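The counters Oliver quotes already answer how full the BlueFS DB device is; a minimal sketch of the arithmetic, using the "bluefs" figures from the perf dump above:

```python
# How full the BlueFS DB partition is, from the "bluefs" section of
# `ceph daemon osd.2 perf dump` (values quoted in this thread).
bluefs = {
    "db_total_bytes": 84_760_592_384,  # ~79 GiB DB partition
    "db_used_bytes": 78_920_024_064,   # ~73.5 GiB used by RocksDB
}
pct = 100.0 * bluefs["db_used_bytes"] / bluefs["db_total_bytes"]
print(f"DB used: {pct:.1f}%")  # DB used: 93.1%
```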

Re: [ceph-users] Storage usage of CephFS-MDS

2018-02-26 Thread David Turner
When a Ceph system is in recovery, it uses much more RAM than it does while running healthy. This increase is often on the order of 4x more memory (at least back in the days of filestore, I'm not 100% certain about bluestore, but I would assume the same applies). You have another thread on the ML

Re: [ceph-users] Storage usage of CephFS-MDS

2018-02-26 Thread Oliver Freyermuth
Dear Cephalopodians, I have to extend my question a bit - in our system with 105,000,000 objects in CephFS (mostly stabilized now after the stress-testing...), I observe the following data distribution for the metadata pool: # ceph osd df | head ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE