On Mon, Feb 26, 2018 at 7:59 AM, Patrick Donnelly wrote:
> It seems in the above test you're using about 1KB per inode (file).
> Using that you can extrapolate how much space the data pool needs
s/data pool/metadata pool/
--
Patrick Donnelly
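(As a rough worked example of that extrapolation, using the ~105,000,000
objects mentioned elsewhere in this thread and assuming the observed
~1 KB of metadata per inode holds:

    105,000,000 inodes x ~1 KB/inode ≈ ~105 GB

of metadata capacity per copy, i.e. before multiplying by the metadata
pool's replication factor. The 1 KB/inode figure is an observation from
this particular test, not a guaranteed constant.)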
On 26.02.2018 at 17:31, David Turner wrote:
> That was a good way to check for the recovery sleep. Does your `ceph status`
> show 128 PGs backfilling (or a number near that at least)? The PGs not
> backfilling will say 'backfill+wait'.
Yes:
pgs: 37778254/593342240 objects degraded (6.
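(For reference, one way to count the PGs in each backfill state from the
command line, a sketch assuming a Luminous-era CLI where the states are
reported as "backfilling" and "backfill_wait":

    # PGs actively backfilling
    ceph pg dump pgs_brief 2>/dev/null | grep -c backfilling
    # PGs queued behind them
    ceph pg dump pgs_brief 2>/dev/null | grep -c backfill_wait

The recovery sleep mentioned above can be read back per OSD via the admin
socket, e.g.:

    ceph daemon osd.2 config get osd_recovery_sleep

Exact state and option names may differ slightly between releases.)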
Patrick's answer supersedes what I said about RocksDB usage. My knowledge
was more about storing objects in general, not about the metadata handled
by the MDS. Thank you for sharing, Patrick.
On Sun, Feb 25, 2018 at 10:26 AM, Oliver Freyermuth wrote:
> Looking with:
>     ceph daemon osd.2 perf dump
> I get:
>     "bluefs": {
>         "gift_bytes": 0,
>         "reclaim_bytes": 0,
>         "db_total_bytes": 84760592384,
>         "db_used_bytes": 78920024064,
>         "wal_total_bytes":
When a Ceph system is in recovery, it uses much more RAM than it does while
running healthy. This increase is often on the order of 4x more memory (at
least back in the days of filestore, I'm not 100% certain about bluestore,
but I would assume the same applies). You have another thread on the ML
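(On bluestore, one way to see where an OSD's memory is actually going is
the admin socket mempool dump, e.g.:

    ceph daemon osd.2 dump_mempools

assuming a Luminous-era OSD; the output breaks the daemon's allocations
down by pool, e.g. the bluestore caches vs. the PG log.)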
Dear Cephalopodians,
I have to extend my question a bit - in our system with 105,000,000 objects in
CephFS (mostly stabilized now after the stress-testing...),
I observe the following data distribution for the metadata pool:
# ceph osd df | head
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE
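(If the distribution needs a closer look, two variants of the same command
may help, assuming a Luminous-era CLI: "ceph osd df tree" groups the OSDs
by host/CRUSH bucket, and "ceph osd df -f json-pretty" gives
machine-readable output that is easy to sort by utilization.)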