Same situation here after an 18.2.7 to 19.2.2 upgrade - MDS_TRIM warning with
constantly increasing count. Does anyone know if it's safe to just ignore the
warning until the fix comes out with 19.2.3? I'm worried something is going to
break when the count gets large enough.
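If you decide to just quiet the alert until 19.2.3 lands, muting the health code should work (a sketch only; confirm the warning really is the known bug first, and the TTL is arbitrary):
ceph health detail            # confirm MDS_TRIM is the only active warning
ceph health mute MDS_TRIM 1w  # temporarily silence it; the TTL is optional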
_
Never mind - I answered my own question.
For epoch 1234:
ceph osd getmap 1234 > osdmap
osdmaptool osdmap --test-map-pgs-dump-all > pgdump.txt
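To go from that dump to the deep scrubs, something along these lines should work (osd.12 is a placeholder, and the awk pattern is a guess at the column layout; check what pgdump.txt actually looks like on your release):
# print PG ids whose mapping in that epoch included osd.12
awk '$1 ~ /^[0-9]+\./ && /[[,(]12[],)]/ {print $1}' pgdump.txt | sort -u > pgs_on_osd12.txt
# queue a deep scrub for each of them
while read -r pg; do ceph pg deep-scrub "$pg"; done < pgs_on_osd12.txt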
___
Is there a way to get a pg dump for a previous epoch?
I have a situation where I want to make sure all of the PGs that used to be on
a specific OSD are deep scrubbed. However, since that OSD is now down and all
the PGs have been relocated, I don't know what the PG numbers are. I know the
epoch
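For what it's worth, "ceph osd dump" also takes an epoch, so you can sanity-check that the OSD was still in the map at that point (1234 and osd.12 are placeholders):
ceph osd dump 1234 | grep '^osd\.12 '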
Is there a simple way to deploy a custom (in-house) mgr module to an
orchestrator managed cluster? I assume the module code would need to be
included in the mgr container image. However, there doesn't seem to be a
straightforward way to do this without having the module merged to upstream
ceph
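One workaround is to layer the module into a derived image and point cephadm at that; a rough sketch (registry, tags and the module name are all placeholders, and this is not an officially documented procedure):
# build an image that carries the in-house module alongside the stock ones
cat > Dockerfile <<'EOF'
FROM quay.io/ceph/ceph:v19.2.2
COPY my_module/ /usr/share/ceph/mgr/my_module/
EOF
podman build -t registry.example.com/ceph:v19.2.2-my-module .
podman push registry.example.com/ceph:v19.2.2-my-module
# point the cluster at the custom image, then enable the module
ceph orch upgrade start --image registry.example.com/ceph:v19.2.2-my-module
ceph mgr module enable my_module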
Is there a way to remove an OSD spec from the mgr? I've got one in there that I
don't want. It shows up when I do "ceph orch osd spec --preview", and I can't
find any way to get rid of it.
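If it is an OSD service spec, something like this should do it (the service name is a placeholder; as far as I know removing the spec leaves existing OSD daemons alone, but double-check on your release):
ceph orch ls osd --export        # find the exact service_name of the unwanted spec
ceph orch rm osd.unwanted_spec   # remove that spec from the mgr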
___
Does your 32-bit application actually use the inode numbers? Or is it just
trying to read other metadata (such as filenames in a directory, file sizes,
etc)? If it's the latter, you could use LD_PRELOAD to wrap the calls and return
fake/mangled inode numbers (since the application doesn't care a
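Mechanically that could look something like the following (shim32.c and the target binary are hypothetical; the shim would override stat()/fstat()/readdir() and squash st_ino into 32 bits):
# build the 32-bit interposer library and preload it for just that application
gcc -m32 -shared -fPIC -o shim32.so shim32.c -ldl
LD_PRELOAD=$PWD/shim32.so ./legacy_32bit_app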
From the logs, it sounds like the Ceph side is all working but zabbix_sender is failing for some reason. Try running zabbix_sender manually and see whether it works. See https://www.zabbix.com/documentation/4.2/manual/concepts/sender for an example of how to do that. Also, make sure you
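For a manual test, something along these lines should show whether the sender itself works (server, host and key are placeholders; take the real values from the module config):
ceph zabbix config-show   # shows the configured zabbix host, identifier, etc.
ceph zabbix send          # ask the mgr module to push its data right now
zabbix_sender -vv -z zabbix.example.com -s ceph-cluster -k ceph.overall_status -o 0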
d? Do these exceptions also apply to
mon_osd_min_in_ratio? Is this in the docs somewhere?
-----Original Message-----
From: Anthony D'Atri
Sent: Wednesday, October 02, 2019 7:46 PM
To: Darrell Enns
Cc: Paul Emmerich ; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: RAM recommendation with larg
OSD/node count? Is
the concern just the large rebalance if a node fails and takes out a large
portion of the OSDs at once?
-----Original Message-----
From: Paul Emmerich
Sent: Tuesday, October 01, 2019 3:00 PM
To: Darrell Enns
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] RAM recommendation
The standard advice is "1GB RAM per 1TB of OSD". Does this actually still hold
with large OSDs on bluestore? Can it be reasonably reduced with tuning?
From the docs, it looks like bluestore should target the "osd_memory_target" value by default. This is a fixed value (4GB by default), which do
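In practice it can be tuned down; for example (the 2 GiB figure is purely illustrative, and how low you can safely go depends on the workload):
# cap every OSD at roughly 2 GiB instead of the 4 GiB default
ceph config set osd osd_memory_target 2147483648
# or only for a particular device class
ceph config set osd/class:hdd osd_memory_target 2147483648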