[ceph-users] Re: 2 MDSs behind on trimming on my Ceph Cluster since the upgrade from 18.2.6 (reef) to 19.2.2 (squid)

2025-07-25 Thread Darrell Enns
Same situation here after an 18.2.7 to 19.2.2 upgrade - MDS_TRIM warning with constantly increasing count. Does anyone know if it's safe to just ignore the warning until the fix comes out with 19.2.3? I'm worried something is going to break when the count gets large enough.
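
A simple way to keep an eye on the trim backlog while waiting for a fix is to check the health detail and the MDS journal trim setting; this is only a monitoring sketch, assuming admin access to the ceph CLI:

    # Show which MDS daemons are behind on trimming and by how many segments
    ceph health detail
    # Current journal trim setting (number of segments kept before trimming kicks in)
    ceph config get mds mds_log_max_segments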

[ceph-users] Re: Get past epoch pg map

2025-06-27 Thread Darrell Enns
Never mind - I answered my own question. For epoch 1234: ceph osd getmap 1234 > osdmap; osdmaptool osdmap --test-map-pgs-dump-all > pgdump.txt
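
For reference, the same steps broken out (assuming epoch 1234 and a writable working directory):

    # Grab the OSD map as it was at epoch 1234
    ceph osd getmap 1234 > osdmap
    # Dump the PG-to-OSD mappings computed from that map
    osdmaptool osdmap --test-map-pgs-dump-all > pgdump.txt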

[ceph-users] Get past epoch pg map

2025-06-27 Thread Darrell Enns
Is there a way to get a pg dump for a previous epoch? I have a situation where I want to make sure all of the PGs that used to be on a specific OSD are deep scrubbed. However, since that OSD is now down and all the PGs have been relocated, I don't know what the PG numbers are. I know the epoch

[ceph-users] Deploy custom mgr module

2024-10-30 Thread Darrell Enns
Is there a simple way to deploy a custom (in-house) mgr module to an orchestrator managed cluster? I assume the module code would need to be included in the mgr container image. However, there doesn't seem to be a straightforward way to do this without having the module merged to upstream ceph
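
One approach, sketched here under the assumption that mgr modules live under /usr/share/ceph/mgr inside the container image and that a local registry is available (module name, registry, and image tag below are placeholders), is to layer the module onto the stock image and point the cluster at the custom image:

    cat > Dockerfile <<'EOF'
    FROM quay.io/ceph/ceph:v19.2.2
    COPY my_module /usr/share/ceph/mgr/my_module
    EOF
    docker build -t registry.example.com/ceph/ceph:v19.2.2-custom .
    docker push registry.example.com/ceph/ceph:v19.2.2-custom
    # Roll the custom image out across the cluster, then enable the module
    ceph orch upgrade start --image registry.example.com/ceph/ceph:v19.2.2-custom
    ceph mgr module enable my_module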

[ceph-users] Delete OSD spec (mgr)?

2020-08-31 Thread Darrell Enns
Is there a way to remove an OSD spec from the mgr? I've got one in there that I don't want. It shows up when I do "ceph orch osd spec --preview", and I can't find any way to get rid of it.
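
In later releases the spec shows up as an orchestrator service and can be dropped that way; a sketch, assuming the spec is registered as a service named osd.my_spec (removing the spec should not remove OSDs that were already deployed from it):

    # List OSD service specs known to the orchestrator
    ceph orch ls --service-type osd
    # Remove the unwanted spec by its service name
    ceph orch rm osd.my_spec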

[ceph-users] Re: CephFS and 32-bit Inode Numbers

2019-10-18 Thread Darrell Enns
Does your 32-bit application actually use the inode numbers? Or is it just trying to read other metadata (such as filenames in a directory, file sizes, etc)? If it's the latter, you could use LD_PRELOAD to wrap the calls and return fake/mangled inode numbers (since the application doesn't care a

[ceph-users] Re: ceph-mgr Module "zabbix" cannot send Data

2019-10-07 Thread Darrell Enns
From the logs, it sounds like the Ceph stuff is all working but zabbix_sender is failing for some reason. Try running zabbix_sender manually and see if it works or not. See https://www.zabbix.com/documentation/4.2/manual/concepts/sender for an example on how to do that. Also, make sure you
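
A minimal manual test of the sender, assuming the server, host name, and item key below are replaced with values that actually exist in your Zabbix setup:

    # Send one test value verbosely: -z server, -s monitored host, -k item key, -o value
    zabbix_sender -vv -z zabbix.example.com -s ceph-cluster -k ceph.test -o 1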

[ceph-users] Re: RAM recommendation with large OSDs?

2019-10-03 Thread Darrell Enns
Do these exceptions also apply to mon_osd_min_in_ratio? Is this in the docs somewhere?

[ceph-users] Re: RAM recommendation with large OSDs?

2019-10-02 Thread Darrell Enns
OSD/node count? Is the concern just the large rebalance if a node fails and takes out a large portion of the OSDs at once?

[ceph-users] RAM recommendation with large OSDs?

2019-10-01 Thread Darrell Enns
The standard advice is "1GB RAM per 1TB of OSD". Does this actually still hold with large OSDs on bluestore? Can it be reasonably reduced with tuning? From the docs, it looks like bluestore should target the "osd_memory_target" value by default. This is a fixed value (4GB by default), which do
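
For what it's worth, the bluestore memory target can be lowered globally or per OSD through the config database; a sketch assuming a 2 GiB target (the value is given in bytes):

    # Set a cluster-wide 2 GiB memory target for all OSDs
    ceph config set osd osd_memory_target 2147483648
    # Confirm what a specific OSD will actually use
    ceph config get osd.0 osd_memory_target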