IIRC that results in a fair amount of space being used on each of your OSDs and 
in your mon DB.  IIRC the default used to be much larger than it is now; it's a 
tradeoff.
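
If you want a quick look at what you are actually carrying, something like this 
should do it (paths assume a package-based mon; cephadm keeps the store under 
/var/lib/ceph/<fsid>/mon.<name>/ instead):

# current cache setting (the default is 50 on recent releases)
ceph config get osd osd_map_cache_size

# how many osdmap epochs the mons still hold; field names may vary by release
ceph report 2>/dev/null | grep osdmap_

# rough size of the mon store on one of the mon hosts
du -sh /var/lib/ceph/mon/ceph-*/store.db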

It kinda sounds to me like the OSD in question has probably been down for some 
time and was unexpectedly resurrected by your recent activity.  Unless you have 
incomplete PGs, or this is more than one or two OSDs, honestly I would just 
zap/purge it and redeploy; rough sketch below.
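
Roughly what I mean by zap/purge and redeploy, sketched with osd.11 from your 
mail and a placeholder device; double-check the OSD holds nothing you still 
need, and adjust for cephadm vs. package deployments:

ceph osd out 11
ceph osd purge 11 --yes-i-really-mean-it

# wipe the old bluestore/LVM signatures on the backing device
ceph-volume lvm zap --destroy /dev/sdX

# with cephadm, either let your drivegroup spec pick it up again or add it by hand
ceph orch daemon add osd <host>:/dev/sdX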

> On Oct 24, 2025, at 2:21 PM, Huseyin Cotuk <[email protected]> wrote:
> 
> Hi again,
> 
> By the way, I ran into a similar problem a few years ago, and I set the 
> following config parameter to 5000 back then. 
> 
> osd_map_cache_size 
> <https://docs.ceph.com/en/reef/rados/configuration/osd-config-ref/#confval-osd_map_cache_size>
> The number of OSD maps to keep cached.
> 
> type: int
> default: 50
> I could not find any other related config parameters. 
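> 
> For the record, I bumped it at runtime with something like this (from memory, 
> so double-check the syntax on your release):
> 
> ceph config set osd osd_map_cache_size 5000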
> 
> BR,
> Huseyin Cotuk
> [email protected]
> 
> 
> 
> 
>> On 24 Oct 2025, at 21:14, Huseyin Cotuk <[email protected]> wrote:
>> 
>> Hi Eugen,
>> 
>> I have already tried the method I mentioned before (link below): I got the 
>> current osdmap and tried to set it with ceph-objectstore-tool, but the 
>> command failed with this error:
>> 
>> ceph osd getmap 72555 > /tmp/osd_map_72555
>> CEPH_ARGS="--bluestore-ignore-data-csum" ceph-objectstore-tool --data-path 
>> /var/lib/ceph/osd/ceph-11/ --op set-osdmap --file /tmp/osd_map_72555
>> 
>> osdmap (#-1:9c8e9ef2:::osdmap.72555:0#) does not exist.
>> 
>> https://www.mail-archive.com/[email protected]/msg11545.html
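>> 
>> If I read the tool right, set-osdmap refuses to write an epoch that is not 
>> already present in the store unless it is forced. I guess the next thing to 
>> try is the same command with --force, though I have not verified that flag 
>> on this version:
>> 
>> CEPH_ARGS="--bluestore-ignore-data-csum" ceph-objectstore-tool --data-path 
>> /var/lib/ceph/osd/ceph-11/ --op set-osdmap --file /tmp/osd_map_72555 --force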
>> 
>> BR,
>> Huseyin
>> [email protected]
>> 
>> 
>> 
>> 
> 
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
