Hi

I suggest upgrading to the latest Nautilus release!
Note, though, that even the latest Nautilus release doesn't include the fix for 
trimming osdmaps after PG merge [1] (it seems the PRs for Nautilus will never be 
merged). But we push the trimming along by restarting the mon leader 💁‍♂️
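
A minimal sketch of that workaround (assuming a systemd deployment where the mon 
ID reported as 'quorum_leader_name' by 'ceph quorum_status' matches the host it 
runs on; jq is used here to parse the JSON):

    # Find the current mon leader
    LEADER=$(ceph quorum_status -f json | jq -r '.quorum_leader_name')
    # Restart it to trigger a new election and kick off osdmap trimming
    ssh "$LEADER" systemctl restart "ceph-mon@$LEADER"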


[1] https://github.com/ceph/ceph/pull/43204
k

Sent from my iPhone

> On 6 Apr 2022, at 21:01, J-P Methot <jp.met...@planethoster.info> wrote:
> Hi,
> 
> 
> On a cluster running Nautilus 14.2.11, the store.db data space usage keeps 
> increasing. It went from 5GB to 20GB in a year.
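> A quick way to track that growth on disk (path as in the default mon data dir 
> used below):
> 
>     du -sh /var/lib/ceph/mon/ceph-monitor1/store.db
> 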
> 
> We even got the following warning and adjusted 'mon_data_size_warn' to 20 GiB:
> WARNING: MON_DISK_BIG (mon monitor1 is using a lot of disk space)
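> 
> (For reference, the threshold was raised with something like the following; 
> the value is given in bytes, 20 GiB here:
> 
>     ceph config set mon mon_data_size_warn 21474836480
> )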
> 
> 
> But the disk space usage keeps increasing steadily, by about 1.5 GB per month.
> 
> 
> We ran the following to count the store's keys by prefix:
> 
>     ceph-monstore-tool /var/lib/ceph/mon/ceph-monitor1/ dump-keys | awk '{print $1}' | uniq -c
> 
>     285 auth
>       2 config
>      10 health
>    1435 logm
>       3 mdsmap
>     153 mgr
>       1 mgr_command_descs
>       3 mgr_metadata
>      51 mgrstat
>      13 mon_config_key
>       1 mon_sync
>       7 monitor
>       1 monitor_store
>       5 monmap
>     234 osd_metadata
>       1 osd_pg_creating
> 1152444 osd_snap
>  965071 osdmap
>     622 paxos
> 
> It appears that the osd_snap keys are eating up all the space. We have about 
> 1100 snapshots in total (they rotate every 72h).
> 
> I took a look at https://tracker.ceph.com/issues/42012 and it might be 
> related. However, according to the bug report, that particular issue doesn't 
> seem to be fixed in Nautilus, yet my 14.2.16 cluster with similar usage 
> doesn't have this problem.
> 
> 
> Did anyone face the same issue, and do you have a workaround/solution to keep 
> the mon's db size from constantly increasing? Could a simple minor version 
> upgrade fix it, or would I need to upgrade to Octopus?
> 
> -- 
> Jean-Philippe Méthot
> Senior Openstack system administrator
> Administrateur système Openstack sénior
> PlanetHoster inc.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
