Hi,
Yesterday we got a disk space warning on one of the monitor nodes. Most
of the space was being consumed by a running Loki container.
Searching the online documentation and trackers for information on
Ceph's default Loki retention policy left me empty handed. I did find
some trackers for the Prometheus retention policy in the past, so I
figured there might be none for Loki yet. The documentation states that
you can find the jinja2
templates used for the service here:
src/pybind/mgr/cephadm/templates
So I changed this template to include a retention policy and a compactor
like:
...
orig template
...
compactor:
  working_directory: /tmp/loki/compactor
  shared_store: filesystem
  compaction_interval: 10m
  retention_enabled: true
  retention_delete_delay: 2h
  retention_delete_worker_count: 150
limits_config:
  retention_period: 2160h
  split_queries_by_interval: 24h
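My guess is that the problem is the working directory under /tmp, since anything there inside the container is lost on redeploy. Something like the following might behave better (the /loki/compactor path is purely my assumption of a persistent mount point, I haven't verified what cephadm actually bind-mounts for Loki):

```yaml
compactor:
  # Assumed persistent path; /tmp/loki/compactor is wiped when the
  # container is recreated, which would explain the lost state.
  working_directory: /loki/compactor
  shared_store: filesystem
  compaction_interval: 10m
  retention_enabled: true
  retention_delete_delay: 2h
  retention_delete_worker_count: 150
```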
ceph config-key set mgr/cephadm/services/loki/loki.yml -i $PWD/loki_with_retention_policy.yml.j2
and redeployed loki (ceph orch daemon redeploy loki.mon1).
That gained a lot of free space ... as in ... all Loki logs were gone
and it had started from scratch. This "/tmp/loki" working directory is
... odd?
How is this supposed to work in Ceph?
Thanks,
Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io