Hi Eugen,
I think I found the root cause. The actual object count is below the 500k
that I configured, but some ghost objects reported by "ceph df" are
triggering the alarm.
I will keep monitoring it and post the results here.
$ rados -p cached-hdd-cache ls | wc -l
444654
$ ceph df | grep -e "POOL\|cached-hdd"
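Side note: the per-pool count from "ceph df" may include objects that a
plain listing does not show, e.g. hit-set archive objects or objects in
non-default namespaces. A rough cross-check (pool name taken from this
thread; the exact column layout depends on the Ceph release):

$ ceph df detail | grep -e "POOL\|cached-hdd-cache"   # OBJECTS column as counted by the cluster
$ rados -p cached-hdd-cache ls --all | wc -l          # list objects across all namespaces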
Can you manually flush/evict the cache? Maybe reduce target_max_bytes
and target_max_objects to see if that triggers anything. We use
cache_mode writeback; maybe give that a try?
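For example (pool name taken from this thread; the target value below is
just an illustration, pick something below your current object count):

$ rados -p cached-hdd-cache cache-flush-evict-all                 # flush dirty objects, then evict clean ones
$ ceph osd pool set cached-hdd-cache target_max_objects 400000    # temporarily lower the object target
$ ceph osd tier cache-mode cached-hdd-cache writeback             # optional: try writeback instead of readproxy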
I don't see many differences between our cache tier config and yours,
except for cache_mode, and we don't have a max
Hi Eugen,
Sorry for the missing information. "cached-hdd-cache" is the overlay tier
of "cached-hdd" and configured in "readproxy" mode.
$ ceph osd dump | grep cached-hdd
pool 24 'cached-hdd' replicated size 3 min_size 2 crush_rule 1 object_hash
rjenkins pg_num 512 pgp_num 512 autoscale_mode warn
I don't see a cache_mode enabled on the pool, did you set one?
Quoting icy chan:
Hi,
I configured a cache tier with a maximum object count of 500k, but no
eviction happens when the object count hits the configured maximum.
Has anyone experienced this issue? What should I do?
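For context, the limit and eviction settings were applied roughly like
this (500k is the value mentioned above; the ratio and hit-set values are
placeholders, not necessarily what is configured on this cluster):

$ ceph osd pool set cached-hdd-cache target_max_objects 500000    # object-count target for the cache pool
$ ceph osd pool set cached-hdd-cache cache_target_full_ratio 0.8  # start evicting at 80% of the target
$ ceph osd pool set cached-hdd-cache hit_set_type bloom           # the tiering agent needs hit sets to pick cold objects
$ ceph osd pool set cached-hdd-cache hit_set_count 1
$ ceph osd pool set cached-hdd-cache hit_set_period 3600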
$ ceph health detail
HE