Mon, Jan 21, 2019 at 19:42, Arthur Zakirov <a.zaki...@postgrespro.ru>:
>
> On 21.01.2019 17:56, Tomas Vondra wrote:
> > I wonder if we could devise some simple cache eviction policy. We don't
> > have any memory limit GUC anymore, but maybe we could unload
> > dictionaries that were unused for a sufficient amount of time (a couple
> > of minutes or so). Of course, the question is when exactly that would
> > happen (it seems far too expensive to invoke on each dict access, and it
> > should happen even when the dicts are not accessed at all).
>
> Yes, I thought about such a feature too. I agree, it could be expensive,
> since we need to scan the pg_ts_dict table to get the list of dictionaries
> (we can't scan a dshash_table).
>
> I don't have a good solution yet. I just had the thought of bringing back
> max_shared_dictionaries_size. Then, once we reach the size limit, we could
> unload dictionaries (scanning the pg_ts_dict table) that were accessed a
> long time ago.
> We can't enforce an exact size limit, since we can't release the memory
> immediately. So max_shared_dictionaries_size could be renamed to
> shared_dictionaries_threshold. If it is set to "0", PostgreSQL has
> unlimited space for dictionaries.
I want to propose cleaning up segments during vacuum/autovacuum. I'm not
aware of the policy on cleaning up objects other than relations during
vacuum/autovacuum. Could it be a good idea?

Vacuum might unload dictionaries when the total size of loaded dictionaries
exceeds a threshold. When that happens, vacuum scans the loaded dictionaries
and unloads (unpins segments and removes hash table entries for) those
dictionaries which aren't mapped into any backend process anymore (which is
possible because dsm_pin_segment() was called).

max_shared_dictionaries_size could then be renamed to
shared_dictionaries_cleanup_threshold.

-- 
Arthur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company