On Tue, Feb 19, 2019 at 11:15 PM Kyotaro HORIGUCHI
<horiguchi.kyot...@lab.ntt.co.jp> wrote:
> Difference from v15:
>
> Removed AllocSet accounting stuff. We use approximate memory
> size for catcache.
>
> Removed prune-by-number (or size) stuff.
>
> Addressing comments from Tsunakawa-san and Ideriha-san.
>
> Separated catcache monitoring feature. (Removed from this set)
> (But it is crucial to check this feature...)
>
> Is this small enough?
The commit message in 0002 says 'This also can put a hard limit on
the number of catcache entries.' but neither of the GUCs that you've
documented has that effect.  Is that a leftover from a previous
version?

I'd like to see some evidence that catalog_cache_memory_target has
any value, vs. just always setting it to zero.  I came up with the
following somewhat artificial example that shows that it might have
value.

rhaas=# create table foo (a int primary key, b text) partition by hash (a);

[rhaas pgsql]$ perl -e 'for (0..9999) { print "CREATE TABLE foo$_ PARTITION OF foo FOR VALUES WITH (MODULUS 10000, REMAINDER $_);\n"; }' | psql

First execution of 'select * from foo' in a brand new session takes
about 1.9 seconds; subsequent executions take about 0.7 seconds.  So,
if catalog_cache_memory_target were set to a high enough value to
allow all of that stuff to remain in cache, we could possibly save
about 1.2 seconds coming off the blocks after a long idle period.
That might be enough to justify having the parameter.  But I'm not
quite sure how high the value would need to be set to actually get
the benefit in a case like that, or what happens if you set it to a
value that's not quite high enough.  I think it might be good to play
around some more with cases like this, just to get a feeling for how
much time you can save in exchange for how much memory.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
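
As a rough sketch of how the cold-vs-warm measurement described above
could be reproduced (this assumes the foo setup shown earlier; \timing
is just psql's per-statement timer, and the prompt names are only
illustrative):

[rhaas pgsql]$ psql
rhaas=# \timing on
rhaas=# select * from foo;    -- first run in a brand new session, cold catcaches (~1.9 s above)
rhaas=# select * from foo;    -- repeat in the same session, warm catcaches (~0.7 s above)

The interesting comparison would presumably be to repeat the same two
queries in sessions started with different catalog_cache_memory_target
settings, with an idle period long enough for pruning to happen in
between, to see how much memory it takes before the post-idle execution
gets back near the 0.7-second warm figure.  How exactly that GUC is set
(units, session vs. postgresql.conf) depends on the patch, so the above
is only an outline of the experiment, not a recipe.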