Bruce Momjian <pgman@candle.pha.pa.us> writes:
> I am thinking we should scale it based on max_fsm_relations.

Hmm ... tables are not the only factor in the required catcache size,
and max_fsm_relations tells more about the total installation size
than the number of tables in your particular database.  But it's one
possible approach.

I just thought of a more radical idea: do we need a limit on catcache
size at all?  On "normal size" databases I believe we never hit
5000 entries (at least, the last time I ran the CATCACHE_STATS code
on the regression tests, we didn't get close to that).  We don't have
any comparable limit in the relcache and it doesn't seem to hurt us,
even though a relcache entry is a pretty heavyweight object.

If we didn't try to enforce a limit on catcache size, we could get rid
of the catcache LRU lists entirely, which'd make for a nice savings in
lookup overhead (the MoveToFront operations in catcache.c are a
nontrivial part of SearchSysCache according to profiling I've done,
so getting rid of one of the two would be nice).
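
For anyone who hasn't read that code: the per-hit cost in question is
the pointer shuffling needed to keep a recently-used list ordered on
every cache lookup, so that a victim can be found quickly when the
size limit is reached.  A minimal sketch of the pattern (illustrative
only, simplified names, not the actual catcache.c code):

    /* Simplified illustration -- not the real catcache code. */
    #include <stddef.h>

    typedef struct CacheEntry
    {
        struct CacheEntry *lru_prev;    /* global LRU list links */
        struct CacheEntry *lru_next;
        int                key;
        /* ... cached catalog tuple would live here ... */
    } CacheEntry;

    static CacheEntry *lru_head = NULL; /* most recently used entry */

    /*
     * Paid on every cache hit: unlink the entry and relink it at the
     * head of the LRU list.  With no size limit there is nothing to
     * evict, so this bookkeeping can be dropped from the hot path.
     */
    static void
    move_to_front(CacheEntry *ct)
    {
        if (ct == lru_head)
            return;                     /* already most recently used */

        /* unlink from current position */
        if (ct->lru_prev)
            ct->lru_prev->lru_next = ct->lru_next;
        if (ct->lru_next)
            ct->lru_next->lru_prev = ct->lru_prev;

        /* relink at the head of the list */
        ct->lru_prev = NULL;
        ct->lru_next = lru_head;
        if (lru_head)
            lru_head->lru_prev = ct;
        lru_head = ct;
    }

The real code does the analogous work inside SearchSysCache on every
hit, which is why it shows up in profiles.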

                        regards, tom lane
