Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > Tom Lane wrote:
> >> I just thought of a more radical idea: do we need a limit on catcache
> >> size at all?  On "normal size" databases I believe that we never hit
> >> 5000 entries at all (at least, last time I ran the CATCACHE_STATS code
> >> on the regression tests, we didn't get close to that).  We don't have
> >> any comparable limit in the relcache and it doesn't seem to hurt us,
> >> even though a relcache entry is a pretty heavyweight object.
>
> > Well, assuming you never access all those tables, you don't use lots of
> > memory, but if you are accessing a lot, it seems memory for all your
> > tables is a minimal overhead.
>
> I re-did the test of running the regression tests with CATCACHE_STATS
> enabled.  The largest catcache population in any test was 1238 tuples,
> and most backends had 500 or less.  I'm not sure whether you'd really
> want to consider the regression database as representative of small
> production databases, but granted that assumption, the current limit of
> 5000 tuples isn't limiting anything on small-to-middling databases.
> (Note we are counting tables and other cataloged objects, *not* volume
> of data stored --- so the regression database could easily be much
> bigger than many production DBs by this measure.)
>
> So I'm pretty strongly inclined to just dike out the limit.  If you're
> running a database big enough to hit the existing limit, you can well
> afford to put more memory into the catcache.
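For anyone following along who hasn't looked at catcache.c lately, the
mechanism being debated is essentially a hard cap on the number of cached
tuples, with the coldest entries thrown out once the cap is reached.  Here
is a toy sketch of that general shape -- invented names throughout
(cache_lookup, MAX_ENTRIES, etc.), not the actual catcache code, which
hashes on catalog keys and stores real HeapTuples:

/*
 * Toy sketch of a lookup cache with a hard cap on entry count, evicting
 * the least recently used entry when the cap is hit.  Illustrative only;
 * not the PostgreSQL catcache implementation.
 */
#include <stdio.h>
#include <stdlib.h>

#define MAX_ENTRIES 5000		/* analogue of the 5000-tuple limit */

typedef struct CacheEntry
{
	unsigned int key;			/* stand-in for a catalog lookup key */
	char		payload[64];	/* stand-in for the cached tuple */
	struct CacheEntry *newer;	/* toward most recently used */
	struct CacheEntry *older;	/* toward least recently used */
} CacheEntry;

static CacheEntry *mru = NULL;	/* most recently used end of LRU list */
static CacheEntry *lru = NULL;	/* least recently used end of LRU list */
static int	nentries = 0;

/* Unlink an entry from the LRU list. */
static void
unlink_entry(CacheEntry *e)
{
	if (e->newer)
		e->newer->older = e->older;
	else
		mru = e->older;
	if (e->older)
		e->older->newer = e->newer;
	else
		lru = e->newer;
	e->newer = e->older = NULL;
}

/* Push an entry onto the MRU end of the list. */
static void
push_mru(CacheEntry *e)
{
	e->newer = NULL;
	e->older = mru;
	if (mru)
		mru->newer = e;
	mru = e;
	if (lru == NULL)
		lru = e;
}

/*
 * Look up a key, faulting it in on a miss.  When the cache is at its cap,
 * the least recently used entry is discarded first -- this is exactly the
 * behavior that goes away if the cap is removed.
 */
static CacheEntry *
cache_lookup(unsigned int key)
{
	CacheEntry *e;

	for (e = mru; e != NULL; e = e->older)
	{
		if (e->key == key)
		{
			unlink_entry(e);	/* move to MRU position */
			push_mru(e);
			return e;
		}
	}

	if (nentries >= MAX_ENTRIES)
	{
		CacheEntry *victim = lru;

		unlink_entry(victim);
		free(victim);
		nentries--;
	}

	e = malloc(sizeof(CacheEntry));
	e->key = key;
	snprintf(e->payload, sizeof(e->payload), "tuple for key %u", key);
	push_mru(e);
	nentries++;
	return e;
}

int
main(void)
{
	/* Touch more keys than the cap allows; older entries get evicted. */
	for (unsigned int k = 0; k < 6000; k++)
		cache_lookup(k);
	printf("entries resident: %d (cap %d)\n", nentries, MAX_ENTRIES);
	return 0;
}

Removing the cap amounts to deleting the eviction branch above and letting
the cache grow with the number of distinct catalog objects touched.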
And if we get problem reports, we can fix it.

--
  Bruce Momjian   http://candle.pha.pa.us
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +