On 5/16/14, 8:15 AM, Hans-Jürgen Schönig wrote:

On 20 Feb 2014, at 01:38, Tom Lane <t...@sss.pgh.pa.us> wrote:
I am really dubious that letting DBAs manage buffers is going to be
an improvement over automatic management.

the reason for a feature like that is to define an area of the application 
which needs more predictable runtime behaviour.
not all tables are created equal in terms of importance.

example: user authentication should always be supersonic fast while some 
reporting tables might gladly be forgotten even if they happened to be in use 
recently.

i am not saying that we should have this feature.
however, there are definitely use cases which would justify some more control 
here.
otherwise people will fall back on dirty tricks such as "SELECT count(*)" 
or so to emulate what we are discussing here.
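
For illustration, the "dirty trick" mentioned above can be sketched in SQL. The table name is hypothetical, and the cleaner alternative assumes the pg_prewarm contrib extension is installed:

```sql
-- Crude trick: a full sequential scan pulls the table's pages through the
-- cache, making subsequent lookups on it faster. "app_users" is a
-- hypothetical table standing in for the authentication table above.
SELECT count(*) FROM app_users;

-- A more explicit alternative where the pg_prewarm extension is available:
-- load the relation's main fork into shared buffers directly.
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('app_users');
```

Note that neither approach pins anything: the pages loaded this way still compete with all other buffers under the normal eviction policy, which is exactly why it only emulates, rather than provides, per-table cache control.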

Which is really just an instance of a larger problem: many applications do not 
care one iota about ideal performance; they care about *always* having some 
minimum level of performance. This frequently comes up with the issue of a 
query plan that is marginally faster 99% of the time but sucks horribly for the 
remaining 1%. Frequently it's far better to choose a less optimal plan that 
doesn't have a degenerate case.
--
Jim C. Nasby, Data Architect                       j...@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net

