Large tables, by themselves, are not necessarily a problem. The problem is what
you might be trying to do with them. Depending on the operations you are trying
to do, partitioning the table might help performance or make it worse.
What kind of queries are you running? How many days of history are you keeping?
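If partitioning does fit, on 8.3 it means the inheritance-plus-CHECK-constraint approach. A minimal sketch, with hypothetical table and column names:

    -- Parent table; partitions inherit from it.
    CREATE TABLE sales (
        sale_date  date    NOT NULL,
        product_id integer NOT NULL,
        amount     numeric NOT NULL
    );

    -- One child table per month, constrained to its date range.
    CREATE TABLE sales_2010_01 (
        CHECK (sale_date >= DATE '2010-01-01'
           AND sale_date <  DATE '2010-02-01')
    ) INHERITS (sales);

    -- With constraint_exclusion on, the planner skips partitions whose
    -- CHECK constraint cannot match the query's WHERE clause.
    SET constraint_exclusion = on;
    SELECT sum(amount)
      FROM sales
     WHERE sale_date >= DATE '2010-01-15'
       AND sale_date <  DATE '2010-01-20';

Range queries that land in a few partitions get cheaper; queries that must visit every partition can easily get slower, which is why the access pattern matters more than the table size.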
If anything were built into the database to handle such connections, I'd want it
shipped with a big, bold warning recommending the use of client-side pooling
where available. For something like, say, a web server, pooling connections to
the database provides a massive performance advantage regardless of how good
the database's own connection handling is.
If your app is running under Tomcat, connection pooling is extremely easy to
set up from there: Tomcat has connection pooling mechanisms built in. Request
your db connections through those mechanisms instead of opening them manually,
make a couple of changes to server.xml, and the problem goes away.
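As a sketch of what those server.xml changes look like (resource name, credentials, and pool limits here are placeholders, not recommendations):

    <!-- Pooled DataSource declared in server.xml (or context.xml). -->
    <Resource name="jdbc/salesdb" auth="Container"
              type="javax.sql.DataSource"
              driverClassName="org.postgresql.Driver"
              url="jdbc:postgresql://localhost:5432/salesdb"
              username="app" password="secret"
              maxActive="20" maxIdle="5" maxWait="10000"/>

The application then fetches connections from this pool through a JNDI lookup of java:comp/env/jdbc/salesdb rather than opening them itself.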
Have you read this?
http://blog.endpoint.com/2008/12/why-is-my-function-slow.html
99% of the 'function is slow' problems are caused by this.
Have you checked the difference between explain and prepare + explain execute?
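The difference matters because a prepared statement is planned for an unknown parameter, while a literal query is planned for the actual value, and on 8.3 a plpgsql function behaves like the former. A quick way to compare the two, using a hypothetical table and parameter:

    -- Plan for a literal value:
    EXPLAIN ANALYZE
    SELECT * FROM article_words WHERE word_id = 42;

    -- Plan for a parameterized query, as a function body would see it:
    PREPARE q(integer) AS
    SELECT * FROM article_words WHERE word_id = $1;
    EXPLAIN ANALYZE EXECUTE q(42);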
>>> Tyler Hildebrandt 05/25/10 4:59 AM >>>
We're using a function that
First, are you sure you are getting autovacuum to run hourly? Autovacuum will
only vacuum when certain configuration thresholds are reached. You can set it
to only check for those thresholds every so often, but no vacuuming or
analyzing will be done unless they are hit, regardless of how often autovacuum
wakes up to check.
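Roughly, a table is only vacuumed once its dead-row count exceeds autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples; autovacuum_naptime only controls how often that test is made. The current values are easy to check:

    SHOW autovacuum_naptime;             -- how often tables are checked
    SHOW autovacuum_vacuum_threshold;    -- base row count (default 50)
    SHOW autovacuum_vacuum_scale_factor; -- fraction of table size (default 0.2)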
>>> Tory M Blue 02/26/10 12:52 PM >>>
>>
>> This is too much. Since you have 300 connections, you will probably swap
>> because of this setting, since each connection may use this much
>> work_mem. The rule of thumb is to set this to a lower general value
>> (say, 1-2 MB), and set it per-query where a particular query needs more.
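In practice that means leaving the small value in postgresql.conf and raising work_mem only around the statements that need the headroom, e.g. (the query is illustrative):

    -- postgresql.conf keeps the conservative default: work_mem = 2MB
    SET work_mem = '256MB';
    SELECT product_id, sum(amount)   -- the one big sort/hash aggregate
      FROM sales
     GROUP BY product_id;
    RESET work_mem;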
That sure looks like the source of the problem to me too. I've seen similar
behavior in queries not very different from that. It's hard to guess what the
problem is exactly without having more knowledge of the data distribution in
article_words though.
Given the results of analyze, I'd try to look more closely at the statistics
gathered for article_words.
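One place to see that distribution (a sketch; which columns matter depends on the query) is what ANALYZE recorded in pg_stats:

    -- What the planner believes about article_words' data distribution:
    SELECT attname, null_frac, n_distinct,
           most_common_vals, most_common_freqs
      FROM pg_stats
     WHERE tablename = 'article_words';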
I'm having some performance problems in a few sales reports running on
PostgreSQL 8.3 on Red Hat 4.1.2. The hardware is a bit old, but it performs
well enough. The reports are the typical sales reporting fare: gather the sales
for a time period based on some criteria and aggregate them by product.
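For reference, such reports typically boil down to queries of this shape (all names illustrative):

    -- Typical report: sales for a period, aggregated by product.
    SELECT product_id, sum(amount) AS total_sales
      FROM sales
     WHERE sale_date >= DATE '2010-01-01'
       AND sale_date <  DATE '2010-04-01'
     GROUP BY product_id
     ORDER BY total_sales DESC;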