On 10/9/07, Campbell, Lance <[EMAIL PROTECTED]> wrote:
> I have noticed a slight spike in the amount of CPU usage in the last few
> weeks. I am sure it has to do with a change or two that was made to some
> queries. What is the best way to log the SQL that is being executed? I
> would prefer to l
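For the question above, the usual approach is the statement-logging settings in postgresql.conf (parameter availability and syntax vary somewhat across versions; this is a sketch, not a drop-in config):

```
# postgresql.conf -- log any statement that runs longer than 250 ms,
# together with its duration; set to 0 to log every statement,
# or -1 to disable
log_min_duration_statement = 250
```

After a `pg_ctl reload`, qualifying statements appear in the server log with their execution times, which makes it straightforward to spot the queries responsible for a CPU spike.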
>>> On Wed, Oct 10, 2007 at 1:31 AM, in message
<[EMAIL PROTECTED]>, Simon Riggs <[EMAIL PROTECTED]>
wrote:
> On Tue, 2007-10-09 at 15:09 -0500, Kevin Grittner wrote:
>
>> I have a situation where a query is running much slower than I would
>> expect. The ANALYZE showed that it is hashing some
Hi
I have been having some serious performance issues when using prepared
statements which I cannot reproduce when using a direct statement. Let
me try to explain.
The query does an order by in descending order on several columns for
which an index exists.
The explain output as follows
rascal
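For reference, the general pattern being described (all table and column names here are hypothetical) is a multi-column index that the planner can read backwards to satisfy a descending sort without an explicit sort step:

```sql
-- hypothetical schema
CREATE INDEX invoice_idx ON invoice (account_id, invoice_date, invoice_no);

-- the planner can satisfy this ORDER BY with a backward scan
-- of invoice_idx, avoiding a separate Sort node
SELECT *
FROM invoice
ORDER BY account_id DESC, invoice_date DESC, invoice_no DESC
LIMIT 50;
```

Whether the backward index scan is actually chosen depends on the plan, which is where the prepared-statement problem below comes in.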
On 10/10/07, Tomáš Vondra <[EMAIL PROTECTED]> wrote:
> Actually we wrote something similar, as pgFouine was not as nice as it
> is today at that time (2005) - you can find that tool on
> http://opensource.pearshealthcyber.cz/. Actually I'm working on a
> complete rewrite of that tool into Java (new featu
Theo Kramer wrote:
> Hi
> I have been having some serious performance issues when using prepared
> statements which I cannot reproduce when using a direct statement. Let
> me try to explain.
> The query does an order by in descending order on several columns for
> which an index exists.
> The explain
On 10/9/07, Radhika S <[EMAIL PROTECTED]> wrote:
> Hi,
> Along with my previous message (slow postgres), I notice the shared buffer
> setting for our production database is set to 1000.
> How much higher can I go? I don't know how much my kernel can take?
>
> I am running postgres 7.4.6 on Redhat
On Wed, 2007-10-10 at 09:15 -0500, Kevin Grittner wrote:
> >>> On Wed, Oct 10, 2007 at 1:31 AM, in message
> <[EMAIL PROTECTED]>, Simon Riggs <[EMAIL PROTECTED]>
> wrote:
> > On Tue, 2007-10-09 at 15:09 -0500, Kevin Grittner wrote:
> >
> >> I have a situation where a query is running much slower
On Wed, 10 Oct 2007 12:41:49 +0200
Tomáš Vondra <[EMAIL PROTECTED]> wrote:
> Actually we wrote something similar, as pgFouine was not as nice as it
> is today at that time (2005) - you can find that tool on
> http://opensource.pearshealthcyber.cz/. Actually I'm working on a
> complete rewrite of th
Simon Riggs <[EMAIL PROTECTED]> writes:
> Basically the planner doesn't ever optimise for the possibility of the
> never-executed case because even a single row returned would destroy
> that assumption.
It's worse than that: the outer subplan *does* return some rows.
I suppose that all of them ha
>>> On Wed, Oct 10, 2007 at 1:07 PM, in message <[EMAIL PROTECTED]>,
Tom Lane <[EMAIL PROTECTED]> wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
>> Basically the planner doesn't ever optimise for the possibility of the
>> never-executed case because even a single row returned would destroy
>> t
On Wed, 2007-10-10 at 13:30 -0500, Kevin Grittner wrote:
> >>> On Wed, Oct 10, 2007 at 1:07 PM, in message <[EMAIL PROTECTED]>,
> Tom Lane <[EMAIL PROTECTED]> wrote:
> > Simon Riggs <[EMAIL PROTECTED]> writes:
> >> Basically the planner doesn't ever optimise for the possibility of the
> >> never-
On Wed, 2007-10-10 at 14:07 -0400, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > Basically the planner doesn't ever optimise for the possibility of the
> > never-executed case because even a single row returned would destroy
> > that assumption.
>
> It's worse than that: the outer
On Wed, 2007-10-10 at 17:00 +0200, Cédric Villemain wrote:
>
> Reading the manual, you can learn that a prepared statement may not
> follow the same plan as a direct query:
> the plan is made before pg knows the value of the variable.
>
> See 'Notes' http://www.postgresql.org/docs/8.2/interactiv
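The effect described above is easy to reproduce from psql (table and column names hypothetical): the prepared plan is built once, without knowledge of the parameter value, so it cannot exploit column statistics for a specific literal:

```sql
-- direct query: the planner sees the literal and can use
-- column statistics to pick a selective plan
EXPLAIN SELECT * FROM orders WHERE status = 'shipped';

-- prepared statement: the plan is built for an unknown $1,
-- so skewed values get the same one-size-fits-all plan
PREPARE q(text) AS SELECT * FROM orders WHERE status = $1;
EXPLAIN EXECUTE q('shipped');
```

Comparing the two EXPLAIN outputs usually shows exactly why the prepared form is slower on skewed data distributions.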
>>> On Wed, Oct 10, 2007 at 1:54 PM, in message
<[EMAIL PROTECTED]>, Simon Riggs <[EMAIL PROTECTED]>
wrote:
>
> But the planner doesn't work on probability. It works on a best-guess
> selectivity, as known at planning time.
The point I'm trying to make is that at planning time the
pg_statistic
On Wed, 2007-10-10 at 14:35 -0500, Kevin Grittner wrote:
> >>> On Wed, Oct 10, 2007 at 1:54 PM, in message
> <[EMAIL PROTECTED]>, Simon Riggs <[EMAIL PROTECTED]>
> wrote:
> >
> > But the planner doesn't work on probability. It works on a best-guess
> > selectivity, as known at planning time.
>
On 10/10/07, Theo Kramer <[EMAIL PROTECTED]> wrote:
> When running the query directly from psql it returns the required rows
> in less than 100 milliseconds.
>
> However, when using a prepared statement from my C application on the
> above query and executing it the query duration is as follows
>
On Wed, 2007-10-10 at 15:55 -0400, Jonah H. Harris wrote:
> On 10/10/07, Theo Kramer <[EMAIL PROTECTED]> wrote:
> > When running the query directly from psql it returns the required rows
> > in less than 100 milliseconds.
> >
> > However, when using a prepared statement from my C application on th
>>> On Wed, Oct 10, 2007 at 2:52 PM, in message
<[EMAIL PROTECTED]>, Simon Riggs <[EMAIL PROTECTED]>
wrote:
>
> The fast plan is an all-or-nothing plan. It is *only* faster when the
> number of matched rows is zero. You know it is zero, but currently the
> planner doesn't, nor is it able to make
On Wed, 10 Oct 2007 10:20:02 -0500
"Scott Marlowe" <[EMAIL PROTECTED]> wrote:
> In 7.4, using 25% is often too high a setting for it to handle well,
> and the practical useful maximum is usually under 10,000
> shared_buffers, and often closer to 1,000 to 5,000
Scott - interesting reply. Is this
"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> The point I'm trying to make is that at planning time the
> pg_statistic row for this "Charge"."reopHistSeqNo" column showed
> stanullfrac as 0.989; it doesn't seem to have taken this into account
> when making its guess about how many rows would be jo
>>> On Wed, Oct 10, 2007 at 3:32 PM, in message <[EMAIL PROTECTED]>,
Tom Lane <[EMAIL PROTECTED]> wrote:
>
> I'd be very hesitant
> to make it choose a plan that is fast only if there were exactly zero
> such rows and is slow otherwise.
I'm not sure why it looks at the slow option at all; it s
On 10/10/07, Josh Trutwin <[EMAIL PROTECTED]> wrote:
> On Wed, 10 Oct 2007 10:20:02 -0500
> "Scott Marlowe" <[EMAIL PROTECTED]> wrote:
>
> > In 7.4, using 25% is often too high a setting for it to handle well,
> > and the practical useful maximum is usually under 10,000
> > shared_buffers, and ofte
>>> On Wed, Oct 10, 2007 at 3:48 PM, in message
<[EMAIL PROTECTED]>, "Kevin Grittner"
<[EMAIL PROTECTED]> wrote:
>
> This ten times faster
That understates it -- I forgot to get things cached, as I had done
for all the other tests. When cached, this is sub-millisecond,
although not quite the
"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> I'm not sure why it looks at the slow option at all; it seems like a
> remaining weakness in the OUTER JOIN optimizations.
I think that comes mostly from the fact that you've got non-nullable
targetlist entries in the definition of the CaseTypeHistE
>>> On Wed, Oct 10, 2007 at 3:48 PM, in message
<[EMAIL PROTECTED]>, "Kevin Grittner"
<[EMAIL PROTECTED]> wrote:
> I'm not sure why it looks at the slow option at all; it seems like a
> remaining weakness in the OUTER JOIN optimizations. If I change the query to
> use an inner join between the
Hi List;
I'm preparing to create a test suite of very complex queries that can be
profiled in terms of load and performance. The ultimate goal is to define a
load/performance profile during a run of the old application code base and
then again with changes to the application code base.
I suspe
Thank you, Scott.
We plan on upgrading to Postgres 8.2 very soon.
Would it be safe to say I can make my SHARED BUFFER setting 200MB? (I have
2GB memory.)
The default is 24MB.
Regds,
Radhika
On 10/10/07, Scott Marlowe <[EMAIL PROTECTED]> wrote:
>
> On 10/9/07, Radhika S <[EMAIL PROTECTED]> wrote:
On 10/10/07, Radhika S <[EMAIL PROTECTED]> wrote:
> Thank you, Scott.
>
> We plan on upgrading to Postgres 8.2 very soon.
> Would it be safe to say I can make my SHARED BUFFER setting 200MB? (I have
> 2GB memory.)
> The default is 24MB.
On a dedicated db machine with 2 gigs of RAM, 500MB is fine.
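As a concrete sketch for the setup described (2GB RAM, dedicated box, 8.2 — the exact numbers are a judgment call, not a rule):

```
# postgresql.conf on a dedicated 2GB machine running 8.2
shared_buffers = 256MB          # roughly 10-25% of RAM on 8.x
effective_cache_size = 1GB      # a planner hint about OS cache,
                                # not an actual allocation
```

Note that shared_buffers requires a server restart to change, and on some platforms the kernel's shared memory limit (SHMMAX) must be raised first; effective_cache_size only influences planner cost estimates.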
On Wed, 10 Oct 2007, Kevin Kempter wrote:
> should I be looking at others like pg_statio or pg_stat_all_tables? If
> so, what should I look for in these (or other) tables?
There are a good set of monitoring scripts for performance-oriented things
included with the dbt2 benchmarking package,
htt
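Beyond external scripts, the statistics views asked about above can be queried directly, e.g. to spot tables that are sequentially scanned often or read mostly from disk:

```sql
-- tables with many sequential scans relative to index scans
SELECT relname, seq_scan, idx_scan
FROM pg_stat_all_tables
ORDER BY seq_scan DESC
LIMIT 10;

-- buffer cache effectiveness per table: high heap_blks_read
-- relative to heap_blks_hit suggests poor caching
SELECT relname, heap_blks_read, heap_blks_hit
FROM pg_statio_all_tables
ORDER BY heap_blks_read DESC
LIMIT 10;
```

These counters accumulate since the statistics were last reset, so comparing snapshots taken before and after a workload run is more informative than a single reading.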
On Thu, 2007-10-04 at 08:30 -0400, Alvaro Herrera wrote:
> Henrik wrote:
>
> > Ahh I had exactly 8 joins.
> > Following your suggestion I raised the join_collapse_limit from 8 to 10 and
> > the planners decision sure changed but now I have some crazy nested loops.
> > Maybe I have some statistic