Robert Lichtenberger wrote:
> I am trying to fully understand how costs for queries are computed.
> Taking the following example:
>
> CREATE TABLE test (name varchar(250) primary key) ;
> INSERT INTO test (name) VALUES(generate_series(1, 1000)::text) ;
> ANALYZE test ;
> EXPLAIN SELECT * FROM test WHERE name = '4' ;
Robert Lichtenberger writes:
> I am trying to fully understand how costs for queries are computed.
> Taking the following example: ...
> Index Scan using test_pkey on test (cost=0.00..8.27 rows=1 width=3)
> Index Cond: ((name)::text = '4'::text)
> The value I want to understand is 8.27. Fro
I am trying to fully understand how costs for queries are computed.
Taking the following example:
CREATE TABLE test (name varchar(250) primary key) ;
INSERT INTO test (name) VALUES(generate_series(1, 1000)::text) ;
ANALYZE test ;
EXPLAIN SELECT * FROM test WHERE name = '4' ;
I am getting the output:
Index Scan using test_pkey on test (cost=0.00..8.27 rows=1 width=3)
  Index Cond: ((name)::text = '4'::text)
The value I want to understand is 8.27.
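Roughly speaking, and only as a back-of-the-envelope approximation rather than the planner's exact formula, the 8.27 can be accounted for from the default cost settings:

SHOW random_page_cost;       -- 4.0 by default
SHOW cpu_tuple_cost;         -- 0.01
SHOW cpu_index_tuple_cost;   -- 0.005
SHOW cpu_operator_cost;      -- 0.0025
-- One index page fetch plus one heap page fetch, each charged at
-- random_page_cost, account for 2 * 4.0 = 8.00 of the estimate; the
-- remaining ~0.27 comes from the per-tuple CPU charges and the smaller
-- terms the index cost estimator adds on top of the page fetches.

The exact figure depends on these settings and on the statistics gathered by ANALYZE, so it can vary slightly between installations.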
On Wed, May 10, 2006 at 11:00:14PM +1000, chris smith wrote:
> On 5/10/06, Alban Hertroys <[EMAIL PROTECTED]> wrote:
> >Sim Zacks wrote:
> >> Something such as: with this explain data, adding an index on table tbl
> >> column A would drastically improve the efficiency. Or at least an
> >> applicati
Thanks for the script.
It does a great job of finding exactly which path is taking the most
time. Now for the hard part: why is that part taking the longest time?
Richard Huxton wrote:
I've got a short perl script that I throw explain output into. It's not
brilliant - can give false positives,
Now you're talking about data warehouse design and not optimizing
queries, though they are obviously interrelated. A human looking at the
explain data would not be able to determine that it would be better to
have a summary table either.
However, first you would want to optimize your queries a
Alban Hertroys wrote:
Sim Zacks wrote:
Something such as: with this explain data, adding an index on table
tbl column A would drastically improve the efficiency. Or at least an
application that would say, the least efficient part of your query is
on this part of the code so that you could more
Sim Zacks wrote:
I disagree with you that a human brain would be better than a machine
for optimizing purposes. If the system is programmed to optimize
correctly, then it will know when to stick data into a temp table and update
columns instead of doing a select because x number of joins are too muc
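The temp-table approach described here amounts to materializing an intermediate result once, indexing it, and joining against it rather than repeating the work inside a many-way join. A minimal sketch of the pattern (table and column names are made up for illustration, not taken from the thread):

CREATE TEMP TABLE order_summary AS
    SELECT customer_id, sum(amount) AS total
    FROM   orders
    GROUP  BY customer_id;

CREATE INDEX order_summary_cust_idx ON order_summary (customer_id);
ANALYZE order_summary;

-- Join against the much smaller summary instead of repeating the
-- aggregate inside the large join:
SELECT c.name, s.total
FROM   customers c
JOIN   order_summary s ON s.customer_id = c.customer_id;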
I agree with you that an index isn't always the answer, that was more of
an example. I was thinking more along the lines of an intelligent part
of the database that has access to the statistics and would be able to
spit out recommendations for the query.
Such as, I type in a monster query and
On 5/10/06, Alban Hertroys <[EMAIL PROTECTED]> wrote:
Sim Zacks wrote:
> Something such as: with this explain data, adding an index on table tbl
> column A would drastically improve the efficiency. Or at least an
> application that would say, the least efficient part of your query is on
> this pa
Sim Zacks wrote:
Something such as: with this explain data, adding an index on table tbl
column A would drastically improve the efficiency. Or at least an
application that would say, the least efficient part of your query is on
this part of the code so that you could more easily figure out what
I am looking at the explain data for my query and it is mostly
understandable, thanks to an excellent article by Jim Nasby,
http://www.pervasive-postgres.com/instantkb13/article.aspx?id=10120
It is very time consuming and confusing walking through the explain. Is
there a possibility (or does it ev
In article <[EMAIL PROTECTED]>,
Martijn van Oosterhout writes:
> There's one corner case you need to make sure you handle. In the plan
> that started this thread there's a query node marked (never executed).
> That will affect your regex a bit. In that case you should probably
> return NULLs. (It
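A node marked (never executed) carries no "actual time" figures at all, so whatever does the parsing has to fall back to NULLs for it. A small illustration in plain SQL (a sketch only, not anyone's actual script from this thread):

SELECT line,
       CASE WHEN line LIKE '%(never executed)%' THEN NULL
            ELSE substring(line from 'actual time=([0-9.]+)')::numeric
       END AS actual_first
FROM (SELECT '->  Index Scan using test_pkey on test  (cost=0.00..8.27 rows=1 width=3) (actual time=0.05..0.07 rows=1 loops=1)'::text AS line
      UNION ALL
      SELECT '->  Seq Scan on test  (cost=0.00..15.00 rows=1000 width=3) (never executed)') AS t;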
On Fri, Feb 11, 2005 at 03:50:05PM +0100, Harald Fuchs wrote:
> In article <[EMAIL PROTECTED]>,
> Martijn van Oosterhout writes:
> > At the moment people are talking about parsing strings to get the
> > output. That output has the same issues as what's being proposed here,
> > we're just saving th
In article <[EMAIL PROTECTED]>,
Martijn van Oosterhout writes:
> To be honest, I'm not sure this is a real problem. You could simply label
> the first columns a rownumber and a depth number.
[See below ]
> At the moment people are talking about parsing strings to get the
> output. That output has
Tom Lane <[EMAIL PROTECTED]> writes:
> Well, the point is that there are potentially three types of statements
> involved:
>
> 1. SELECTs
> 2. Utility statements that can return tuples (EXPLAIN, SHOW, etc)
> 3. Utility statements that can't return tuples (ALTER, etc)
I'm not s
On Thu, Feb 10, 2005 at 10:30:26PM -0500, Tom Lane wrote:
> Well, the point is that there are potentially three types of statements
> involved:
>
> 1. SELECTs
> 2. Utility statements that can return tuples (EXPLAIN, SHOW, etc)
> 3. Utility statements that can't return tuples (ALT
On Thu, Feb 10, 2005 at 07:34:07PM -0500, Tom Lane wrote:
> Martijn van Oosterhout writes:
> > I've been wondering about that. A while ago the change was made from
> > outputting a NOTICE with the EXPLAIN output to returning a resultset.
> > If you could agree on what columns to return it might no
Tom Lane <[EMAIL PROTECTED]> writes:
> Martijn van Oosterhout writes:
> > I've been wondering about that. A while ago the change was made from
> > outputting a NOTICE with the EXPLAIN output to returning a resultset.
> > If you could agree on what columns to return it might not be so hard
> > fo
David Fetter <[EMAIL PROTECTED]> writes:
> On Thu, Feb 10, 2005 at 09:51:18PM -0500, Tom Lane wrote:
>> ... it loses the distinction between utility statements that
>> can return tuples and those that can't.
> Where is this distinction in SPI?
Well, the point is that there are potentially three t
David Fetter <[EMAIL PROTECTED]> writes:
> On Thu, Feb 10, 2005 at 08:11:42PM -0500, Tom Lane wrote:
>> Seems we have three possibilities to fix this:
>>
>> 1. Alter SPI_execute to say SPI_OK_SELECT after executing a utility
>> statement that returns tuples.
> This doesn't sound good.
It does se
On Thu, Feb 10, 2005 at 08:11:42PM -0500, Tom Lane wrote:
> Michael Fuhr <[EMAIL PROTECTED]> writes:
> > Do PL/Perl and the other PLs require any changes to make this work?
> > I tried $rv = spi_exec_query("EXPLAIN $query") but $rv contained
> > only the following:
>
> > $rv->{processed} = 0
> > $
Michael Fuhr <[EMAIL PROTECTED]> writes:
> Do PL/Perl and the other PLs require any changes to make this work?
> I tried $rv = spi_exec_query("EXPLAIN $query") but $rv contained
> only the following:
> $rv->{processed} = 0
> $rv->{status} = SPI_OK_UTILITY
Looking around, it seems that the PLs (an
On Thursday February 10 2005 5:01, David Fetter wrote:
> On Thu, Feb 10, 2005 at 05:19:41PM -0500, Tom Lane wrote:
> > David Fetter <[EMAIL PROTECTED]> writes:
> > > I'd be delighted to, but I'm not sure how to see to it
> > > that EXPLAIN gets the CMD_SELECT flag. What all files
> > > need to cha
Martijn van Oosterhout writes:
> I've been wondering about that. A while ago the change was made from
> outputting a NOTICE with the EXPLAIN output to returning a resultset.
> If you could agree on what columns to return it might not be so hard
> for the EXPLAIN to return full tuples...
The major
On Thu, Feb 10, 2005 at 05:19:41PM -0500, Tom Lane wrote:
> David Fetter <[EMAIL PROTECTED]> writes:
> > I'd be delighted to, but I'm not sure how to see to it that EXPLAIN
> > gets the CMD_SELECT flag. What all files need to change to effect
> > this? Or have I drastically misunderstood what's i
On Fri, Feb 11, 2005 at 12:06:45AM +0100, Martijn van Oosterhout wrote:
> On Thu, Feb 10, 2005 at 11:38:47PM +0100, Harald Fuchs wrote:
> > But I think it should be the other way round: EXPLAIN ANALYZE should
> > return its output in table form, and there should be a function to
> > convert that t
On Thu, Feb 10, 2005 at 11:38:47PM +0100, Harald Fuchs wrote:
> But I think it should be the other way round: EXPLAIN ANALYZE should
> return its output in table form, and there should be a function to
> convert that table into what EXPLAIN ANALYZE outputs now.
I've been wondering about that. A w
On Thu, Feb 10, 2005 at 03:48:48PM -0700, Michael Fuhr wrote:
>
> Below is a simple proof of concept for an explain() function
> in PL/pgSQL.
Do PL/Perl and the other PLs require any changes to make this work?
I tried $rv = spi_exec_query("EXPLAIN $query") but $rv contained
only the following:
$
On Thu, Feb 10, 2005 at 03:38:05PM -0500, Tom Lane wrote:
> Michael Fuhr <[EMAIL PROTECTED]> writes:
>
> > Any suggestions for the meantime?
>
> Update to CVS tip ;-)
Done. Below is a simple proof of concept for an explain() function
in PL/pgSQL. It's not necessarily correct -- it just shows w
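A minimal sketch of such an explain() function (not necessarily Michael's original) on a server whose SPI hands back EXPLAIN's tuples might be:

CREATE OR REPLACE FUNCTION explain(query text) RETURNS SETOF text AS $$
DECLARE
    rec record;
BEGIN
    -- EXPLAIN returns rows with a single column named "QUERY PLAN"
    FOR rec IN EXECUTE 'EXPLAIN ' || query LOOP
        RETURN NEXT rec."QUERY PLAN";
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- SELECT * FROM explain('SELECT * FROM test WHERE name = ''4''');

On servers from before the SPI change discussed in this thread, EXPLAIN run through SPI produced no result rows at all (just SPI_OK_UTILITY), which is exactly the limitation being worked on here.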
In article <[EMAIL PROTECTED]>,
Tom Lane <[EMAIL PROTECTED]> writes:
> Michael Fuhr <[EMAIL PROTECTED]> writes:
>> On Wed, Feb 09, 2005 at 11:36:22PM -0500, Tom Lane wrote:
>>> (And no, EXECUTE doesn't help.) This seems like an oversight. We
>>> already have some understanding in the backend tha
David Fetter <[EMAIL PROTECTED]> writes:
> I'd be delighted to, but I'm not sure how to see to it that EXPLAIN
> gets the CMD_SELECT flag. What all files need to change to effect
> this? Or have I drastically misunderstood what's involved?
It doesn't. See
http://archives.postgresql.org/pgsql-c
On Thu, Feb 10, 2005 at 03:38:05PM -0500, Tom Lane wrote:
> Michael Fuhr <[EMAIL PROTECTED]> writes:
> > On Wed, Feb 09, 2005 at 11:36:22PM -0500, Tom Lane wrote:
> >> (And no, EXECUTE doesn't help.) This seems like an oversight.
> >> We already have some understanding in the backend that certain
Michael Fuhr <[EMAIL PROTECTED]> writes:
> On Wed, Feb 09, 2005 at 11:36:22PM -0500, Tom Lane wrote:
>> (And no, EXECUTE doesn't help.) This seems like an oversight. We
>> already have some understanding in the backend that certain utility
>> commands return query results; the SPI code should be
On Thu, Feb 10, 2005 at 08:04:52AM -0700, Ed L. wrote:
> On Wednesday February 9 2005 10:10, Michael Fuhr wrote:
> >
> > Any suggestions for the meantime?
>
> Well, this would be a bit unsightly and not exactly elegant, but
> I'd think you could do this via DBI from within plperl or maybe
> plpe
On Wednesday February 9 2005 10:10, Michael Fuhr wrote:
>
> > (And no, EXECUTE doesn't help.) This seems like an
> > oversight. We already have some understanding in the
> > backend that certain utility commands return query results;
> > the SPI code should be letting those results be scanned as
On Wed, Feb 09, 2005 at 11:36:22PM -0500, Tom Lane wrote:
> Michael Fuhr <[EMAIL PROTECTED]> writes:
> > I started looking at this earlier. Is it even possible to get
> > EXPLAIN output via SPI (and thus the PLs' interfaces to SPI)?
>
> Hmm ... my first reaction was "sure", but:
[snip]
> ERROR:
Michael Fuhr <[EMAIL PROTECTED]> writes:
> I started looking at this earlier. Is it even possible to get
> EXPLAIN output via SPI (and thus the PLs' interfaces to SPI)?
Hmm ... my first reaction was "sure", but:
regression=# create or replace function foo() returns setof text as $$
regression$#
"Ed L." <[EMAIL PROTECTED]> writes:
> and I don't know what else.
How about being able to search for queries where expected rows and actual rows
are far apart.
--
greg
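If a pg_explain_analyze() like the one being discussed existed and exposed per-node row estimates alongside the actual counts, that search might look something like this (the function and the est_rows/actual_rows columns are hypothetical):

SELECT op, relation, est_rows, actual_rows
FROM   pg_explain_analyze('SELECT * FROM foo')
WHERE  actual_rows > 10 * est_rows
   OR  est_rows    > 10 * actual_rows;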
On Wed, Feb 09, 2005 at 06:31:11PM -0800, David Fetter wrote:
> On Wed, Feb 09, 2005 at 04:34:30PM -0700, Ed L. wrote:
> >
> > SELECT sql, op, index, relation, actual_first
> > FROM pg_explain_analyze('SELECT * FROM foo')
> > WHERE op = 'Index Scan'
> > AND actual_first > 1.0;
> >
On Wednesday February 9 2005 8:07, Ed L. wrote:
> >
> > Well, I'm a little bored; I've got tomorrow off, and this
> > seems like it might be doable in the kind of high-level
> > PL/Foo's with which I'm familiar. What would the returning
> > rowtype for
> > pg_explain_analyze(TEXT) be?
>
> LOL. If
On Wednesday February 9 2005 7:31, David Fetter wrote:
> On Wed, Feb 09, 2005 at 04:34:30PM -0700, Ed L. wrote:
> > Thinking about how to make this analysis faster and less
> > labor- intensive ...
> >
> >SELECT node_id, op, parent_node_id, index, relation,
> > cost_first, cost_last,
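Assembled from the columns being floated in this thread, the returning rowtype for pg_explain_analyze(TEXT) might be sketched as a composite type along these lines (purely hypothetical; the extra columns and all of the types are guesses):

CREATE TYPE explain_analyze_row AS (
    node_id         integer,
    parent_node_id  integer,
    op              text,
    index           text,
    relation        text,
    cost_first      numeric,
    cost_last       numeric,
    actual_first    numeric,
    actual_last     numeric,
    est_rows        bigint,
    actual_rows     bigint,
    loops           bigint
);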
David Fetter wrote:
On Wed, Feb 09, 2005 at 04:34:30PM -0700, Ed L. wrote:
Thinking about how to make this analysis faster and less labor-
intensive ...
I know of no other way to get the detailed performance data provided
via EXPLAIN ANALYZE without just painfully disassembling a query.
It se
On Wed, Feb 09, 2005 at 04:34:30PM -0700, Ed L. wrote:
> Thinking about how to make this analysis faster and less labor-
> intensive ...
>
> I know of no other way to get the detailed performance data provided
> via EXPLAIN ANALYZE without just painfully disassembling a query.
> It seems it wou
Thinking about how to make this analysis faster and less labor-
intensive ...
I know of no other way to get the detailed performance data provided
via EXPLAIN ANALYZE without just painfully disassembling a query.
It seems it would be pretty useful w/r/t performance monitoring to
be able to ret
On Wed, Feb 09, 2005 at 03:38:32PM -0700, Ed L. wrote:
> On Wednesday February 9 2005 3:13, Martijn van Oosterhout wrote:
> >
> > In general, the EXPLAIN ANALYZE output follows the planner
> > output as close as possible. If you look at the original query
> > posted, it showed an Index Scan costing
On Wednesday February 9 2005 3:13, Martijn van Oosterhout wrote:
>
> In general, the EXPLAIN ANALYZE output follows the planner
> output as close as possible. If you look at the original query
> posted, it showed an Index Scan costing 4.63..4.63 which means
> the index scan is taking (on average) 4
On Wed, Feb 09, 2005 at 02:37:39PM -0700, Ed L. wrote:
> Very helpful, thanks. So time spent in that node & its children
> = first number of "actual time" * loops? That seems consistent
> with the fact that reindexing the index led to the huge speedup.
> If the second number of the "actual ti
"Ed L." <[EMAIL PROTECTED]> writes:
> If the second number of the "actual time" part means time
> elapsed in this node and its children until the last row was
> returned, why does it say "actual time=4.63..4.63" instead of
> "actual time=4.63..4767.62"?
The reason we do that is to make the "act
On Wednesday February 9 2005 2:21, Tom Lane wrote:
> "Ed L." <[EMAIL PROTECTED]> writes:
> >
> > 21 -> Nested Loop (cost=0.00..108.85 rows=1 width=1196) (actual time=4769.59..4769.59 rows=0 loops=1)
> > 22   -> Nested Loop (cost=0.00..64.78 rows=4 width=131) (actual time=0.41..7
"Ed L." <[EMAIL PROTECTED]> writes:
> I'm trying to optimize a large query by looking at
> EXPLAIN ANALYZE output. Here's what I think may be
> the most relevant snippet:
> 21 -> Nested Loop (cost=0.00..108.85 rows=1 width=1196) (actual time=4769.59..4769.59 rows=0 loops=1)
> 22
On Wed, Feb 09, 2005 at 01:11:36PM -0700, Ed L. wrote:
> On Wednesday February 9 2005 12:56, Alvaro Herrera wrote:
> > On Wed, Feb 09, 2005 at 12:54:27PM -0700, Ed L. wrote:
> > > sped up the query to sub-second. This is a 7.3.4 cluster.
> > > I wonder if this that 7.3 index bloat bug?
> >
> > Ha
On Wednesday February 9 2005 12:56, Alvaro Herrera wrote:
> On Wed, Feb 09, 2005 at 12:54:27PM -0700, Ed L. wrote:
> > sped up the query to sub-second. This is a 7.3.4 cluster.
> > I wonder if this that 7.3 index bloat bug?
>
> Hard to say, because you didn't provide more info (such as if
> the i
On Wed, Feb 09, 2005 at 12:54:27PM -0700, Ed L. wrote:
> On Wednesday February 9 2005 12:04, you wrote:
> > 4796.0 ms are the time it took for 34's index scan to complete
> > the 1014 loops, I'd think.
>
> Interesting. Reindexing the index
>
> idx_queryoutcome_occurrencehistory_key
>
>
On Wed, Feb 09, 2005 at 11:00:00AM -0700, Ed L. wrote:
> 34 -> Index Scan using idx_queryoutcome_occurrencehistory_key on queryoutcome (cost=0.00..10.28 rows=28 width=1065) (actual time=4.63..4.63 rows=0 loops=1014)
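That single line accounts for nearly all of the time reported further up the plan: roughly 4.63 ms per loop over 1014 loops.

-- Back-of-the-envelope check: per-loop time multiplied by the loop count
SELECT 4.63 * 1014 AS approx_total_ms;   -- about 4695 ms, in the same
                                         -- ballpark as the 4769.59 ms
                                         -- reported at node 21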
I'm trying to optimize a large query by looking at
EXPLAIN ANALYZE output. Here's what I think may be
the most relevant snippet:
21 -> Nested Loop (cost=0.00..108.85 rows=1 width=1196) (actual time=4769.59..4769.59 rows=0 loops=1)
22   -> Nested Loop (cost=0.00..64.78 rows=4 wi
Hi,
Trying to understand the planner estimate costs ... one index scan
seems to be much more expensive than another. Here are the facts:
tiger=# \d bk_inv
      Table "bk_inv"
 Attribute |    Type    | Modifier
-----------+------------+----------
 store     | varchar(5) |