Hi,
Up until parallel execution was introduced, when a node in EXPLAIN ANALYZE showed
"loops=x" with x greater than 1, it meant that the "actual time" had to be
multiplied by loops to get the real time spent in that node.
For example, check step 13 in https://explain.depesz.com/s/gNBd
It shows a time of 3 ms, but loops
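A minimal illustration of that arithmetic (the plan line below is made up, not taken from the plan linked above):

  Index Scan using orders_pkey on orders  (cost=0.43..8.45 rows=1 width=64)
    (actual time=3.000..3.000 rows=1 loops=500)

Here the reported ~3 ms is per loop, so the time actually spent in the node is roughly 3 ms * 500 loops = 1500 ms.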
Rhhh Lin writes:
> So I decided to intervene and terminate some active sessions (which were
> active and not waiting), all of which were querying this table with a similar
> query, by using pg_terminate_backend(pid). Some of these sessions had been in an
> active state for a few days executing this que
That helps, thanks Peter.
Regards,
Ruan
From: pgsql-general-ow...@postgresql.org
on behalf of Peter J. Holzer
Sent: 03 November 2017 08:49
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] EXPLAIN command just hangs...
On 2017-11-02 20:51:23 +
art the server, implement a vacuum
execution and see where I am at then.
Regards,
Ruan
From: Justin Pryzby
Sent: 03 November 2017 21:33
To: Rhhh Lin
Cc: Tom Lane; pgsql-general@postgresql.org
Subject: Re: [GENERAL] EXPLAIN command just hangs...
On Fri, Nov 03,
On Fri, Nov 03, 2017 at 09:12:02PM +, Rhhh Lin wrote:
> I checked for dead tuples against that particular table initially as I have
> seen performance problems before in a related 'busy' environment which needed
> its frequency of vacuuming to be increased. So I have a query to check for
> t
y repercussions...
Regards,
Ruan
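For reference, a dead-tuple check of the kind mentioned above usually looks something like this (the table name is a placeholder; pg_stat_user_tables is the standard statistics view):

  SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
   WHERE relname = 'my_table';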
From: Justin Pryzby
Sent: 02 November 2017 21:49
To: Rhhh Lin
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] EXPLAIN command just hangs...
On Thu, Nov 02, 2017 at 09:13:05PM +, Rhhh Lin wrote:
> Yes, it may be an issue with the index, but I'd
"timestamp", but right now I cannot
check and verify because it just hangs...
Regard,
Ruan
From: Tom Lane
Sent: 02 November 2017 21:40
To: Rhhh Lin
Cc: Justin Pryzby; pgsql-general@postgresql.org
Subject: Re: [GENERAL] EXPLAIN command just hangs.
On 2017-11-02 20:51:23 +, Rhhh Lin wrote:
[...]
> where timestamp BETWEEN 150667656 AND 150875022
[...]
> *Also, as a sidenote - can someone please expand on why one (I was not
> involved
> in the creation of this DB/schema definition) would choose to have the
> definition of the time
On Thu, Nov 02, 2017 at 09:13:05PM +, Rhhh Lin wrote:
> Yes, it may be an issue with the index, but I'd like to have some evidence
> towards that before dropping and recreating (It does not appear that bloat is
> a problem here or dead tuples either).
Why do you say those aren't an issue? Jus
Rhhh Lin writes:
> The EXPLAIN statement is currently 'f' for waiting and 'active' via
> pg_stat_activity, so it is doing something. The ps command does not show me
> anything more verbose.
ps would confirm for sure whether it was eating CPU time, whereas I do not
particularly trust pg_stat_ac
: Justin Pryzby
Sent: 02 November 2017 21:02
To: Rhhh Lin
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] EXPLAIN command just hangs...
On Thu, Nov 02, 2017 at 08:51:23PM +, Rhhh Lin wrote:
> However, this query will run for days without completing. I suspect it has to
> do w
On Thu, Nov 02, 2017 at 08:51:23PM +, Rhhh Lin wrote:
> However, this query will run for days without completing. I suspect it has to
> do with the timestamp predicate and lack of using an appropriate index access
> path. This is what I need to verify/establish.
Perhaps the timestamp index i
Hi all,
version = Postgres 9.3.10
I have a table with approx. 5 million rows. It is defined something like the
below.
col: type:
timestamp bigint
measurement_id integer
value numeric(24,5)
minval numeric(24,5)
maxval numeric(24,5)
There are two BTree indexes in place on the PK ("timestamp", "me
On Wed, Nov 1, 2017 at 1:19 PM, David G. Johnston
wrote:
> On Wed, Nov 1, 2017 at 11:59 AM, Scott Marlowe
> wrote:
>>
>> So some of my output from an explain analyze here has a line that says
>> this:
>>
>> Index Scan using warranty_order_item_warranty_order_id_idx on
>> warranty_order_item woi_1 (c
On Wed, Nov 1, 2017 at 12:25 PM, Justin Pryzby wrote:
> On Wed, Nov 01, 2017 at 12:19:21PM -0700, David G. Johnston wrote:
> > On Wed, Nov 1, 2017 at 11:59 AM, Scott Marlowe
> > wrote:
> >
> > > So some of my output from an explain analyze here has a line that says
> > > this:
> > >
> > > Index Sca
On Wed, Nov 01, 2017 at 12:19:21PM -0700, David G. Johnston wrote:
> On Wed, Nov 1, 2017 at 11:59 AM, Scott Marlowe
> wrote:
>
> > So some of my output from an explain analyze here has a line that says
> > this:
> >
> > Index Scan using warranty_order_item_warranty_order_id_idx on
> > warranty_order
On Wed, Nov 1, 2017 at 11:59 AM, Scott Marlowe
wrote:
> So some of my output from an explain analyze here has a line that says
> this:
>
> Index Scan using warranty_order_item_warranty_order_id_idx on
> warranty_order_item woi_1 (cost=0.57..277.53 rows=6 width=137) (actual
> time=0.110..0.111 rows=0
So some of my output from an explain analyze here has a line that says this:
Index Scan using warranty_order_item_warranty_order_id_idx on
warranty_order_item woi_1 (cost=0.57..277.53 rows=6 width=137) (actual
time=0.110..0.111 rows=0 loops=1,010,844)
How can you have 1M loops over 0 rows?
Running
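For what it's worth, the per-node numbers are per-loop averages, so the two figures are consistent; roughly:

  0.111 ms per loop * 1,010,844 loops ≈ 112,000 ms ≈ 112 s spent in this node overall
  rows=0 is the average number of rows returned per loop, rounded (most loops returned nothing)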
Benoit Lobréau writes:
> In this video (PostgreSQL at 10 TB and beyond: https://youtu.be/8mKpfutwD0U
> at 6mn05), the speaker says that on very big databases explain analyse can
> give unrealistic results because it doesn't read toasted data.
> I suppose it's because of this: "Th
Hi,
In this video (PostgreSQL at 10 TB and beyond: https://youtu.be/8mKpfutwD0U
at 6mn05), the speaker says that on very big databases explain analyse can
give unrealistic results because it doesn't read toasted data.
I suppose it's because of this: "The big values of TOASTed attributes will
only
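A rough way to observe this yourself (all object names and sizes below are made up; SET STORAGE EXTERNAL just forces the values out of line so the effect is visible):

  CREATE TABLE toast_demo (id int, payload text);
  ALTER TABLE toast_demo ALTER COLUMN payload SET STORAGE EXTERNAL;
  INSERT INTO toast_demo
    SELECT g, repeat(md5(g::text), 200) FROM generate_series(1, 20000) g;
  VACUUM ANALYZE toast_demo;

  \timing on
  EXPLAIN ANALYZE SELECT payload FROM toast_demo;   -- rows are discarded, payload is never detoasted
  SELECT payload FROM toast_demo \g /dev/null       -- the real query has to detoast and ship the data

The second timing is typically much larger, which is roughly the gap the speaker is describing.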
On 2016-12-23 10:35:26 -0600, Chris Richards wrote:
> Howdy. I was noticing a significant problem with a query on one of my tables.
> I
> tried recreating the problem and wasn't able to do so on a different install,
> and so a few days later I ran the same query on the problem table. Lo' and
> beh
Howdy. I was noticing a significant problem with a query on one of my
tables. I tried recreating the problem and wasn't able to do so on a
different install, and so a few days later I ran the same query on the
problem table. Lo' and behold, there wasn't a problem anymore. I'm at a
loss to why. The
On 6 February 2013 11:03, Alexander Farber wrote:
> begin
>
> insert into pref_ban2 select
> id,
> first_name,
> last_name,
> city,
> last_ip
>
Hello
2013/2/6 Alexander Farber :
> Hello,
>
> I've read in the docs that every table should
> have a primary key, and so I've rearranged
> my 8.4.13 database: added primary keys to
> each table (some of the primary keys are
> pairs of columns) and dropped all other indices.
>
> And I've pro
Hello,
I've read in the docs that every table should
have a primary key, and so I've rearranged
my 8.4.13 database: added primary keys to
each table (some of the primary keys are
pairs of columns) and dropped all other indices.
And I've probably dropped a few indices too many,
because a stored
高健 wrote:
> Please let me dig it a little more:
>
> I think that when a session is accessing a PostgreSQL table, it will be
> influenced by the
> following:
>
>
>
> Really old data (which needs to be vacuumed, e.g. an old image from one day ago).
>
> Recent data (committed and uncommitted), because t
Hi Laurenz:
Thank you for your kind reply.
Please let me dig it a little more:
I think that when a session is accessing a PostgreSQL table, it will be
influenced by the following:
Really old data (which needs to be vacuumed, e.g. an old image from one day ago).
Recent data (committed and uncommi
高健 wrote:
> I have one question about the visibility of explain plan.
>
> Firstly, I was inserting data into a table. I use: [ insert into
> ptest select * from
> test02; ]
>
> And test02 table has 10,000,000 records. And ptest is a parent table, which
> has two distribution
Hi all:
I have one question about the visibility of explain plan.
Firstly, I was inserting data into a table. I use: [ insert into
ptest select * from test02; ]
And test02 table has 10,000,000 records. And ptest is a parent table,
which has two distribution child table --- cte
On Mon, Jun 25, 2012 at 4:09 PM, Tom Lane wrote:
> The short answer though is that this is probably coming from CPU cost
> components not disk-access components.
Yes, of course they are CPU costs, but I'm not able to understand which
ones. Is there a way to make PostgreSQL log the values of the
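The individual cost parameters the planner works from can at least be listed directly (these are the standard GUC names; how they are combined per node type is exactly what costsize.c spells out):

  SELECT name, setting
    FROM pg_settings
   WHERE name IN ('seq_page_cost', 'random_page_cost', 'cpu_tuple_cost',
                  'cpu_index_tuple_cost', 'cpu_operator_cost');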
Luca Ferrari writes:
> Now, what is the .35 in the cost of the bitmap index scan nodes?
If you're going to get picky about second-order cost components,
it's time to start reading the source code:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/optimizer/path/costsize.c;h
Hi all,
imagine the following simple situation:
# CREATE TABLE test( pk serial not null, description text, primary key(pk));
# INSERT INTO test(pk) VALUES(generate_series(1,100 ) );
# VACUUM FULL ANALYZE test;
# EXPLAIN SELECT * FROM test WHERE pk = 1 OR pk = 100;
2012/4/26 Tom Lane
>
> I've applied a patch for this. Thanks for the report!
>
>regards, tom lane
>
Thanks for Your time :)
Regards,
Bartek
Bartosz Dmytrak writes:
> [ EXPLAIN VERBOSE fails for ]
> WITH t as (
> INSERT INTO "tblD1" (id, "Data1")
> VALUES ('a', 123)
> RETURNING *)
> UPDATE "tblBase"
> SET "SomeData" = 123
> WHERE id = 'a';
I've applied a patch for this. Thanks for the report!
regards, tom la
2012/4/20 Tom Lane
> Will look into it.
>
>
Thanks again for Your time :)
Regards,
Bartek
Bartosz Dmytrak writes:
> This e-mail is reposted from the pgAdmin support mailing list. This problem
> looks like it is related to Postgres, not pgAdmin.
> [ EXPLAIN VERBOSE yields a "cache lookup failed" error ]
Huh, yeah, that's a backend bug all right. I thought at first you might
have a catalog-corr
"Albe Laurenz" writes:
> Andreas wrote:
>> Both selects were shown as 'Index Scan'. But the second select is not
> a real index scan,
> A full scan of the index is also an index scan.
Yes. In particular it won't visit the heap for rows that don't satisfy
the index condition. So as long as the
Andreas wrote:
> PostgreSQL 9.1.2 on i686-pc-linux-gnu, compiled by gcc-4.4.real
(Debian 4.4.5-8) 4.4.5, 32-bit
>
> id | integer | not null Vorgabewert nextval('a_id_seq'::regclass)
> a | integer | not null
> b | integer | not null
> Indexe:
> "a_pkey" PRIMARY KEY, btree (id)
> "a_a_k
Hi all.
PostgreSQL 9.1.2 on i686-pc-linux-gnu, compiled by gcc-4.4.real (Debian
4.4.5-8) 4.4.5, 32-bit
id | integer | not null Vorgabewert nextval('a_id_seq'::regclass)
a | integer | not null
b | integer | not null
Indexe:
"a_pkey" PRIMARY KEY, btree (id)
"a_a_key" UNIQUE CONSTRAINT
Thanks,
It does look like an incorrect prediction. Looking again, I think it's the row
estimate for the join that's out - the planner estimates one row returned, in
which case a nested join would probably make sense, whereas in fact there are
23.
However it's a generated (user created) query,
Hi,
I have a query I'm trying to optimise. It takes just under a second to run, not
too bad for my users but I'm worried that as the size of the data increases, it
will get worse. Of course the plan may change when that happens but I'd also
like to learn a bit more about optimisation anyway.
T
On 08/24/11 9:18 AM, Carlos Henrique Reimer wrote:
I need to improve performance for a particular SQL command but am facing
difficulties understanding the explain results.
you're joining like 50 tables in nested selects, getting 200 rows, and
it's only taking 3.5 seconds? where's the problem?
Hi,
I need to improve performance for a particular SQL command but am facing
difficulties understanding the explain results.
Is there a tool somewhere that could help with this?
I've stored the SQL code and corresponding explain analyze at
SQL: http://www.opendb.com.br/v1/sql.txt
Explain: http://www.open
On 04/27/2011 12:45 PM, SUBHAM ROY wrote:
I am using postgres 8.4.8.
EXPLAIN (ANALYZE, BUFFERS) is not working. So is there any patch
available for it? If so, kindly specify from where to get it and how to
install it.
--
Thank You,
Subham Roy,
CSE IIT Bombay.
Have you tried just plain
exp
On Wed, Apr 27, 2011 at 20:45, SUBHAM ROY wrote:
> I am using postgres 8.4.8.
> EXPLAIN (ANALYZE,BUFFERS) is not working. So is there any patch available
> for it? If so, kindly specify from where to get it and how to install it.
This functionality is new in PostgreSQL 9.0. You need to upgrade to
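For reference, the two spellings side by side (the BUFFERS option requires the parenthesized form, which only exists from 9.0 on):

  -- 9.0 and later
  EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM pg_class;

  -- 8.4 and earlier
  EXPLAIN ANALYZE SELECT count(*) FROM pg_class;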
I am using postgres 8.4.8.
EXPLAIN (ANALYZE,BUFFERS) is not working. So is there any patch available
for it? If so, kindly specify from where to get it and how to install it.
--
Thank You,
Subham Roy,
CSE IIT Bombay.
Reg Me Please wrote:
Here it comes:
Aggregate (cost=227.59..227.61 rows=1 width=8)
-> Nested Loop (cost=0.00..227.34 rows=49 width=8)
-> Seq Scan on T2 (cost=0.00..1.07 rows=6 width=4)
Filter: (fld1 = 'VEND'::text)
-> Index Scan using i_T1_partial on T1
On Friday 09 January 2009 20:00:57 Tom Lane wrote:
> Reg Me Please writes:
> > Aggregate (cost=227.59..227.61 rows=1 width=8)
> >-> Nested Loop (cost=0.00..227.34 rows=49 width=8)
> > -> Seq Scan on T2 (cost=0.00..1.07 rows=6 width=4)
> >Filter: (fld1 = 'VEND'::t
On Friday 09 January 2009 20:00:36 Thomas Pundt wrote:
> Reg Me Please wrote:
> > Here it comes:
> >
> > Aggregate (cost=227.59..227.61 rows=1 width=8)
> >-> Nested Loop (cost=0.00..227.34 rows=49 width=8)
> > -> Seq Scan on T2 (cost=0.00..1.07 rows=6 width=4)
> >
Reg Me Please writes:
> Aggregate (cost=227.59..227.61 rows=1 width=8)
>-> Nested Loop (cost=0.00..227.34 rows=49 width=8)
> -> Seq Scan on T2 (cost=0.00..1.07 rows=6 width=4)
>Filter: (fld1 = 'VEND'::text)
> -> Index Scan using i_T1_partial on T1 (cos
Here it comes:
Aggregate (cost=227.59..227.61 rows=1 width=8)
-> Nested Loop (cost=0.00..227.34 rows=49 width=8)
-> Seq Scan on T2 (cost=0.00..1.07 rows=6 width=4)
Filter: (fld1 = 'VEND'::text)
-> Index Scan using i_T1_partial on T1 (cost=0.00..37.61 row
Could you provide the output of EXPLAIN ANALYZE with your query?
On Fri, Jan 9, 2009 at 7:06 PM, Reg Me Please wrote:
> Hi.
>
> For an INNER JOINed query, EXPLAIN says that a "nested loop" is responsible
> for the big part of the time needed to run.
>
> The 2 tables JOINed are:
>
> T1: multi-mill
Hi.
For an INNER JOINed query, EXPLAIN says that a "nested loop" is responsible
for the big part of the time needed to run.
The 2 tables JOINed are:
T1: multi-million rows
T2: few dozens rows
The join is through a single column on both sides and it's NOT a PK in either
table. But I have indexes
On Wed, Jan 7, 2009 at 10:53 AM, hubert depesz lubaczewski
wrote:
>
> p.s. as you can clearly see from another mails from this list - the list
> uses english language. please use it, or if you'd like to use spanish
> (i'm not sure if your language if spanish, i don't know it) - you can
> use list
2009/1/7 Gustavo Rosso :
> Is 1.213 ms a correct value for performing an insert into a table, as obtained
> by explain analyze?
that is a little over one millisecond, which does not seem like much... in any case,
the time an INSERT takes will depend in part on the number of
indexes, FKs and triggers that the
On Wed, Jan 07, 2009 at 01:30:54PM -0300, Gustavo Rosso wrote:
> Is 1.213 ms a correct value for performing an insert into a table, as obtained
> by explain analyze?
a bit long, but it depends on the hardware. Additionally - maybe you have
triggers there? Foreign keys? Hard to say. Show the table schema and the
Is 1.213 ms a correct value for performing an insert into a table, as obtained
by explain analyze?
Gustavo
On Mon, Aug 18, 2008 at 3:01 PM, Ivan Sergio Borgonovo
<[EMAIL PROTECTED]> wrote:
> this:
>
> begin;
> explain select * from catalog_items limit 5;
> commit;
>
> return this:
>
> -- Executing query:
> begin;
> explain select * from catalog_items limit 5;
> commit;
> Query result with 2 rows discard
this:
begin;
explain select * from catalog_items limit 5;
commit;
return this:
-- Executing query:
begin;
explain select * from catalog_items limit 5;
commit;
Query result with 2 rows discarded.
Query returned successfully with no result in 58 ms.
I'm interested in explain inside a transactio
Thanks a lot everybody! I got it clear. I was wrongly thinking that
PostgreSQL might not be creating the indices by default.
regards
Sanjay
Bill Moran <[EMAIL PROTECTED]> writes:
> In response to Sanjay <[EMAIL PROTECTED]>:
>> Seq Scan on website (cost=0.00..1.31 rows=1 width=162) (actual
>> time=0.047..0.051 rows=1 loops=1)
>> Filter: (website_id = 1)
>> Total runtime: 0.102 ms
>> Wondering why it is not using the index, which woul
On Monday 27 August 2007 05:21, Sanjay <[EMAIL PROTECTED]> wrote:
>Wondering why it is not using the index, which would have
> been
> automatically created for the primary key.
Because not only do you have just one row in the whole table, 100% of the rows
will match the query. In short, one page fetch f
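A quick sketch of the planner changing its mind as the table grows (same table shape as above, data made up):

  CREATE TABLE website (website_id int4 PRIMARY KEY, name varchar(30));
  INSERT INTO website VALUES (1, 'one');
  ANALYZE website;
  EXPLAIN SELECT * FROM website WHERE website_id = 1;  -- seq scan: one page is cheapest

  INSERT INTO website SELECT g, 'site ' || g::text FROM generate_series(2, 100000) g;
  ANALYZE website;
  EXPLAIN SELECT * FROM website WHERE website_id = 1;  -- now an index scan on website_pkey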
In response to Sanjay <[EMAIL PROTECTED]>:
> Hi All,
>
> Say I have a simple table WEBSITE(website_id int4 PRIMARY KEY, name
> VARCHAR(30)). While I try this:
>
> EXPLAIN ANALYZE SELECT * FROM WEBSITE WHERE website_id = 1
>
> the output is:
>
Hi All,
Say I have a simple table WEBSITE(website_id int4 PRIMARY KEY, name
VARCHAR(30)). While I try this:
EXPLAIN ANALYZE SELECT * FROM WEBSITE WHERE website_id = 1
the output is:
--
Seq Scan on website (cost=
How many rows are in this table?
Sanjay wrote:
Hi All,
Say I have a simple table WEBSITE(website_id int4 PRIMARY KEY, name
VARCHAR(30)). While I try this:
EXPLAIN ANALYZE SELECT * FROM WEBSITE WHERE website_id = 1
the output is:
Alvaro Herrera wrote:
>
> I meant the queries inside the function.
>
>
Oh ;(
Here it is
"HashAggregate (cost=825.10..825.19 rows=1 width=112) (actual
time=59175.752..59176.301 rows=75 loops=1)"
" -> Nested Loop Left Join (cost=443.57..825.06 rows=1 width=112)
(actual time=148.338..58997.5
Rikard Pavelic wrote:
> Alvaro Herrera wrote:
> > Try doing a PREPARE and then EXPLAIN EXECUTE, like
> >
> > alvherre=# prepare foo as select generate_series(1, $1);
> Hm, no luck ;(
>
> prepare f1 as select * from
> raspored.dohvati_statistiku_rada_u_smjenama_radnika($1, $2);
> explain analyze e
Alvaro Herrera wrote:
> Try doing a PREPARE and then EXPLAIN EXECUTE, like
>
> alvherre=# prepare foo as select generate_series(1, $1);
> PREPARE
>
> alvherre=# explain analyze execute foo(100);
> QUERY PLAN
>
> --
Rikard Pavelic wrote:
> Is this possible?
>
> I've been searching posts, but with no luck ;(
>
> I have one SQL query inside a function, but when
> I do a select from the function it takes 8 sec.
> If I execute just the SQL query (with some parameters passed to it)
> it takes 0.3 seconds.
Try doing a PREPARE
Is this possible?
I've been searching posts, but with no luck ;(
I have one SQL query inside a function, but when
I do a select from the function it takes 8 sec.
If I execute just the SQL query (with some parameters passed to it)
it takes 0.3 seconds.
What I'm trying to do is select part of the data from the
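Spelled out end to end, the pattern being suggested is to take the query out of the function body, turn the function's arguments into $1, $2, ..., and then prepare and explain that (the query below is just a stand-in):

  PREPARE q1 (int) AS SELECT * FROM generate_series(1, $1);
  EXPLAIN ANALYZE EXECUTE q1(1000);
  DEALLOCATE q1;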
I have 2 databases that are supposed to be identical.
In one, a specific query runs very fast, and in the other the same query runs
slow.
I checked the data and it is identical.
I checked the indices and constraints and they are identical.
I vacuumed both databases immediately before I ran the E
"Phil Endecott" <[EMAIL PROTECTED]> writes:
> If I understand it correctly, it is still doing a sequential scan on
> part_tsearch that does not terminate early due to the limit clause. So
> I'm still seeing run times that are rather worse than I think should be
> possible. Can it not step thro
Thanks for the quick reply Tom.
Tom Lane wrote:
>"Phil Endecott"
>writes:
>> I was not patient enough to wait for the remaining explain-analyse results,
>> but I feel that there is a linear slowdown of about 60x between the raw
>> query and the explain-analyse version.
>
> Slow gettimeofday() .
"Phil Endecott" <[EMAIL PROTECTED]> writes:
> I was not patient enough to wait for the remaining explain-analyse results,
> but I feel that there is a linear slowdown of about 60x between the raw
> query and the explain-analyse version.
Slow gettimeofday() ... fairly common on desktop-grade PC ha
Dear All,
I want to find all of the msg_ids from "messages" that are not in table
"part_tsearch", where both tables are large but the result of the
query is normally small, and often empty. To 'protect' the application
against the unusual case where the result of the query is large I have
adde
Jeff Davis <[EMAIL PROTECTED]> writes:
> What am I missing about EXPLAIN ANALYZE that is causing that much
> overhead?
Instrumentation overhead? In this case you've got 2 million more
gettimeofday() calls in the one case than the other, which says that
gettimeofday() takes about 1 microsecond on
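Roughly: 2,000,000 extra gettimeofday() calls at about 1 microsecond apiece is
2,000,000 * 1 us ≈ 2,000,000 us ≈ 2 seconds of pure instrumentation overhead,
which on its own accounts for most of the gap between the ~2600 ms that EXPLAIN
ANALYZE reports and the visibly faster plain execution.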
Version 8.2beta3.
If I EXPLAIN ANALYZE the query, it reports a time of about 2600 ms. If I
run the query in psql, I can tell visibly that the query takes less time
to execute than 2 seconds. Even stranger, if I wrap the query in a SQL
function and EXPLAIN ANALYZE a query that selects from the SQL
On Tue, Aug 15, 2006 at 10:14:49AM +0300, Andrii Vasyliev wrote:
> Hi!
>
> I have a database just recreated from scratch with SQL script.
> (My PostgreSQL version is 8.1.3).
> All of the tables are pretty empty, but that's what I've got with explain:
>
> EXPLAIN SELECT * FROM account;
>
Hi!
I have a database just recreated from scratch with SQL script.
(My PostgreSQL version is 8.1.3).
All of the tables are pretty empty, but that's what I've got with explain:
EXPLAIN SELECT * FROM account;
QUERY PLAN
Tom Lane wrote:
David Link <[EMAIL PROTECTED]> writes:
The following large EXPLAIN SELECT Statement fails to return, but
continues to take up processing time until it is killed.
[ 52-way join... ]
Am I right in guessing that all the sales_xxx tables are the same size
and have similar
David Link <[EMAIL PROTECTED]> writes:
> The following large EXPLAIN SELECT Statement fails to return, but
> continues to take up processing time until it is killed.
> [ 52-way join... ]
Am I right in guessing that all the sales_xxx tables are the same size
and have similar statistics? I think t
Hi,
The following large EXPLAIN SELECT Statement fails to return, but
continues to take up processing time until it is killed.
I've straightened up the postgresql.conf, as per Tom Lane's suggestions
since last time I wrote about this.
See: http://archives.postgresql.org/pgsql-general/2005-12
David Rysdam <[EMAIL PROTECTED]> writes:
> According to what I've been able to find:
> A = predicted time to first returned row in milliseconds
arbitrary units
> B = total cost in arbitrary units
> C = actual time to first returned row in
There's been plenty of vacuuming and analyzing on these tables, so
outdated stats are not the answer.
No, but it could be incorrect stats due to the statistics parameter not
being set high enough.
Joshua D. Drake
I'm afraid I still don't understand the output of EXPLAIN ANALYZE. The
output I get is like this:
blah blah blah (cost=A..B blah blah) (actual time=C..D blah blah)
According to what I've been able to find:
A = predicted time to first returned row in milliseconds
B = total cost in arbitrary un
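Putting Tom's correction together with the question, the four fields read like this (the plan line is an illustrative example, not from the original mail):

  Seq Scan on t  (cost=0.00..35.50 rows=2550 width=4)
                 (actual time=0.012..0.480 rows=2550 loops=1)

  cost=A..B         A = estimated startup cost, B = estimated total cost,
                    both in arbitrary planner cost units (not milliseconds)
  actual time=C..D  C = actual time to the first row, D = actual total time,
                    both in milliseconds (and per loop when loops > 1)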
David Link <[EMAIL PROTECTED]> writes:
> more importantly, do you think the problem I am having is due to these
> configuration shortcomings?
Yeah, the planning thresholds ...
regards, tom lane
Thanks for your reply, Tom. Different folks have made different
suggestions. Can you suggest more reasonable values for these? But
more importantly, do you think the problem I am having is due to these
configuration shortcomings?
Thanks much.
David, take a look at:
http://www.varlena.
Tom Lane wrote:
David Link <[EMAIL PROTECTED]> writes:
Certain SQL Queries, I believe those with many table joins, when run as
EXPLAIN plans, never return.
I'd guess that one or all of these settings are excessive:
geqo_threshold = 14
from_collapse_limit = 13
join_collapse_limit
Jaime Casanova wrote:
when you have those cases, you can take a look in pg_stat_activity
to find the offending query...
or simply log all queries
Thanks for the advice. I also turned on stats_command_string
David Link <[EMAIL PROTECTED]> writes:
> Certain SQL Queries, I believe those with many table joins, when run as
> EXPLAIN plans, never return.
I'd guess that one or all of these settings are excessive:
> geqo_threshold = 14
> from_collapse_limit = 13
> join_collapse_limit = 13
Keep in mind t
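One low-risk way to test that guess is to drop the thresholds back toward their usual defaults for a single session before retrying the EXPLAIN (8 is the shipped default for both collapse limits; geqo_threshold defaults to around 11-12 depending on the release):

  SET from_collapse_limit = 8;
  SET join_collapse_limit = 8;
  SET geqo_threshold = 12;
  -- then re-run the problematic EXPLAIN SELECT in the same session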
On 12/5/05, David Link <[EMAIL PROTECTED]> wrote:
> Hi, This has become a major problem for us. Thank you in advance for
> your help.
>
> OS: SUSE Linux 2.6.5-7.191-bigsmp
> PostgreSQL: 7.4.8
> Application: ModPerl Web application using DBI.pm
> Database size: 100 Gb, 1025 Tables.
>
> Problem: EXP
Hi, This has become a major problem for us. Thank you in advance for
your help.
OS: SUSE Linux 2.6.5-7.191-bigsmp
PostgreSQL: 7.4.8
Application: ModPerl Web application using DBI.pm
Database size: 100 Gb, 1025 Tables.
Problem: EXPLAIN SELECT ... does not return.
Description:
The Application
Hi,
Is there any easy way to explain analyze a query that is inside an SQL
stored procedure? I could of course copy the query out of the procedure and
explain analyze it, but this is a slower process. I would do this with a lot
of procedures; that's why it should be fast.
create function m
9:25 AM
Subject: [GENERAL] Explain Analyse never returns .. maybe a bug
Hi all,
I stop the following statement after 3 minutes
explain analyse select *
From IC
where IC.ICNUM NOT IN (select IRICNUM from IR WHERE IRSTATUT
in (1, 2))
I'm using 8.0.3 on window
David Gagnon <[EMAIL PROTECTED]> writes:
> Hi all,
>
> I stop the following statement after 3 minutes
>
>explain analyse select *
> From IC
> where IC.ICNUM NOT IN (select IRICNUM from IR WHERE IRSTATUT
> in (1, 2))
>
>
> I'm using 8.0.3 on windows.
>
> Is that a
On Wed, Jun 29, 2005 at 09:25:24AM -0400, David Gagnon wrote:
>
> I stop the following statement after 3 minutes
>
> explain analyse select *
>From IC
>where IC.ICNUM NOT IN (select IRICNUM from IR WHERE IRSTATUT in
> (1, 2))
What does EXPLAIN without ANALYZE show? What
- Original Message -
From: "David Gagnon" <[EMAIL PROTECTED]>
Cc:
Sent: Wednesday, June 29, 2005 9:25 AM
Subject: [GENERAL] Explain Analyse never returns .. maybe a bug
Hi all,
I stop the following statement after 3 minutes
explain analyse select
Hi all,
I stop the following statement after 3 minutes
explain analyse select *
From IC
where IC.ICNUM NOT IN (select IRICNUM from IR WHERE IRSTATUT in
(1, 2))
I'm using 8.0.3 on windows.
Is that a known bug?
Thanks
/David
P.S.: I can send more info if needed.
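Not something stated in this thread, but the classic workaround for this shape of query on releases of this vintage is to rewrite the NOT IN as an outer join that keeps only the unmatched rows, which gives the planner a join to work with instead of a per-row subplan check; a sketch using the same table and column names:

  explain analyse
  select IC.*
    from IC
    left join IR on IR.IRICNUM = IC.ICNUM and IR.IRSTATUT in (1, 2)
   where IR.IRICNUM is null;

Note the two forms only return the same rows when IRICNUM has no NULLs (NOT IN treats NULLs specially).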
Richard Huxton writes:
> Relyea, Mike wrote:
>> Any suggestions for how to get the explain analyze output?
> You should get *some* output. Unfortunately, I don't think it will show
> you anything useful. The effort is almost certainly all going on the
> FK's and you can't see through the trigge
Relyea, Mike wrote:
I am trying to optimize a delete query that's currently taking 4 hours
to run. My first step was to add some indexes on some of my FK's. That
sped up my select queries by as much as ~20x. From 3.5 sec to < 0.25
sec. They didn't help my delete query. I ran an explain analy