On Fri, Nov 4, 2011 at 12:14 PM, Sorbara, Giorgio (CIOK)
wrote:
>> How fast do you expect this to run? It's aggregating 125 million
>> rows, so that's going to take some time no matter how you slice it.
>> Unless I'm misreading this, it's actually taking only about 4
>> microseconds per row, which
> -Original Message-
> From: Robert Haas [mailto:robertmh...@gmail.com]
> Sent: 04 November 2011 5:07 PM
> To: Sorbara, Giorgio (CIOK)
> Cc: Tomas Vondra; pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Strange query plan
>
> On Mon, Oct 31, 2011 at 9:52 AM, Sorbara, Giorgio (CIOK) wrote:
On Mon, Oct 31, 2011 at 9:52 AM, Sorbara, Giorgio (CIOK)
wrote:
> Group (cost=0.00..4674965.80 rows=200 width=17) (actual
> time=13.375..550943.592 rows=1 loops=1)
> -> Append (cost=0.00..4360975.94 rows=125595945 width=17) (actual
> time=13.373..524324.817 rows=125595932 loops=1)
>
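Those actual numbers line up with Robert's "about 4 microseconds per row": a quick back-of-the-envelope check, dividing the Group node's actual time by the Append row count:
-- Total actual time (ms) divided by rows returned, converted to microseconds.
SELECT 550943.592 / 125595932 * 1000 AS usec_per_row;  -- roughly 4.4 µs per row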
Hi Tomas,
and thank you for your reply.
My comments are inline.
> -Original Message-
> From: Tomas Vondra [mailto:t...@fuzzy.cz]
> Sent: 28 October 2011 8:10 PM
> To: Sorbara, Giorgio (CIOK)
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Strange query plan
Hi,
On 28 October 2011, 19:27, Sorbara, Giorgio (CIOK) wrote:
> Dear all,
>
> I am new to PG but I have a solid background on tuning in Oracle and
> MSSQL.
> I have a query generated by a piece of software in our SW-Stack (I
> can't change it) and of course it takes a large amount of time.
>
>
Dear all,
I am new to PG but I have a solid background on tuning in Oracle and MSSQL.
I have a query generated by a piece of software in our SW-Stack (I can't
change it) and of course it takes a large amount of time.
The table I am querying is inherited (partitioned) and the query is the following:
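The query itself is cut off by the archive preview, so as background only, here is a generic sketch of what "inherited (partitioned)" means in PostgreSQL of that era - inheritance-based partitioning with CHECK constraints - using made-up names, not the poster's schema:
-- Hypothetical parent table plus one child partition (names are invented).
CREATE TABLE measurement (
    id      bigint,
    logdate date NOT NULL,
    value   numeric
);
CREATE TABLE measurement_2011_10 (
    CHECK (logdate >= DATE '2011-10-01' AND logdate < DATE '2011-11-01')
) INHERITS (measurement);
-- With constraint exclusion enabled, a query filtering on logdate skips
-- child tables whose CHECK constraints rule them out.
SET constraint_exclusion = partition;
SELECT count(*) FROM measurement WHERE logdate >= DATE '2011-10-15';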
On Fri, Jun 10, 2011 at 1:22 PM, wrote:
>> If I had set the primary key to (diag_id, create_time) would simple
>> queries on
>> diag_id still work well i.e.
>> select * from tdiag where diag_id = 1234;
>
> Yes. IIRC the performance penalty for using a non-leading column of an index
> is negligible.
> If I had set the primary key to (diag_id, create_time) would simple
> queries on
> diag_id still work well i.e.
> select * from tdiag where diag_id = 1234;
Yes. IIRC the performance penalty for using a non-leading column of an index
is negligible. But why don't you try that on your own - just
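A minimal sketch of that experiment, reusing the table and the example query from this thread (the index name is invented):
-- Composite index in the proposed (diag_id, create_time) order.
CREATE INDEX tdiag_diag_create_idx ON tdiag (diag_id, create_time);
ANALYZE tdiag;
-- Lookup on the leading column of the composite index.
EXPLAIN ANALYZE SELECT * FROM tdiag WHERE diag_id = 1234;
-- Filter on the trailing column only, to compare how the planner copes.
EXPLAIN ANALYZE SELECT * FROM tdiag
WHERE create_time >= '2011-06-03 09:49:04+00'
  AND create_time <  '2011-06-06 09:59:04+00';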
On Wednesday 08 June 2011 19:47, t...@fuzzy.cz wrote:
> Have you tried to create a composite index on those two columns? Not sure
> if that helps but I'd try that.
>
> Tomas
This finally works well enough
CREATE TABLE tdiag (
diag_id integer DEFAULT nextval('diag_id_seq'::text),
On Thursday 09 June 2011 16:04, anthony.ship...@symstream.com wrote:
> I must be doing something really wrong to get this to happen:
Yes I did. Ignore that.
--
Anthony Shipman | flailover systems: When one goes down it
anthony.ship...@symstream.com | flails about until the other
On Wednesday 08 June 2011 19:47, t...@fuzzy.cz wrote:
> Have you tried to create a composite index on those two columns? Not sure
> if that helps but I'd try that.
>
> Tomas
Do you mean
create index tdiag_index2 ON tdiag(diag_id, create_time);
Should this be in addition to or instead of the
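One way to settle the "in addition to or instead of" question empirically is to create it in addition, run the usual workload, and then check which indexes actually get scanned (pg_stat_user_indexes is standard; whether the old index can go depends on what idx_scan shows):
CREATE INDEX tdiag_index2 ON tdiag (diag_id, create_time);
-- After exercising the normal queries, see which indexes were used.
SELECT indexrelname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
WHERE relname = 'tdiag';
-- An index that stays at idx_scan = 0 under the real workload is a
-- candidate for dropping.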
Hello
2011/6/8 :
> On Wednesday 08 June 2011 18:39, Pavel Stehule wrote:
>> if you use a FOR statement, the problem is probably the implicit cursor -
>> try setting the GUC cursor_tuple_fraction to 1.0.
> Alas this is mammoth replicator, equivalent to PG 8.3 and it doesn't have that
> parameter.
> On Wednesday 08 June 2011 17:39, Claudio Freire wrote:
>> Of course optimally executing a plan with limit is a lot different
>> than one without.
>
> I imagined that limit just cuts out a slice of the query results.
> If it can find 8 rows in 0.5 seconds then I would have thought that
> returning just
On Wednesday 08 June 2011 18:39, Pavel Stehule wrote:
> if you use a FOR statement, the problem is probably the implicit cursor -
> try setting the GUC cursor_tuple_fraction to 1.0.
Alas this is mammoth replicator, equivalent to PG 8.3 and it doesn't have that
parameter.
--
Anthony Shipman
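For readers on a stock PostgreSQL new enough to expose the setting (8.4 and later, if memory serves), Pavel's suggestion would look roughly like the sketch below; the DO block needs 9.0+ and the loop body is only a placeholder:
-- cursor_tuple_fraction tells the planner what fraction of a cursor's rows
-- will actually be fetched; 1.0 makes it plan for retrieving them all, which
-- also affects the implicit cursor behind a PL/pgSQL FOR loop.
SET cursor_tuple_fraction = 1.0;
DO $$
DECLARE
    r tdiag%ROWTYPE;
BEGIN
    FOR r IN
        SELECT * FROM tdiag
        WHERE create_time >= '2011-06-03 09:49:04+00'
          AND create_time <  '2011-06-06 09:59:04+00'
        ORDER BY diag_id
    LOOP
        NULL;  -- placeholder for per-row processing
    END LOOP;
END $$;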
2011/6/8 :
> On Wednesday 08 June 2011 17:39, Claudio Freire wrote:
>> Of course optimally executing a plan with limit is a lot different
>> than one without.
>
> I imagined that limit just cuts out a slice of the query results.
> If it can find 8 rows in 0.5 seconds then I would have thought
On Wednesday 08 June 2011 17:39, Claudio Freire wrote:
> Of course optimally executing a plan with limit is a lot different
> than one without.
I imagined that limit just cuts out a slice of the query results.
If it can find 8 rows in 0.5 seconds then I would have thought that
returning just
> What seems odd to me is that the only difference between the two is the
> limit clause:
>
> select * from tdiag where (create_time >= '2011-06-03 09:49:04.00+0'
> and create_time < '2011-06-06 09:59:04.00+0') order by diag_id limit 1;
>
> select * from tdiag where (create_time >= '2
On Wed, Jun 8, 2011 at 7:08 AM, wrote:
> What seems odd to me is that the only difference between the two is the limit
> clause
Why would that seem odd?
Of course optimally executing a plan with limit is a lot different
than one without.
Just... why are you sorting by diag_id?
I believe you w
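A hedged way to see Claudio's point is to compare the two plans side by side: with LIMIT 1 plus ORDER BY diag_id the planner may walk a diag_id index in order and stop at the first row that passes the create_time filter, while without the LIMIT it is more likely to fetch the whole date range and sort it afterwards:
EXPLAIN ANALYZE
SELECT * FROM tdiag
WHERE create_time >= '2011-06-03 09:49:04.00+0'
  AND create_time <  '2011-06-06 09:59:04.00+0'
ORDER BY diag_id
LIMIT 1;
EXPLAIN ANALYZE
SELECT * FROM tdiag
WHERE create_time >= '2011-06-03 09:49:04.00+0'
  AND create_time <  '2011-06-06 09:59:04.00+0'
ORDER BY diag_id;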
On Wednesday 08 June 2011 02:40, t...@fuzzy.cz wrote:
> Hi,
>
> why are you reposting this? Pavel Stehule already recommended that you run
> ANALYZE on the tdiag table - have you done that? What was the effect?
The mailing list system hiccupped and I ended up with two posts.
VACUUM ANALYZE was done
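For anyone following along, refreshing the statistics and confirming that it actually happened is a one-liner plus a catalog query (the last_analyze columns of pg_stat_user_tables predate 8.3, if memory serves):
VACUUM ANALYZE tdiag;
-- Confirm when the table was last vacuumed/analyzed, manually or by autovacuum.
SELECT relname, last_vacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'tdiag';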
> Version: PostgreSQL 8.3.5 (mammoth replicator)
>
> Schema:
>
> CREATE TABLE tdiag (
> diag_id integer DEFAULT nextval('diag_id_seq'::text),
> create_time timestamp with time zone default now(), /* time this record was created */
> diag_time t
Version: PostgreSQL 8.3.5 (mammoth replicator)
Schema:
CREATE TABLE tdiag (
diag_id integer DEFAULT nextval('diag_id_seq'::text),
create_time timestamp with time zone default now(), /* time this record was created */
diag_time timestamp with time zone n
Hello
did you run an ANALYZE statement on table tdiag? The statistics are
completely off.
Regards
Pavel Stehule
2011/6/7 :
> Version: PostgreSQL 8.3.5 (mammoth replicator)
>
> Schema:
>
> CREATE TABLE tdiag (
> diag_id integer DEFAULT nextval('diag_id_seq'::text),
> create_time
Version: PostgreSQL 8.3.5 (mammoth replicator)
Schema:
CREATE TABLE tdiag (
diag_id integer DEFAULT nextval('diag_id_seq'::text),
create_time timestamp with time zone default now(), /* time this record was created */
diag_time timestamp with time zone n
Tom Lane wrote:
Rich Doughty <[EMAIL PROTECTED]> writes:
However, the following query (which i believe should be equivalent)
SELECT *
FROM
tokens.ta_tokenhist h INNER JOIN
tokens.ta_tokens t ON h.token_id = t.token_id LEFT JOIN
tokens.ta_tokenhist i ON t.token_id = i.token_id
Rich Doughty <[EMAIL PROTECTED]> writes:
> However, the following query (which i believe should be equivalent)
> SELECT *
> FROM
> tokens.ta_tokenhist h INNER JOIN
> tokens.ta_tokens t ON h.token_id = t.token_id LEFT JOIN
> tokens.ta_tokenhist i ON t.token_id = i.token_id
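Both quoted copies of that statement are cut short by the archive; pieced together from the visible fragments (and assuming the WHERE clause from the original post below carries over - any further join conditions on the second ta_tokenhist are lost to the truncation), the hand-flattened query reads roughly:
SELECT *
FROM
    tokens.ta_tokenhist h INNER JOIN
    tokens.ta_tokens    t ON h.token_id = t.token_id LEFT JOIN
    tokens.ta_tokenhist i ON t.token_id = i.token_id
WHERE
    h.sarreport_id = 9;  -- assumed: same filter as the query against the view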
I have the following query involving a view that I really need to optimise:
SELECT *
FROM
tokens.ta_tokenhist h INNER JOIN
tokens.vw_tokens t ON h.token_id = t.token_id
WHERE
h.sarreport_id = 9
;
where vw_tokens is defined as
CREATE VIEW tokens.vw_tokens AS SELECT