Lennin Caro wrote:
> --- On Wed, 10/1/08, H. William Connors II <[EMAIL PROTECTED]> wrote:
> From: H. William Connors II <[EMAIL PROTECTED]>
> Subject: [PERFORM] bizarre query performance question
> To: pgsql-performance@postgresql.org
> Date: Wednesday, October 1, 2008, 8:34 PM
> I have two fairly simple tables as
I have two fairly simple tables as described below. The relationship
between them is through assignment_id. The problem is when I try to
join these two tables the planner does a sequential scan on
fa_assignment_detail and the query takes forever to resolve. I've run
the usual vacuum and analyze ...
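The maintenance and diagnosis steps above can be sketched roughly as follows; the table names come from the message, but the join column and query shape are assumptions, since the original query is not shown:

```sql
-- Refresh planner statistics for both tables (as the poster did):
VACUUM ANALYZE fa_assignment;
VACUUM ANALYZE fa_assignment_detail;

-- Then inspect the plan the optimizer chooses for the join
-- (join column is assumed from the description above):
EXPLAIN ANALYZE
SELECT *
FROM fa_assignment a
JOIN fa_assignment_detail d ON d.assignment_id = a.assignment_id;
```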
On Wednesday 01 October 2008 03:34, Gauri Kanekar wrote:
> -> Nested Loop (cost=186.26..647160.32 rows=42543 width=16) (actual
> time=655.832..6622.011 rows=5120582 loops=1)
That nested loop estimate is off by roughly 100x (42,543 estimated rows
versus 5,120,582 actual), which is why the database picks a nested loop
that is far too slow for that amount of data.
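One common remedy for a row-count misestimate like this is to raise the statistics target on the column the planner is misjudging and re-analyze. A minimal sketch; the column name is hypothetical, since the thread does not identify which column's estimate is off:

```sql
-- Collect a larger sample for the suspect join column
-- (column name is hypothetical):
ALTER TABLE table1
  ALTER COLUMN join_key SET STATISTICS 500;
ANALYZE table1;
```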
Looks like that worked. I set work_mem to 256MB, and my standard SQL
query came back in just a couple of seconds.
Thanks!
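For reference, raising work_mem for a single session, as described above, avoids changing it server-wide. Note that the 'MB' unit suffix requires PostgreSQL 8.2 or later; older releases take an integer number of kilobytes:

```sql
SET work_mem = '256MB';   -- this session only (8.2+ unit syntax)
-- ... run the slow query here ...
RESET work_mem;           -- back to the configured default
```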
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Tom Lane
Sent: Wednesday, October 01, 2008 07:30
To: David Logan
Cc: pgsql-performance@postgresql.org
"David logan" <[EMAIL PROTECTED]> writes:
> (The question is why this simple select takes me 20 minutes to run...)
What have you got work_mem set to? The hash join is not going to be
real fast if it has to split the join into multiple batches, so you
want work_mem large enough to hold the whole i
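Tom's batching point can be checked directly: in EXPLAIN ANALYZE output, a Hash node reports a Batches figure, and anything above 1 means the join spilled past work_mem. A minimal sketch with hypothetical table names standing in for the poster's geocoding data:

```sql
SHOW work_mem;            -- current setting for this session

EXPLAIN ANALYZE
SELECT count(*)
FROM addresses a          -- hypothetical tables
JOIN results r
  ON r.address_id = a.id;
-- In the plan, look at the Hash node line, e.g.
--   Buckets: 1024  Batches: 8  Memory Usage: ...
-- "Batches: 1" means the whole inner relation fit in work_mem.
```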
Hi!
(The question is why this simple select takes me 20 minutes to run...)
I have two tables with address data and result data from two different runs
of two different geocoding engines. I want the count of result data
differences when the output address data matches.
In essence, I want:
Engine
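The comparison David describes might be written as a join on the matching output addresses, counting rows where the two engines disagree. Every table and column name here is hypothetical, since the schema is not shown in the thread:

```sql
SELECT count(*) AS differing_results
FROM engine_a_results a
JOIN engine_b_results b
  ON a.output_address = b.output_address  -- output address data matches
WHERE a.result <> b.result;               -- result data differs
```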
[EMAIL PROTECTED] wrote:
> Hello
>
> I'm running pgsql 8.1.11 (from debian stable) on a server with 16GB RAM
> (Linux helios 2.6.18-6-amd64 #1 SMP Tue Aug 19 04:30:56 UTC 2008 x86_64
> GNU/Linux).
Unless you're committed to this version, I'd seriously look into 8.3
from backports (or compiled yourself) ...
On Wed, 1 Oct 2008, Gauri Kanekar wrote:
"new_table1" is 18% of the the whole "table1".
-> Nested Loop (cost=186.26..647160.32 rows=42543 width=16) (actual
time=655.832..6622.011 rows=5120582 loops=1)
-> Nested Loop (cost=0.00..414246.81 rows=25155 width=16) (actual
time=19.578.
Hi,
We have a table called "table1" which contains around 638725448 records.
We created a subset of this table and named it as "new_table1" which has
around 120107519 records.
"new_table1" is 18% of the the whole "table1".
If we fire the below queries we are not finding any drastic performance
g
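Building a subset like "new_table1" and comparing plans can be sketched as follows. The filter that defines the subset is not shown in the thread, so the condition below is purely hypothetical:

```sql
-- Materialize the ~18% subset (the real filter is not shown):
CREATE TABLE new_table1 AS
SELECT * FROM table1
WHERE some_partition_key = 'subset';   -- hypothetical condition
ANALYZE new_table1;

-- Run the same query against both tables and compare the plans:
EXPLAIN ANALYZE SELECT count(*) FROM table1;
EXPLAIN ANALYZE SELECT count(*) FROM new_table1;
```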