We have a query which generates a small set of rows (~1,000) which are
to be used in a DELETE on the same table. The problem we have is that
we need to join on 5 different columns, and it takes far too long. I
have a solution, but I'm not sure it's the right one. Instead of joining
on 5 columns in
cranel=# DELETE FROM sid2.data_id_table AS dd WHERE dd.point_id=2 AND
dd.dtype_id=3 AND dd.deleted AND NOT dd.persist;
DELETE 0
Time: 0.960 ms
cranel=# COMMIT;
Time: 20.500 ms
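The approach the thread subject alludes to — collecting the target rows' ctids once and deleting by physical row id instead of re-joining on all 5 columns — might be sketched roughly as follows (table name and predicate taken from the session above; the 5-column join being replaced is not shown in the fragment, so this is an assumption about the intended shape, and ctid-based deletes are only safe within a single transaction, before any concurrent update moves the rows):

```sql
-- Hypothetical sketch: materialize the ctids of the rows to delete,
-- then delete by ctid rather than re-joining on 5 columns.
DELETE FROM sid2.data_id_table
WHERE ctid = ANY (ARRAY(
    SELECT dd.ctid
    FROM sid2.data_id_table AS dd
    WHERE dd.point_id = 2
      AND dd.dtype_id = 3
      AND dd.deleted
      AND NOT dd.persist
));
```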
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Monday, April 09, 2007 4:55 PM
To: Spiegelberg, Greg

-Original Message-
Sent: Monday, April 09, 2007 5:58 PM
To: Spiegelberg, Greg
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] DELETE with filter on ctid
Spiegelberg, Greg wrote:
> We have a query which generates a small set of rows (~1,000) which are
> to be used in a DELETE on the same table.
All,
Has anyone tested PostgreSQL 8.1.x compiled with Intel's Linux C/C++
compiler?
Greg
--
Greg Spiegelberg
[EMAIL PROTECTED]
ISOdx Product Development Manager
Cranel, Inc.
Sort of on topic: how many foreign keys on a single table are reasonable?
I realize it's relative to the tables the FKs reference, so here's
an example:
Table A: 300 rows
Table B: 15,000,000 rows
Table C: 100,000 rows
Table E: 38 rows
Table F: 9 rows
Table G: is partitioned on the FK from Table A
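A minimal DDL sketch of the layout described above (all table and column names are invented for illustration; this uses modern declarative partitioning, which postdates the inheritance-based scheme of the 8.x era the thread dates from):

```sql
-- Hypothetical schema sketch, names invented.
CREATE TABLE table_a (a_id integer PRIMARY KEY);            -- ~300 rows
CREATE TABLE table_b (b_id bigint PRIMARY KEY,
                      a_id integer REFERENCES table_a);     -- ~15,000,000 rows
CREATE TABLE table_g (g_id bigint,
                      a_id integer REFERENCES table_a)
    PARTITION BY LIST (a_id);                               -- partitioned on the FK to A
```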
That's an all-PCI-X box, which makes sense. There are also 6 SATA
controllers in that little beastie. You can always count on Sun to
provide over-engineered boxes.
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Joshua D. Drake
> Sent: Friday
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Craig A. James
> Sent: Wednesday, October 25, 2006 12:52 PM
> To: Jim C. Nasby
> Cc: Worky Workerson; Merlin Moncure; pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Best COPY Performance
>
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Luke Lonergan
> Sent: Saturday, October 28, 2006 12:07 AM
> To: Worky Workerson; Merlin Moncure
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Best COPY Performance
>
> Worky,
Hello,
My experience with dblink() is that each dblink() call is executed serially,
in part, I would guess, because of the plan for the query. To have the queries
run in parallel you would need to execute both dblink() calls simultaneously,
saving each result into a table. I'm not sure if the same table could be used
for both.
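For what it's worth, newer releases of contrib/dblink include an asynchronous API (`dblink_send_query` / `dblink_get_result`) that makes this pattern explicit without needing two client sessions; a rough sketch, with connection strings, queries, and result column types invented:

```sql
-- Hypothetical sketch: run two remote queries concurrently and
-- materialize each result set into its own table.
SELECT dblink_connect('c1', 'dbname=db1');
SELECT dblink_connect('c2', 'dbname=db2');

-- Both queries start running on their remote servers immediately.
SELECT dblink_send_query('c1', 'SELECT id, val FROM t1');
SELECT dblink_send_query('c2', 'SELECT id, val FROM t2');

-- Collect the results; the column list must match each remote query.
CREATE TABLE r1 AS
  SELECT * FROM dblink_get_result('c1') AS t(id integer, val text);
CREATE TABLE r2 AS
  SELECT * FROM dblink_get_result('c2') AS t(id integer, val text);

SELECT dblink_disconnect('c1');
SELECT dblink_disconnect('c2');
```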
Isn't this a prime example of when to use a servlet or something similar
in function? It will create the cursor, maintain it, and fetch against
it for a particular page.
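A rough sketch of that server-side paging pattern with a plain SQL cursor, which is what such a servlet would hold open between page requests (cursor, table, and page size are illustrative):

```sql
-- Hypothetical sketch: one long-lived transaction owns the cursor,
-- and each page request fetches the next batch of rows.
BEGIN;
DECLARE page_cur SCROLL CURSOR FOR
    SELECT * FROM big_table ORDER BY id;
FETCH FORWARD 50 FROM page_cur;   -- page 1
FETCH FORWARD 50 FROM page_cur;   -- page 2
MOVE ABSOLUTE 0 IN page_cur;      -- rewind to the start if needed
CLOSE page_cur;
COMMIT;
```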
Greg
-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED]
Sent: Thursday, January 20, 2005 10:21 AM
To:
It would seem we're experiencing something similar with our scratch
volume (JFS mounted with noatime). It is still much faster than our
experiments with ext2, ext3, and reiserfs, but occasionally during
large loads it will hiccup for a couple of seconds; no crashes yet.
I'm reluctant to switch back.