Hi all
I'm encountering an odd issue with a bulk import query using PostgreSQL
8.3. After a 400,000 row import into a just-truncated table `booking', a
sequential scan run on the table in the same transaction is incredibly
slow, taking ~ 166738.047 ms. After a:
`COMMIT; BEGIN;'
the same query…
Hi all
I've just spent some time working with PostgreSQL 8.3 trying to get a 90
minute job to run in a reasonable amount of time, and in the process
I've come up with something that I thought others might find useful.
Attached is a pair of PL/PgSQL functions that enable/disable the
triggers associated…
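The attachment isn't shown in the archive. For readers following along, here is a minimal sketch of what such a function might look like on 8.3; the name, signature, and catalog filter are my own guesses, not the posted code:

CREATE OR REPLACE FUNCTION set_table_triggers(tbl regclass, enable boolean)
RETURNS void AS $$
DECLARE
    trig name;
BEGIN
    -- Loop over the table's own (non-constraint) triggers and issue
    -- an ALTER TABLE ... ENABLE/DISABLE TRIGGER for each one.
    FOR trig IN
        SELECT tgname FROM pg_catalog.pg_trigger
        WHERE tgrelid = tbl AND NOT tgisconstraint
    LOOP
        EXECUTE 'ALTER TABLE ' || tbl::text
             || CASE WHEN enable THEN ' ENABLE' ELSE ' DISABLE' END
             || ' TRIGGER ' || quote_ident(trig);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

Typical use around a bulk load would be SELECT set_table_triggers('booking', false); before the COPY and SELECT set_table_triggers('booking', true); afterwards, accepting that whatever the triggers would have done must then be redone by hand.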
Craig Ringer wrote:
I'm encountering an odd issue with a bulk import query using PostgreSQL
8.3. After a 400,000 row import into a just-truncated table `booking', a
sequential scan run on the table in the same transaction is incredibly
slow, taking ~ 166738.047 ms. After a:
`COMMIT; BEGIN;'…
Thanks for the extremely helpful response. I don't think I would've
spotted that one in a hurry.
You must have an exception handler block in that pl/pgsql
function, which implicitly creates a new subtransaction on each
invocation of the exception handler block, so you end up with hundreds
of thousands of committed subtransactions.
Heikki Linnakangas wrote:
You must have an exception handler block in that pl/pgsql
function, which implicitly creates a new subtransaction on each
invocation of the exception handler block, so you end up with hundreds
of thousands of committed subtransactions.
I've just confirmed that…
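For anyone skimming, the problematic shape is simply a per-row function like the sketch below; the table and column names are invented for illustration. The EXCEPTION clause on the block is what starts a subtransaction on every call:

CREATE OR REPLACE FUNCTION upsert_booking(p_id integer, p_note text)
RETURNS void AS $$
BEGIN
    -- Because this block has an EXCEPTION clause, plpgsql starts an
    -- implicit subtransaction on every call; 400,000 calls inside one
    -- transaction leave 400,000 committed subtransaction XIDs that
    -- later visibility checks in the same transaction must wade through.
    INSERT INTO booking (id, note) VALUES (p_id, p_note);
EXCEPTION WHEN unique_violation THEN
    UPDATE booking SET note = p_note WHERE id = p_id;
END;
$$ LANGUAGE plpgsql;

The handler doesn't even have to fire; its mere presence is enough.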
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
> For 8.4, it would be nice to improve that. I tested that on my laptop
> with a similarly-sized table, inserting each row in a pl/pgsql function
> with an exception handler, and I got very similar run times. According
> to oprofile, all the time is spent in TransactionIdIsInProgress. …
Tom Lane wrote:
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
For 8.4, it would be nice to improve that. I tested that on my laptop
with a similarly-sized table, inserting each row in a pl/pgsql function
with an exception handler, and I got very similar run times. According
to oprofile, all the time is spent in TransactionIdIsInProgress. …
Craig Ringer <[EMAIL PROTECTED]> writes:
> It seems to happen with every statement run in the same transaction as,
> and after, the procedure with all the subtransactions. As soon as a
> COMMIT is executed, operations return to normal speed.
Ah. I misread your post as saying that it happened on…
Craig Ringer wrote:
I'll bang out a couple of examples at work tomorrow to see what I land
up with, since this is clearly something that can benefit from a neat
test case.
Here's what I used to reproduce this:
postgres=# BEGIN;
BEGIN
postgres=# CREATE TABLE foo (id int4, t text);
CREATE TABLE
…
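The preview cuts the session short. A plausible way the reproduction could continue, guessing the remaining steps from the problem description rather than from Craig's actual transcript:

CREATE FUNCTION insert_foo(p_id int4) RETURNS void AS $$
BEGIN
    INSERT INTO foo VALUES (p_id, 'x');
EXCEPTION WHEN unique_violation THEN
    NULL;  -- never fires here; the block alone forces a
           -- subtransaction per call
END;
$$ LANGUAGE plpgsql;

SELECT insert_foo(i) FROM generate_series(1, 400000) AS i;
SELECT count(*) FROM foo;   -- crawls inside the same transaction
COMMIT;
SELECT count(*) FROM foo;   -- back to normal speed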
Tom Lane wrote:
Craig Ringer <[EMAIL PROTECTED]> writes:
It seems to happen with every statement run in the same transaction as,
and after, the procedure with all the subtransactions. As soon as a
COMMIT is executed, operations return to normal speed.
Ah. I misread your post as saying…
"Tom Lane" <[EMAIL PROTECTED]> writes:
> Well, scanning an index to get a count might be significantly faster
> than scanning the main table, but it's hardly "instantaneous". It's
> still going to take time proportional to the table size.
Hm, Mark's comment about bitmap indexes makes that not entirely…
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
> The oprofile output is pretty damning:
> samples  %        symbol name
> 42148    99.7468  TransactionIdIsCurrentTransactionId
Oh, I have no doubt that that could eat a lot of cycles inside the
originating transaction ;-). I just misread Craig's…
Hi,
On Tuesday 26 February 2008, Dimitri Fontaine wrote:
> You may remember some thread about data loading performances and
> multi-threading support in pgloader:
> http://archives.postgresql.org/pgsql-performance/2008-02/msg00081.php
As people here have asked for the new features implemented in…
We experienced a similar degradation when making heavy use of savepoints
within a single transaction.
However, we have not yet had enough time to really investigate the issue.
It was also not directly reproducible using a (small) set of statements from a
script.
As the overall scenario "bulk loads with sub-…
Gregory,
I just joined this listserv and was happy to see this posting. I have a
400GB table that I have indexed (building the index took 27 hours); loading
the table with 10 threads took 9 hours. I run queries on the data and get
immediate max and min as well as other aggregate functions very quickly…
In response to "Joe Mirabal" <[EMAIL PROTECTED]>:
> Gregory,
>
> I just joined this listserv and was happy to see this posting. I have a
> 400GB table that I have indexed (building the index took 27 hours); loading
> the table with 10 threads took 9 hours. I run queries on the data and get
> immediate…
On Mon, 10 Mar 2008, Bill Moran wrote:
Some searches through the archives should turn up details on these
methods.
I've collected up what looked like the best resources on this topic into
the FAQ entry at http://www.postgresqldocs.org/index.php/Slow_Count
General Bits has already done two…
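One of the standard methods those pages cover is keeping the count in a side table maintained by triggers. A minimal sketch with invented names, just to show the shape of it:

CREATE TABLE rowcount (tbl name PRIMARY KEY, n bigint NOT NULL);
INSERT INTO rowcount VALUES ('bigtable', 0);

CREATE FUNCTION rowcount_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE rowcount SET n = n + 1 WHERE tbl = TG_TABLE_NAME;
        RETURN NEW;
    ELSE  -- DELETE
        UPDATE rowcount SET n = n - 1 WHERE tbl = TG_TABLE_NAME;
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER bigtable_rowcount AFTER INSERT OR DELETE ON bigtable
    FOR EACH ROW EXECUTE PROCEDURE rowcount_trig();

SELECT n FROM rowcount WHERE tbl = 'bigtable';  -- instant, no scan

The trade-off is that every writer now updates the same counter row, which serializes concurrent inserts, so it only suits tables without heavy concurrent write traffic.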
On Mon, 2008-03-10 at 11:01 +, Heikki Linnakangas wrote:
> According
> to oprofile, all the time is spent in TransactionIdIsInProgress.
I recently submitted a patch to optimise this. Your comments would be
welcome on the patch.
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
On Mon, Mar 10, 2008 at 1:54 PM, Joe Mirabal <[EMAIL PROTECTED]> wrote:
> Gregory,
>
> I just joined this listserv and was happy to see this posting. I have a
> 400GB table that I have indexed (building the index took 27 hours); loading
> the table with 10 threads took 9 hours. I run queries on…
On Mon, 10 Mar 2008, Joe Mirabal wrote:
I run queries on the data and get immediate max and min as well as other
aggregate functions very quickly, however a select count(*) of the table
takes forever, usually nearly an hour or more.
Are you sure the form of "select count(*)" you're using is actually…
Hi!
I read and tried many of the things you pointed me to. Raised shared buffers
to 180 MB, and tried again. Same results.
I deleted the DB, created a new one and generated new test data. I
now have 72k rows, and the same query finishes in... 9 seconds.
I'm totally clueless. Anyway, two questions…
On Mon, 10 Mar 2008 23:17:54 +
Miguel Arroz <[EMAIL PROTECTED]> wrote:
> Hi!
>
> I read and tried many of the things you pointed me to. Raised shared
> buffers to 180 MB, and tried again. Same results.
>
> I deleted the DB, created a new one and generated…
Hi!
It has now risen to 40 seconds... here is the result of iostat:
iostat -K -c 40
          tty             ad4              ad6             cpu
 tin  tout   KB/t  tps  MB/s   KB/t  tps  MB/s  us ni sy in id
   1    78  32.86   34  1.08   0.70    0  0.00  13  0  1  0 86
   0   180   6.00    4  0.02   0.00 …
On Mon, 10 Mar 2008 23:46:10 +
Miguel Arroz <[EMAIL PROTECTED]> wrote:
          tty             ad4              ad6             cpu
>  tin  tout   KB/t  tps  MB/s   KB/t  tps  MB/s  us ni sy in id
>    0    60  33.48  449 14.66   0.00    0  0.00  11  0 …
Hi!
The disk subsystem will be a RAID 1, but for now it's just a single
7200 rpm 160 GB SATA hard drive. The PgSQL version is 8.3, the latest
one.
I have done some performance tests on the drive, and it handles
about 40 MB/s on sequential writes, so I'm assuming it's OK.
Yours
Miguel Arroz
Tom Lane writes:
Vlad Arkhipov <[EMAIL PROTECTED]> writes:
I've come across this issue while writing a report-like query for 2 not
very large tables. I've tried several methods to resolve this one (see
below). But now I'm really stuck...
It looks like you are wishing to optimize for all-…
Hi,
I have been reading this conversation for a few days now and I just wanted
to ask this. From the release notes, one of the new additions in 8.3 is
(Allow col IS NULL to use an index (Teodor)).
Sorry if I am missing something here, but shouldn't something like this
allow us to get a (fast) accurate count…
Robins Tharakan wrote:
Hi,
I have been reading this conversation for a few days now and I just
wanted to ask this. From the release notes, one of the new additions
in 8.3 is (Allow col IS NULL to use an index (Teodor)).
Sorry if I am missing something here, but shouldn't something like
this…
On Tue, 11 Mar 2008 08:27:05 +0530
"Robins Tharakan" <[EMAIL PROTECTED]> wrote:
> SELECT COUNT(*) from table WHERE indexed_field IS NULL
> +
> SELECT COUNT(*) from table WHERE indexed_field IS NOT NULL
If the selectivity is appropriate, yes. However, if you have 1 million
rows, and 200k of those rows…
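To make the selectivity point concrete, here is a hypothetical demonstration with invented numbers, not anything from the thread; on 8.3 you could try:

CREATE TABLE t AS
    SELECT i AS id,
           CASE WHEN i % 5 = 0 THEN NULL ELSE i END AS indexed_field
    FROM generate_series(1, 1000000) AS i;
CREATE INDEX t_null_idx ON t (indexed_field);
ANALYZE t;

EXPLAIN SELECT count(*) FROM t WHERE indexed_field IS NULL;
-- with 200k of 1M rows NULL, the planner is likely to pick a bitmap
-- or sequential scan anyway, so the IS NULL / IS NOT NULL split
-- ends up no cheaper than a plain count(*)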
On Mon, Mar 10, 2008 at 7:57 PM, Robins Tharakan <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have been reading this conversation for a few days now and I just wanted
> to ask this. From the release notes, one of the new additions in 8.3 is
> (Allow col IS NULL to use an index (Teodor)).
>
> Sorry if I…
On Mon, 10 Mar 2008, Miguel Arroz wrote:
I deleted the DB, created a new one and generated new test data. I now have
72k rows, and the same query finishes in... 9 seconds.
This seems like more evidence that your problem here is related to dead
rows (this is what Andrew suggested). If a fresh…
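A quick way to check the dead-row theory, assuming the stats collector is running; 'yourtable' is a placeholder to substitute:

-- live vs. dead tuple estimates per table (8.3 stats views):
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;

-- and VACUUM VERBOSE reports what it removed and what it couldn't:
VACUUM VERBOSE yourtable;

A dead-row count rivalling the live-row count would point at the update/delete pattern rather than at the query itself.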
On Mon, Mar 10, 2008 at 4:31 PM, Heikki Linnakangas
<[EMAIL PROTECTED]> wrote:
> According
> to oprofile, all the time is spent in TransactionIdIsInProgress. I think
> it would be pretty straightforward to store the committed subtransaction
ids in a sorted array, instead of a linked list, and…