-----Original Message-----
From: Magnus Hagander [mailto:mag...@hagander.net]
Sent: Tuesday, July 13, 2010 3:26 PM
To: Thom Brown
Cc: Duncavage, Daniel P. (JSC-OD211); pgsql-general@postgresql.org
Subject: Re: [GENERAL] NASA needs Postgres - Nagios help
On Tue, Jul 13, 2010 at 20:10, Thom Brown
On Sat, Jul 17, 2010 at 3:14 AM, Tom Lane wrote:
> Howard Rogers writes:
>> OK, Tom: I did actually account for the number of rows difference
>> before I posted, though I accept I didn't show you that. So here goes:
>> ...
>> Both queries return zero rows. One takes an awful lot longer than the
>
On 17/07/10 04:26, Thomas Kellerer wrote:
> Hmm.
> For years I have been advocating always using fully qualified column
> lists in INSERTs (for clarity and stability).
> And now I learn it's slower when I do so :(
If you're not doing hundreds of thousands of identical ones at a time,
it's still v
Tom Lane wrote on 16.07.2010 18:40:
Thomas Kellerer writes:
the explanation of the --inserts option of pg_dump states that
"The --column-inserts option is safe against column order changes, though even
slower."
The way I read this is that
INSERT INTO table (column, ...) VALUES ...
Hi Julian,
Using this approach to break up the query, I am able to update about 1500 rows
per minute, which means it will take over 100 days to complete, so I need to
figure out why this is slow and whether there is a faster way.
UPDATE table1 SET new_column = table2.new_column FROM table2 WHERE
table1.row_id =
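One common way to chop such an update into pieces is to batch by primary-key range, so each transaction touches a bounded slice. A sketch assuming row_id is an integer primary key (table and column names are from the thread; the range bounds are illustrative and would be advanced on each pass):

```sql
-- Sketch: update one bounded slice per transaction, assuming row_id is a
-- dense integer primary key on both tables. Advance the range each run.
UPDATE table1
SET    new_column = table2.new_column
FROM   table2
WHERE  table1.row_id = table2.row_id
  AND  table1.row_id BETWEEN 1 AND 10000;
```

Small batches keep each transaction short and let an index scan on row_id drive the join instead of a full-table pass per statement.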
Hi Julian,
Sorry for the slow response. I think I will need to chop up the query
somehow, but have not yet found an efficient way to do that. row_id is the
primary key in both tables, so that might work.
Here is the explain:
urls_jrubin_merged=# EXPLAIN UPDATE table1 SET row_id = table2.row_id F
2010/7/16 Vincenzo Romano :
> Hi all.
> I'd like to add an item to the PG wishlist (provided that one exists).
> In PL/PgSQL function bodies I very often use a pattern like this:
>
> EXECUTE SPRINTF( '...',... );
>
> Where SPRINTF comes from here:
> http://wiki.postgresql.org/wiki/Sprintf
>
> I
Howard Rogers writes:
> OK, Tom: I did actually account for the number of rows difference
> before I posted, though I accept I didn't show you that. So here goes:
> ...
> Both queries return zero rows. One takes an awful lot longer than the
> other. The only difference between them is that one sea
"Wappler, Robert" writes:
> Attached is a query and its corresponding plan, where sorting of the CTE
> "acts" seems to be the bottleneck. It is a real execution plan captured
> with the auto_explain module.
There isn't a lot of intelligence about CTEs at the moment; in
particular I don't believe w
Thomas Kellerer writes:
> the explanation of the --inserts option of pg_dump states that
> "The --column-inserts option is safe against column order changes, though
> even slower."
> The way I read this is that
>INSERT INTO table (column, ...) VALUES ...
> is slower than
>INSERT INTO
On 16/07/10 05:22, Howard Rogers wrote:
OK, Tom: I did actually account for the number of rows difference
before I posted, though I accept I didn't show you that. So here goes:
Tom's good, but his mind-reading powers aren't what they used to be :-)
ims=# select count(*)
ims-# from search_rm
i
Does it run any differently if you split out the tag?
select count(*) from search_rm where to_tsvector('english', textsearch) @@
to_tsquery('english', 'woan & batt') and to_tsvector('english',
textsearch) @@ 'ftx1'::tsquery
Steve
On 16 July 2010 05:22, Howard Rogers wrote:
> On Fri, Jul
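The split suggested above can be put side by side with the combined form. A sketch, assuming the original query folded the ftx1 tag into a single tsquery (table, column, and term names taken from the thread):

```sql
-- Combined form: all terms in one tsquery.
SELECT count(*)
FROM   search_rm
WHERE  to_tsvector('english', textsearch)
       @@ to_tsquery('english', 'woan & batt & ftx1');

-- Split form suggested above: the common tag is matched separately,
-- which may let the selective terms be evaluated first.
SELECT count(*)
FROM   search_rm
WHERE  to_tsvector('english', textsearch)
       @@ to_tsquery('english', 'woan & batt')
  AND  to_tsvector('english', textsearch) @@ 'ftx1'::tsquery;
```

Comparing EXPLAIN ANALYZE output for the two forms would show whether the planner treats them differently.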
On Fri, Jul 16, 2010 at 10:23 AM, Tom Lane wrote:
> Howard Rogers writes:
>> I have 10 million rows in a table, with full text index created on one
>> of the columns. I submit this query:
>
>> ims=# select count(*) from search_rm
>> ims-# where to_tsvector('english', textsearch)
>> ims-# @@ to_ts
Hi,
Attached is a query and its corresponding plan, where sorting of the CTE
"acts" seems to be the bottleneck. It is a real execution plan captured
with the auto_explain module.
The query is recursive. In each iteration the CTE "acts" is sorted again,
which is obviously quite expensive for about 24000
Hi all.
I'd like to add an item to the PG wishlist (provided that one exists).
In PL/PgSQL function bodies I very often use a pattern like this:
EXECUTE SPRINTF( '...',... );
Where SPRINTF comes from here:
http://wiki.postgresql.org/wiki/Sprintf
It's far more powerful, easy, and effective
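For comparison, the same dynamic-SQL pattern can be written with only built-in facilities: quote_ident for identifiers and EXECUTE ... USING for values (available since 8.4). A minimal sketch; the function, table, and parameter names here are illustrative, not from the thread:

```sql
-- Sketch: dynamic SQL without a sprintf helper. quote_ident guards the
-- identifier; the value is passed as a parameter via USING, so no
-- literal quoting is needed at all.
CREATE OR REPLACE FUNCTION count_rows(tbl text, min_id integer)
RETURNS bigint AS $$
DECLARE
  n bigint;
BEGIN
  EXECUTE 'SELECT count(*) FROM ' || quote_ident(tbl)
          || ' WHERE id >= $1'
  INTO n
  USING min_id;
  RETURN n;
END;
$$ LANGUAGE plpgsql;
```

Passing values through USING also lets the server plan the statement with a real parameter rather than re-parsing interpolated literals.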
In response to Ivan Sergio Borgonovo :
> > If you need strong user resource limits, user storage limits, etc
> > PostgreSQL might not be your best option. There are some things
> > you can do, but there's not much.
>
> What about an external process that monitors backends and kills them
> gracefully
On Fri, 16 Jul 2010 19:43:01 +0800
Craig Ringer wrote:
> On 16/07/10 19:21, Ivan Sergio Borgonovo wrote:
> > How can I learn to manage resources in postgres and
> > grant different users different time slots/memory/CPU?
> >
> > e.g., I'd like to allow user A to execute queries that last
On 16/07/10 19:21, Ivan Sergio Borgonovo wrote:
> How can I learn to manage resources in postgres and grant
> different users different time slots/memory/CPU?
>
> e.g., I'd like to allow user A to execute queries that last less
> than 1 min or that occupy no more than X MB... etc...
Postg
In response to Ivan Sergio Borgonovo :
> How can I learn to manage resources in postgres and grant
> different users different time slots/memory/CPU?
>
> e.g., I'd like to allow user A to execute queries that last less
> than 1 min or that occupy no more than X MB... etc...
Isn't (real) p
How can I learn to manage resources in postgres and grant
different users different time slots/memory/CPU?
e.g., I'd like to allow user A to execute queries that last less
than 1 min or that occupy no more than X MB... etc...
--
Ivan Sergio Borgonovo
http://www.webthatworks.it
--
Sent
On 07/16/10 02:23, Tom Lane wrote:
> Howard Rogers writes:
>> I have 10 million rows in a table, with full text index created on one
>> of the columns. I submit this query:
>
>> ims=# select count(*) from search_rm
>> ims-# where to_tsvector('english', textsearch)
>> ims-# @@ to_tsquery('english'
Hi,
the explanation of the --inserts option of pg_dump states that
"The --column-inserts option is safe against column order changes, though even
slower."
The way I read this is that
INSERT INTO table (column, ...) VALUES ...
is slower than
INSERT INTO table VALUES ...
Is that really tr