On Wed, May 26, 2010 at 1:27 AM, Len Walter wrote:
>
> PL/pgsql doesn't allow that because it doesn't support nested transactions.
> Is there an equivalent Postgres way of doing this?
what about this?
create function f() returns void as $$
declare
r record;
begin
for r in select col_a fro
Hello
it's unnecessary in PostgreSQL - it doesn't matter whether you commit one
or a billion updated rows. PostgreSQL implements transactions
differently, so some of Oracle's issues don't exist here.
Regards
Pavel Stehule
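For what it's worth, if you still want to bound per-statement work, the usual approach is batching by key range. This is a hypothetical sketch - it assumes table t has an integer primary key "id", which the thread never states:

```sql
-- Hypothetical batched form of the thread's UPDATE; assumes an integer
-- primary key "id".  As noted above, PostgreSQL does not need
-- intermediate commits for MVCC reasons, but smaller batches can still
-- bound memory used per statement (e.g. by AFTER triggers).
UPDATE t SET col_c = col_a + col_b WHERE id BETWEEN 1 AND 10000000;
UPDATE t SET col_c = col_a + col_b WHERE id BETWEEN 10000001 AND 20000000;
-- ...and so on; run from psql with autocommit, each batch is its own
-- transaction.
```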
2010/5/26 Len Walter :
> Hi,
> I need to populate a new column in a Postgres 8.3
On 26 May 2010, at 8:27, Len Walter wrote:
> Hi,
>
> I need to populate a new column in a Postgres 8.3 table. The SQL would be
> something like "update t set col_c = col_a + col_b". Unfortunately, this
> table has 110 million rows, so running that query runs out of memory.
That's unusual, what
Hello.
I have 2 questions:
Is effective_io_concurrency working on WinXP sp2 ?
and what is the difference between effective_io_concurrency = 0
and effective_io_concurrency = 1
Postgres 8.4.4
pasman
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make change
On Wed, May 26, 2010 at 04:27:22PM +1000, Len Walter wrote:
> Hi,
>
> I need to populate a new column in a Postgres 8.3 table. The SQL would be
> something like "update t set col_c = col_a + col_b". Unfortunately, this
> table has 110 million rows, so running that query runs out of memory.
> In Or
2010/5/26 pasman pasmański :
> Hello.
>
> I have 2 questions:
> Is effective_io_concurrency working on WinXP sp2 ?
No.
> and what is the difference between effective_io_concurrency = 0
> and effective_io_concurrency = 1
0 disables prefetching;
1 allows a short prefetch window.
>
>
> Postgres 8.4.4
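For reference, the setting lives in postgresql.conf; a minimal sketch, with the semantics per the answers above:

```
# postgresql.conf
# Prefetch hint for bitmap heap scans.  Relies on posix_fadvise(),
# so it has no effect on Windows.
effective_io_concurrency = 1   # 0 = no prefetching, 1 = short prefetch window
```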
Hello.
>Is effective_io_concurrency working on WinXP sp2 ?
No.
"effective_io_concurrency" works when an OS can use posix_fadvise().
But I think WinXP doesn't have posix_fadvise().
>and what is the difference between effective_io_concurrency = 0
>and effective_io_concurrency = 1
If effective_io
Hello,
I am using postgres 8.3.1, and have implemented warm standby very much like
the one described in the high availability documentation on this site.
It seems to work well except for this problem: I've had a case where the
postgresql server was interrupted while in recovery (I think it was a us
create table test(i number, j number);
insert into test values(1,2);
insert into test values(1,3);
insert into test values(1,4);
select * from test;
I J
-- --
1 2
1 3
1 4
My intentions:
after update
select *
Or Kroyzer writes:
> I am using postgres 8.3.1,
... you really ought to be using 8.3.something-recent ...
> and have implemented warm standby very much like
> the one described in the high availability documentation on this site.
> It seems to work well except for this problem: I've had a case w
Hi all,
How can I reduce the disk space PostgreSQL uses?
I deleted many rows from my database and
I run ' vacuum analyze ' and ' reindexdb ' regularly,
but the disk space on my Linux machine didn't shrink.
I know that the ' vacuum full ' command can do that, but I don't want to use
that
On 26 May 2010 15:16, paladine wrote:
>
> Hi all,
>
> How can I reduce the disk space PostgreSQL uses?
> I deleted many rows from my database and
> I run ' vacuum analyze ' and ' reindexdb ' regularly,
> but the disk space on my Linux machine didn't shrink.
>
> I know that ' vacuum full
On Wed, May 26, 2010 at 10:16 AM, paladine wrote:
> Anyone know another method ?
>
options to reclaim disk space:
vacuum full
dump/restore (sometimes faster than vacuum full)
cluster (not MVCC-safe as far as I know)
alter a table column to its own type, like this:
alter table foo alter column m
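The truncated ALTER above is presumably the rewrite trick: altering a column to its own type forces PostgreSQL (8.x) to rewrite the whole table, compacting away dead space as a side effect. A sketch with illustrative names (the original column name is lost to the digest):

```sql
-- Illustrative names only.  Takes an ACCESS EXCLUSIVE lock and needs
-- enough free disk for a full copy of the table, just like VACUUM FULL.
ALTER TABLE foo ALTER COLUMN mycol TYPE integer;  -- mycol is already integer
```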
It is the default value ( #checkpoint_segments = 3  # in logfile segments, min
1, 16MB each ).
Many of my database configuration settings are defaults (plain TOAST etc.).
My database is a log database, so some of its tables grow all the time.
My ' /base ' directory contains a lot of compressed objects (1GB
On 26 May 2010 15:50, paladine wrote:
>
> It is default value ( #checkpoint_segments = 3 # in logfile segments, min
> 1, 16MB each )
> Many of my database configurations are default values. (plain TOAST etc)
> my database is a log database so, some tables of db grow everytime.
> My ' /base ' d
While you can do
insert into foo (a, b) select a, b from bar;
and
delete from bar where a=1 returning a, b;
it fails to run
insert into foo (a, b) delete from bar where a=1 returning a, b;
This is not what I would expect. Is there a reason for it not working?
Cheers
Palle
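For what it's worth, later releases (PostgreSQL 9.1 and up) added data-modifying CTEs, which express exactly this move-rows idiom:

```sql
-- Delete matching rows from bar and insert them into foo, atomically,
-- using a writable CTE (PostgreSQL 9.1+).
WITH moved AS (
    DELETE FROM bar WHERE a = 1
    RETURNING a, b
)
INSERT INTO foo (a, b)
SELECT a, b FROM moved;
```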
On 05/26/2010 07:16 AM, paladine wrote:
Hi all,
How can I reduce the disk space PostgreSQL uses?
I deleted many rows from my database and
I run ' vacuum analyze ' and ' reindexdb ' regularly,
but the disk space on my Linux machine didn't shrink.
I know that ' vacuum full ' command ca
On Wed, May 26, 2010 at 11:17 AM, Palle Girgensohn wrote:
> While you can do
>
> insert into foo (a, b) select a, b from bar;
>
> and
>
> delete from bar where a=1 returning a, b;
>
> it fails to run
>
> insert into foo (a, b) delete from bar where a=1 returning a, b;
>
> This is not what I would
On Wed, May 26, 2010 at 2:52 AM, Ravi Katkar wrote:
> How can we achieve nested transactions? ( may be using save points )with in
> functions.
> Is there any work around?
It unfortunately can't be done from within the database. There is
only one workaround -- using dblink or similar technology
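One nuance: while you cannot COMMIT inside a function, PL/pgSQL's BEGIN ... EXCEPTION blocks do give you savepoint-style subtransactions for error recovery. A minimal sketch (table and function names are illustrative):

```sql
-- Illustrative names.  The inner BEGIN/EXCEPTION block runs inside an
-- implicit savepoint: if the INSERT fails, only the inner block's work
-- is rolled back and the function continues.
CREATE OR REPLACE FUNCTION try_insert() RETURNS void AS $$
BEGIN
    BEGIN
        INSERT INTO foo VALUES (1);
    EXCEPTION WHEN unique_violation THEN
        RAISE NOTICE 'duplicate ignored';
    END;
END;
$$ LANGUAGE plpgsql;
```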
Palle Girgensohn wrote:
> While you can do
>
> insert into foo (a, b) select a, b from bar;
>
> and
>
> delete from bar where a=1 returning a, b;
>
> it fails to run
>
> insert into foo (a, b) delete from bar where a=1 returning a, b;
>
> This is not what I would expect. Is there a reason for it
On 05/26/2010 07:50 AM, paladine wrote:
It is default value ( #checkpoint_segments = 3# in logfile segments, min
1, 16MB each )
Many of my database configurations are default values. (plain TOAST etc)
my database is a log database so, some tables of db grow everytime.
My ' /base ' directory
My PostgreSQL version is 8.1.11.
My log entries are on the filesystem and I write these logs to the db from a
unix stream.
Another interesting situation is that while my logs on filesystem contains
about 5-6 GB,
db directory (/base/OID) contains 40 GB. Is that normal ?
I implement your suggestions by usi
Hi:
Under the hood, does PG implement an update statement as a delete followed by
an insert? I'm at a point in coding a script where it would be more
expeditious for me to delete/insert a record as opposed to update, and I want
to know if I lose anything by doing that.
Thanks in Advance !
Gauthier, Dave wrote:
> Hi:
>
>
>
> Under the hood, does PG implement an update statement as a delete followed by
> an insert? I'm at a point in coding a script where it would be more
Yes - under MVCC an UPDATE marks the old row version dead and creates a new
one; VACUUM later reclaims the dead version.
Andreas
--
Really, I'm not out to destro
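This is easy to observe: a row's ctid (its physical location) changes after an UPDATE, showing that a new tuple version was written. A sketch:

```sql
-- ctid is the tuple's physical (block, offset) address; an UPDATE writes
-- a new tuple, so the address changes.  Exact values depend on the page
-- layout, so none are shown here.
CREATE TABLE demo (id int, v int);
INSERT INTO demo VALUES (1, 10);
SELECT ctid FROM demo;
UPDATE demo SET v = 11 WHERE id = 1;
SELECT ctid FROM demo;   -- a different address than before the UPDATE
```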
On May 26, 2010, at 11:41 AM, Gauthier, Dave wrote:
> Hi:
>
> Under the hood, does PG implement an update statement as a delete followed by
> an insert? I'm at a point in coding a script where it would be more
> expeditious for me to delete/insert a record as opposed to update and want to
Oh.. yes, I forgot about the triggers.
Good call !!! This table has lots of triggers.
Thanks for reminding me of this!!!
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Steve Atkins
Sent: Wednesday, May 26, 2010
On tis, 2010-05-25 at 12:05 -0400, Richard Wallace wrote:
> 1) When using cursor_to_xml in a plpgsql function, the FOUND variable does
> not seem to get set, so there is no way to exit a loop that is iterating over
> the cursor. Below is the function code; it loops indefinitely when it is run.
>
Please forgive this intrusion, and please ignore it, but how many
applications out there have 110,000,000-row tables? I recently
multiplied 85,000 by 1,400 and said no way, José.
Thanks,
John Gage
I've had that many times before and things were very slow. That's when
I partitioned it out. Luckily that table was just for reporting and
could be slow. Are you thinking you'll need that many rows and you just
don't know how to handle it? I would recommend partitioning if at all
possible.
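In the 8.x era, "partitioning" means table inheritance plus CHECK constraints and constraint exclusion; a minimal sketch with illustrative names:

```sql
-- Range partitioning via inheritance (the 8.x-era approach; names are
-- illustrative).  Each child carries a CHECK constraint describing its
-- slice of the data.
CREATE TABLE logs (logdate date NOT NULL, payload text);

CREATE TABLE logs_2010_05 (
    CHECK (logdate >= DATE '2010-05-01' AND logdate < DATE '2010-06-01')
) INHERITS (logs);

-- With constraint_exclusion = on ('partition' in 8.4+), a query such as
--   SELECT * FROM logs WHERE logdate = '2010-05-26';
-- skips child tables whose CHECK constraint rules them out.
```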
On Wed, May 26, 2010 at 4:29 PM, John Gage wrote:
> Please forgive this intrusion, and please ignore it, but how many
> applications out there have 110,000,000 row tables? I recently multiplied
> 85,000 by 1,400 and said no way, José.
/me stands and waves hand. I have two tables in my primary a
On 26 May 2010 21:29, John Gage wrote:
> Please forgive this intrusion, and please ignore it, but how many
> applications out there have 110,000,000 row tables? I recently multiplied
> 85,000 by 1,400 and said no way, José.
>
> Thanks,
>
> John Gage
>
There's no reason why it can't have that man
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Thom Brown
> Sent: Wednesday, May 26, 2010 2:14 PM
> To: John Gage
> Cc: PostgreSQL - General
> Subject: Re: [GENERAL] 110,000,000 rows
>
> On 26 May 2010 21:29, John
On Wed, May 26, 2010 at 4:29 PM, John Gage wrote:
> Please forgive this intrusion, and please ignore it, but how many
> applications out there have 110,000,000 row tables? I recently multiplied
> 85,000 by 1,400 and said no way, José.
>
>
I've got a DB with two 500,000,000+ row tables. Performan
At work I have one table with 32 million rows, not quite the size you
are talking about, but to give you an idea of the performance, the
following query returns 14,659 rows in 405ms:
SELECT * FROM farm.frame
WHERE process_start > '2010-05-26';
process_start is a timestamp without time zone column
> > I need to populate a new column in a Postgres 8.3 table. The SQL would be
> something like "update t set col_c = col_a + col_b". Unfortunately, this
> table has 110 million rows, so running that query runs out of memory.
>
> That's unusual, what is the error you get?
Here it is:
TopMemoryCont
Len Walter writes:
>>> I need to populate a new column in a Postgres 8.3 table. The SQL would be
>>> something like "update t set col_c = col_a + col_b". Unfortunately, this
>>> table has 110 million rows, so running that query runs out of memory.
>>
>> That's unusual, what is the error you get?
Hello.
First, we cannot execute the SQL which Palle indicated in PostgreSQL.
See this manual.
==
http://www.postgresql.org/docs/8.4/interactive/sql-update.html
Compatibility
This command conforms to the SQL standard, except
Hello,
While I can't answer your question, I'm afraid I did not understand your
intentions either. The update you mentioned may not return consistent
results, will it? You are not ordering the data in any particular manner,
and there is no unique column either. So the result of the update will
d