9. August 2012 09:12
> To: Geert Mak; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] processing large amount of rows with plpgsql
>
> > There is (almost) no way to
> > force commit inside a function --
>
> So what you are saying is that this behavior is normal and we should
> either equip ourselves with enough disk space (which I am trying now;
> it is a cloud server, which I am resizing to gain more disk space and
> see what happens) or split the conversion into smaller, separately
> committed batches?
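A PL/pgSQL function always runs inside a single transaction, so the batching has to be driven from outside the function, committing between calls. A minimal sketch of that approach, assuming a hypothetical bigint key "id" on statistics and a target table statistics2 (the names and columns are placeholders, not from the thread):

    -- Sketch only: convert one id range per call, so that each call can
    -- be committed separately by the client. Adjust names to your schema.
    CREATE OR REPLACE FUNCTION convert_batch(lo bigint, hi bigint)
    RETURNS bigint
    LANGUAGE plpgsql
    AS $$
    DECLARE
        n bigint;
    BEGIN
        INSERT INTO statistics2
        SELECT *              -- the "very minimal transformation" would go here
        FROM statistics
        WHERE id >= lo AND id < hi;
        GET DIAGNOSTICS n = ROW_COUNT;
        RETURN n;             -- rows converted in this batch
    END;
    $$;

    -- From psql in autocommit mode, each call is its own transaction:
    --   SELECT convert_batch(      0,  5000000);
    --   SELECT convert_batch(5000000, 10000000);
    --   ... and so on up through the maximum id

Because each call commits on its own, the WAL and old row versions from one batch do not have to be held for the whole 50-million-row run.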
On 08.08.2012, at 22:04, Merlin Moncure wrote:
> What is the general structure of the procedure? In particular, how
> are you browsing and updating the rows?
Here it is -

BEGIN
    FOR statistics_row IN SELECT * FROM statistics ORDER BY time ASC
    LOOP
        ...
        ... here some very minimal transformation is done on each row
        ... before it is inserted into the target table
    END LOOP;
END;
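For completeness: if the per-row work really is that minimal, a single set-based statement usually beats a row-at-a-time loop by a wide margin, although it still executes as one big transaction. A rough sketch, reusing the placeholder tables from above, with "val" standing in for whichever column gets transformed:

    INSERT INTO statistics2 (id, time, val)
    SELECT id, time, upper(val)   -- stand-in for the minimal transformation
    FROM statistics;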
On Wed, Aug 8, 2012 at 2:41 PM, Geert Mak wrote:
> hello everybody,
>
> we are trying to move the data from table1 into table2 using a plpgsql
> stored procedure which performs a simple data conversion
>
> there are about 50 million rows
>
> the tables are relatively simple, less than a dozen columns, most are
> integer, a couple are char(32) and one is