Re: [GENERAL] processing large amount of rows with plpgsql

2012-08-09 Thread Marc Mamin
9. August 2012 09:12
> To: Geert Mak; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] processing large amount of rows with plpgsql
>
> > > There is (almost) no way to
> > > force commit inside a function --
> >
> > So what you are saying is that ...

Re: [GENERAL] processing large amount of rows with plpgsql

2012-08-09 Thread Marc Mamin
> > There is (almost) no way to
> > force commit inside a function --
>
> So what you are saying is that this behavior is normal and we should
> either equip ourselves with enough disk space (which I am trying now,
> it is a cloud server, which I am resizing to gain more disk space and
> see what ...
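
The behavior under discussion follows from the fact that a plpgsql function always executes inside the calling statement's single transaction, so every row version produced by the conversion stays live until the function returns. A common workaround was to drive the work in key-range batches from the client, committing between calls. A minimal sketch, assuming table1 has an integer primary key id and using a placeholder conversion expression (neither is shown in the thread):

    -- Hypothetical batch function: converts one slice of table1 into table2.
    CREATE OR REPLACE FUNCTION convert_batch(from_id integer, to_id integer)
    RETURNS bigint AS $$
    DECLARE
        n bigint;
    BEGIN
        INSERT INTO table2 (id, payload)
        SELECT id, upper(payload)          -- stand-in for the real conversion
        FROM table1
        WHERE id >= from_id AND id < to_id;
        GET DIAGNOSTICS n = ROW_COUNT;
        RETURN n;
    END;
    $$ LANGUAGE plpgsql;

    -- Driven from psql (or any client), each statement is its own
    -- transaction, so finished batches are committed as you go:
    --   SELECT convert_batch(1,       1000001);
    --   SELECT convert_batch(1000001, 2000001);
    --   ...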

Re: [GENERAL] processing large amount of rows with plpgsql

2012-08-08 Thread Geert Mak
On 08.08.2012, at 22:04, Merlin Moncure wrote:
> What is the general structure of the procedure? In particular, how
> are you browsing and updating the rows?

Here it is -

BEGIN
for statistics_row in SELECT * FROM statistics ORDER BY time ASC LOOP
...
... here some very minimal trans ...
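
Filled out into a runnable skeleton for readers following along (the function name, target table, and column list are assumptions; the archive snippet cuts off before the actual transformation):

    CREATE OR REPLACE FUNCTION convert_statistics()
    RETURNS void AS $$
    DECLARE
        statistics_row statistics%ROWTYPE;
    BEGIN
        FOR statistics_row IN SELECT * FROM statistics ORDER BY time ASC LOOP
            -- "some very minimal transformation" per the original post,
            -- then an insert into the assumed target table:
            INSERT INTO statistics2 (time, value)       -- assumed columns
            VALUES (statistics_row.time, statistics_row.value);
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

Note that the entire loop runs in one transaction: with ~50 million rows, all of the new row versions and WAL accumulate until the function returns, which matches the disk-space symptoms described upthread.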

Re: [GENERAL] processing large amount of rows with plpgsql

2012-08-08 Thread Merlin Moncure
On Wed, Aug 8, 2012 at 2:41 PM, Geert Mak wrote:
> hello everybody,
>
> we are trying to move the data from table1 into table2 using a plpgsql stored
> procedure which is performing a simple data conversion
>
> there are about 50 million rows
>
> the tables are relatively simple, less than a dozen ...

[GENERAL] processing large amount of rows with plpgsql

2012-08-08 Thread Geert Mak
hello everybody,

we are trying to move the data from table1 into table2 using a plpgsql stored
procedure which is performing a simple data conversion

there are about 50 million rows

the tables are relatively simple, less than a dozen columns, most are integer,
a couple are char(32) and one is ...
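
For a conversion this simple, a single set-based statement is usually much faster than a row-by-row plpgsql loop. A minimal sketch, with placeholder column names and a placeholder conversion expression, since the real schema is cut off above:

    -- One pass, no per-row loop; column names are assumptions:
    INSERT INTO table2 (id, a, b, hash)
    SELECT id,
           a,
           b * 100,            -- stand-in for the "simple data conversion"
           hash
    FROM table1;

This still runs as one transaction, though, so the disk-space concern raised elsewhere in the thread applies just the same unless the statement is split into committed batches.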