> When we declare a cursor for a select on the mentioned big table, it takes
> multiple seconds and a big temp file is created, which to me seems like the
> materialization took place immediately.

Since you mentioned that Postgres already postpones materialization until
commit, we checked again
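For reference, the documented behavior is roughly the following (a sketch with made-up table and column names, not the poster's actual schema): a plain cursor inside a transaction produces rows incrementally, while a WITH HOLD cursor is materialized by the server when the creating transaction commits, which is where the temp file would normally appear.

```sql
BEGIN;

-- Inside a transaction, a plain cursor is not materialized up front:
-- rows are produced incrementally as they are fetched.
DECLARE cur CURSOR FOR SELECT id, payload FROM big_table ORDER BY id;
FETCH 100 FROM cur;

-- A WITH HOLD cursor can outlive the transaction, but on COMMIT the
-- server materializes the remaining result set (possibly spilling to a
-- temp file) so it no longer depends on the transaction's snapshot.
DECLARE cur_hold CURSOR WITH HOLD FOR SELECT id, payload FROM big_table;

COMMIT;   -- materialization of cur_hold happens here, not at DECLARE

FETCH 100 FROM cur_hold;
CLOSE cur_hold;
```

If the temp file shows up already at DECLARE time, as described above, something other than the WITH HOLD materialization path is forcing the full result set to be computed up front.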
> > The calculations inside the loop are written in some dynamic high-level
> > language and cannot easily be translated into SQL.
>
> You don't really have to --- PG supports functions written in non-SQL
> languages. Not sure if your problem is big enough to justify developing
> a new PL interface
> > The calculations inside the loop are written in some dynamic high-level
> > language and cannot easily be translated into SQL.
> >
>
> Can you not simply create a second connection to perform the updates?

That would be possible, but I can see some problems:
loop {
    update table1;
    select so
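One way to read the second-connection suggestion (a sketch with hypothetical table and column names): keep the cursor on one session and issue the updates from another, so the reader's portal and snapshot are untouched by the writes.

```sql
-- Session A (reader): iterate over the big table with a cursor.
BEGIN;
DECLARE cur CURSOR FOR SELECT id, value FROM table1;
FETCH 1000 FROM cur;
-- ... hand each batch to the high-level language for the calculations ...

-- Session B (writer): apply the computed results on its own connection,
-- in its own transactions, so the reader is not disturbed.
BEGIN;
UPDATE table1 SET result = 42 WHERE id = 17;  -- placeholder values
COMMIT;

-- Session A keeps fetching. A PostgreSQL cursor is insensitive: it reads
-- from the snapshot taken when its query started, so it will not see
-- Session B's updates mid-scan.
FETCH 1000 FROM cur;
CLOSE cur;
COMMIT;
```

The flip side is exactly the insensitivity noted in the comment: if the loop needs to observe its own earlier updates while scanning, two connections do not help by themselves.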
> I don't know quite how to put this, but it's not clear to me that the
> difficulties in this situation are things PostgreSQL could resolve
> even with much larger development resources than are currently
> available.
There does not seem to exist a cursor/portal/pointer semantic that can survive
> What other things did you try, and how did they fail? In particular,
> what happened when you used
>
> UPDATE table2
> SET [things based on table1]
> FROM table1 [qualified] JOIN table2 ON ([conditions])
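Filling in the placeholders above with a hypothetical schema (join column and SET expression invented for illustration), the idiomatic PostgreSQL form would be along these lines. Note that, per the UPDATE documentation, the target table should not be repeated in the FROM list unless a self-join is actually intended; the join condition goes in WHERE instead:

```sql
UPDATE table2
SET    result = t1.value * 2          -- "things based on table1"
FROM   table1 AS t1                   -- do NOT list table2 again here
WHERE  t1.id = table2.t1_id           -- join condition
  AND  t1.value IS NOT NULL;          -- additional qualification
```

This performs the whole update as one set-oriented statement, which is the alternative the question is driving at.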
Well, it is not the ideal way of doing things, but then again this SQL is merely
Hi,
I originally brought up this issue on the pgsql-performance mailing list [^] to
no avail, so I am trying again here.
During implementation of a runtime environment and the adjoining database
abstraction layer I noticed (like many before me [0] and as correctly mentioned
in the documentation) t