Seref Arikan <serefari...@gmail.com> writes:
> The documentation for Cursors at
> http://www.postgresql.org/docs/9.2/static/plpgsql-cursors.html says that:
> "Rather than executing a whole query at once, it is possible to set up a
> *cursor* that encapsulates the query, and then read the query result a few
> rows at a time. One reason for doing this is to avoid memory overrun when
> the result contains a large number of rows"
> I'm assuming the memory overrun mentioned here is the memory of the client
> process connecting to postgres.

Right.

> I think when a cursor ref is returned, say from a function, the query
> needs to be completed and the results must be ready for the cursor to
> move forward. If that is the case, there must be a temporary table,
> presumably with one or more parameters to adjust its size,
> (location/tablespace?) etc.

No.  The cursor is held as a partially-run execution state tree.

If you declare a cursor WITH HOLD and don't close it before ending the
transaction, then we do run the cursor query to completion and store its
results in a temporary file (not a full-fledged table).  This is to avoid
holding the query's resources, such as table locks, across transactions.
But in typical cursor use-cases it's not necessary to materialize the full
query result at once on either the server or client side.

Regular query execution (without a cursor) doesn't materialize the result
on the server side either: rows are spit out to the client as they are
computed.  libpq is in the habit of accumulating the whole query result
before returning it to the client application, but that's just so that its
API doesn't need to include the notion of a query failing after having
already returned some rows.

			regards, tom lane
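[For concreteness, a minimal sketch of the patterns discussed above; the
table name big_table, the cursor names, and the function open_big_cursor
are made up for illustration and are not from the thread:]

    -- Within a transaction, a cursor holds execution state, not a stored
    -- copy of the result; rows are computed as they are fetched.
    BEGIN;
    DECLARE cur CURSOR FOR SELECT * FROM big_table;
    FETCH 100 FROM cur;   -- first 100 rows
    FETCH 100 FROM cur;   -- next 100 rows, and so on
    CLOSE cur;
    COMMIT;

    -- WITH HOLD keeps the cursor usable after COMMIT; at commit time the
    -- query is run to completion and spooled to a temporary file.
    BEGIN;
    DECLARE curh CURSOR WITH HOLD FOR SELECT * FROM big_table;
    COMMIT;
    FETCH 100 FROM curh;
    CLOSE curh;

    -- A plpgsql function can hand back a refcursor; the caller then
    -- FETCHes from the named portal within the same transaction.
    CREATE FUNCTION open_big_cursor() RETURNS refcursor AS $$
    DECLARE
        c refcursor := 'big_cur';   -- portal name chosen for the example
    BEGIN
        OPEN c FOR SELECT * FROM big_table;
        RETURN c;
    END;
    $$ LANGUAGE plpgsql;

    BEGIN;
    SELECT open_big_cursor();
    FETCH 100 FROM big_cur;
    CLOSE big_cur;
    COMMIT;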