To answer my own question, I adapted the solution from
http://stackoverflow.com/questions/16128142/how-to-use-pqxxstateless-cursor-class-from-libpqxx

    try {
        pqxx::work W(*Conn);

        /* Server-side, read-only cursor over the query; "owned" means
         * libpqxx closes it when the cursor object goes out of scope. */
        pqxx::stateless_cursor<pqxx::cursor_base::read_only,
                               pqxx::cursor_base::owned>
            cursor(W, sql[sqlLoad], "mycursor", false);

        /* Assumes the total number of records returned is known. */
        for (size_t idx = 0; idx < countRecords; idx += 100000) {
            /* Fetch up to 100,000 records at a time. */
            pqxx::result r = cursor.retrieve(idx, idx + 100000);
            for (pqxx::result::size_type rownum = 0; rownum < r.size();
                 ++rownum) {
                const pqxx::result::tuple row = r[rownum];
                vid1 = row[0].as<int>();
                vid2 = row[1].as<int>();
                vid3 = row[2].as<int>();
                .............
            }
        }
    } catch (const std::exception &e) {
        std::cerr << e.what() << std::endl;
    }
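
If I read the libpqxx headers correctly, stateless_cursor also exposes the
result size via size(), so the hard-coded countRecords can be avoided. A
minimal sketch, reusing `cursor` from above (treat size() as an assumption
if your libpqxx version is old):

    /* Derive the loop bound from the cursor itself instead of a
     * separately-known countRecords (assumes stateless_cursor::size()
     * is available in your libpqxx version). */
    const size_t total = cursor.size();
    for (size_t idx = 0; idx < total; idx += 100000) {
        pqxx::result r = cursor.retrieve(idx, idx + 100000);
        /* ... process r exactly as in the inner loop above ... */
    }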


2014-05-04 17:34 GMT+03:00 Tom Lane <t...@sss.pgh.pa.us>:

> Andres Freund <and...@2ndquadrant.com> writes:
> > On 2014-05-04 01:57:43 -0700, alexandros_e wrote:
> >> I have to execute an SQL query against Postgres using the following
> >> code. The query returns a huge number of rows (40M or more) and has
> >> 4 integer fields. On a workstation with 32GB everything works, but on
> >> a 16GB workstation the query is very slow (due to swapping, I guess).
> >> Is there any way to tell the C++ code to load rows in batches, without
> >> waiting for the entire dataset? With Java I never had these issues
> >> before, probably thanks to the better JDBC driver.
>
> > Try looking into either using a server-side cursor or COPY.
>
> Another possibility is libpq's recently-introduced row-at-a-time mode:
>
> http://www.postgresql.org/docs/9.3/static/libpq-single-row-mode.html
>
> though I'm not sure how effectively that's supported by libpqxx.
>
>                         regards, tom lane
>
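
For completeness: used directly through libpq, the single-row mode Tom
mentions would look roughly like this. This is a minimal sketch, assuming
an already-open PGconn *conn and the same four-integer-column query;
libpqxx is bypassed entirely:

    #include <libpq-fe.h>
    #include <cstdlib>
    #include <iostream>

    /* Stream a large result set one row at a time with libpq's
     * single-row mode (PostgreSQL 9.2+), so the client never buffers
     * all rows in memory at once. */
    void stream_rows(PGconn *conn, const char *query)
    {
        if (!PQsendQuery(conn, query)) {
            std::cerr << PQerrorMessage(conn);
            return;
        }
        PQsetSingleRowMode(conn);  /* must be called right after PQsendQuery */

        PGresult *res;
        while ((res = PQgetResult(conn)) != NULL) {
            if (PQresultStatus(res) == PGRES_SINGLE_TUPLE) {
                /* Exactly one row per result; values arrive as text. */
                int vid1 = std::atoi(PQgetvalue(res, 0, 0));
                int vid2 = std::atoi(PQgetvalue(res, 0, 1));
                int vid3 = std::atoi(PQgetvalue(res, 0, 2));
                int vid4 = std::atoi(PQgetvalue(res, 0, 3));
                (void)vid1; (void)vid2; (void)vid3; (void)vid4;  /* process row here */
            } else if (PQresultStatus(res) != PGRES_TUPLES_OK) {
                std::cerr << PQresultErrorMessage(res);
            }
            PQclear(res);  /* an empty PGRES_TUPLES_OK result marks the end */
        }
    }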
