Kirill,
A cursor does not provide a way to limit the fetch size based on memory
consumption.
Imagine a table like (id int8, value jsonb).
If we use "fetch 1000", then a single batch might require 1 GiB on the
client if every row contains 1 MiB of JSON.
If the client plays defensively and goes for "fetch 10", then the
execution becomes inefficient due to the many extra roundtrips.
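
For illustration, here is a minimal JDBC sketch of that dilemma. The
connection URL, table name, and sizes are assumptions made up for the
example, not part of the proposal:

import java.sql.*;

public class FetchSizeDilemma {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret")) {
            // pgjdbc honors fetchSize only inside a transaction
            con.setAutoCommit(false);
            try (Statement st = con.createStatement()) {
                // Row-count-based limit: 1000 rows of ~1 MiB jsonb each
                // can buffer ~1 GiB in the driver before the application
                // sees the first row.
                st.setFetchSize(1000);
                // Defensive alternative: setFetchSize(10) caps memory,
                // but costs 100x more roundtrips when rows turn out small.
                try (ResultSet rs = st.executeQuery(
                        "SELECT id, value FROM big_table")) {
                    while (rs.next()) {
                        process(rs.getLong(1), rs.getString(2));
                    }
                }
            }
        }
    }

    static void process(long id, String value) { /* application logic */ }
}

The fetch size is the only knob, and it counts rows, not bytes, so no
single value is safe for both narrow and wide rows.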
Hi, the client can use the CURSOR feature to process data in batches. In
what case does the proposed feature solve a problem that a CURSOR does not?
https://www.postgresql.org/docs/current/plpgsql-cursors.html
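
For reference, batch processing with an explicit cursor looks roughly
like this from a client (a JDBC sketch; the cursor name, table name, and
batch size are illustrative):

import java.sql.*;

public class CursorBatches {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret")) {
            con.setAutoCommit(false); // cursors live inside a transaction
            try (Statement st = con.createStatement()) {
                st.execute("DECLARE c CURSOR FOR SELECT id, value FROM big_table");
                while (true) {
                    // Each FETCH is one batch; note the batch size is
                    // fixed in rows, not in bytes.
                    int rows = 0;
                    try (ResultSet rs = st.executeQuery("FETCH 1000 FROM c")) {
                        while (rs.next()) {
                            rows++;
                            // process rs.getLong(1), rs.getString(2)
                        }
                    }
                    if (rows == 0) break; // cursor exhausted
                }
                st.execute("CLOSE c");
            }
            con.commit();
        }
    }
}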
On Fri, 17 Jan 2025, 16:08 Vladimir Sitnikov wrote:
> Hi,
>
> Applications often face an "out of memory" condition as they try to fetch
> "N rows" from the database. [...]
Hi,
Applications often face an "out of memory" condition as they try to fetch
"N rows" from the database.
If N is small, then the execution becomes inefficient due to many
roundtrips.
If N is high, there's a risk that many rows would overflow the client's
memory.
Note: the client can't stop reading mid-stream: once the server starts
sending rows, the client has to consume them (or drop the connection).
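
For completeness, a purely client-side workaround is to adapt the
row-based fetch size to the row sizes observed so far. This is only a
sketch, with the names and the byte budget invented for illustration, and
it can react only after a batch has already been buffered, which is the
gap a memory-based limit would close:

import java.sql.*;

public class AdaptiveFetchSize {
    // Hypothetical per-batch memory budget, not from the proposal.
    static final long BATCH_BUDGET_BYTES = 16L * 1024 * 1024;

    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret")) {
            con.setAutoCommit(false);
            try (Statement st = con.createStatement()) {
                st.execute("DECLARE c CURSOR FOR SELECT id, value FROM big_table");
                int fetchSize = 100; // initial guess
                while (true) {
                    long batchBytes = 0;
                    int rows = 0;
                    try (ResultSet rs = st.executeQuery(
                            "FETCH " + fetchSize + " FROM c")) {
                        while (rs.next()) {
                            String value = rs.getString(2);
                            // Rough size estimate: chars, not wire bytes.
                            batchBytes += value == null ? 0 : value.length();
                            rows++;
                        }
                    }
                    if (rows == 0) break;
                    // Aim the next batch at the byte budget.
                    long avgRow = Math.max(1, batchBytes / rows);
                    fetchSize = (int) Math.max(1,
                            Math.min(10_000, BATCH_BUDGET_BYTES / avgRow));
                }
                st.execute("CLOSE c");
            }
            con.commit();
        }
    }
}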