Hi, 
  We have Postgres 13.9 running with tables that hold billions of records of
varying sizes. Even though the PG JDBC driver provides a way to set the fetch
size to tune the driver for better throughput, the client JVM fails at the
driver level when records of large size (say, 200 MB each) flow through. This
forces us to reduce the fetch size (if we are to operate at a fixed -Xmx
setting on the client JVM).
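
For context, our read path looks roughly like the sketch below (URL,
credentials, and table name are placeholders; PgJDBC only streams with a
cursor when autocommit is off, otherwise it buffers the whole result set):

import java.sql.*;

public class FetchDemo {
    // Minimal sketch of the read path; url/user/pass/table are placeholders.
    static void readTable(String url, String user, String pass, String table)
            throws SQLException {
        try (Connection conn = DriverManager.getConnection(url, user, pass)) {
            conn.setAutoCommit(false);     // required for PgJDBC to honor fetch size
            try (Statement st = conn.createStatement()) {
                st.setFetchSize(5000);     // fixed value -- this is what we want to vary per table
                try (ResultSet rs = st.executeQuery("SELECT * FROM " + table)) {
                    while (rs.next()) {
                        // process the row; with ~200 MB rows even a small
                        // batch can blow past the client -Xmx
                    }
                }
            }
        }
    }
}
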
It gets a bit trickier when hundreds of such tables exist with varying record
sizes. We are trying to see if the fetch size can be set dynamically, based on
a table's row count and record-size distribution. Unfortunately, getting this
data by running a query against each table (for row size:
max(length(t::text))) seems quite time consuming too; a sketch of what we have
in mind follows.
Does Postgres maintain metadata about tables for the following?
1. row count
2. max row size

Or is there some other PG metadata that can help us get this data quicker?
TIA.


