On 2/6/21 6:06 AM, Atul Kumar wrote:
Hi Gavan,
Thanks for providing the details. I need more clarification on this: how should I work out the ideal number of connections to set, based on the available hardware resources, so that we avoid I/O overhead?
How do I do this calculation?
Run "iotop -o -u postgres", and then compare that with the total bandwidth
available to the system. If it's (even almost) saturated, then everything
else will be starved.
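For example, one rough way to make that comparison on a Linux box (assuming the sysstat package is installed; the 5-second interval is just an illustration):

    # Per-process I/O for the postgres user (only processes actually doing I/O)
    iotop -o -u postgres

    # In another terminal: extended device statistics every 5 seconds;
    # a device sitting near 100% utilisation is the bottleneck
    iostat -x -m 5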
The "--jobs=" value should AT MOST be some *small* multiple of the number of
CPUs (like 1x, 1.5x or *maybe* 2x if the core count is low, and nothing else
is running on the system.
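As a sketch of what that sizing looks like in practice (assuming this is a parallel pg_restore; "mydb" and "dump.custom" are placeholders, and 1.5x is only one example of a small multiple):

    # Derive --jobs from the core count instead of picking a large number
    CORES=$(nproc)
    JOBS=$(( CORES * 3 / 2 ))    # roughly 1.5x cores, rounded down

    pg_restore --jobs="$JOBS" -d mydb dump.custom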
Note: even with 300 threads running, my RAM utilisation is totally normal.
Regards
Atul
On Saturday, February 6, 2021, Gavan Schneider <list.pg.ga...@pendari.org> wrote:
On 6 Feb 2021, at 3:37, Ron wrote:
On 2/5/21 10:22 AM, Rob Sargent wrote:
On 2/5/21 9:11 AM, Ron wrote:
Obviously... don't use 300 threads.
No, no Ron. Clearly the answer is more CPUs
I hope you're being sarcastic.
A reasonable conjecture… though there is the consideration that 300
CPU-intensive tasks spread across a given number of CPUs is going to
waste some resources on context switching, i.e., need more CPUs :)
Basically if there is plenty of wait time for I/O completion then CPU
task switching can get more total work done. So far so obvious. In
this thread I can see where it is disappointing to have a system
considered capable of 700 connections getting saturated by a “mere”
300 threads. But this is only a “problem” if connections are equated
to threads. PG max connection count is about external users having
access to resources needed to get a task done. Like all resource
allocations this relies on estimated average usage, i.e., each
connection only asks for a lot of CPU in brief bursts and then the
result is transmitted with a time lag before the connection makes
another CPU demand. The system designer should use estimates of
usage and load to budget and configure the system, and then monitor it all
against actual performance in the real world. Of course, estimates are
a standing invitation for outliers, and the system will show stress under
an unexpected load.
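To make that monitoring concrete, a minimal check (assuming psql access with a role that can see other sessions in pg_stat_activity) is to compare the configured ceiling with what is actually in use:

    # Configured connection ceiling vs. connections actually in use, by state
    psql -c "SHOW max_connections;"
    psql -c "SELECT state, count(*) FROM pg_stat_activity GROUP BY state;"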
So far I have not seen an analysis of where the bottleneck has
occurred: CPU, RAM, HD, and/or the data bus connecting them. One or
more of these hardware resources must have maxed out to the extent that
the system could not immediately pick up an additional work unit. As I
see it, the OP started 300 CPU-intensive tasks on hardware intended for
700 connections. If the connection count was designed around, say, 50%
CPU-intensive time per connection, you would expect this hardware to be
fully saturated by about 350 concurrent CPU-intensive tasks (700 × 0.5),
so 300 such tasks is right at that limit. More than that, doing the work
with 300 threads would probably take longer than with (say) 200 threads,
as the increased CPU context-switching time is just wasted effort.
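A quick, if crude, way to see which of those resources is pinned while the threads are running (standard Linux tools; the intervals are only examples):

    # CPU: run queue length and where time goes (us/sy/wa = user/system/IO-wait)
    vmstat 5

    # Memory: free/available RAM and swap usage
    free -m

    # Disk: extended per-device statistics
    iostat -x -m 5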
The OP now has a choice: decrease the thread count or (seriously)
upgrade the hardware. We in the gallery would love to see a plot of
total time to completion as a function of threads invoked (50-300 in
increments of 50), assuming the starting conditions are the same for
each run :)
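A hypothetical harness for producing that plot (the command name and its --jobs flag are placeholders for whatever actually launches the workload; each run is assumed to start from the same initial state):

    # Time the same workload at increasing parallelism and note the elapsed seconds
    for n in 50 100 150 200 250 300; do
        /usr/bin/time -f "$n threads: %e seconds" your_restore_command --jobs="$n"
    done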
Gavan Schneider
——
Gavan Schneider, Sodwalls, NSW, Australia
Explanations exist; they have existed for all time; there is always a
well-known solution to every human problem — neat, plausible, and wrong.
— H. L. Mencken, 1920
--
Angular momentum makes the world go 'round.