Thank you so much for your prompt response.

Regards,
Ramzy

On Sun, Apr 6, 2025, 19:35 Евгений Чекан <hom3c...@gmail.com> wrote:

> The checkpointer process holds the shared_buffers memory, and that is
> expected behaviour. Even when the server is idle, the shared buffers stay
> resident for reads and future modifications. You have shared_buffers
> configured at 10GB, and it will stay allocated for the entire uptime of
> the server. This is completely normal and generally helps performance.
> The “idle” part is slightly less straightforward, as the checkpointer and
> background writer try to spread their work out over time.
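> A quick way to confirm this (a rough sketch, assuming a Linux host and
> that you can read /proc for the postgres processes, e.g. as root or the
> postgres user):
>
>   # find the checkpointer PID, then show its shared vs private memory
>   CKPT_PID=$(pgrep -f 'postgres.*checkpointer')
>   grep -E '^(Rss|Shared|Private)' /proc/"$CKPT_PID"/smaps_rollup
>
> Most of its Rss should show up under the Shared_* lines, i.e. pages of
> the shared_buffers segment it has touched, not private memory it could
> give back.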
> As for memory climbing to 90%, you might want to reduce shared_buffers to
> something less than 1/3 of your RAM and see whether that helps.
> But again, if there are no issues with system availability and health,
> I would not worry about it.
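> If you do want to try a smaller value, a minimal sketch (8GB here is just
> an example under 1/3 of 32GB; shared_buffers only takes effect after a
> restart, and the service name depends on your installation):
>
>   psql -U postgres -c "ALTER SYSTEM SET shared_buffers = '8GB';"
>   sudo systemctl restart postgresql   # assumed service name
>
> ALTER SYSTEM writes the override to postgresql.auto.conf, so it survives
> restarts and can be reverted with ALTER SYSTEM RESET shared_buffers.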
>
> > On Apr 6, 2025, at 14:37, Motog Plus <mplus7...@gmail.com> wrote:
> >
> >
> >
> > Hi All,
> >
> > I’m running PostgreSQL on an EC2 c5.4xlarge Ubuntu instance with the
> > following specs:
> >
> > 32 GB RAM
> >
> > 1.2 TB disk
> >
> > 16 vCPUs
> >
> >
> > Pgpool-II Configuration:
> >
> > max_pool = 2
> >
> > num_init_children = 1000
> >
> > client_idle_limit = 300 seconds
> >
> > connection_life_time = 300 seconds
> >
> > load_balance_mode = on
> >
> >
> > PostgreSQL Configuration:
> >
> > max_connections = 3000
> >
> > checkpoint_timeout = 15min
> >
> > checkpoint_completion_target = 0.9
> >
> > shared_buffers = 10GB
> >
> > wal_buffers = 64MB
> >
> > min_wal_size = 80MB
> >
> > max_wal_size = 10GB
> >
> > effective_cache_size = 20GB
> >
> > work_mem = 4MB
> >
> > maintenance_work_mem = 1GB
> >
> > bgwriter_delay = 200ms
> >
> > bgwriter_lru_multiplier = 2
> >
> > bgwriter_lru_maxpages = 100
> >
> > max_standby_streaming_delay = 5s (on standby)
> >
> >
> > I have a primary-standby streaming replication setup, and application
> > modules (running in Kubernetes pods) connect to the database through
> > Pgpool-II using HikariCP.
> >
> >
> > ---
> >
> > Issue Observed:
> >
> > Even when no queries are running and the application batch jobs have
> > completed, memory usage remains high (40%), particularly attributable
> > to the checkpointer and background writer processes. According to
> > Grafana, the memory remains cached.
> >
> > When I manually kill the checkpointer process on the node, memory usage
> > drops immediately (9%). If I don't kill it and start another batch,
> > memory usage increases further, reaching up to 90%, though under normal
> > conditions it stays around 65–70%.
> >
> >
> > ---
> >
> > System Memory Snapshot (when idle):
> >
> > free -h:
> >        total  used  free   shared  buff/cache  available
> > Mem:   30Gi   12Gi  2.2Gi  10Gi    27Gi        18Gi
> >
> > PostgreSQL memory usage from ps:
> >
> > ps -eo pid,rss,cmd | grep postgres | awk '{sum+=$2} END {print sum/1024 " MB"}'
> >
> > Output:
> > 25682.4 MB
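> >
> > (Note: summing per-process RSS like this double-counts shared_buffers,
> > since every process that has touched those pages reports them in its own
> > RSS. A less inflated total can be sketched with PSS, which splits shared
> > pages across the processes mapping them, assuming a Linux host where
> > /proc/<pid>/smaps_rollup is readable:)
> >
> >   for pid in $(pgrep -u postgres postgres); do
> >     awk '/^Pss:/ {print $2}' /proc/"$pid"/smaps_rollup   # Pss in kB
> >   done | awk '{kb += $1} END {print kb/1024 " MB"}'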
> >
> > Note: Even though the system has been idle for over 50 hours,
> > connections are still being opened, going idle, and being dropped.
> >
> > ---
> >
> > Question:
> >
> > Is it normal for the checkpointer and background writer to hold onto
> > memory like this after workloads finish? Is this related to how
> > PostgreSQL manages shared buffers or dirty pages in a streaming
> > replication setup?
> >
> > Are there any parameters in postgresql.conf or Pgpool that can help
> > ensure memory is better reclaimed when the system is idle?
> >
> > Any guidance would be much appreciated.
> >
> > Best regards,
> > Ramzy
>
