Hi Michael,

Thanks for your response.

Please find answers to your questions below.

How many rows did these tables have before partitioning?
-->  We started the test with 0 rows in the partitioned tables.

Why did you decide to partition?
-->  These are heavy tables with a high number of DML operations performed on them, and a high number of rows generated every hour.

Do these list partitions allow for plan-time pruning?
-->  We have tuned the application queries to utilize partition pruning. We still have 2-3 queries that are not utilizing partition pruning, and we are working on those.

Do they support partition wise joins?
-->  Most of the queries query a single table. We have changed our queries so that they can utilize the partition key.
work_mem can be used for each node of the plan and if you are getting parallel
scans of many tables or indexes where you previously had one, that could be an
issue.
-->  Some of the queries are scanning the indexes on all of the partitions.

work_mem is currently set to 9MB, and the other relevant settings are:
cpu_tuple_cost = 0.03
seq_page_cost = 0.7
random_page_cost = 1
huge_pages = off
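As a back-of-envelope illustration of why per-node work_mem can add up when queries fail to prune and scan all partitions (every figure below except work_mem = 9MB is an assumed placeholder, not a measurement from this system):

```python
# Rough worst-case estimate of sort/hash memory under work_mem.
# Only work_mem_mb comes from the actual config; the rest are assumptions.
work_mem_mb = 9            # current work_mem setting
partitions_scanned = 500   # an unpruned query touching all 500 list partitions
nodes_per_partition = 2    # assume ~2 memory-hungry nodes (sort/hash) per partition scan
concurrent_queries = 50    # assume 50 such queries running at once

worst_case_mb = work_mem_mb * partitions_scanned * nodes_per_partition * concurrent_queries
print(f"worst case: {worst_case_mb} MB (~{worst_case_mb / 1024:.0f} GB)")
```

Even a modest work_mem can exhaust RAM once it is multiplied by hundreds of partition scans and many concurrent sessions, which would match the 100% memory utilization before the crash.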


Thanks & Regards,
Ishan Joshi

From: Michael Lewis <mle...@entrata.com>
Sent: Wednesday, June 10, 2020 1:23 AM
To: Ishan Joshi <ishan.jo...@amdocs.com>
Cc: pgsql-gene...@postgresql.org
Subject: Re: Postgres server 12.2 crash with process exited abnormally and 
possibly corrupted shared memory

On Tue, Jun 9, 2020 at 8:35 AM Ishan Joshi 
<ishan.jo...@amdocs.com<mailto:ishan.jo...@amdocs.com>> wrote:
I am using PostgreSQL server v12.2 on CentOS Linux release 7.3.1611 (Core).

My application is working fine with non-partitioned tables, but recently we are
trying to adopt partitioned tables for a few of the application tables.
So we have created list partitions on 6 tables. 2 out of 6 tables have 24
partitions and 4 out of 6 tables have 500 list partitions. After partitioning
the tables, when we try to run our application it crashes; I can see memory
utilization reach 100%, and once it reaches 100% the Postgres server crashes
with the following error


How many rows did these tables have before partitioning? Why did you decide to 
partition? Do these list partitions allow for plan-time pruning? Do they 
support partition wise joins? work_mem can be used for each node of the plan 
and if you are getting parallel scans of many tables or indexes where you 
previously had one, that could be an issue.

2000 for max_connections strikes me as quite high. Consider the use of a 
connection pooler like pgbouncer or pgpool such that Postgres can be run with 
max connections more like 2-5x your number of CPUs, and those connections get 
re-used as needed. There is some fixed memory overhead for each potential 
connection.
