Hi,

Adding more: we have started testing individual tables now. We tried two
tables with 24 partitions each, and this increased memory utilization from 30% to
50-60%. We then added another table with 500 partitions, and our test
crashed after a few hours of consumption at 100% memory utilization.

Last night I ran a 15-hour test with 200 partitions on the same table, and
memory utilization was stable at 80%. Based on that result, I have started a
long test run of 3 days with the same 200 partitions.

But again, the main suspect is the number of partitions.

How can we check the memory utilization of a table object or a specific query?
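[Editor's note: one possible approach, sketched below. The pg_buffercache contrib extension reports which relations occupy shared buffers, and EXPLAIN (ANALYZE, BUFFERS) shows per-query buffer activity. Note this only covers shared_buffers, not the per-backend memory (relcache, cached plans) that grows with partition count.]

```sql
-- Requires the contrib extension (and usually superuser or pg_monitor):
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Shared-buffer usage per relation in the current database,
-- largest first (assumes the default 8 kB block size):
SELECT c.relname,
       count(*) * 8 / 1024 AS buffered_mb
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffered_mb DESC
LIMIT 20;

-- For a specific query, show buffer hits/reads alongside the plan:
EXPLAIN (ANALYZE, BUFFERS) SELECT 1;  -- substitute the real query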


Thanks & Regards,
Ishan Joshi

From: Ishan Joshi
Sent: Wednesday, June 10, 2020 10:42 PM
To: Michael Lewis <mle...@entrata.com>
Cc: pgsql-gene...@postgresql.org
Subject: RE: Postgres server 12.2 crash with process exited abnormally and 
possibly corrupted shared memory

Hi Michael,

We have one table with 2.5 million records inserted and updated each
hour, and another table with about 1 million records per hour. We have a
dedicated column added to each table holding a value from 1 to 500, and this
column is used as the partition key.
The idea behind the 500 partitions is to store one year of data in the table,
with the partition key value changing every 20 hours (500 partitions x 20 hours
= 10,000 hours, roughly 417 days).

As the data in these tables is huge, we chose partitioning, and list
partitioning in particular, to make maintenance on these tables feasible.

This is not an experiment: these tables are already partitioned in Oracle, and
as we migrate to Postgres we are enabling the same feature there as well.

We want to observe the impact starting from an initial size of 0 and understand
the behaviour over the next 72 hours under heavy load.

I have just run the same environment with 100 partitions on these tables; the
run lasted 12 hours with constant 70% RAM utilization and 50% CPU
utilization.

So I suspect the number of partitions is behind the memory utilization issue.
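[Editor's note: one way to sanity-check this suspicion is sketched below, using a hypothetical parent table name my_table. Each partition a plan cannot prune away costs relcache entries, locks, and cached-plan memory in every backend, so counting partitions and locks can confirm whether pruning is happening.]

```sql
-- Count the direct partitions of a (hypothetical) parent table:
SELECT count(*) AS partitions
FROM pg_inherits
WHERE inhparent = 'my_table'::regclass;

-- Inside a transaction, after running a suspect query, count the locks
-- this session holds; a plan that cannot prune locks every partition:
SELECT count(*) AS locks_held
FROM pg_locks
WHERE pid = pg_backend_pid();
```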

Thanks & Regards,
Ishan Joshi

From: Michael Lewis <mle...@entrata.com<mailto:mle...@entrata.com>>
Sent: Wednesday, June 10, 2020 10:28 PM
To: Ishan Joshi <ishan.jo...@amdocs.com<mailto:ishan.jo...@amdocs.com>>
Cc: pgsql-gene...@postgresql.org<mailto:pgsql-gene...@postgresql.org>
Subject: Re: Postgres server 12.2 crash with process exited abnormally and 
possibly corrupted shared memory

On Wed, Jun 10, 2020 at 12:05 AM Ishan Joshi 
<ishan.jo...@amdocs.com<mailto:ishan.jo...@amdocs.com>> wrote:
How many rows did these tables have before partitioning?     -->  We start the
test with 0 rows in the partitioned tables.

Partitions are far from free and pruning is great but not guaranteed. How many
total rows do you currently have or foresee having in the biggest table (all
partitions combined, or without partitioning) within the next 1-3 years? 1
million? 100 million? 2 billion? The point is that you may have partitioned
before it was prudent to do so. Just because something can be done doesn't mean
it should be. What sort of key are you partitioning on?

Also, what do you mean about tests? There are some assumptions made about empty 
tables since stats may indicate they are empty when they are not. If your tests 
involve many empty tables, then it may give rather different performance than 
real life where there are few or no empty partitions.


This email and the information contained herein is proprietary and confidential
and subject to the Amdocs Email Terms of Service, which you may review at
https://www.amdocs.com/about/email-terms-of-service
