Hello pgsql-general team,


We are currently using a PostgreSQL database that is growing by about 1.4
billion records/month, i.e. roughly 16 to 17 billion records/year. The
storage is growing by about 6.8 TB/year, including all data and indexes.


Our current total DB storage is 60 TB.



*Use case*

We want to move partitioned data older than 90 days, on a monthly basis,
from the existing database to a new historical database (also PostgreSQL),
so that we can build a REST API for clients to access that data. The
intention is to keep this data for more than 10 years and to separate the
data into hot and cold tiers using separate databases.
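
Concretely, the monthly move we have in mind looks something like the
sketch below. All object names are placeholders: a hot table "events"
range-partitioned by month, and a historical parent "events_hist" in the
other database.

    -- On the hot database: detach the month so it becomes a plain table
    ALTER TABLE events DETACH PARTITION events_2024_01;

    -- Copy the table over to the historical database (shell)
    pg_dump -Fc -t events_2024_01 hot_db | pg_restore -d hist_db

    -- On the historical database: attach it under the cold parent
    ALTER TABLE events_hist ATTACH PARTITION events_2024_01
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

    -- Back on the hot database, once the copy is verified
    DROP TABLE events_2024_01;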



*Questions*

1) Can PostgreSQL handle this volume with good performance if we want to
use the database for the next 10 years? (I know this depends on a lot of
other factors, such as index bloat, record size, and partitioning
strategy.)

2) Are there any similar use cases at this volume and storage scale? Any
information would be helpful.

3) How many partitions can we have while still getting good I/O and query
response times?

4) We want to merge the existing weekly partitions into monthly
partitions. Can we merge the partitions and use them in the new database?
(A rough sketch of what we mean follows after question 5.)

5) Can we export partitions and import them into the new database?
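
For (4), we are not assuming any built-in merge command; the manual
roll-up we are considering is sketched below. Again all names are
placeholders, and it assumes the weekly ranges align exactly with month
boundaries (weeks straddling a month would need a WHERE on the partition
key instead of a plain SELECT *):

    -- In the historical DB, after restoring the four weekly tables
    CREATE TABLE events_2024_01 (LIKE events_w1 INCLUDING ALL);

    INSERT INTO events_2024_01
        SELECT * FROM events_w1
        UNION ALL SELECT * FROM events_w2
        UNION ALL SELECT * FROM events_w3
        UNION ALL SELECT * FROM events_w4;

    -- ATTACH validates the rows against the partition bounds; adding a
    -- matching CHECK constraint first lets it skip that scan
    ALTER TABLE events_hist ATTACH PARTITION events_2024_01
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

    DROP TABLE events_w1, events_w2, events_w3, events_w4;

For (5), the pg_dump -t ... | pg_restore pipe in the first sketch is the
export/import path we had in mind, one partition per dump.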



Any information/resources/design ideas would be helpful.



Thanks
