precautions/prerequisites to take for a specific table

2020-11-05 Thread Vasu Madhineni
Hi All, In my organisation, a newly built project's application team has a requirement for tables that have a text column whose size can reach around 3 MB, with about 45 million records annually. Are there any specific precautions or prerequisites we have to take from the DBA end to handle this type of table?
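For context on the question: multi-megabyte text values are moved to TOAST storage automatically, and a table growing by tens of millions of rows a year is a natural candidate for range partitioning. A minimal sketch of that combination, in which the table and column names are illustrative assumptions rather than anything from the original thread:

```sql
-- Hypothetical table; "body" values of ~3 MB are TOASTed (stored
-- out of line) automatically once they exceed roughly 2 kB.
CREATE TABLE app_documents (
    id          bigserial,
    created_at  date NOT NULL,
    body        text,
    PRIMARY KEY (id, created_at)   -- must include the partition key
) PARTITION BY RANGE (created_at);

-- One partition per year keeps vacuum, indexes, and archiving manageable.
CREATE TABLE app_documents_2020 PARTITION OF app_documents
    FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');

-- If the text is already compressed (e.g. base64 of a zip archive),
-- compressing it again in TOAST wastes CPU; store it uncompressed:
ALTER TABLE app_documents_2020 ALTER COLUMN body SET STORAGE EXTERNAL;
```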

Open source monitoring streaming replication

2020-10-26 Thread Vasu Madhineni
docker containers in different hosts. Thanks in advance. Regards, Vasu Madhineni
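On the monitoring question itself: whatever open-source tool is chosen, it generally polls the built-in statistics views, so those are the place to start. A sketch using PostgreSQL 10+ column names:

```sql
-- Run on the primary: one row per standby, with replay lag in bytes.
SELECT client_addr,
       state,
       sent_lsn,
       replay_lsn,
       pg_wal_lsn_diff(sent_lsn, replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;

-- Run on a standby: wall-clock delay of the last replayed transaction.
SELECT now() - pg_last_xact_replay_timestamp() AS replication_delay;
```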

Re: multiple tables got corrupted

2020-09-18 Thread Vasu Madhineni
Hi Magnus, Thanks for your update. To identify the number of corrupted tables in the database, if I run the command below, will it have any impact on other tables in the production environment? "pg_dump -f /dev/null database" Thanks in advance. Regards, Vasu Madhineni On Fri, Sep 18, 2020
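The whole-database dump named in the snippet stops at the first read failure. A variant that dumps table by table reports every unreadable table; this is a sketch, with "mydb" and the connection details as placeholders (pg_dump takes only ordinary AccessShare locks, so concurrent reads and writes proceed normally):

```shell
# Probe each user table separately so every failing table is named.
for t in $(psql -At -d mydb -c \
    "SELECT format('%I.%I', schemaname, tablename) FROM pg_tables
     WHERE schemaname NOT IN ('pg_catalog', 'information_schema')"); do
    pg_dump -d mydb -t "$t" -f /dev/null || echo "read failure in $t"
done
```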

Re: multiple tables got corrupted

2020-09-16 Thread Vasu Madhineni
I could see block read I/O errors in /var/log/syslog. If those errors are fixed by the OS team, will it require recovery? Also, can I use LIMIT and OFFSET to locate corrupted rows? Thanks in advance. Regards, Vasu Madhineni On Wed, Sep 16, 2020, 01:58 Magnus Hagander wrote: > Try reading them
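The LIMIT idea from the question can be turned into a bisection: find the largest N for which `SELECT * FROM t LIMIT N` succeeds; in sequential-scan order, the first damaged row sits right after that prefix. (OFFSET does not help here, because the discarded rows are still read.) A sketch of the search logic in plain Python, where `read_ok(n)` is a stand-in for running the LIMIT query and reporting success:

```python
# Sketch: binary-search the largest prefix of rows readable without error.
# read_ok(n) stands in for "SELECT * FROM t LIMIT n" succeeding.
def largest_readable_prefix(read_ok, total_rows):
    """Largest n in [0, total_rows] with read_ok(n) true, assuming
    read_ok is monotone (a failure never precedes a success)."""
    lo, hi = 0, total_rows
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if read_ok(mid):
            lo = mid          # the first mid rows read cleanly
        else:
            hi = mid - 1      # failure somewhere in the first mid rows
    return lo

# Simulated table: reading 74 or more rows touches a damaged block.
ok = lambda n: n <= 73
print(largest_readable_prefix(ok, 1000))  # -> 73
```

Each probe costs one query, so isolating the first bad row in a 45-million-row table takes about 26 probes rather than a scan per row.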

Re: multiple tables got corrupted

2020-09-15 Thread Vasu Madhineni
Is it possible to identify which rows are corrupted in particular tables? On Tue, Sep 15, 2020 at 5:36 PM Magnus Hagander wrote: > > > On Tue, Sep 15, 2020 at 11:15 AM Vasu Madhineni > wrote: > >> Hi All, >> >> In one of my postgres databases multiple tables

multiple tables got corrupted

2020-09-15 Thread Vasu Madhineni
70812": Input/output error. Tried to take a backup of the tables with pg_dump, but got the same error. The files exist physically in the base location. How to proceed on this? There is no backup to restore. Thanks in advance. Regards, Vasu Madhineni

TDE implementation in postgres which is in docker container

2020-07-25 Thread Vasu Madhineni
Hi All, How can we implement TDE in postgres which is running in docker containers? Thanks in advance. Regards, Vasu Madhineni

HA setup with pg pool in docker

2020-07-22 Thread Vasu Madhineni
in advance. Regards, Vasu Madhineni

Pgpool in docker container

2020-07-21 Thread Vasu Madhineni
Hi All, Planning to build a standalone postgres with pgpool as a connection pooler, in docker containers. Shall we try an option like installing pgpool in one docker container and postgres in another docker container; is that possible? Thanks in advance. Regards, Vasu Madhineni
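Running the two in separate containers works as long as they share a network pgpool can resolve the backend on. A rough sketch using the Bitnami pgpool image; the image names and every environment variable here are assumptions that should be checked against that image's documentation:

```shell
# User-defined bridge network so the containers resolve each other by name.
docker network create pgnet

docker run -d --name pg --network pgnet \
    -e POSTGRES_PASSWORD=secret postgres:13

# Pgpool fronting the single backend "pg"; clients connect to port 9999.
docker run -d --name pgpool --network pgnet -p 9999:9999 \
    -e PGPOOL_BACKEND_NODES="0:pg:5432" \
    -e PGPOOL_SR_CHECK_USER=postgres \
    -e PGPOOL_SR_CHECK_PASSWORD=secret \
    -e PGPOOL_POSTGRES_USERNAME=postgres \
    -e PGPOOL_POSTGRES_PASSWORD=secret \
    bitnami/pgpool:latest
```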

Re: Multitenant architecture

2020-07-21 Thread Vasu Madhineni
Hi All, Our project uses a separate database for each tenant, but how can we restrict tenant resources? E.g. Tenant1 has to use 20% of resources and Tenant2 has to use 10%; how can we restrict users like this? Thanks and Regards, Vasu Madhineni On Mon, Jun 8, 2020 at 2:50 PM Laurenz Albe wrote
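Worth noting against the question: PostgreSQL has no built-in per-tenant CPU or I/O quota, so a hard "20% vs 10%" split cannot be enforced inside the database. What can be capped per role is connection count and per-statement limits; a sketch with placeholder role names:

```sql
-- Rough proxy for a resource split: cap concurrent connections per tenant.
ALTER ROLE tenant1 CONNECTION LIMIT 20;
ALTER ROLE tenant2 CONNECTION LIMIT 10;

-- Per-role statement limits, applied to every new session of the role.
ALTER ROLE tenant2 SET statement_timeout = '30s';
ALTER ROLE tenant2 SET work_mem = '16MB';
```

True CPU/I/O shares would have to come from outside the database, e.g. OS cgroups or container limits around per-tenant instances.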

Re: Multitenant architecture

2020-06-08 Thread Vasu Madhineni
Hi All, Thanks a lot for the information; I will look into it and get back to you. Regards, Vasu Madhineni On Sun, Jun 7, 2020 at 1:21 AM Michel Pelletier wrote: > > On Sat, Jun 6, 2020 at 3:14 AM Vasu Madhineni > wrote: > >> Hi Rob, >> >> Our environment is medical

Re: Multitenant architecture

2020-06-06 Thread Vasu Madhineni
Hi Rob, Our environment is medical clinical data, so each clinic is a tenant. Approximately 500+ tenants with 6TB of data. Thank you in advance. Regards, Vasu Madhineni On Fri, Jun 5, 2020 at 6:09 PM Rob Sargent wrote: > > > On Jun 5, 2020, at 2:54 AM, Vasu Madhineni wrote: > >

Re: Multitenant architecture

2020-06-05 Thread Vasu Madhineni
If the data size is more than 6TB, which approach is better? On Fri, Jun 5, 2020 at 2:57 PM Laurenz Albe wrote: > On Thu, 2020-06-04 at 23:52 +0800, Vasu Madhineni wrote: > > We are planning a POC on multitenant architecture in Postgres, Could you > please > > help us with ste

Multitenant architecture

2020-06-04 Thread Vasu Madhineni
Hi All, We are planning a POC on multitenant architecture in Postgres. Could you please help us with steps for a multitenant setup using a schema for each application model? Thank you so much, all. Regards, Vasu
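The schema-per-tenant model asked about here comes down to one schema and one role per tenant, with search_path pinned so each tenant sees only its own objects. A minimal sketch with example names:

```sql
-- One role and one schema per tenant; names are illustrative.
CREATE ROLE tenant_a LOGIN PASSWORD 'changeme';
CREATE SCHEMA tenant_a AUTHORIZATION tenant_a;
ALTER ROLE tenant_a SET search_path = tenant_a;

-- Keep tenants from creating objects in the shared public schema.
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
```

Unqualified table names in each tenant's sessions then resolve inside that tenant's own schema, so the same application code serves every tenant.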