Multitenant architecture
Hi All,

We are planning a POC on multitenant architecture in Postgres. Could you please help us with the steps for a multitenant setup using a schema for each application model?

Thank you so much, all.

Regards,
Vasu
Re: Multitenant architecture
If the data size is more than 6 TB, which approach is better?

On Fri, Jun 5, 2020 at 2:57 PM Laurenz Albe wrote:
> On Thu, 2020-06-04 at 23:52 +0800, Vasu Madhineni wrote:
> > We are planning a POC on multitenant architecture in Postgres. Could you please
> > help us with steps for a multitenant setup using a schema for each application model.
>
> For a few tenants, you can keep identical tables in several schemas and
> set "search_path" to select a tenant.
>
> With many tenants, you are better off with one table that holds the
> data for all clients. You can use Row Level Security to have each
> tenant see only his or her data, and it might be a good idea to
> use list partitioning on the tenant ID.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
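A minimal sketch of the first approach Laurenz describes (one schema per tenant, selected via search_path); the schema and table names (clinic_a, clinic_b, patient) are illustrative assumptions, not part of the thread:

  -- one schema per tenant, each holding identical tables
  CREATE SCHEMA clinic_a;
  CREATE SCHEMA clinic_b;

  CREATE TABLE clinic_a.patient (id bigint PRIMARY KEY, name text, record text);
  CREATE TABLE clinic_b.patient (id bigint PRIMARY KEY, name text, record text);

  -- the application selects the tenant per session:
  SET search_path TO clinic_a;
  SELECT * FROM patient;   -- resolves to clinic_a.patient

The application SQL stays tenant-agnostic; only search_path changes per connection (it can also be pinned per login role with ALTER ROLE ... SET search_path).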
Re: Multitenant architecture
Hi Rob,

Our environment is medical clinical data, so each clinic is a tenant. Approximately 500+ tenants with 6 TB of data.

Thank you in advance.

Regards,
Vasu Madhineni

On Fri, Jun 5, 2020 at 6:09 PM Rob Sargent wrote:
> On Jun 5, 2020, at 2:54 AM, Vasu Madhineni wrote:
> > If the data size is more than 6 TB, which approach is better?
>
> The question is: how many separate data owners?
Re: Multitenant architecture
Hi All,

Thanks a lot for the information. I will look into it and get back to you.

Regards,
Vasu Madhineni

On Sun, Jun 7, 2020 at 1:21 AM Michel Pelletier wrote:
> On Sat, Jun 6, 2020 at 3:14 AM Vasu Madhineni wrote:
> > Our environment is medical clinical data, so each clinic is a tenant.
> > Approximately 500+ tenants with 6 TB of data.
>
> There's a good article on the AWS blog on multi-tenancy with Postgres:
>
> https://aws.amazon.com/blogs/database/multi-tenant-data-isolation-with-postgresql-row-level-security/
>
> This is similar to Laurenz's second suggestion.
>
> -Michel
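A minimal sketch of that second suggestion (shared tables with Row Level Security plus list partitioning on the tenant ID); the table, policy, and setting names below are illustrative, and the current_setting() pattern is the same one the AWS article uses:

  -- one shared table, list-partitioned by tenant
  CREATE TABLE patient (
      tenant_id  int    NOT NULL,
      id         bigint NOT NULL,
      record     text,
      PRIMARY KEY (tenant_id, id)
  ) PARTITION BY LIST (tenant_id);

  CREATE TABLE patient_t1 PARTITION OF patient FOR VALUES IN (1);
  CREATE TABLE patient_t2 PARTITION OF patient FOR VALUES IN (2);

  -- each session only sees rows for the tenant it has set
  ALTER TABLE patient ENABLE ROW LEVEL SECURITY;
  CREATE POLICY tenant_isolation ON patient
      USING (tenant_id = current_setting('app.current_tenant')::int);

  -- the application sets the tenant per connection or transaction:
  SET app.current_tenant = '1';
  SELECT * FROM patient;   -- only tenant 1's rows are visible

Note that superusers and the table owner bypass RLS unless FORCE ROW LEVEL SECURITY is set, so the application should connect as a separate, unprivileged role.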
Re: Multitenant architecture
Hi All,

Our project uses a database per tenant, but how can we restrict tenant resources? For example, if Tenant 1 should use 20% of the resources and Tenant 2 should use 10%, how can we restrict users like this?

Thanks and Regards,
Vasu Madhineni

On Mon, Jun 8, 2020 at 2:50 PM Laurenz Albe wrote:
> On Sat, 2020-06-06 at 11:56 +0800, Vasu Madhineni wrote:
> > Our environment is medical clinical data, so each clinic is a tenant.
> > Approximately 500+ tenants with 6 TB of data.
>
> The important number to base this decision on would be the number of
> tables you'd expect in the database. It shouldn't be too many.
>
> If the database grows large, you may be better off sharding the database
> together with partitioning it across schemas.
> Several smaller databases are easier to back up and make scaling easier.
>
> Of course that requires your application to be part of the solution:
> it needs to know which database to use for which tenant.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
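PostgreSQL has no built-in per-tenant CPU or I/O quota, so exact 20%/10% shares cannot be enforced by the server alone. What can be done at the database level is to cap connections and runaway statements per tenant role; the role names and values below are illustrative assumptions, not a complete resource-governance solution:

  -- cap concurrent connections per tenant role
  ALTER ROLE tenant1 CONNECTION LIMIT 20;
  ALTER ROLE tenant2 CONNECTION LIMIT 10;

  -- cap statement runtime and idle-in-transaction time per tenant role
  ALTER ROLE tenant1 SET statement_timeout = '30s';
  ALTER ROLE tenant1 SET idle_in_transaction_session_timeout = '60s';

Finer-grained CPU or I/O shares would have to come from outside the database, for example from the container or OS layer (cgroups), or by separating tenants onto different instances.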
Pgpool in a Docker container
Hi All,

We are planning to build a standalone Postgres instance with Pgpool as the connection pooler, both in Docker containers. Can we install Pgpool in one Docker container and Postgres in another container? Is that possible?

Thanks in advance.

Regards,
Vasu Madhineni
HA setup with Pgpool in Docker
Hi Team,

We are planning to build an HA setup with Pgpool in Docker. Could you please let us know the prerequisites for it?

1. How many servers are needed? Can we plan with 2 servers, laid out as follows: the primary instance and Pgpool in two different Docker containers on server 1, and the standby instance in a Docker container on server 2?
2. User requirement: which users need passwordless authentication between the nodes?
3. Which ports or firewall rules are needed between the servers?
4. If we plan to create a network bridge between all Docker containers across the 2 servers, are there any prerequisites for that?

Thanks in advance.

Regards,
Vasu Madhineni
TDE implementation in Postgres running in a Docker container
Hi All,

How can we implement TDE in Postgres when it is running in Docker containers?

Thanks in advance.

Regards,
Vasu Madhineni
multiple tables got corrupted
Hi All,

In one of my Postgres databases, multiple tables got corrupted. I followed the steps below, but I still get the same error.

1. SET zero_damaged_pages = on
2. VACUUM ANALYZE, then VACUUM FULL

Still the same error:

moh_fa=# VACUUM FULL;
ERROR:  could not read block 9350 in file "base/1156523/1270812": Input/output error

I tried to take a backup of the tables with pg_dump, but hit the same error. The files exist physically in the base location.

How do I proceed on this? There is no backup to restore.

Thanks in advance.

Regards,
Vasu Madhineni
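As a first step it helps to know which table the failing file belongs to. A small sketch, assuming you are connected to the affected database; the function is standard PostgreSQL and the OID comes straight from the error message above:

  -- map the file name from the error ("base/<database_oid>/<relfilenode>") to a relation
  SELECT pg_filenode_relation(0, 1270812);   -- 0 = the database's default tablespace

  -- or, equivalently:
  SELECT relname, relkind
  FROM pg_class
  WHERE relfilenode = 1270812;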
Re: multiple tables got corrupted
Is it possible to identify which rows are corrupted in particular tables?

On Tue, Sep 15, 2020 at 5:36 PM Magnus Hagander wrote:
> On Tue, Sep 15, 2020 at 11:15 AM Vasu Madhineni wrote:
> > In one of my postgres databases multiple tables got corrupted and
> > I followed the below steps but still the same error.
> >
> > 1. SET zero_damaged_pages = on
> > 2. VACUUM ANALYZE, VACUUM FULL
> > but still the same error.
>
> That is a very destructive first attempt. I hope you took a full
> disk-level backup of the database before you did that, as it can ruin your
> chances for forensics and data recovery for other issues.
>
> > moh_fa=# VACUUM FULL;
> > ERROR:  could not read block 9350 in file "base/1156523/1270812":
> > Input/output error
> >
> > Tried to take backup of tables with pg_dump but same error. Files exist
> > physically in the base location.
> >
> > How to proceed on this, no backup to restore.
>
> This is clearly some sort of disk error, and with no backups to restore
> you will definitely be losing data.
>
> I'd start by figuring out which tables have no corruption and do work, and
> back those up (with pg_dump for example) as soon as possible to a different
> machine -- since it's not exactly unlikely that further disk errors will
> appear.
>
> Once you've done that, identify the tables, and then try to do partial
> recovery. For example, if you look at the file 1270812, how big is it?
> PostgreSQL is failing to read block 9350, which is 76595200 bytes into the
> file. If this is at the very end of the file, you can for example try to
> get the data out until that point with LIMIT. If it's in the middle of the
> file, it gets more tricky, but similar approaches can be done.
>
> Also, unless you are running with data checksums enabled, I wouldn't fully
> trust the data in the tables that you *can* read either. Since you clearly
> have disk issues, they may have caused corruption elsewhere as well, so
> whatever verification you can do against other tables, you should do as
> well.
>
> You'll of course also want to check any kernel logs or storage system logs
> to see if they can give you a hint as to what happened, but they are
> unlikely to actually give you something that will help you fix the problem.
>
> --
> Magnus Hagander
> Me: https://www.hagander.net/
> Work: https://www.redpill-linpro.com/
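The "how big is it?" question can be answered from SQL rather than by inspecting the file on disk, and the same session can confirm whether data checksums are enabled. A small sketch, assuming the damaged relation has already been identified (the table name patient is hypothetical):

  -- how many blocks does the relation have on disk? (the error is at block 9350)
  SELECT pg_relation_size('patient') / current_setting('block_size')::int AS blocks_on_disk;

  -- is the cluster running with data checksums?
  SHOW data_checksums;

If blocks_on_disk is only slightly larger than 9350, the damage is near the end of the file, which is the easier case Magnus describes.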
Re: multiple tables got corrupted
I can see block read I/O errors in /var/log/syslog. If those errors are fixed by the OS team, will recovery still be required?

Also, can I use LIMIT and OFFSET to locate the corrupted rows?

Thanks in advance.

Regards,
Vasu Madhineni

On Wed, Sep 16, 2020, 01:58 Magnus Hagander wrote:
> Try reading them "row by row" until it breaks. That is, SELECT * FROM ...
> LIMIT 1, then LIMIT 2, etc. For more efficiency use a binary search, starting
> at what seems like a reasonable place looking at the size of the table vs
> the first failed block to make it faster, but the principle is the same.
> Once it fails, you've found a corrupt block...
>
> //Magnus
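A sketch of that probing done by hand, assuming a hypothetical table patient of roughly 300,000 rows with the damage somewhere in the middle; each query either succeeds or fails with the I/O error, and the boundary is narrowed binary-search style:

  SELECT count(*) FROM (SELECT * FROM patient LIMIT 150000) s;   -- ok
  SELECT count(*) FROM (SELECT * FROM patient LIMIT 225000) s;   -- I/O error: go lower
  SELECT count(*) FROM (SELECT * FROM patient LIMIT 187500) s;   -- ok: go higher
  -- ... continue until the largest LIMIT that still succeeds is found ...

  -- then rescue the readable prefix into a new table
  CREATE TABLE patient_rescue AS
  SELECT * FROM patient LIMIT 187500;

This relies on a plain sequential scan reading the table from the first block, which is the usual plan for an unfiltered SELECT ... LIMIT; rows stored after the damaged block are not recovered this way.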
Re: multiple tables got corrupted
Hi Magnus,

Thanks for your update.

To identify how many tables are corrupted in the database, I plan to run the command below. Will it have any impact on the other tables in the production environment?

pg_dump -f /dev/null database

Thanks in advance.

Regards,
Vasu Madhineni

On Fri, Sep 18, 2020 at 3:42 PM Magnus Hagander wrote:
> That depends on what the problem is and how they fix it. Most likely yes
> -- especially since if you haven't enabled data checksums you won't *know*
> if things are OK or not. So I'd definitely recommend it even if things
> *look* OK.
>
> //Magnus
Open source monitoring for streaming replication
Hi Team,

Is there any open source monitoring tool we can use to monitor a PostgreSQL primary-standby pair as a single group, instead of adding the servers as individual monitoring targets? In my case I am using repmgr for automatic failover between primary and standby; the primary and standby instances are Docker containers on different hosts.

Thanks in advance.

Regards,
Vasu Madhineni
Precautions/prerequisites to take for a specific table
Hi All,

In my organisation, a newly built project's application team has a requirement for a table with a text column whose values can reach around 3 MB each, and roughly 45 million records per year. Are there any specific precautions or prerequisites we should take from the DBA end to handle this type of table?

Thanks in advance.

Regards,
Vasu
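For context, one common way to lay out such a table is range partitioning by a date column, so that each year's ~45 million rows can be vacuumed, archived, or dropped independently, while the ~3 MB text values are stored out-of-line by TOAST automatically. A minimal sketch; all names and the choice of yearly partitions are illustrative assumptions, not a recommendation from the thread:

  CREATE TABLE clinical_note (
      id          bigint GENERATED ALWAYS AS IDENTITY,
      created_at  date   NOT NULL,
      body        text,                        -- values of ~3 MB are TOASTed automatically
      PRIMARY KEY (id, created_at)             -- the partition key must be part of the PK
  ) PARTITION BY RANGE (created_at);

  CREATE TABLE clinical_note_2020 PARTITION OF clinical_note
      FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');
  CREATE TABLE clinical_note_2021 PARTITION OF clinical_note
      FOR VALUES FROM ('2021-01-01') TO ('2022-01-01');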