VM Instance to Google Cloud SQL Migration
Hi Team, We would like to migrate our PostgreSQL VM instance on Google Cloud Platform to Google Cloud SQL with minimal downtime. As far as I can tell, we have to export and import an SQL file, but our database is large and we cannot afford a long downtime. Does anyone have a solution to achieve this? Thanks & Regards, Sathish Kumar.V
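[For reference, a minimal sketch of the plain-SQL dump-and-restore pipeline discussed in this thread (host and database names are placeholders). Piping pg_dump straight into psql avoids staging the file on disk, though it does not by itself shrink the downtime window:

    # stream the dump from the VM instance directly into the Cloud SQL instance
    pg_dump -h old-vm-host -U dbuser --format=plain --no-owner --no-acl production \
      | psql -h cloudsql-host -U dbuser -d production
]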
Triggers when importing data
Hi, I am trying to export and import an SQL dump of a database. I would like to know whether all the Insert/Update/Delete triggers will be executed when importing the SQL dump.

Export:

    pg_dump -h test -U db_admin --format=plain --no-owner --no-acl production | sed -E 's/(DROP|CREATE|COMMENT ON) EXTENSION/-- \1 EXTENSION/g' > test.sql

Import:

    psql -h test -U db_admin -d production -W < test.sql
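[For what it's worth: in a full plain-format dump like the one above, pg_dump emits the trigger definitions after the table data, so the triggers do not yet exist while the COPY statements load the rows. Triggers fire only when restoring a data-only dump into a schema where they are already defined; for that case pg_dump has a flag to suppress them. A sketch (requires superuser on the target):

    # data-only dump that disables triggers around each table's data load
    pg_dump -h test -U db_admin --data-only --disable-triggers production > data.sql
]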
PL/pgSQL HTTP Request
Hi Team, Does PL/pgSQL support creating a function that makes an HTTP request? We have a requirement to send data from the Postgres DB to an external server using the HTTP/HTTPS POST method.
Re: PL/pgSQL HTTP Request
Hi Pavel, We would like to use it with Google Cloud SQL, where third-party extensions are not supported.

On Fri, Dec 7, 2018, 9:55 PM Pavel Stehule wrote:
> Surely it doesn't support it. You can use some untrusted language, but it is not safe.
>
> You can use the https://github.com/pramsey/pgsql-http extension. It is much better, but still not a good idea. Stored procedures are perfect for work inside the database. Communication with the outer world is not good:
>
> a) the outer world is not transactional
> b) the outer world is too slow.
>
> This is a task for an application (communication) server.
>
> Regards
> Pavel
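[For readers on self-managed Postgres, a minimal sketch of the pgsql-http extension Pavel mentioned (URL and payload are placeholders; not usable on Cloud SQL, where the extension cannot be installed):

    CREATE EXTENSION http;
    -- POST a JSON payload and inspect the response
    SELECT status, content
    FROM http_post('https://example.com/endpoint', '{"key": "value"}', 'application/json');
]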
Anonymize Data
Hi Team, I am trying to protect some data in a few tables when exporting to other environments. Is there any way, or an extension, to anonymize personal data such as names, credit card numbers, etc. after import? Thanks & Regards Sathish Kumar.V
Table Replication
Hi, We are trying to replicate a few tables from one PostgreSQL server to another. We are currently on PostgreSQL 9.5.x; is there any way to achieve this without upgrading PostgreSQL?
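[One option on 9.5 is the pglogical extension, which supports 9.4+ and can replicate individual tables. A minimal sketch, assuming pglogical is installed on both servers, wal_level is set to logical, and the host/table names are placeholders:

    -- on the provider (dbserver1)
    SELECT pglogical.create_node(node_name := 'provider', dsn := 'host=dbserver1 dbname=production');
    SELECT pglogical.replication_set_add_table(set_name := 'default', relation := 'public.my_table');

    -- on the subscriber (dbserver2)
    SELECT pglogical.create_node(node_name := 'subscriber', dsn := 'host=dbserver2 dbname=production');
    SELECT pglogical.create_subscription(subscription_name := 'sub1', provider_dsn := 'host=dbserver1 dbname=production');
]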
Re: Anonymize Data
Hi, Thanks, I have checked this extension. Is it reliable to use?

On Tue, Jan 29, 2019, 9:02 PM Tumasgiu Rossini wrote:
> Hi,
>
> you should check this thread:
>
> https://www.postgresql.org/message-id/flat/CABokaaPSrK6%2BFFAdfT8gg-mNLXB9612NezaCbTNRtC7HqztkKw%40mail.gmail.com
Re: Anonymize Data
Hi Adrian, I am looking to do it either during export or while importing the data into the secondary db.

On Tue, Jan 29, 2019, 10:43 PM Adrian Klaver wrote:
> Would you not want to anonymize before or during export?
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
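[If no extension is available on the target, one low-tech approach is to mask the sensitive columns with an UPDATE right after the import. A minimal sketch, with hypothetical table and column names:

    -- overwrite identifying values once the import completes
    UPDATE customers
    SET    name        = 'user_' || id,
           credit_card = 'XXXX-XXXX-XXXX-' || right(credit_card, 4);
]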
Postgresql Duplicate DB
Hi All, I would like to duplicate our existing db on the same server. What would be the fastest way to achieve this? The DB size is around 300 GB.
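[One possibility on the same server is to clone via a template, which copies the database at the file level and is typically faster than dump/restore. A minimal sketch (name is a placeholder); note that the source database must have no active connections while the command runs:

    CREATE DATABASE production_copy TEMPLATE production;
]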
Permission Read Only User
Hi All, I have created a read-only user to run select statements on our database, but whenever we create new tables this user is unable to view them unless I grant select on those tables again. Is there a way to make select the default permission for this user, so that any tables I create in the future are still accessible?
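[A minimal sketch using default privileges (role and schema names are placeholders; note that ALTER DEFAULT PRIVILEGES only affects objects later created by the role that runs it):

    -- existing tables
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_user;
    -- tables created in the future
    ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly_user;
]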
Table Export & Import
Hi Team, We have a requirement to copy a table from one database server to another. We are looking for a solution that achieves this with less downtime on prod. Can you help us with this?

Table Size: 160GB
Postgresql Server Version: 9.5
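[For reference, the straightforward single-table copy looks like the sketch below (host and table names are placeholders); the downtime question in the rest of this thread is about how long this pipeline takes while writes are stopped:

    pg_dump -h dbserver1 -U dbuser -t public.big_table production | psql -h dbserver2 -U dbuser -d production
]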
Re: Table Export & Import
Hi Ros, Using server on Cloud.

On Mon, Apr 1, 2019, 5:26 PM ROS Didier wrote:
> Hi
>
> One solution could be to use Intel technology: FPGA:
> https://www.intel.fr/content/www/fr/fr/products/programmable.html
>
> The principle is to add a PCI electronic card to the server with CPUs and RAM.
>
> This greatly speeds up the loading of the data into the database.
>
> Best Regards
>
> Didier ROS
> Expertise SGBD
> EDF - DTEO - DSIT - IT DMA
Re: Table Export & Import
Hi Adrian, We are exporting live table data to a new database, so we need to stop our application until the export/import is completed. We would like to minimise this downtime.

On Mon, Apr 1, 2019, 10:22 PM Adrian Klaver wrote:
> So what is creating the downtime now?
>
> In addition to other suggestions you might want to take a look at:
>
> https://www.postgresql.org/docs/9.5/postgres-fdw.html
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
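[A minimal sketch of the postgres_fdw route Adrian points to (server, user, and table names are placeholders). It lets the new server pull the table over a connection instead of going through a dump file, though the copy itself still takes time:

    -- on dbserver2, assuming public.big_table already exists there with a matching definition
    CREATE EXTENSION postgres_fdw;
    CREATE SERVER src FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'dbserver1', dbname 'production');
    CREATE USER MAPPING FOR CURRENT_USER SERVER src OPTIONS (user 'dbuser', password 'secret');
    CREATE SCHEMA fdw_src;
    IMPORT FOREIGN SCHEMA public LIMIT TO (big_table) FROM SERVER src INTO fdw_src;
    INSERT INTO public.big_table SELECT * FROM fdw_src.big_table;
]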
Re: Table Export & Import
The table size is 160gb. We would like to move/copy this table from one db server to another db server.

On Tue, Apr 2, 2019, 12:17 AM Michel Pelletier wrote:
> It's more complicated if you want to keep your application running and writing to the db while migrating. There are trigger-level replication tools, like Slony, that can be used to stream changes to the new database; you switch over once you get both of them to parity, but there are some gotchas. You said the db is only 160GB; it depends a lot on what kind of schema we're talking about, but I imagine it wouldn't take long to just take the downtime and do a normal pg_upgrade.
Re: Table Export & Import
Hi All, Can you tell me a way to replicate or sync a table, or otherwise achieve minimal downtime, from dbserver1 to dbserver2 on PostgreSQL 9.5?

Table Size: 160gb
4VCPU, 16gb RAM
Import Database
Hi, I am trying to import a database of size 300+gb to another server. It's taking a long time.

4vCPU 15GB RAM

    psql -h newserver -U dbuser -d production -W < prod.sql

Is there a way to speed up the import process by tweaking the PostgreSQL config, e.g. maintenance_work_mem, work_mem, shared_buffers, etc.?
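[One way to apply such settings just for the restore session, without editing the server config, is libpq's PGOPTIONS environment variable. A sketch (the values are illustrative, and a managed service may reject some settings):

    PGOPTIONS='-c maintenance_work_mem=2GB -c synchronous_commit=off' \
      psql -h newserver -U dbuser -d production -W < prod.sql
]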
Re: [External] Re: Import Database
Hi, I am trying to export our database in a GCE instance to Google Cloud SQL. Below are the commands used to export/import the database. I am exporting only the one database which is required.

Export:

    pg_dump -h olddbserver -U dbuser --format=plain --no-owner --no-acl production | sed -E 's/(DROP|CREATE|COMMENT ON) EXTENSION/-- \1 EXTENSION/g' > prod.sql

Import:

    psql -h newcloudsqldbserver -U dbuser -d production -W < prod.sql

On Mon, May 6, 2019, 1:32 AM Vijaykumar Jain wrote:
> Yes.
> I do bump up maintenance_work_mem temporarily during a restore; it helps rebuild the indexes a little faster.
> Turning fsync off during the restore will help a little too, but in case of any crash you may have to restart the restore from scratch.
> Also, do you have the option to take a pg_dump and run pg_restore, or do you just have the raw sql dump to work with?
> If you have the option of taking a dump again, you can try pg_dump and pg_restore with -Fc (custom format) and -j n (parallel), along with the temporary bump in maintenance memory. That should make the restore a little faster than a raw sql dump, I think.
> If you are on pg10 or above, you can use logical replication to mirror the database. There are blogs by several people explaining how to do that, which may be helpful.
>
> On Sun, 5 May 2019 at 10:29 PM Ravi Krishna wrote:
>> IMO you are using the slowest tool to import.
>>
>> Just one quick question: why can't you take a cluster backup using any of the tools available and then drop all unwanted databases after you import the cluster?
>>
>> pg_basebackup will do a good job.
>
> Regards,
> Vijay
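[For reference, a sketch of the parallel dump/restore Vijaykumar describes (host names are placeholders). Note that a parallel dump needs the directory format; the custom format supports parallel restore only:

    pg_dump -h olddbserver -U dbuser -Fd -j 4 -f prod_dump production
    pg_restore -h newdbserver -U dbuser -d production -j 4 prod_dump
]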
Re: [External] Re: Import Database
Hi All, Postgresql version: 9.6
Re: Import Database
Hi All, Is there any other option to restore faster using psql, since I have to export it as a plain text dump? Only the plain SQL format (--format=plain) is supported by Cloud SQL, so I cannot use the pg_restore option.

On Mon, May 6, 2019, 6:35 PM Andreas Kretschmer wrote:
> Am 05.05.19 um 19:26 schrieb Ron:
>> On 5/5/19 12:20 PM, Andreas Kretschmer wrote:
>>> Am 05.05.19 um 18:47 schrieb Sathish Kumar:
>>>> Is there a way to speed up the importing process by tweaking Postgresql config like maintenance_work_mem, work_mem, shared_buffers etc.,
>>>
>>> sure, take the dump in custom-format and use pg_restore with -j <n>.
>>
>> Custom format or directory format?
>
> both are possible for that.
>
> Regards, Andreas
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
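[With a plain-text dump, the main psql-side levers are avoiding per-statement commits and stopping on the first error rather than grinding on. A sketch of flags sometimes used for this (how much they help depends on the dump contents):

    psql --single-transaction -v ON_ERROR_STOP=1 -h newcloudsqldbserver -U dbuser -d production -f prod.sql
]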
Inherit Database - Table Permissions
Hi Team, We have a database and keep creating new tables as requirements come up. Every time, we have to grant read-only permission on the newly created tables to the db user. Is there instead a way to inherit privileges? Basically, we have a read-only user who should run only select statements on existing or new tables. Can you help me with how to achieve this?
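[As in the earlier thread on read-only users, ALTER DEFAULT PRIVILEGES covers future tables, with one caveat: it applies only to objects created by the role that ran it. If tables are created by a different role, that role must be named explicitly. A minimal sketch with hypothetical role names:

    -- cover future tables created by app_owner, not just by the current role
    ALTER DEFAULT PRIVILEGES FOR ROLE app_owner IN SCHEMA public GRANT SELECT ON TABLES TO readonly_user;
]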
Re: pg_repack issue
Hi Prakash, You can run the command below.

    pg_repack -d dbname -E DEBUG

On Wed, Jun 5, 2019, 7:55 PM Prakash Ramakrishnan wrote:
> Hi Peter,
>
> Thanks, I have successfully created the extension. How do I run a full vacuum using pg_repack?
>
> Regards,
> Prakash.R
>
> On Wed, Jun 5, 2019 at 5:23 PM Peter Eisentraut wrote:
>> On 2019-06-05 12:48, Prakash Ramakrishnan wrote:
>>> gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic pg_repack.o pgut/pgut.o pgut/pgut-fe.o -L/usr/pgsql-11/lib -Wl,--as-needed -L/usr/lib64/llvm5.0/lib -L/usr/lib64 -Wl,--as-needed -Wl,-rpath,'/usr/pgsql-11/lib',--enable-new-dtags -L/usr/pgsql-11/lib -lpq -L/usr/pgsql-11/lib -lpgcommon -lpgport -lpthread -lssl -lcrypto -lz -lreadline -lrt -lcrypt -ldl -lm -o pg_repack
>>> /usr/bin/ld: cannot find -lreadline
>>> collect2: error: ld returned 1 exit status
>>> make[1]: *** [pg_repack] Error 1
>>> make[1]: Leaving directory `/home/postgres/software/pg_repack-1.4.4/bin'
>>> make: *** [all] Error 2
>>
>> It's a bit bogus that pg_repack would require this, but perhaps installing the readline-devel (or similar) package would get you past this.
>>
>> --
>> Peter Eisentraut   http://www.2ndQuadrant.com/
>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: pg_repack issue
Hi Prakash, You can see all the available options here: http://reorg.github.io/pg_repack/#installation

On Wed, Jun 5, 2019, 8:48 PM Prakash Ramakrishnan wrote:
> Hi Sathish,
>
> Thanks for the quick response. The database uses a separate tablespace, and we need to clear the table and index bloat values, using jobs as well. How can I achieve this? I need the exact command.
>
> Regards,
> Prakash.R
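[A sketch of a per-table repack for the bloat case Prakash describes (table name is a placeholder; the pg_repack extension must already be created in the database). pg_repack rebuilds the table and its indexes online, reclaiming the bloat without holding a long exclusive lock:

    pg_repack -d dbname --table public.bloated_table

The --jobs option can parallelize the index rebuilds for a table, e.g. --jobs 2.]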