Re: [GENERAL] BDR - DDL Locking
Hey Craig, thank you very much for your response.

> When you say you "attempted to" - what was the outcome?

I tried a truncate without the cascade option, then tried it with the cascade option. The session just hung indefinitely at that point. There was no rollback, and I was testing on an empty table. Replication was in a ready state on both nodes, and both DDL and DML were replicating. BDR 0.9.2.0.

When you say restarting the nodes: I did restart postgres, and this didn't help.

On Wed, Oct 21, 2015 at 8:31 AM, Craig Ringer wrote:
> What's the *exact* BDR version?
>
> When you say you "attempted to" - what was the outcome? Presumably an
> ERROR from the TRUNCATE, right? That would roll back the transaction,
> and in the process abort the DDL lock acquisition attempt.
>
> Are you sure replication was working normally prior to this point,
> with no issues?
>
> The global DDL lock isn't a true lock in the sense that it appears in
> pg_locks, etc. If you roll back the transaction trying to acquire it,
> or terminate the PostgreSQL backend attempting to acquire it - such as
> your TRUNCATE - using pg_terminate_backend(...) then it will be
> removed automatically. If for any reason that is not the case (which
> it shouldn't be) then restarting the nodes will clear it.
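A minimal sketch of the pg_terminate_backend(...) route Craig describes, assuming the hung session is still sitting in its TRUNCATE (column names as in 9.4's pg_stat_activity):

    -- from another session: find the backend hung on the TRUNCATE
    SELECT pid, state, query
    FROM pg_stat_activity
    WHERE query ILIKE 'truncate%';

    -- terminate it; per Craig, this also aborts the global DDL lock acquisition
    SELECT pg_terminate_backend(pid)
    FROM pg_stat_activity
    WHERE query ILIKE 'truncate%'
      AND pid <> pg_backend_pid();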
[GENERAL] BDR-Plugin make install on RHEL7.1
or xsltproc... xsltproc
checking for osx... osx
checking thread safety of required library functions... yes
checking whether gcc supports -Wl,--as-needed... yes
configure: using compiler=gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9)
configure: using CFLAGS=-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -O2
configure: using CPPFLAGS= -D_GNU_SOURCE
configure: using LDFLAGS= -Wl,--as-needed
configure: creating ./config.status
config.status: creating GNUmakefile
config.status: creating src/Makefile.global
config.status: creating src/include/pg_config.h
config.status: src/include/pg_config.h is unchanged
config.status: creating src/include/pg_config_ext.h
config.status: src/include/pg_config_ext.h is unchanged
config.status: creating src/interfaces/ecpg/include/ecpg_config.h
config.status: src/interfaces/ecpg/include/ecpg_config.h is unchanged
config.status: linking src/backend/port/tas/dummy.s to src/backend/port/tas.s
config.status: linking src/backend/port/dynloader/linux.c to src/backend/port/dynloader.c
config.status: linking src/backend/port/sysv_sema.c to src/backend/port/pg_sema.c
config.status: linking src/backend/port/sysv_shmem.c to src/backend/port/pg_shmem.c
config.status: linking src/backend/port/unix_latch.c to src/backend/port/pg_latch.c
config.status: linking src/backend/port/dynloader/linux.h to src/include/dynloader.h
config.status: linking src/include/port/linux.h to src/include/pg_config_os.h
config.status: linking src/makefiles/Makefile.linux to src/Makefile.port

[root@klw1128 postgresql-bdr]# make -j4 -s install-world
PostgreSQL, contrib, and documentation installation complete.

# cd /stage/gitrepo/bdr-plugin
# PATH=/database/postgres/product/9.4.4/bin:$PATH
# echo $PATH
/database/postgres/product/9.4.4/bin:/database/postgres/product/9.4.4/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
# ./configure
checking for a sed that does not truncate output... /usr/bin/sed
checking for pg_config... /database/postgres/product/9.4.4/bin/pg_config
configure: building against PostgreSQL 9.4
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for replication/replication_identifier.h... yes
configure: building against appropriately patched PostgreSQL
configure: creating ./config.status
config.status: creating Makefile
config.status: creating Makefile.global
config.status: creating doc/Makefile
config.status: creating run_tests
config.status: creating bdr_config_generated.h
config.status: bdr_config_generated.h is unchanged
# make -j4 -s all make -s install
make: *** No rule to make target `make'.  Stop.
make: *** Waiting for unfinished jobs

Many thanks,

Will McCormick
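The failure at the end looks like a command-line typo rather than a build problem: `make -j4 -s all make -s install` is a single make invocation handed the targets all, make, and install, and there is no target named "make", hence the error. Assuming the intent was to build and then install, running it as two commands should work:

    # make -j4 -s all
    # make -s install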
[GENERAL] BDR - Remove & Join
I have a script which is meant to remove a node from BDR replication. There are two nodes in replication: node A and node B. The script does the following:

1. On node A - SELECT bdr.bdr_part_by_node_names(ARRAY['node B']);
2. On node A - Checks that the node_status is marked as 'k' in bdr.bdr_nodes
3. I then drop the database on node B
4. On node A I remove all rows from bdr.bdr_nodes which have status 'k'

I then run a script which I used to set up replication before the removal. The problem I encounter is that node B, after rejoining, is stuck in status 'c' in bdr.bdr_nodes. Any advice here would be appreciated. I'm not sure if I need to drop all databases and reinstall postgres instead of step 3, or if this should be enough.

We are using the following version of BDR according to pg_extension:

bdr | 16385 | 11 | f | 0.9.2.0 | {18080,18095,18108,18143,18173,18183,18192,18199,18212,18281} | {"","","","","","","","","",""}

Please let me know if there is further information I can provide to troubleshoot, or if this is a known issue. Regards and thank you.
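For reference, the removal steps above expressed as SQL run on node A (a sketch; bdr.bdr_nodes column names are as in BDR 0.9 and may differ in other versions):

    -- step 1: part node B from the group
    SELECT bdr.bdr_part_by_node_names(ARRAY['node B']);

    -- step 2: wait until the parted node shows node_status = 'k'
    SELECT node_name, node_status FROM bdr.bdr_nodes;

    -- step 4: after dropping the database on node B, remove the parted entries
    DELETE FROM bdr.bdr_nodes WHERE node_status = 'k';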
[GENERAL] BDR and Backup and Recovery
What viable options exist for backup and recovery in a BDR environment? From the reading I have done, PITR is not an option with BDR. It's important to preface this by saying I have almost no exposure to postgres backup and recovery. Is PITR not an option with BDR?

If a user fat-fingers something and deletes records from a table without a WHERE clause, what is the correct course of action to recover as much data as possible? What type of backup do I require to restore as much data as possible from before the incident in a BDR environment?

Sorry for such an open-ended question. :D I'm continuing to read as I solicit feedback.

Is there a document outlining recovery with BDR?
Re: [GENERAL] BDR and Backup and Recovery
Re-sending to the group as well, Jim :D

Regarding testing backups: well said, Jim. Thanks for taking the time to respond. I will regularly test whatever we decide to put in place.

The below is from the 0.9.3 BDR documentation:

"Because logical replication is only supported in streaming mode (rather than WAL archiving) it isn't suitable for point-in-time recovery. Logical replication may be used in conjunction with streaming physical replication and/or PITR, though; it is not necessary to choose one or the other."

Am I misinterpreting this in thinking that, because BDR uses logical decoding, I cannot perform PITR?

On Wed, Nov 18, 2015 at 11:19 AM, Jim Nasby wrote:
> On 11/18/15 9:46 AM, Will McCormick wrote:
>
>> What viable options exist for backup and recovery in a BDR environment?
>> From the reading I have done, PITR is not an option with BDR.
>> It's important to preface this by saying I have almost no exposure to
>> postgres backup and recovery. Is PITR not an option with BDR?
>>
>> If a user fat-fingers something and deletes records from a table without
>> a WHERE clause, what is the correct course of action to recover as
>> much data as possible? What type of backup do I require to restore as
>> much data as possible from before the incident in a BDR environment?
>>
>> Sorry for such an open-ended question. :D I'm continuing to read as I
>> solicit feedback.
>>
>> Is there a document outlining recovery with BDR?
>
> I don't know why PITR wouldn't work with BDR, other than you can't use
> binary backups across incompatible versions and BDR might be considered
> incompatible with community Postgres. I would think it should still work
> fine if you try to restore to a BDR server.
>
> That said, remember that if you are not regularly (preferably
> automatically) testing your backups by doing a restore and testing the
> restore, then you don't have a backup. You have a hope and a prayer. :)
>
> --
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
[GENERAL] Complete Recovery 9.4.4
Basic backup and recovery question. I want to perform a complete restore and recovery using continuous archiving. Let's imagine we have a single table, MYTABLE. Here are my high-level steps:

1) Add a record (A) to MYTABLE
2) Take a file system backup to be used for recovery. This backup includes the archived WAL
3) Add a record (B) to MYTABLE

Timeline -> incident happens, need the backup

4) Stop PG
5) Make a copy of the current state, including PGDATA with pg_xlog and the WAL archives
6) Clean up PGDATA with pg_xlog and the WAL archive directory
7) Restore the backup taken in step 2, placing its contents in PGDATA with pg_xlog and the archives
8) Remove the contents of pg_xlog
9) Copy the contents of pg_xlog from step 5 into PGDATA/pg_xlog
10) Create a recovery.conf file with restore_command set to cp /database/postgres/product/9.4.4/archive/%f %p
11) Start up the server

What I see happen is that the record from step 1 is restored, but my change from step 3 is not. Can anyone explain why this is?
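For reference, a minimal recovery.conf matching step 10 (a sketch, assuming the archive path quoted above): with no recovery_target_* settings, 9.4 recovers to the end of the available WAL, which should include record (B) provided its WAL segment was archived or copied back into pg_xlog.

    # recovery.conf - minimal sketch for step 10
    restore_command = 'cp /database/postgres/product/9.4.4/archive/%f %p'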
[GENERAL] - PostgreSQL Replication Types
I inherited a 9.1 replication environment.

A few basic questions that I can't find clear answers / clarifications for, if possible:

There are 3 types of replication in 9.1 I've read about from the official docs:

1) warm standby
2) hot standby
3) streaming replication

I believe I'm using streaming replication; the only indication I have is that there is a primary_conninfo on the standby. Is this the only indication? Is it possible to tell if streaming replication is in use under normal operations?

cp: cannot stat `/opt/postgres/9.1/archive/0007000F0057': No such file or directory

cp: cannot stat `/opt/postgres/9.1/archive/0007000F0057': No such file or directory

LOG: streaming replication successfully connected to primary

FATAL: could not receive data from WAL stream: FATAL: requested WAL segment 0007000F0057 has already been removed

My understanding is that warm standby and hot standby do log shipping, and there is a greater window for transactions not to be sent to the standby because a WAL segment must be filled before it is shipped, whereas streaming replication basically sends at the transaction level? I'm sure this is somewhat misinformed!

Thanks,

Will
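One way to confirm streaming replication is active, as a sketch using the standard 9.1 monitoring views and functions:

    -- on the primary: one row per connected WAL sender;
    -- state 'streaming' means a standby is streaming from this server
    SELECT client_addr, state, sent_location, replay_location
    FROM pg_stat_replication;

    -- on the standby: returns true while the server is in recovery
    SELECT pg_is_in_recovery();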
Re: [GENERAL] - PostgreSQL Replication Types
Thanks a ton for the prompt response. I've read most of this, but some of it was not clear until we discussed.

Updated with WLM:

On 12/17/2015 07:17 AM, Will McCormick wrote:
>
> I inherited a 9.1 replication environment
>
> Few basic questions that I can't find clear answers / clarifications for
> if possible:
>
> 3 types of replication in 9.1 I've read about from the official docs:
>
> 1) warm standby
> 2) hot standby
> 3) streaming replication
>
> I'm using streaming replication I believe, the only indication I have
> is that there is the primary_conninfo on the standby. Is this the only
> indication?

WLM: I also see streaming replication in the logs.

On standby:

http://www.postgresql.org/docs/9.1/interactive/functions-admin.html
"
pg_last_xlog_receive_location()

Get last transaction log location received and synced to disk by streaming replication. While streaming replication is in progress this will increase monotonically. If recovery has completed this will remain static at the value of the last WAL record received and synced to disk during recovery. If streaming replication is disabled, or if it has not yet started, the function returns NULL."

WLM: When I do this on the standby I get an error:

   ERROR: recovery is in progress
   HINT: WAL control functions cannot be ...

> Is it possible to tell if streaming replication is in use under normal
> operations?
>
> cp: cannot stat `/opt/postgres/9.1/archive/0007000F0057':
> No such file or directory
>
> cp: cannot stat `/opt/postgres/9.1/archive/0007000F0057':
> No such file or directory
>
> LOG: streaming replication successfully connected to primary
>
> FATAL: could not receive data from WAL stream: FATAL: requested WAL
> segment 0007000F0057 has already been removed

Assuming the above is from the standby log, correct? WLM: yes

The cp lines would seem to indicate a restore_command in the standby recovery.conf, is that the case?:

http://www.postgresql.org/docs/9.1/static/archive-recovery-settings.html

restore_command (string) WLM: Correct

The FATAL indicates that the WAL file has already been recycled on the master.

WLM: I had read this; what confuses me a bit is:

   cp: cannot stat `/opt/postgres/9.1/archive/0007000F0057':
   No such file or directory

Does streaming replication automatically use archived WAL files when the WAL xlog files don't contain a transaction?

We did have wal_keep_segments set to 0. I changed this to 50 but want to better understand this, especially the correlation between the archived WALs and the xlog WALs. My guess is the difference between streaming replication and the others is very simply that streaming replication can read the xlog WALs as well? So if all the archived WALs have been shipped and processed to the standby, then the xlogs are processed but not shipped? Meaning at a transaction level, "kind of"?

See:

http://www.postgresql.org/docs/9.1/interactive/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-MASTER

"wal_keep_segments (integer)

Specifies the minimum number of past log file segments kept in the pg_xlog directory, in case a standby server needs to fetch them for streaming replication. Each segment is normally 16 megabytes. If a standby server connected to the primary falls behind by more than wal_keep_segments segments, the primary might remove a WAL segment still needed by the standby, in which case the replication connection will be terminated. (However, the standby server can recover by fetching the segment from archive, if WAL archiving is in use.)

This sets only the minimum number of segments retained in pg_xlog; the system might need to retain more segments for WAL archival or to recover from a checkpoint. If wal_keep_segments is zero (the default), the system doesn't keep any extra segments for standby purposes, so the number of old WAL segments available to standby servers is a function of the location of the previous checkpoint and status of WAL archiving. This parameter has no effect on restartpoints. This parameter can only be set in the postgresql.conf file or on the server command line."

http://www.postgresql.org/docs/9.1/interactive/warm-standby.html#STREAMING-REPLICATION

"If you use streaming replication without file-based continuous archiving, you have to set wal_keep_segments in the master to a value high enough to ensure that old WAL segments are not recycled too early, while the standby might still need them to catch up. If the standby falls behind too much, it needs to be reinitialized from a new base backup. If you set up a WAL archive that's accessible from the standby, wal_keep_segments is not required as the standby can always use the archive to catch up."
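To judge whether the standby is keeping up (and whether wal_keep_segments = 50 is enough), a sketch comparing WAL positions with the 9.1 functions; each segment is normally 16 MB, per the documentation quoted above:

    -- on the primary: the current WAL write position
    SELECT pg_current_xlog_location();

    -- on the standby: the last WAL position received and synced via streaming
    SELECT pg_last_xlog_receive_location();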
Re: [GENERAL] - PostgreSQL Replication Types
Thanks for the great assistance.

On Thu, Dec 17, 2015 at 11:27 AM, Adrian Klaver wrote:
> On 12/17/2015 07:56 AM, Will McCormick wrote:
>
>> Thanks a ton for the prompt response. I've read most of this, but some
>> of it was not clear until we discussed.
>>
>> Updated with WLM:
>>
>> On 12/17/2015 07:17 AM, Will McCormick wrote:
>>
>> I inherited a 9.1 replication environment
>>
>> Few basic questions that I can't find clear answers / clarifications
>> for if possible:
>>
>> 3 types of replication in 9.1 I've read about from the official docs:
>>
>> 1) warm standby
>> 2) hot standby
>> 3) streaming replication
>>
>> I'm using streaming replication I believe, the only indication I have
>> is that there is the primary_conninfo on the standby. Is this the only
>> indication?
>>
>> WLM: I also see streaming replication in the logs.
>>
>> On standby:
>>
>> http://www.postgresql.org/docs/9.1/interactive/functions-admin.html
>> "
>> pg_last_xlog_receive_location()
>>
>> Get last transaction log location received and synced to disk by
>> streaming replication. While streaming replication is in progress this
>> will increase monotonically. If recovery has completed this will remain
>> static at the value of the last WAL record received and synced to disk
>> during recovery. If streaming replication is disabled, or if it has not
>> yet started, the function returns NULL."
>>
>> WLM: When I do this on the standby I get an error:
>>
>>    ERROR: recovery is in progress
>>
>>    HINT: WAL control functions cannot be ...
>
> What is the full hint message?
> The functions are supposed to be able to be run while the server is in
> recovery.
>
>> Is it possible to tell if streaming replication is in use under normal
>> operations?
>>
>> cp: cannot stat `/opt/postgres/9.1/archive/0007000F0057':
>> No such file or directory
>>
>> cp: cannot stat `/opt/postgres/9.1/archive/0007000F0057':
>> No such file or directory
>>
>> LOG: streaming replication successfully connected to primary
>>
>> FATAL: could not receive data from WAL stream: FATAL: requested WAL
>> segment 0007000F0057 has already been removed
>>
>> Assuming the above is from the standby log, correct? WLM: yes
>>
>> The cp lines would seem to indicate a restore_command in the standby
>> recovery.conf, is that the case?:
>>
>> http://www.postgresql.org/docs/9.1/static/archive-recovery-settings.html
>>
>> restore_command (string) WLM: Correct
>>
>> The FATAL indicates that the WAL file has already been recycled on the
>> master.
>>
>> WLM: I had read this; what confuses me a bit is:
>>
>>    cp: cannot stat `/opt/postgres/9.1/archive/0007000F0057':
>>    No such file or directory
>>
>> Does streaming replication automatically use archived WAL files when
>> the WAL xlog files don't contain a transaction?
>>
>> We did have wal_keep_segments set to 0. I changed this to 50 but
>> want to better understand this, especially the correlation between the
>> archived WALs and the xlog WALs. My guess is the difference between
>> streaming replication and the others is very simply that streaming
>> replication can read the xlog WALs as well? So if all the archived WALs
>> have been shipped and processed to the standby, then the xlogs are
>> processed but not shipped? Meaning at a transaction level, "kind of"?
>
> Not really, see the section below (#STREAMING-REPLICATION) I posted
> previously. It is either/or: if streaming is set up and the standby can
> reach the master xlog directory then it will stream the WAL files from
> there. If the standby cannot access the xlog directory and if you have WAL
> archiving set up on the master and archive restore set up on the standby,
> it will switch to full WAL log shipping from the archived WAL directory,
> assuming the files it needs are there.
>
>> See:
>>
>> http://www.postgresql.org/docs/9.1/interactive/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-MASTER
>>
>> "wal_keep_segments (integer)
>>
>> Specifies the minimum number of past log file segments kept in the
>> pg_xlog directory, in case a standby server needs to fetch them for
>> streaming replication. ..."
Re: [GENERAL] - PostgreSQL Replication Types
Almost forgot this:

SELECT pg_current_xlog_location();
ERROR: recovery is in progress
HINT: WAL control functions cannot be executed during recovery.

bms=> SELECT pg_current_xlog_location();
ERROR: recovery is in progress
HINT: WAL control functions cannot be executed during recovery.

On Thu, Dec 17, 2015 at 11:27 AM, Adrian Klaver wrote:
> On 12/17/2015 07:56 AM, Will McCormick wrote:
>
>> Thanks a ton for the prompt response. I've read most of this, but some
>> of it was not clear until we discussed.
>>
>> Updated with WLM:
>>
>> On 12/17/2015 07:17 AM, Will McCormick wrote:
>>
>> I inherited a 9.1 replication environment
>>
>> Few basic questions that I can't find clear answers / clarifications
>> for if possible:
>>
>> 3 types of replication in 9.1 I've read about from the official docs:
>>
>> 1) warm standby
>> 2) hot standby
>> 3) streaming replication
>>
>> I'm using streaming replication I believe, the only indication I have
>> is that there is the primary_conninfo on the standby. Is this the only
>> indication?
>>
>> WLM: I also see streaming replication in the logs.
>>
>> On standby:
>>
>> http://www.postgresql.org/docs/9.1/interactive/functions-admin.html
>> "
>> pg_last_xlog_receive_location()
>>
>> Get last transaction log location received and synced to disk by
>> streaming replication. While streaming replication is in progress this
>> will increase monotonically. If recovery has completed this will remain
>> static at the value of the last WAL record received and synced to disk
>> during recovery. If streaming replication is disabled, or if it has not
>> yet started, the function returns NULL."
>>
>> WLM: When I do this on the standby I get an error:
>>
>>    ERROR: recovery is in progress
>>
>>    HINT: WAL control functions cannot be ...
>
> What is the full hint message?
> The functions are supposed to be able to be run while the server is in
> recovery.
>
>> Is it possible to tell if streaming replication is in use under normal
>> operations?
>>
>> cp: cannot stat `/opt/postgres/9.1/archive/0007000F0057':
>> No such file or directory
>>
>> cp: cannot stat `/opt/postgres/9.1/archive/0007000F0057':
>> No such file or directory
>>
>> LOG: streaming replication successfully connected to primary
>>
>> FATAL: could not receive data from WAL stream: FATAL: requested WAL
>> segment 0007000F0057 has already been removed
>>
>> Assuming the above is from the standby log, correct? WLM: yes
>>
>> The cp lines would seem to indicate a restore_command in the standby
>> recovery.conf, is that the case?:
>>
>> http://www.postgresql.org/docs/9.1/static/archive-recovery-settings.html
>>
>> restore_command (string) WLM: Correct
>>
>> The FATAL indicates that the WAL file has already been recycled on the
>> master.
>>
>> WLM: I had read this; what confuses me a bit is:
>>
>>    cp: cannot stat `/opt/postgres/9.1/archive/0007000F0057':
>>    No such file or directory
>>
>> Does streaming replication automatically use archived WAL files when
>> the WAL xlog files don't contain a transaction?
>>
>> We did have wal_keep_segments set to 0. I changed this to 50 but
>> want to better understand this, especially the correlation between the
>> archived WALs and the xlog WALs. My guess is the difference between
>> streaming replication and the others is very simply that streaming
>> replication can read the xlog WALs as well? So if all the archived WALs
>> have been shipped and processed to the standby, then the xlogs are
>> processed but not shipped? Meaning at a transaction level, "kind of"?
>
> Not really, see the section below (#STREAMING-REPLICATION) I posted
> previously. It is either/or: if streaming is set up and the standby can
> reach the master xlog directory then it will stream the WAL files from
> there. If the standby cannot access the xlog directory and if you have WAL
> archiving set up on the master and archive restore set up on the standby,
> it will switch to full WAL log shipping from the archived WAL directory,
> assuming the files it needs are there.
>
>> See:
>>
>> http://www.postgresql.org/docs/9.1/interactive/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-MASTER
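The ERROR above is expected: pg_current_xlog_location() reports the primary's current WAL write position and is one of the functions that cannot run during recovery. The standby-side status functions from the docs quoted earlier do work while in recovery; a sketch:

    -- valid on a 9.1 standby while it is in recovery
    SELECT pg_is_in_recovery();                -- true on a standby
    SELECT pg_last_xlog_receive_location();    -- last WAL received/synced by streaming
    SELECT pg_last_xlog_replay_location();     -- last WAL replayed during recovery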
[GENERAL] BDR Alter table failing
Why does this not work? From what I read, only default values should cause an issue. I'm on release 9.4.4:

bms=# ALTER TABLE trap ALTER COLUMN trap_timestamp TYPE TIMESTAMP WITH TIME ZONE;
ERROR: ALTER TABLE ... ALTER COLUMN TYPE may only affect UNLOGGED or TEMPORARY tables when BDR is active; trap is a regular table
Re: [GENERAL] BDR Alter Table
Hi All,

And sorry about that damn thumb pad! Premature send!

On Wed, Apr 27, 2016 at 10:15 AM, Adrian Klaver wrote:
> On 04/27/2016 07:11 AM, Will McCormick wrote:
>
>> Why does this not work:
>
> Because it is NULL :)?
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
[GENERAL] BDR Alter Table
Why does this not work:
Re: [GENERAL] BDR Alter table failing
But this is the exact column definition that already exists on the table when I execute the statement. It's like it does not check the pre-existing state of the column. Our code is expecting a "column already exists" error, but this error is raised before that.

On Wed, Apr 27, 2016 at 10:21 AM, Adrian Klaver wrote:
> On 04/27/2016 07:13 AM, Will McCormick wrote:
>
>> Why does this not work? From what I read, only default values should
>> cause an issue. I'm on release 9.4.4:
>>
>> bms=# ALTER TABLE trap ALTER COLUMN trap_timestamp TYPE TIMESTAMP WITH
>> TIME ZONE;
>> ERROR: ALTER TABLE ... ALTER COLUMN TYPE may only affect UNLOGGED or
>> TEMPORARY tables when BDR is active; trap is a regular table
>
> http://bdr-project.org/docs/stable/ddl-replication-statements.html
>
> 8.2.3. DDL statements with restrictions
>
> ALTER TABLE
>
> Generally ALTER TABLE commands are allowed. There are however several
> sub-commands that are not supported, mainly those that perform a
> full-table re-write.
>
> ...
>
> ALTER COLUMN ... TYPE - changing a column's type is not supported. Changing
> a column in a way that doesn't require table rewrites may be supported at
> some point.
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
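Since ALTER COLUMN ... TYPE is blocked under BDR, one workaround sometimes used (not proposed in this thread; a hedged sketch with a hypothetical intermediate column name) is to avoid the table rewrite by adding a new column and swapping it in; note the UPDATE replicates as ordinary DML and may be heavy on a large table:

    -- sketch: emulate ALTER COLUMN ... TYPE without a full-table rewrite
    ALTER TABLE trap ADD COLUMN trap_timestamp_tmp TIMESTAMP WITH TIME ZONE;
    UPDATE trap SET trap_timestamp_tmp = trap_timestamp;
    ALTER TABLE trap DROP COLUMN trap_timestamp;
    ALTER TABLE trap RENAME COLUMN trap_timestamp_tmp TO trap_timestamp;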
Re: [GENERAL] BDR Alter table failing
I guess the only viable option would be to do the check explicitly ourselves, as in the sketch below.

On Wed, Apr 27, 2016 at 11:25 AM, Will McCormick wrote:
> But this is the exact column definition that already exists on the table
> when I execute the statement.
>
> It's like it does not check the pre-existing state of the column. Our code
> is expecting a "column already exists" error, but this error is raised
> before that.
>
> On Wed, Apr 27, 2016 at 10:21 AM, Adrian Klaver wrote:
>
>> On 04/27/2016 07:13 AM, Will McCormick wrote:
>>
>>> Why does this not work? From what I read, only default values should
>>> cause an issue. I'm on release 9.4.4:
>>>
>>> bms=# ALTER TABLE trap ALTER COLUMN trap_timestamp TYPE TIMESTAMP WITH
>>> TIME ZONE;
>>> ERROR: ALTER TABLE ... ALTER COLUMN TYPE may only affect UNLOGGED or
>>> TEMPORARY
>>> tables when BDR is active; trap is a regular table
>>
>> http://bdr-project.org/docs/stable/ddl-replication-statements.html
>>
>> 8.2.3. DDL statements with restrictions
>>
>> ALTER TABLE
>>
>> Generally ALTER TABLE commands are allowed. There are however several
>> sub-commands that are not supported, mainly those that perform a
>> full-table re-write.
>>
>> ...
>>
>> ALTER COLUMN ... TYPE - changing a column's type is not supported.
>> Changing a column in a way that doesn't require table rewrites may be
>> supported at some point.
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
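A minimal sketch of such an explicit pre-check, querying the catalog for the column's current type before attempting the ALTER (identifiers match the trap example above):

    -- skip the ALTER when the column already has the desired type
    SELECT data_type
    FROM information_schema.columns
    WHERE table_name = 'trap'
      AND column_name = 'trap_timestamp';
    -- returns 'timestamp with time zone' when no change is needed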
Re: [GENERAL] BDR Alter table failing
So if I wanted to extend a column from 100 characters to 255 characters, is this permitted? The fact that I'm not making an actual change and BDR still kicked me out makes me skeptical.

On Wed, Apr 27, 2016 at 11:56 AM, Craig Ringer wrote:
> On 27 April 2016 at 23:43, Alvaro Aguayo Garcia-Rada <
> aagu...@opensysperu.com> wrote:
>
>> Based on my experience, I can say BDR does not perform pre-DDL checks.
>> For example, if you try to CREATE TABLE with the name of an existing
>> table, BDR will acquire the lock anyway, and then fail when executing the
>> DDL statement on the first node, because the table already exists.
>
> Correct, and it has to, because otherwise it'd face a race condition where
> the table might be created between when it checked and when it tries to
> create it.
>
>> In your case, it's the same: BDR does not check (nor needs to) whether the
>> DDL statement is required, as that's a DBA duty. BDR executes the
>> statement (and acquires locks), and fails because it would require a full
>> table rewrite, which, at this time, is not supported by BDR.
>
> Yeah. This is more of a "we never thought anyone would want to do that and
> didn't much care" problem. In this case we could lock the table and then
> inspect it. In fact we really should be locking it to prevent races, but we
> rely on the global DDL lock mechanism for that right now. (That's not what
> it's for, just a side-effect that's kind of handy.)
>
> Applications *must* be adapted to run on BDR. You can't just point it at a
> BDR-enabled DB and go "we'll be right". DDL is one thing, but multimaster
> async replication conflicts are rather more significant concerns. Also
> handling of the currently somewhat quirky global sequence support's habit
> of ERRORing if you go too fast, trying to keep your transaction sizes down,
> and not trusting row locking for mutual exclusion between nodes. You can't
> use LISTEN/NOTIFY between nodes either, or advisory locking, or
> pg_largeobject ... yeah. Apps require an audit and usually require changes.
> Changing an expected error code will be the least of your worries.
>
> --
> Craig Ringer http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services
Re: [GENERAL] Thoughts on "Love Your Database"
A reason to consider may be portability. What happens if I want to let my customer choose their data store, or I just don't want to put all my eggs in one basket? Technically there are truths, but you cannot ignore the business side either. If we can exceed our performance requirements and keep things generic/portable, this is the best of both worlds. I think this is the main reason people separate the business logic from the database. How many of you have ported databases between platforms? Or had multiple types of data stores in the same company?

On Wed, May 4, 2016 at 12:11 AM, Guyren Howe wrote:
> I've long been frustrated with how most web developers I meet have no idea
> how to use an SQL database properly. I think I'm going to write a book
> called Love Your Database, aimed at web developers, that explains how to
> make their apps better by leveraging the power of SQL in general, and
> Postgres in particular.
>
> I'm thinking of a section on features of SQL most folks don't know about
> (CTEs are *way* to hell at the top of that list, but also EXCEPT/INTERSECT
> and window functions), but much of the book would be about how to do things
> server side. Benchmarks showing how much faster this can be, but mostly
> techniques — stored procedures/triggers/rules, views.
>
> I asked a colleague about the advice I often hear stated but seldom
> justified, that one shouldn't put business rules in the database. He
> offered that server-side code can be hard to debug.
>
> I'm sure many here would love to see such a book published, maybe some
> talks on the topic given.
>
> What might I cover that I haven't mentioned? What are the usual objections
> to server-side code and how can they be met? When *are* they justified and
> what should the criteria be to put code in Postgres? Any other thoughts?
> Any other websites or books on the topic I might consult?
>
> TIA
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Thoughts on "Love Your Database"
I agree it's typically political, but so are most things business. Examples: companies buy other companies - you are using a competitor's data store and want to replace it; a company needs to compete with competitors and wants to reduce cost ... these are not technical requirements, and it's certainly not vapor ideology. I have only worked for startups and have seen this happen at every company I have worked for; yes, it is political, but yes, it happens. Smaller companies are more susceptible to it.

The reality is somewhere in the middle, as it often is. My point is you don't have to replace a million lines of code if you plan upfront. If you don't ... you do.

On Wed, May 4, 2016 at 11:29 AM, Uwe Schroeder wrote:
> On Wed, May 04, 2016 11:05:25 AM Will McCormick wrote:
>
> A reason to consider may be portability. What happens if I want to let my
> customer choose their data store, or I just don't want to put all my eggs
> in one basket? Technically there are truths, but you cannot ignore the
> business side either. If we can exceed our performance requirements and
> keep things generic/portable, this is the best of both worlds. I think this
> is the main reason people separate the business logic from the database.
> How many of you have ported databases between platforms? Or had multiple
> types of data stores in the same company?
>
> I have been waiting for the portability argument for the last 20+ posts
> :-) Everyone who did any type of consulting/working in this field knows
> that, at the latest, the second argument from management is “portability”.
> I have yet to see anyone who really needed to move to a different database.
> If they did, it usually is a political issue and not a technical one (uhh,
> we hired this new operations manager and MS SQL is so much better than
> postgresql …) Unless you develop a generic software to be sold to many
> clients, the choice of data storage is rarely a real concern unless someone
> who feels the urge to be important starts throwing words he read in a
> magazine at management. None of my clients ever questioned the database.
> They either had one which they wanted to use (i.e. they had a big iron IBM
> with DB2 on it plus a DBA who knew what he was doing) or they didn't care
> as long as it worked as expected and came with a maintenance contract of
> sorts.
>
> If you're developing a web application, the least concern is portability
> (you're not going to replace the million lines of PHP either). The biggest
> concern should lie with data integrity and APIs that protect your data and
> business logic, so you can hire that cheap app development company from
> overseas your boss recommended (based on a spam email) without running the
> risk of compromising everything.
>
> Uwe
Re: [GENERAL] Thoughts on "Love Your Database"
I agree that it's not like turning on a light switch, and I'm not implying there isn't a logic layer between the database and the application. Based on my past experience, I would likely not put business logic in the database unless it was critical for performance. That does not make it portable, and does the performance of my product require it? It really comes down to the application; there is not one big paint brush. We have all been around and get this. I would not likely design a solution that had the database and the application layers both containing the business logic. I have seen this, and the unexpected behavior that follows as assumptions are made on both ends of that spectrum. I like to keep it simple where I can.

All this being said, I think database-minded folks should own the DAOs. If you're a database guy and you don't own the DAOs, you are missing an opportunity to really make a difference and get more aligned with your development staff. It doesn't matter what code base the DAOs are in; it's a repetitive pattern that any database person can pick up.

On Wed, May 4, 2016 at 12:29 PM, Szymon Lipiński wrote:
> On 4 May 2016 at 18:14, Will McCormick wrote:
>
>> I agree it's typically political, but so are most things business.
>> Examples: companies buy other companies - you are using a competitor's
>> data store and want to replace it; a company needs to compete with
>> competitors and wants to reduce cost ... these are not technical
>> requirements, and it's certainly not vapor ideology. I have only worked
>> for startups and have seen this happen at every company I have worked
>> for; yes, it is political, but yes, it happens. Smaller companies are
>> more susceptible to it.
>>
>> The reality is somewhere in the middle, as it often is. My point is you
>> don't have to replace a million lines of code if you plan upfront. If
>> you don't ... you do.
>>
>> On Wed, May 4, 2016 at 11:29 AM, Uwe Schroeder wrote:
>>
>>> On Wed, May 04, 2016 11:05:25 AM Will McCormick wrote:
>>>
>>> A reason to consider may be portability. What happens if I want to let
>>> my customer choose their data store, or I just don't want to put all my
>>> eggs in one basket? Technically there are truths, but you cannot ignore
>>> the business side either. If we can exceed our performance requirements
>>> and keep things generic/portable, this is the best of both worlds. I
>>> think this is the main reason people separate the business logic from
>>> the database. How many of you have ported databases between platforms?
>>> Or had multiple types of data stores in the same company?
>>>
>>> I have been waiting for the portability argument for the last 20+ posts
>>> :-) Everyone who did any type of consulting/working in this field knows
>>> that, at the latest, the second argument from management is
>>> “portability”. I have yet to see anyone who really needed to move to a
>>> different database. If they did, it usually is a political issue and not
>>> a technical one (uhh, we hired this new operations manager and MS SQL is
>>> so much better than postgresql …) Unless you develop a generic software
>>> to be sold to many clients, the choice of data storage is rarely a real
>>> concern unless someone who feels the urge to be important starts
>>> throwing words he read in a magazine at management. None of my clients
>>> ever questioned the database. They either had one which they wanted to
>>> use (i.e. they had a big iron IBM with DB2 on it plus a DBA who knew
>>> what he was doing) or they didn't care as long as it worked as expected
>>> and came with a maintenance contract of sorts.
>>>
>>> If you're developing a web application, the least concern is
>>> portability (you're not going to replace the million lines of PHP
>>> either). The biggest concern should lie with data integrity and APIs
>>> that protect your data and business logic, so you can hire that cheap
>>> app development company from overseas your boss recommended (based on a
>>> spam email) without running the risk of compromising everything.
>>>
>>> Uwe
>
> I'm not sure that when a company buys another company they can just
> migrate the other database without any logic layer. The data is usually
> useless without the business layer which tells how to merge all the parts
> together to have a simple answer to a question like "do we have this in
> stock". And for such a migration it's not too important whether we have
> the logic in the database or in some other layer. Of course it is always
> simpler to migrate a database treated like a CSV file, where all the logic
> (including constraints) is in an external application. But do we really
> want that?
>
> On the other hand, when I was trying to store all my logic in a database,
> there was just one thing that made me hate it. Testing. Testing the
> procedures inside the database was not easy, not fun, and too
> time-consuming.
>
> --
> regards Szymon Lipiński
Re: [GENERAL] Thoughts on "Love Your Database"
I 100% agree with you. It's always been a problem, but it is up to us to take ownership and provide value. Some would be surprised how simple it is to manage the data access layer once the framework is in place, regardless of what it is written in. For the same reasons you wouldn't typically have application developers configuring your production disks for high performance, why would you ever have them access the database inefficiently? There is an assumption that designers are good at SQL, or at least know it ... I challenge you to flip that around and learn the data access layer. Companies do not knowingly spend money on hardware to have it consumed by inefficient data access. No executive signs up to increase the TCO and reduce profit margins when they could be making more money. But this is far too often the case, and the root cause is that they did not have the right tool (pun not intended) for the job.

On Wed, May 4, 2016 at 1:33 PM, Szymon Lipiński wrote:
> On 4 May 2016 at 19:09, Will McCormick wrote:
>
>> I agree that it's not like turning on a light switch, and I'm not
>> implying there isn't a logic layer between the database and the
>> application. Based on my past experience, I would likely not put business
>> logic in the database unless it was critical for performance. That does
>> not make it portable, and does the performance of my product require it?
>> It really comes down to the application; there is not one big paint
>> brush. We have all been around and get this. I would not likely design a
>> solution that had the database and the application layers both containing
>> the business logic. I have seen this, and the unexpected behavior that
>> follows as assumptions are made on both ends of that spectrum. I like to
>> keep it simple where I can. All this being said, I think database-minded
>> folks should own the DAOs. If you're a database guy and you don't own the
>> DAOs, you are missing an opportunity to really make a difference and get
>> more aligned with your development staff. It doesn't matter what code
>> base the DAOs are in; it's a repetitive pattern that any database person
>> can pick up.
>>
>> On Wed, May 4, 2016 at 12:29 PM, Szymon Lipiński wrote:
>>
>>> On 4 May 2016 at 18:14, Will McCormick wrote:
>>>
>>>> I agree it's typically political, but so are most things business.
>>>> Examples: companies buy other companies - you are using a competitor's
>>>> data store and want to replace it; a company needs to compete with
>>>> competitors and wants to reduce cost ... these are not technical
>>>> requirements, and it's certainly not vapor ideology. I have only
>>>> worked for startups and have seen this happen at every company I have
>>>> worked for; yes, it is political, but yes, it happens. Smaller
>>>> companies are more susceptible to it.
>>>>
>>>> The reality is somewhere in the middle, as it often is. My point is
>>>> you don't have to replace a million lines of code if you plan upfront.
>>>> If you don't ... you do.
>>>>
>>>> On Wed, May 4, 2016 at 11:29 AM, Uwe Schroeder wrote:
>>>>
>>>>> On Wed, May 04, 2016 11:05:25 AM Will McCormick wrote:
>>>>>
>>>>> A reason to consider may be portability. What happens if I want to
>>>>> let my customer choose their data store, or I just don't want to put
>>>>> all my eggs in one basket? Technically there are truths, but you
>>>>> cannot ignore the business side either. If we can exceed our
>>>>> performance requirements and keep things generic/portable, this is
>>>>> the best of both worlds. I think this is the main reason people
>>>>> separate the business logic from the database. How many of you have
>>>>> ported databases between platforms? Or had multiple types of data
>>>>> stores in the same company?
>>>>>
>>>>> I have been waiting for the portability argument for the last 20+
>>>>> posts :-) Everyone who did any type of consulting/working in this
>>>>> field knows that, at the latest, the second argument from management
>>>>> is “portability”. I have yet to see anyone who really needed to move
>>>>> to a different database. If they did, it usually is a political issue
>>>>> and not a technical one (uhh, we hired this new operations manager
>>>>> and MS SQL is so much better than postgresql …) Unless you develop a
>>>>> generic software to be sold to many clients, the choice of data
>>>>> storage is rarely a real concern ...
Re: [GENERAL] Thoughts on "Love Your Database"
Yeah, but you're already paying for a developer ...

On Wed, May 4, 2016 at 2:36 PM, Szymon Lipiński wrote:
> On 4 May 2016 at 20:20, Will McCormick wrote:
>
>> I 100% agree with you. It's always been a problem, but it is up to us to
>> take ownership and provide value. Some would be surprised how simple it
>> is to manage the data access layer once the framework is in place,
>> regardless of what it is written in. For the same reasons you wouldn't
>> typically have application developers configuring your production disks
>> for high performance, why would you ever have them access the database
>> inefficiently? There is an assumption that designers are good at SQL, or
>> at least know it ... I challenge you to flip that around and learn the
>> data access layer. Companies do not knowingly spend money on hardware to
>> have it consumed by inefficient data access. No executive signs up to
>> increase the TCO and reduce profit margins when they could be making
>> more money. But this is far too often the case, and the root cause is
>> that they did not have the right tool (pun not intended) for the job.
>>
>> On Wed, May 4, 2016 at 1:33 PM, Szymon Lipiński wrote:
>>
>>> On 4 May 2016 at 19:09, Will McCormick wrote:
>>>
>>>> I agree that it's not like turning on a light switch, and I'm not
>>>> implying there isn't a logic layer between the database and the
>>>> application. Based on my past experience, I would likely not put
>>>> business logic in the database unless it was critical for performance.
>>>> That does not make it portable, and does the performance of my product
>>>> require it? It really comes down to the application; there is not one
>>>> big paint brush. We have all been around and get this. I would not
>>>> likely design a solution that had the database and the application
>>>> layers both containing the business logic. I have seen this, and the
>>>> unexpected behavior that follows as assumptions are made on both ends
>>>> of that spectrum. I like to keep it simple where I can. All this being
>>>> said, I think database-minded folks should own the DAOs. If you're a
>>>> database guy and you don't own the DAOs, you are missing an
>>>> opportunity to really make a difference and get more aligned with your
>>>> development staff. It doesn't matter what code base the DAOs are in;
>>>> it's a repetitive pattern that any database person can pick up.
>>>>
>>>> On Wed, May 4, 2016 at 12:29 PM, Szymon Lipiński wrote:
>>>>
>>>>> On 4 May 2016 at 18:14, Will McCormick wrote:
>>>>>
>>>>>> I agree it's typically political, but so are most things business.
>>>>>> Examples: companies buy other companies - you are using a
>>>>>> competitor's data store and want to replace it; a company needs to
>>>>>> compete with competitors and wants to reduce cost ... these are not
>>>>>> technical requirements, and it's certainly not vapor ideology. I
>>>>>> have only worked for startups and have seen this happen at every
>>>>>> company I have worked for; yes, it is political, but yes, it
>>>>>> happens. Smaller companies are more susceptible to it.
>>>>>>
>>>>>> The reality is somewhere in the middle, as it often is. My point is
>>>>>> you don't have to replace a million lines of code if you plan
>>>>>> upfront. If you don't ... you do.
>>>>>>
>>>>>> On Wed, May 4, 2016 at 11:29 AM, Uwe Schroeder wrote:
>>>>>>
>>>>>>> On Wed, May 04, 2016 11:05:25 AM Will McCormick wrote:
>>>>>>>
>>>>>>> A reason to consider may be portability. What happens if I want to
>>>>>>> let my customer choose their data store, or I just don't want to
>>>>>>> put all my eggs in one basket? Technically there are truths, but
>>>>>>> you cannot ignore the business side either. If we can exceed our
>>>>>>> performance requirements and keep things generic/portable, this is
>>>>>>> the best of both worlds. I think this is the main reason people
>>>>>>> separate the business logic from the database. ...