Re: Thomas Munro
> Thanks for testing! Would you mind trying v8 from that thread? V7
> had a silly bug (I accidentally deleted a 'case' label while cleaning
> some stuff up, resulting in the above error...)
v8 worked better. It succeeded a few times (at least 12, my screen
scrollback didn't catch more) before erroring like this:

[10:21:58.410](0.151s) ok 15 - startup deadlock: logfile contains terminated connection due to recovery conflict
[10:21:58.463](0.053s) not ok 16 - startup deadlock: stats show conflict on standby
[10:21:58.463](0.000s)
[10:21:58.463](0.000s) # Failed test 'startup deadlock: stats show conflict on standby'
# at t/031_recovery_conflict.pl line 332.
[10:21:58.463](0.000s) # got: '0'
# expected: '1'

Christoph
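(For reference, the check at t/031_recovery_conflict.pl line 332 appears to correspond to the stats probe visible in the standby log below; a minimal way to look at it by hand, assuming a psql session against the standby's test_db, would be

  SELECT confl_deadlock FROM pg_stat_database_conflicts WHERE datname='test_db';

which the test expects to return 1 once the deadlock conflict has been counted, but it still saw 0 here.)
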
2023-08-07 10:21:53.073 UTC [3132659] LOG: starting PostgreSQL 17devel (Debian 17~~devel-1) on s390x-ibm-linux-gnu, compiled by gcc (Debian 13.2.0-1) 13.2.0, 64-bit
2023-08-07 10:21:53.074 UTC [3132659] LOG: listening on Unix socket "/tmp/Ew0sehtBKm/.s.PGSQL.52786"
2023-08-07 10:21:53.075 UTC [3132664] LOG: database system was shut down at 2023-08-07 10:21:52 UTC
2023-08-07 10:21:53.086 UTC [3132659] LOG: database system is ready to accept connections
2023-08-07 10:21:53.523 UTC [3132690] 031_recovery_conflict.pl LOG: statement: CREATE TABLESPACE test_recovery_conflict_tblspc LOCATION ''
2023-08-07 10:21:53.536 UTC [3132700] 031_recovery_conflict.pl LOG: received replication command: SHOW data_directory_mode
2023-08-07 10:21:53.536 UTC [3132700] 031_recovery_conflict.pl STATEMENT: SHOW data_directory_mode
2023-08-07 10:21:53.536 UTC [3132700] 031_recovery_conflict.pl LOG: received replication command: SHOW wal_segment_size
2023-08-07 10:21:53.536 UTC [3132700] 031_recovery_conflict.pl STATEMENT: SHOW wal_segment_size
2023-08-07 10:21:53.536 UTC [3132700] 031_recovery_conflict.pl LOG: received replication command: IDENTIFY_SYSTEM
2023-08-07 10:21:53.536 UTC [3132700] 031_recovery_conflict.pl STATEMENT: IDENTIFY_SYSTEM
2023-08-07 10:21:53.536 UTC [3132700] 031_recovery_conflict.pl LOG: received replication command: BASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS, CHECKPOINT 'fast', WAIT 0, MANIFEST 'yes', TARGET 'client')
2023-08-07 10:21:53.536 UTC [3132700] 031_recovery_conflict.pl STATEMENT: BASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS, CHECKPOINT 'fast', WAIT 0, MANIFEST 'yes', TARGET 'client')
2023-08-07 10:21:53.542 UTC [3132662] LOG: checkpoint starting: immediate force wait
2023-08-07 10:21:53.548 UTC [3132662] LOG: checkpoint complete: wrote 7 buffers (5.5%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.001 s, sync=0.001 s, total=0.007 s; sync files=0, longest=0.000 s, average=0.000 s; distance=11350 kB, estimate=11350 kB; lsn=0/2000060, redo lsn=0/2000028
2023-08-07 10:21:53.561 UTC [3132701] 031_recovery_conflict.pl LOG: received replication command: SHOW data_directory_mode
2023-08-07 10:21:53.561 UTC [3132701] 031_recovery_conflict.pl STATEMENT: SHOW data_directory_mode
2023-08-07 10:21:53.566 UTC [3132701] 031_recovery_conflict.pl LOG: received replication command: CREATE_REPLICATION_SLOT "pg_basebackup_3132701" TEMPORARY PHYSICAL ( RESERVE_WAL)
2023-08-07 10:21:53.566 UTC [3132701] 031_recovery_conflict.pl STATEMENT: CREATE_REPLICATION_SLOT "pg_basebackup_3132701" TEMPORARY PHYSICAL ( RESERVE_WAL)
2023-08-07 10:21:53.576 UTC [3132701] 031_recovery_conflict.pl LOG: received replication command: IDENTIFY_SYSTEM
2023-08-07 10:21:53.576 UTC [3132701] 031_recovery_conflict.pl STATEMENT: IDENTIFY_SYSTEM
2023-08-07 10:21:53.581 UTC [3132701] 031_recovery_conflict.pl LOG: received replication command: START_REPLICATION SLOT "pg_basebackup_3132701" 0/2000000 TIMELINE 1
2023-08-07 10:21:53.581 UTC [3132701] 031_recovery_conflict.pl STATEMENT: START_REPLICATION SLOT "pg_basebackup_3132701" 0/2000000 TIMELINE 1
2023-08-07 10:21:53.774 UTC [3132700] 031_recovery_conflict.pl LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp3132700.0", size 137324
2023-08-07 10:21:53.774 UTC [3132700] 031_recovery_conflict.pl STATEMENT: BASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS, CHECKPOINT 'fast', WAIT 0, MANIFEST 'yes', TARGET 'client')
2023-08-07 10:21:54.341 UTC [3132754] standby LOG: received replication command: IDENTIFY_SYSTEM
2023-08-07 10:21:54.341 UTC [3132754] standby STATEMENT: IDENTIFY_SYSTEM
2023-08-07 10:21:54.348 UTC [3132754] standby LOG: received replication command: START_REPLICATION 0/3000000 TIMELINE 1
2023-08-07 10:21:54.348 UTC [3132754] standby STATEMENT: START_REPLICATION 0/3000000 TIMELINE 1
2023-08-07 10:21:54.361 UTC [3132760] 031_recovery_conflict.pl LOG: statement: CREATE DATABASE test_db
2023-08-07 10:21:54.395 UTC [3132770] 031_recovery_conflict.pl LOG: statement: CREATE TABLE test_recovery_conflict_table1(a int, b int);
2023-08-07 10:21:54.396 UTC [3132770] 031_recovery_conflict.pl LOG: statement: INSERT INTO test_recovery_conflict_table1 SELECT i % 3, 0 FROM generate_series(1,20) i;
2023-08-07 10:21:54.396 UTC [3132770] 031_recovery_conflict.pl LOG: statement: CREATE TABLE test_recovery_conflict_table2(a int, b int);
2023-08-07 10:21:54.419 UTC [3132776] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-07 10:21:54.436 UTC [3132788] 031_recovery_conflict.pl LOG: statement: SELECT '0/342B100' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-07 10:21:54.554 UTC [3132811] 031_recovery_conflict.pl LOG: statement: SELECT '0/342B100' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-07 10:21:54.581 UTC [3132817] 031_recovery_conflict.pl LOG: statement: BEGIN;
2023-08-07 10:21:54.585 UTC [3132817] 031_recovery_conflict.pl LOG: statement: INSERT INTO test_recovery_conflict_table1 VALUES (1,0);
2023-08-07 10:21:54.586 UTC [3132817] 031_recovery_conflict.pl LOG: statement: ROLLBACK;
2023-08-07 10:21:54.589 UTC [3132817] 031_recovery_conflict.pl LOG: statement: BEGIN;
2023-08-07 10:21:54.594 UTC [3132817] 031_recovery_conflict.pl LOG: statement: LOCK test_recovery_conflict_table1;
2023-08-07 10:21:54.596 UTC [3132817] 031_recovery_conflict.pl LOG: statement: COMMIT;
2023-08-07 10:21:54.609 UTC [3132820] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-07 10:21:54.624 UTC [3132823] 031_recovery_conflict.pl LOG: statement: SELECT '0/342B1C0' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-07 10:21:54.744 UTC [3132845] 031_recovery_conflict.pl LOG: statement: VACUUM test_recovery_conflict_table1;
2023-08-07 10:21:54.758 UTC [3132847] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-07 10:21:54.781 UTC [3132858] 031_recovery_conflict.pl LOG: statement: SELECT '0/3430000' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-07 10:21:54.900 UTC [3132883] 031_recovery_conflict.pl LOG: statement: SELECT '0/3430000' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-07 10:21:55.045 UTC [3132890] 031_recovery_conflict.pl LOG: statement: SELECT '0/3430000' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-07 10:21:55.243 UTC [3132918] 031_recovery_conflict.pl LOG: statement: INSERT INTO test_recovery_conflict_table1 SELECT i, 0 FROM generate_series(1,20) i
2023-08-07 10:21:55.269 UTC [3132920] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-07 10:21:55.374 UTC [3132931] 031_recovery_conflict.pl LOG: statement: SELECT '0/34339C8' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-07 10:21:55.433 UTC [3132934] 031_recovery_conflict.pl LOG: statement: UPDATE test_recovery_conflict_table1 SET a = a + 1 WHERE a > 2;
2023-08-07 10:21:55.444 UTC [3132943] 031_recovery_conflict.pl LOG: statement: VACUUM test_recovery_conflict_table1;
2023-08-07 10:21:55.453 UTC [3132945] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-07 10:21:55.466 UTC [3132948] 031_recovery_conflict.pl LOG: statement: SELECT '0/3434000' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-07 10:21:55.582 UTC [3132955] 031_recovery_conflict.pl LOG: statement: SELECT '0/3434000' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-07 10:21:55.854 UTC [3132962] 031_recovery_conflict.pl LOG: statement: SELECT '0/3434000' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-07 10:21:56.204 UTC [3133012] 031_recovery_conflict.pl LOG: statement: DROP TABLE test_recovery_conflict_table1;
2023-08-07 10:21:56.222 UTC [3133014] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-07 10:21:56.234 UTC [3133018] 031_recovery_conflict.pl LOG: statement: SELECT '0/3434630' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-07 10:21:56.360 UTC [3133036] 031_recovery_conflict.pl LOG: statement: SELECT '0/3434630' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-07 10:21:56.435 UTC [3133053] 031_recovery_conflict.pl LOG: statement: DROP TABLESPACE test_recovery_conflict_tblspc;
2023-08-07 10:21:56.452 UTC [3133055] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-07 10:21:56.471 UTC [3133059] 031_recovery_conflict.pl LOG: statement: SELECT '0/34346D0' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-07 10:21:56.771 UTC [3133086] 031_recovery_conflict.pl LOG: statement: SELECT '0/34346D0' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-07 10:21:57.809 UTC [3133191] standby LOG: received replication command: IDENTIFY_SYSTEM
2023-08-07 10:21:57.809 UTC [3133191] standby STATEMENT: IDENTIFY_SYSTEM
2023-08-07 10:21:57.809 UTC [3133191] standby LOG: received replication command: START_REPLICATION 0/3000000 TIMELINE 1
2023-08-07 10:21:57.809 UTC [3133191] standby STATEMENT: START_REPLICATION 0/3000000 TIMELINE 1
2023-08-07 10:21:58.025 UTC [3133220] 031_recovery_conflict.pl LOG: statement: CREATE TABLE test_recovery_conflict_table1(a int, b int);
2023-08-07 10:21:58.026 UTC [3133220] 031_recovery_conflict.pl LOG: statement: INSERT INTO test_recovery_conflict_table1 VALUES (1);
2023-08-07 10:21:58.026 UTC [3133220] 031_recovery_conflict.pl LOG: statement: BEGIN;
2023-08-07 10:21:58.026 UTC [3133220] 031_recovery_conflict.pl LOG: statement: INSERT INTO test_recovery_conflict_table1(a) SELECT generate_series(1, 100) i;
2023-08-07 10:21:58.026 UTC [3133220] 031_recovery_conflict.pl LOG: statement: ROLLBACK;
2023-08-07 10:21:58.026 UTC [3133220] 031_recovery_conflict.pl LOG: statement: BEGIN;
2023-08-07 10:21:58.026 UTC [3133220] 031_recovery_conflict.pl LOG: statement: LOCK TABLE test_recovery_conflict_table2;
2023-08-07 10:21:58.026 UTC [3133220] 031_recovery_conflict.pl LOG: statement: PREPARE TRANSACTION 'lock';
2023-08-07 10:21:58.026 UTC [3133220] 031_recovery_conflict.pl LOG: statement: INSERT INTO test_recovery_conflict_table1(a) VALUES (170);
2023-08-07 10:21:58.026 UTC [3133220] 031_recovery_conflict.pl LOG: statement: SELECT txid_current();
2023-08-07 10:21:58.199 UTC [3133223] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-07 10:21:58.221 UTC [3133238] 031_recovery_conflict.pl LOG: statement: SELECT '0/3437418' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-07 10:21:58.270 UTC [3133252] 031_recovery_conflict.pl LOG: statement: VACUUM test_recovery_conflict_table1;
2023-08-07 10:21:58.288 UTC [3133256] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-07 10:21:58.304 UTC [3133260] 031_recovery_conflict.pl LOG: statement: SELECT '0/3437418' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-07 10:21:58.480 UTC [3133289] 031_recovery_conflict.pl LOG: statement: ROLLBACK PREPARED 'lock';
2023-08-07 10:22:00.092 UTC [3133521] standby LOG: received replication command: IDENTIFY_SYSTEM
2023-08-07 10:22:00.092 UTC [3133521] standby STATEMENT: IDENTIFY_SYSTEM
2023-08-07 10:22:00.261 UTC [3133521] standby LOG: received replication command: START_REPLICATION 0/3000000 TIMELINE 1
2023-08-07 10:22:00.261 UTC [3133521] standby STATEMENT: START_REPLICATION 0/3000000 TIMELINE 1
2023-08-07 10:22:00.314 UTC [3133545] 031_recovery_conflict.pl LOG: statement: DROP DATABASE test_db;
2023-08-07 10:22:00.314 UTC [3132662] LOG: checkpoint starting: immediate force wait
2023-08-07 10:22:00.315 UTC [3132662] LOG: checkpoint complete: wrote 2 buffers (1.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.001 s, sync=0.001 s, total=0.001 s; sync files=0, longest=0.000 s, average=0.000 s; distance=20726 kB, estimate=20726 kB; lsn=0/343D970, redo lsn=0/343D938
2023-08-07 10:22:00.464 UTC [3133564] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-07 10:22:00.502 UTC [3133571] 031_recovery_conflict.pl LOG: statement: SELECT '0/343DBF0' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-07 10:22:00.622 UTC [3132659] LOG: received fast shutdown request
2023-08-07 10:22:00.622 UTC [3132659] LOG: aborting any active transactions
2023-08-07 10:22:00.628 UTC [3132659] LOG: background worker "logical replication launcher" (PID 3132670) exited with exit code 1
2023-08-07 10:22:00.650 UTC [3132662] LOG: shutting down
2023-08-07 10:22:00.650 UTC [3132662] LOG: checkpoint starting: shutdown immediate
2023-08-07 10:22:00.650 UTC [3132662] LOG: checkpoint complete: wrote 2 buffers (1.6%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.001 s; sync files=0, longest=0.000 s, average=0.000 s; distance=0 kB, estimate=18653 kB; lsn=0/343DBF0, redo lsn=0/343DBF0
2023-08-07 10:22:00.652 UTC [3132659] LOG: database system is shut down

2023-08-07 10:21:54.157 UTC [3132749] LOG: starting PostgreSQL 17devel (Debian 17~~devel-1) on s390x-ibm-linux-gnu, compiled by gcc (Debian 13.2.0-1) 13.2.0, 64-bit
2023-08-07 10:21:54.157 UTC [3132749] LOG: listening on Unix socket "/tmp/Ew0sehtBKm/.s.PGSQL.52787"
2023-08-07 10:21:54.160 UTC [3132752] LOG: database system was interrupted; last known up at 2023-08-07 10:21:53 UTC
2023-08-07 10:21:54.160 UTC [3132752] LOG: entering standby mode
2023-08-07 10:21:54.161 UTC [3132752] LOG: redo starts at 0/2000028
2023-08-07 10:21:54.161 UTC [3132752] WARNING: unexpected directory entry "16384" found in pg_tblspc/
2023-08-07 10:21:54.161 UTC [3132752] DETAIL: All directory entries in pg_tblspc/ should be symbolic links.
2023-08-07 10:21:54.161 UTC [3132752] HINT: Remove those directories, or set allow_in_place_tablespaces to ON transiently to let recovery complete.
2023-08-07 10:21:54.161 UTC [3132752] LOG: consistent recovery state reached at 0/2000100
2023-08-07 10:21:54.161 UTC [3132749] LOG: database system is ready to accept read-only connections
2023-08-07 10:21:54.349 UTC [3132753] LOG: started streaming WAL from primary at 0/3000000 on timeline 1
2023-08-07 10:21:54.635 UTC [3132815] 031_recovery_conflict.pl LOG: statement: BEGIN;
2023-08-07 10:21:54.720 UTC [3132815] 031_recovery_conflict.pl LOG: statement: DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM test_recovery_conflict_table1;
2023-08-07 10:21:54.721 UTC [3132815] 031_recovery_conflict.pl LOG: statement: FETCH FORWARD FROM test_recovery_conflict_cursor;
2023-08-07 10:21:54.733 UTC [3132815] 031_recovery_conflict.pl LOG: statement: ;
2023-08-07 10:21:54.941 UTC [3132752] LOG: recovery still waiting after 10.021 ms: recovery conflict on buffer pin
2023-08-07 10:21:54.941 UTC [3132752] CONTEXT: WAL redo at 0/342B1C0 for Heap2/PRUNE: snapshotConflictHorizon: 0, nredirected: 0, ndead: 1, nunused: 0, redirected: [], dead: [21], unused: []; blkref #0: rel 1663/16385/16386, blk 0
2023-08-07 10:21:55.000 UTC [3132815] 031_recovery_conflict.pl FATAL: terminating connection due to conflict with recovery
2023-08-07 10:21:55.000 UTC [3132815] 031_recovery_conflict.pl DETAIL: User was holding shared buffer pin for too long.
2023-08-07 10:21:55.000 UTC [3132815] 031_recovery_conflict.pl HINT: In a moment you should be able to reconnect to the database and repeat your command.
2023-08-07 10:21:55.000 UTC [3132752] LOG: recovery finished waiting after 69.577 ms: recovery conflict on buffer pin
2023-08-07 10:21:55.000 UTC [3132752] CONTEXT: WAL redo at 0/342B1C0 for Heap2/PRUNE: snapshotConflictHorizon: 0, nredirected: 0, ndead: 1, nunused: 0, redirected: [], dead: [21], unused: []; blkref #0: rel 1663/16385/16386, blk 0
2023-08-07 10:21:55.222 UTC [3132913] 031_recovery_conflict.pl LOG: statement: SELECT confl_bufferpin FROM pg_stat_database_conflicts WHERE datname='test_db';
2023-08-07 10:21:55.379 UTC [3132910] 031_recovery_conflict.pl LOG: statement: BEGIN;
2023-08-07 10:21:55.379 UTC [3132910] 031_recovery_conflict.pl LOG: statement: DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM test_recovery_conflict_table1;
2023-08-07 10:21:55.380 UTC [3132910] 031_recovery_conflict.pl LOG: statement: FETCH FORWARD FROM test_recovery_conflict_cursor;
2023-08-07 10:21:55.381 UTC [3132910] 031_recovery_conflict.pl LOG: statement: ;
2023-08-07 10:21:55.460 UTC [3132752] LOG: recovery still waiting after 15.223 ms: recovery conflict on snapshot
2023-08-07 10:21:55.460 UTC [3132752] DETAIL: Conflicting process: 3132910.
2023-08-07 10:21:55.460 UTC [3132752] CONTEXT: WAL redo at 0/3433F00 for Heap2/PRUNE: snapshotConflictHorizon: 738, nredirected: 18, ndead: 0, nunused: 0, redirected: [23->41, 24->42, 25->43, 26->44, 27->45, 28->46, 29->47, 30->48, 31->49, 32->50, 33->51, 34->52, 35->53, 36->54, 37->55, 38->56, 39->57, 40->58], dead: [], unused: []; blkref #0: rel 1663/16385/16386, blk 0
2023-08-07 10:21:55.508 UTC [3132910] 031_recovery_conflict.pl FATAL: terminating connection due to conflict with recovery
2023-08-07 10:21:55.508 UTC [3132910] 031_recovery_conflict.pl DETAIL: User query might have needed to see row versions that must be removed.
2023-08-07 10:21:55.508 UTC [3132910] 031_recovery_conflict.pl HINT: In a moment you should be able to reconnect to the database and repeat your command.
2023-08-07 10:21:55.513 UTC [3132752] LOG: recovery finished waiting after 68.437 ms: recovery conflict on snapshot
2023-08-07 10:21:55.513 UTC [3132752] CONTEXT: WAL redo at 0/3433F00 for Heap2/PRUNE: snapshotConflictHorizon: 738, nredirected: 18, ndead: 0, nunused: 0, redirected: [23->41, 24->42, 25->43, 26->44, 27->45, 28->46, 29->47, 30->48, 31->49, 32->50, 33->51, 34->52, 35->53, 36->54, 37->55, 38->56, 39->57, 40->58], dead: [], unused: []; blkref #0: rel 1663/16385/16386, blk 0
2023-08-07 10:21:56.182 UTC [3133010] 031_recovery_conflict.pl LOG: statement: SELECT confl_snapshot FROM pg_stat_database_conflicts WHERE datname='test_db';
2023-08-07 10:21:56.187 UTC [3133007] 031_recovery_conflict.pl LOG: statement: BEGIN;
2023-08-07 10:21:56.187 UTC [3133007] 031_recovery_conflict.pl LOG: statement: LOCK TABLE test_recovery_conflict_table1 IN ACCESS SHARE MODE;
2023-08-07 10:21:56.187 UTC [3133007] 031_recovery_conflict.pl LOG: statement: SELECT 1;
2023-08-07 10:21:56.188 UTC [3133007] 031_recovery_conflict.pl LOG: statement: ;
2023-08-07 10:21:56.215 UTC [3132752] LOG: recovery still waiting after 10.705 ms: recovery conflict on lock
2023-08-07 10:21:56.215 UTC [3132752] DETAIL: Conflicting process: 3133007.
2023-08-07 10:21:56.215 UTC [3132752] CONTEXT: WAL redo at 0/34340F8 for Standby/LOCK: xid 739 db 16385 rel 16386
2023-08-07 10:21:56.262 UTC [3133007] 031_recovery_conflict.pl FATAL: terminating connection due to conflict with recovery
2023-08-07 10:21:56.262 UTC [3133007] 031_recovery_conflict.pl DETAIL: User was holding a relation lock for too long.
2023-08-07 10:21:56.262 UTC [3133007] 031_recovery_conflict.pl HINT: In a moment you should be able to reconnect to the database and repeat your command.
2023-08-07 10:21:56.270 UTC [3132752] LOG: recovery finished waiting after 65.579 ms: recovery conflict on lock
2023-08-07 10:21:56.270 UTC [3132752] CONTEXT: WAL redo at 0/34340F8 for Standby/LOCK: xid 739 db 16385 rel 16386
2023-08-07 10:21:56.410 UTC [3133047] 031_recovery_conflict.pl LOG: statement: SELECT confl_lock FROM pg_stat_database_conflicts WHERE datname='test_db';
2023-08-07 10:21:56.418 UTC [3133041] 031_recovery_conflict.pl LOG: statement: BEGIN;
2023-08-07 10:21:56.418 UTC [3133041] 031_recovery_conflict.pl LOG: statement: SET work_mem = '64kB';
2023-08-07 10:21:56.418 UTC [3133041] 031_recovery_conflict.pl LOG: statement: DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT count(*) FROM generate_series(1,6000);
2023-08-07 10:21:56.418 UTC [3133041] 031_recovery_conflict.pl LOG: statement: FETCH FORWARD FROM test_recovery_conflict_cursor;
2023-08-07 10:21:56.419 UTC [3133041] 031_recovery_conflict.pl LOG: statement: ;
2023-08-07 10:21:56.440 UTC [3132752] LOG: could not remove directory "pg_tblspc/16384/PG_17_202307261/pgsql_tmp": Directory not empty
2023-08-07 10:21:56.440 UTC [3132752] CONTEXT: WAL redo at 0/3434668 for Tablespace/DROP: 16384
2023-08-07 10:21:56.440 UTC [3132752] LOG: could not remove directory "pg_tblspc/16384/PG_17_202307261": Directory not empty
2023-08-07 10:21:56.440 UTC [3132752] CONTEXT: WAL redo at 0/3434668 for Tablespace/DROP: 16384
2023-08-07 10:21:56.455 UTC [3132752] LOG: recovery still waiting after 15.222 ms: recovery conflict on tablespace
2023-08-07 10:21:56.455 UTC [3132752] DETAIL: Conflicting process: 3133041.
2023-08-07 10:21:56.455 UTC [3132752] CONTEXT: WAL redo at 0/3434668 for Tablespace/DROP: 16384
2023-08-07 10:21:56.561 UTC [3133041] 031_recovery_conflict.pl FATAL: terminating connection due to conflict with recovery
2023-08-07 10:21:56.561 UTC [3133041] 031_recovery_conflict.pl DETAIL: User was or might have been using tablespace that must be dropped.
2023-08-07 10:21:56.561 UTC [3133041] 031_recovery_conflict.pl HINT: In a moment you should be able to reconnect to the database and repeat your command.
2023-08-07 10:21:56.561 UTC [3133041] 031_recovery_conflict.pl LOG: temporary file: path "pg_tblspc/16384/PG_17_202307261/pgsql_tmp/pgsql_tmp3133041.0", size 84000
2023-08-07 10:21:56.566 UTC [3132752] LOG: recovery finished waiting after 125.361 ms: recovery conflict on tablespace
2023-08-07 10:21:56.566 UTC [3132752] CONTEXT: WAL redo at 0/3434668 for Tablespace/DROP: 16384
2023-08-07 10:21:57.074 UTC [3133136] 031_recovery_conflict.pl LOG: statement: SELECT confl_tablespace FROM pg_stat_database_conflicts WHERE datname='test_db';
2023-08-07 10:21:57.279 UTC [3132749] LOG: received fast shutdown request
2023-08-07 10:21:57.279 UTC [3132749] LOG: aborting any active transactions
2023-08-07 10:21:57.279 UTC [3133128] 031_recovery_conflict.pl FATAL: terminating connection due to administrator command
2023-08-07 10:21:57.284 UTC [3132753] FATAL: terminating walreceiver process due to administrator command
2023-08-07 10:21:57.370 UTC [3132750] LOG: shutting down
2023-08-07 10:21:57.489 UTC [3132749] LOG: database system is shut down
2023-08-07 10:21:57.595 UTC [3133177] LOG: starting PostgreSQL 17devel (Debian 17~~devel-1) on s390x-ibm-linux-gnu, compiled by gcc (Debian 13.2.0-1) 13.2.0, 64-bit
2023-08-07 10:21:57.595 UTC [3133177] LOG: listening on Unix socket "/tmp/Ew0sehtBKm/.s.PGSQL.52787"
2023-08-07 10:21:57.604 UTC [3133180] LOG: database system was shut down in recovery at 2023-08-07 10:21:57 UTC
2023-08-07 10:21:57.604 UTC [3133180] LOG: entering standby mode
2023-08-07 10:21:57.605 UTC [3133180] LOG: redo starts at 0/2000028
2023-08-07 10:21:57.803 UTC [3133180] LOG: could not stat file "pg_tblspc/16384": No such file or directory
2023-08-07 10:21:57.803 UTC [3133180] CONTEXT: WAL redo at 0/3434668 for Tablespace/DROP: 16384
2023-08-07 10:21:57.803 UTC [3133180] LOG: consistent recovery state reached at 0/34346D0
2023-08-07 10:21:57.803 UTC [3133180] LOG: invalid record length at 0/34346D0: expected at least 24, got 0
2023-08-07 10:21:57.803 UTC [3133177] LOG: database system is ready to accept read-only connections
2023-08-07 10:21:57.809 UTC [3133188] LOG: started streaming WAL from primary at 0/3000000 on timeline 1
2023-08-07 10:21:58.225 UTC [3133215] 031_recovery_conflict.pl LOG: statement: BEGIN;
2023-08-07 10:21:58.225 UTC [3133215] 031_recovery_conflict.pl LOG: statement: DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT a FROM test_recovery_conflict_table1;
2023-08-07 10:21:58.226 UTC [3133215] 031_recovery_conflict.pl LOG: statement: FETCH FORWARD FROM test_recovery_conflict_cursor;
2023-08-07 10:21:58.231 UTC [3133215] 031_recovery_conflict.pl LOG: statement: SELECT * FROM test_recovery_conflict_table2;
2023-08-07 10:21:58.252 UTC [3133247] 031_recovery_conflict.pl LOG: statement: SELECT 'waiting' FROM pg_locks WHERE locktype = 'relation' AND NOT granted;
2023-08-07 10:21:58.314 UTC [3133180] LOG: recovery still waiting after 10.016 ms: recovery conflict on buffer pin
2023-08-07 10:21:58.314 UTC [3133180] CONTEXT: WAL redo at 0/3437418 for Heap2/PRUNE: snapshotConflictHorizon: 0, nredirected: 0, ndead: 100, nunused: 0, redirected: [], dead: [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101], unused: []; blkref #0: rel 1663/16385/16392, blk 0
2023-08-07 10:21:58.324 UTC [3133215] 031_recovery_conflict.pl ERROR: canceling statement due to conflict with recovery at character 15
2023-08-07 10:21:58.324 UTC [3133215] 031_recovery_conflict.pl DETAIL: User transaction caused buffer deadlock with recovery.
2023-08-07 10:21:58.324 UTC [3133215] 031_recovery_conflict.pl STATEMENT: SELECT * FROM test_recovery_conflict_table2;
2023-08-07 10:21:58.324 UTC [3133180] LOG: recovery finished waiting after 20.102 ms: recovery conflict on buffer pin
2023-08-07 10:21:58.324 UTC [3133180] CONTEXT: WAL redo at 0/3437418 for Heap2/PRUNE: snapshotConflictHorizon: 0, nredirected: 0, ndead: 100, nunused: 0, redirected: [], dead: [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101], unused: []; blkref #0: rel 1663/16385/16392, blk 0
2023-08-07 10:21:58.455 UTC [3133284] 031_recovery_conflict.pl LOG: statement: SELECT confl_deadlock FROM pg_stat_database_conflicts WHERE datname='test_db';
2023-08-07 10:21:58.502 UTC [3133177] LOG: received fast shutdown request
2023-08-07 10:21:58.502 UTC [3133177] LOG: aborting any active transactions
2023-08-07 10:21:58.504 UTC [3133188] FATAL: terminating walreceiver process due to administrator command
2023-08-07 10:21:58.507 UTC [3133282] 031_recovery_conflict.pl FATAL: terminating connection due to administrator command
2023-08-07 10:21:58.536 UTC [3133215] 031_recovery_conflict.pl FATAL: terminating connection due to administrator command
2023-08-07 10:21:58.536 UTC [3133215] 031_recovery_conflict.pl LOG: could not send data to client: Broken pipe
2023-08-07 10:21:58.543 UTC [3133178] LOG: shutting down
2023-08-07 10:21:58.545 UTC [3133177] LOG: database system is shut down
2023-08-07 10:21:58.618 UTC [3133299] LOG: starting PostgreSQL 17devel (Debian 17~~devel-1) on s390x-ibm-linux-gnu, compiled by gcc (Debian 13.2.0-1) 13.2.0, 64-bit
2023-08-07 10:21:58.618 UTC [3133299] LOG: listening on Unix socket "/tmp/Ew0sehtBKm/.s.PGSQL.52787"
2023-08-07 10:21:59.894 UTC [3133495] LOG: database system was shut down in recovery at 2023-08-07 10:21:58 UTC
2023-08-07 10:21:59.894 UTC [3133495] LOG: entering standby mode
2023-08-07 10:22:00.050 UTC [3133495] LOG: redo starts at 0/2000028
2023-08-07 10:22:00.066 UTC [3133495] LOG: could not stat file "pg_tblspc/16384": No such file or directory
2023-08-07 10:22:00.066 UTC [3133495] CONTEXT: WAL redo at 0/3434668 for Tablespace/DROP: 16384
2023-08-07 10:22:00.066 UTC [3133495] LOG: consistent recovery state reached at 0/343D850
2023-08-07 10:22:00.066 UTC [3133495] LOG: invalid record length at 0/343D850: expected at least 24, got 0
2023-08-07 10:22:00.068 UTC [3133299] LOG: database system is ready to accept read-only connections
2023-08-07 10:22:00.263 UTC [3133520] LOG: started streaming WAL from primary at 0/3000000 on timeline 1
2023-08-07 10:22:00.295 UTC [3133543] 031_recovery_conflict.pl LOG: statement: SELECT conflicts FROM pg_stat_database WHERE datname='test_db';
2023-08-07 10:22:00.453 UTC [3133533] 031_recovery_conflict.pl FATAL: terminating connection due to conflict with recovery
2023-08-07 10:22:00.453 UTC [3133533] 031_recovery_conflict.pl DETAIL: User was connected to a database that must be dropped.
2023-08-07 10:22:00.453 UTC [3133533] 031_recovery_conflict.pl HINT: In a moment you should be able to reconnect to the database and repeat your command.
2023-08-07 10:22:00.516 UTC [3133299] LOG: received fast shutdown request
2023-08-07 10:22:00.516 UTC [3133299] LOG: aborting any active transactions
2023-08-07 10:22:00.519 UTC [3133520] FATAL: terminating walreceiver process due to administrator command
2023-08-07 10:22:00.521 UTC [3133493] LOG: shutting down
2023-08-07 10:22:00.521 UTC [3133493] LOG: restartpoint starting: shutdown immediate
2023-08-07 10:22:00.521 UTC [3133493] LOG: restartpoint complete: wrote 4 buffers (3.1%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.001 s, sync=0.001 s, total=0.001 s; sync files=0, longest=0.000 s, average=0.000 s; distance=20726 kB, estimate=20726 kB; lsn=0/343D970, redo lsn=0/343D938
2023-08-07 10:22:00.521 UTC [3133493] LOG: recovery restart point at 0/343D938
2023-08-07 10:22:00.521 UTC [3133493] DETAIL: Last completed transaction was at log time 2023-08-07 10:22:00.450184+00.
2023-08-07 10:22:00.523 UTC [3133299] LOG: database system is shut down

# Checking port 52786
# Found port 52786
Name: primary
Data directory: /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata
Backup directory: /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/backup
Archive directory: /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/archives
Connection string: port=52786 host=/tmp/Ew0sehtBKm
Log file: /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/log/031_recovery_conflict_primary.log
# Running: initdb -D /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata -A trust -N
The files belonging to this database system will be owned by user "myon".
This user must also own the server process.

The database cluster will be initialized with this locale configuration:
provider: libc
LC_COLLATE: en_US.utf8
LC_CTYPE: en_US.utf8
LC_MESSAGES: C
LC_MONETARY: en_US.utf8
LC_NUMERIC: en_US.utf8
LC_TIME: en_US.utf8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok

Sync to disk skipped.
The data directory might become corrupt if the operating system crashes.

Success. You can now start the database server using:

pg_ctl -D /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata -l logfile start

# Running: /home/myon/postgresql/pg/postgresql/build/src/test/recovery/../../../src/test/regress/pg_regress --config-auth /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata
### Starting node "primary"
# Running: pg_ctl -w -D /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata -l /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/log/031_recovery_conflict_primary.log -o --cluster-name=primary start
waiting for server to start.... done
server started
# Postmaster PID for node "primary" is 3132659
# Taking pg_basebackup my_backup from node "primary"
# Running: pg_basebackup -D /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/backup/my_backup -h /tmp/Ew0sehtBKm -p 52786 --checkpoint fast --no-sync
# Backup finished
# Checking port 52787
# Found port 52787
Name: standby
Data directory: /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/pgdata
Backup directory: /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/backup
Archive directory: /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/archives
Connection string: port=52787 host=/tmp/Ew0sehtBKm
Log file: /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/log/031_recovery_conflict_standby.log
# Initializing node "standby" from backup "my_backup" of node "primary"
### Enabling streaming replication for node "standby"
### Starting node "standby"
# Running: pg_ctl -w -D /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/pgdata -l /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/log/031_recovery_conflict_standby.log -o --cluster-name=standby start
waiting for server to start.... done
server started
# Postmaster PID for node "standby" is 3132749
Waiting for replication conn standby's replay_lsn to pass 0/342B100 on primary
done
Waiting for replication conn standby's replay_lsn to pass 0/342B1C0 on primary
done
[10:21:54.634](3.411s) # issuing query via background psql:
# BEGIN;
# DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM test_recovery_conflict_table1;
# FETCH FORWARD FROM test_recovery_conflict_cursor;
[10:21:54.734](0.101s) ok 1 - buffer pin conflict: cursor with conflicting pin established
Waiting for replication conn standby's replay_lsn to pass 0/3430000 on primary
done
[10:21:55.048](0.314s) ok 2 - buffer pin conflict: logfile contains terminated connection due to recovery conflict
[10:21:55.226](0.178s) ok 3 - buffer pin conflict: stats show conflict on standby
Waiting for replication conn standby's replay_lsn to pass 0/34339C8 on primary
done
[10:21:55.378](0.152s) # issuing query via background psql:
# BEGIN;
# DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM test_recovery_conflict_table1;
# FETCH FORWARD FROM test_recovery_conflict_cursor;
#
[10:21:55.382](0.005s) ok 4 - snapshot conflict: cursor with conflicting snapshot established
Waiting for replication conn standby's replay_lsn to pass 0/3434000 on primary
done
[10:21:56.136](0.753s) ok 5 - snapshot conflict: logfile contains terminated connection due to recovery conflict
[10:21:56.187](0.051s) ok 6 - snapshot conflict: stats show conflict on standby
[10:21:56.187](0.000s) # issuing query via background psql:
# BEGIN;
# LOCK TABLE test_recovery_conflict_table1 IN ACCESS SHARE MODE;
# SELECT 1;
#
[10:21:56.188](0.001s) ok 7 - lock conflict: conflicting lock acquired
Waiting for replication conn standby's replay_lsn to pass 0/3434630 on primary
done
[10:21:56.364](0.176s) ok 8 - lock conflict: logfile contains terminated connection due to recovery conflict
[10:21:56.415](0.051s) ok 9 - lock conflict: stats show conflict on standby
[10:21:56.416](0.000s) # issuing query via background psql:
# BEGIN;
# SET work_mem = '64kB';
# DECLARE test_recovery_conflict_cursor CURSOR FOR
# SELECT count(*) FROM generate_series(1,6000);
# FETCH FORWARD FROM test_recovery_conflict_cursor;
#
[10:21:56.423](0.008s) ok 10 - tablespace conflict: cursor with conflicting temp file established
Waiting for replication conn standby's replay_lsn to pass 0/34346D0 on primary
done
[10:21:56.774](0.350s) ok 11 - tablespace conflict: logfile contains terminated connection due to recovery conflict
[10:21:57.274](0.501s) ok 12 - tablespace conflict: stats show conflict on standby
### Restarting node "standby"
# Running: pg_ctl -w -D /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/pgdata -l /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/log/031_recovery_conflict_standby.log restart
waiting for server to shut down.... done
server stopped
waiting for server to start.... done
server started
# Postmaster PID for node "standby" is 3133177
Waiting for replication conn standby's replay_lsn to pass 0/3437418 on primary
done
[10:21:58.230](0.956s) ok 13 - startup deadlock: cursor holding conflicting pin, also waiting for lock, established
[10:21:58.259](0.029s) ok 14 - startup deadlock: lock acquisition is waiting
Waiting for replication conn standby's replay_lsn to pass 0/3437418 on primary
done
[10:21:58.410](0.151s) ok 15 - startup deadlock: logfile contains terminated connection due to recovery conflict
[10:21:58.463](0.053s) not ok 16 - startup deadlock: stats show conflict on standby
[10:21:58.463](0.000s)
[10:21:58.463](0.000s) # Failed test 'startup deadlock: stats show conflict on standby'
# at t/031_recovery_conflict.pl line 332.
[10:21:58.463](0.000s) # got: '0'
# expected: '1'
### Restarting node "standby"
# Running: pg_ctl -w -D /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/pgdata -l /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/log/031_recovery_conflict_standby.log restart
waiting for server to shut down.... done
server stopped
waiting for server to start..... done
server started
# Postmaster PID for node "standby" is 3133299
[10:22:00.298](1.835s) ok 17 - 5 recovery conflicts shown in pg_stat_database
Waiting for replication conn standby's replay_lsn to pass 0/343DBF0 on primary
done
[10:22:00.505](0.207s) ok 18 - database conflict: logfile contains terminated connection due to recovery conflict
### Stopping node "standby" using mode fast
# Running: pg_ctl -D /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby"
### Stopping node "primary" using mode fast
# Running: pg_ctl -D /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "primary"
[10:22:00.724](0.219s) 1..18
[10:22:00.730](0.006s) # Looks like you failed 1 test of 18.