Re: Thomas Munro
> It's great that you can reproduce this semi-reliably! I've rebased
> the patch, hoping you can try it out.
Unfortunately very semi, today I didn't get to the same point where it
exited after test 7, but got some other timeouts. Not even sure they are
related to this (?) problem.

Anyway:

> https://www.postgresql.org/message-id/CA%2BhUKGJDcyW8Hbq3UyG-5-5Y7WqqOTjrXbFTMxxmhiofFraE-Q%40mail.gmail.com

This patch makes the testsuite hang (and later exit) after this:

ok 17 - 5 recovery conflicts shown in pg_stat_database
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 255 just after 17.

I haven't seen any other problems with the patch attached, but since that
test was hanging and hence very slow, I couldn't do many runs. Perhaps
that's progress, I don't know. :)

Logs attached.

Christoph
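For reference, one way to re-run just this TAP test in isolation (a sketch
assuming a plain make-based tree; the build/ paths in the logs below suggest
a separate build directory, so the exact invocation may need adjusting):

    cd src/test/recovery
    make check PROVE_TESTS='t/031_recovery_conflict.pl'

With meson, the rough equivalent would be
"meson test -C build recovery/031_recovery_conflict", though the exact suite
name may vary.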
2023-08-06 17:20:34.436 UTC [1274721] LOG: starting PostgreSQL 17devel (Ubuntu 17~~devel-1) on s390x-ibm-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit
2023-08-06 17:20:34.437 UTC [1274721] LOG: listening on Unix socket "/tmp/N8KG4BCXAf/.s.PGSQL.53738"
2023-08-06 17:20:34.505 UTC [1274726] LOG: database system was shut down at 2023-08-06 17:20:33 UTC
2023-08-06 17:20:34.513 UTC [1274721] LOG: database system is ready to accept connections
2023-08-06 17:20:35.499 UTC [1274748] 031_recovery_conflict.pl LOG: statement: CREATE TABLESPACE test_recovery_conflict_tblspc LOCATION ''
2023-08-06 17:20:35.664 UTC [1274757] 031_recovery_conflict.pl LOG: received replication command: SHOW data_directory_mode
2023-08-06 17:20:35.664 UTC [1274757] 031_recovery_conflict.pl STATEMENT: SHOW data_directory_mode
2023-08-06 17:20:35.684 UTC [1274757] 031_recovery_conflict.pl LOG: received replication command: SHOW wal_segment_size
2023-08-06 17:20:35.684 UTC [1274757] 031_recovery_conflict.pl STATEMENT: SHOW wal_segment_size
2023-08-06 17:20:35.689 UTC [1274757] 031_recovery_conflict.pl LOG: received replication command: IDENTIFY_SYSTEM
2023-08-06 17:20:35.689 UTC [1274757] 031_recovery_conflict.pl STATEMENT: IDENTIFY_SYSTEM
2023-08-06 17:20:35.694 UTC [1274757] 031_recovery_conflict.pl LOG: received replication command: BASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS, CHECKPOINT 'fast', WAIT 0, MANIFEST 'yes', TARGET 'client')
2023-08-06 17:20:35.694 UTC [1274757] 031_recovery_conflict.pl STATEMENT: BASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS, CHECKPOINT 'fast', WAIT 0, MANIFEST 'yes', TARGET 'client')
2023-08-06 17:20:35.735 UTC [1274724] LOG: checkpoint starting: immediate force wait
2023-08-06 17:20:35.752 UTC [1274724] LOG: checkpoint complete: wrote 7 buffers (5.5%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.001 s, sync=0.001 s, total=0.018 s; sync files=0, longest=0.000 s, average=0.000 s; distance=11360 kB, estimate=11360 kB; lsn=0/2000060, redo lsn=0/2000028
2023-08-06 17:20:35.877 UTC [1274760] 031_recovery_conflict.pl LOG: received replication command: SHOW data_directory_mode
2023-08-06 17:20:35.877 UTC [1274760] 031_recovery_conflict.pl STATEMENT: SHOW data_directory_mode
2023-08-06 17:20:35.877 UTC [1274760] 031_recovery_conflict.pl LOG: received replication command: CREATE_REPLICATION_SLOT "pg_basebackup_1274760" TEMPORARY PHYSICAL ( RESERVE_WAL)
2023-08-06 17:20:35.877 UTC [1274760] 031_recovery_conflict.pl STATEMENT: CREATE_REPLICATION_SLOT "pg_basebackup_1274760" TEMPORARY PHYSICAL ( RESERVE_WAL)
2023-08-06 17:20:36.012 UTC [1274760] 031_recovery_conflict.pl LOG: received replication command: IDENTIFY_SYSTEM
2023-08-06 17:20:36.012 UTC [1274760] 031_recovery_conflict.pl STATEMENT: IDENTIFY_SYSTEM
2023-08-06 17:20:36.017 UTC [1274760] 031_recovery_conflict.pl LOG: received replication command: START_REPLICATION SLOT "pg_basebackup_1274760" 0/2000000 TIMELINE 1
2023-08-06 17:20:36.017 UTC [1274760] 031_recovery_conflict.pl STATEMENT: START_REPLICATION SLOT "pg_basebackup_1274760" 0/2000000 TIMELINE 1
2023-08-06 17:20:37.707 UTC [1274757] 031_recovery_conflict.pl LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp1274757.0", size 137324
2023-08-06 17:20:37.707 UTC [1274757] 031_recovery_conflict.pl STATEMENT: BASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS, CHECKPOINT 'fast', WAIT 0, MANIFEST 'yes', TARGET 'client')
2023-08-06 17:20:52.532 UTC [1274953] standby LOG: received replication command: IDENTIFY_SYSTEM
2023-08-06 17:20:52.532 UTC [1274953] standby STATEMENT: IDENTIFY_SYSTEM
2023-08-06 17:20:52.558 UTC [1274953] standby LOG: received replication command: START_REPLICATION 0/3000000 TIMELINE 1
2023-08-06 17:20:52.558 UTC [1274953] standby STATEMENT: START_REPLICATION 0/3000000 TIMELINE 1
2023-08-06 17:20:53.333 UTC [1274960] 031_recovery_conflict.pl LOG: statement: CREATE DATABASE test_db
2023-08-06 17:21:05.491 UTC [1275162] 031_recovery_conflict.pl LOG: statement: CREATE TABLE test_recovery_conflict_table1(a int, b int);
2023-08-06 17:21:05.596 UTC [1275162] 031_recovery_conflict.pl LOG: statement: INSERT INTO test_recovery_conflict_table1 SELECT i % 3, 0 FROM generate_series(1,20) i;
2023-08-06 17:21:05.729 UTC [1275162] 031_recovery_conflict.pl LOG: statement: CREATE TABLE test_recovery_conflict_table2(a int, b int);
2023-08-06 17:21:05.910 UTC [1275187] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-06 17:21:06.117 UTC [1275191] 031_recovery_conflict.pl LOG: statement: SELECT '0/3429BC8' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-06 17:21:06.872 UTC [1275204] 031_recovery_conflict.pl LOG: statement: BEGIN;
2023-08-06 17:21:06.886 UTC [1275204] 031_recovery_conflict.pl LOG: statement: INSERT INTO test_recovery_conflict_table1 VALUES (1,0);
2023-08-06 17:21:06.892 UTC [1275204] 031_recovery_conflict.pl LOG: statement: ROLLBACK;
2023-08-06 17:21:06.897 UTC [1275204] 031_recovery_conflict.pl LOG: statement: BEGIN;
2023-08-06 17:21:06.902 UTC [1275204] 031_recovery_conflict.pl LOG: statement: LOCK test_recovery_conflict_table1;
2023-08-06 17:21:06.908 UTC [1275204] 031_recovery_conflict.pl LOG: statement: COMMIT;
2023-08-06 17:21:06.934 UTC [1275215] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-06 17:21:06.969 UTC [1275217] 031_recovery_conflict.pl LOG: statement: SELECT '0/3429C88' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-06 17:21:07.146 UTC [1275220] 031_recovery_conflict.pl LOG: statement: VACUUM test_recovery_conflict_table1;
2023-08-06 17:21:07.174 UTC [1275223] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-06 17:21:07.196 UTC [1275227] 031_recovery_conflict.pl LOG: statement: SELECT '0/3430000' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-06 17:21:08.219 UTC [1275243] 031_recovery_conflict.pl LOG: statement: SELECT '0/3430000' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-06 17:21:08.918 UTC [1275258] 031_recovery_conflict.pl LOG: statement: INSERT INTO test_recovery_conflict_table1 SELECT i, 0 FROM generate_series(1,20) i
2023-08-06 17:21:09.209 UTC [1275272] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-06 17:21:09.229 UTC [1275274] 031_recovery_conflict.pl LOG: statement: SELECT '0/34324E0' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-06 17:21:09.267 UTC [1275276] 031_recovery_conflict.pl LOG: statement: UPDATE test_recovery_conflict_table1 SET a = a + 1 WHERE a > 2;
2023-08-06 17:21:09.974 UTC [1275278] 031_recovery_conflict.pl LOG: statement: VACUUM test_recovery_conflict_table1;
2023-08-06 17:21:10.035 UTC [1275280] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-06 17:21:10.053 UTC [1275282] 031_recovery_conflict.pl LOG: statement: SELECT '0/3432BF8' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-06 17:21:10.177 UTC [1275284] 031_recovery_conflict.pl LOG: statement: SELECT '0/3432BF8' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-06 17:21:10.286 UTC [1275293] 031_recovery_conflict.pl LOG: statement: DROP TABLE test_recovery_conflict_table1;
2023-08-06 17:21:10.853 UTC [1275309] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-06 17:21:10.922 UTC [1275311] 031_recovery_conflict.pl LOG: statement: SELECT '0/3433130' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-06 17:21:11.394 UTC [1275340] 031_recovery_conflict.pl LOG: statement: DROP TABLESPACE test_recovery_conflict_tblspc;
2023-08-06 17:21:11.558 UTC [1275352] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-06 17:21:11.576 UTC [1275354] 031_recovery_conflict.pl LOG: statement: SELECT '0/34331D0' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-06 17:21:11.732 UTC [1275356] 031_recovery_conflict.pl LOG: statement: SELECT '0/34331D0' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-06 17:21:16.090 UTC [1275434] standby LOG: received replication command: IDENTIFY_SYSTEM
2023-08-06 17:21:16.090 UTC [1275434] standby STATEMENT: IDENTIFY_SYSTEM
2023-08-06 17:21:16.095 UTC [1275434] standby LOG: received replication command: START_REPLICATION 0/3000000 TIMELINE 1
2023-08-06 17:21:16.095 UTC [1275434] standby STATEMENT: START_REPLICATION 0/3000000 TIMELINE 1
2023-08-06 17:21:16.563 UTC [1275442] 031_recovery_conflict.pl LOG: statement: CREATE TABLE test_recovery_conflict_table1(a int, b int);
2023-08-06 17:21:17.197 UTC [1275442] 031_recovery_conflict.pl LOG: statement: INSERT INTO test_recovery_conflict_table1 VALUES (1);
2023-08-06 17:21:17.198 UTC [1275442] 031_recovery_conflict.pl LOG: statement: BEGIN;
2023-08-06 17:21:17.198 UTC [1275442] 031_recovery_conflict.pl LOG: statement: INSERT INTO test_recovery_conflict_table1(a) SELECT generate_series(1, 100) i;
2023-08-06 17:21:17.950 UTC [1275442] 031_recovery_conflict.pl LOG: statement: ROLLBACK;
2023-08-06 17:21:17.950 UTC [1275442] 031_recovery_conflict.pl LOG: statement: BEGIN;
2023-08-06 17:21:17.950 UTC [1275442] 031_recovery_conflict.pl LOG: statement: LOCK TABLE test_recovery_conflict_table2;
2023-08-06 17:21:17.950 UTC [1275442] 031_recovery_conflict.pl LOG: statement: PREPARE TRANSACTION 'lock';
2023-08-06 17:21:17.950 UTC [1275442] 031_recovery_conflict.pl LOG: statement: INSERT INTO test_recovery_conflict_table1(a) VALUES (170);
2023-08-06 17:21:17.951 UTC [1275442] 031_recovery_conflict.pl LOG: statement: SELECT txid_current();
2023-08-06 17:21:18.043 UTC [1275463] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-06 17:21:18.959 UTC [1275466] 031_recovery_conflict.pl LOG: statement: SELECT '0/3435EF0' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-06 17:21:21.091 UTC [1275494] 031_recovery_conflict.pl LOG: statement: VACUUM test_recovery_conflict_table1;
2023-08-06 17:21:21.321 UTC [1275497] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-06 17:21:21.341 UTC [1275499] 031_recovery_conflict.pl LOG: statement: SELECT '0/3436000' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-06 17:21:21.597 UTC [1275505] 031_recovery_conflict.pl LOG: statement: SELECT '0/3436000' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-06 17:21:21.779 UTC [1275514] 031_recovery_conflict.pl LOG: statement: ROLLBACK PREPARED 'lock';
2023-08-06 17:21:22.573 UTC [1275546] standby LOG: received replication command: IDENTIFY_SYSTEM
2023-08-06 17:21:22.573 UTC [1275546] standby STATEMENT: IDENTIFY_SYSTEM
2023-08-06 17:21:22.578 UTC [1275546] standby LOG: received replication command: START_REPLICATION 0/3000000 TIMELINE 1
2023-08-06 17:21:22.578 UTC [1275546] standby STATEMENT: START_REPLICATION 0/3000000 TIMELINE 1
2023-08-06 17:21:23.043 UTC [1275575] 031_recovery_conflict.pl LOG: statement: DROP DATABASE test_db;
2023-08-06 17:21:23.102 UTC [1274724] LOG: checkpoint starting: immediate force wait
2023-08-06 17:21:23.986 UTC [1274724] LOG: checkpoint complete: wrote 2 buffers (1.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.001 s, sync=0.001 s, total=0.884 s; sync files=0, longest=0.000 s, average=0.000 s; distance=20721 kB, estimate=20721 kB; lsn=0/343C488, redo lsn=0/343C450
2023-08-06 17:21:24.086 UTC [1275589] 031_recovery_conflict.pl LOG: statement: SELECT pg_current_wal_flush_lsn()
2023-08-06 17:21:24.110 UTC [1275591] 031_recovery_conflict.pl LOG: statement: SELECT '0/343C740' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('standby', 'walreceiver')
2023-08-06 17:24:30.314 UTC [1274721] LOG: received immediate shutdown request
2023-08-06 17:24:31.534 UTC [1274721] LOG: database system is shut down
2023-08-06 17:20:51.002 UTC [1274900] LOG: starting PostgreSQL 17devel (Ubuntu 17~~devel-1) on s390x-ibm-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit
2023-08-06 17:20:51.002 UTC [1274900] LOG: listening on Unix socket "/tmp/N8KG4BCXAf/.s.PGSQL.53739"
2023-08-06 17:20:51.503 UTC [1274938] LOG: database system was interrupted; last known up at 2023-08-06 17:20:35 UTC
2023-08-06 17:20:51.503 UTC [1274938] LOG: entering standby mode
2023-08-06 17:20:51.792 UTC [1274938] LOG: redo starts at 0/2000028
2023-08-06 17:20:51.793 UTC [1274938] WARNING: unexpected directory entry "16384" found in pg_tblspc/
2023-08-06 17:20:51.793 UTC [1274938] DETAIL: All directory entries in pg_tblspc/ should be symbolic links.
2023-08-06 17:20:51.793 UTC [1274938] HINT: Remove those directories, or set allow_in_place_tablespaces to ON transiently to let recovery complete.
2023-08-06 17:20:51.793 UTC [1274938] LOG: consistent recovery state reached at 0/2000100
2023-08-06 17:20:51.797 UTC [1274900] LOG: database system is ready to accept read-only connections
2023-08-06 17:20:52.558 UTC [1274942] LOG: started streaming WAL from primary at 0/3000000 on timeline 1
2023-08-06 17:21:06.994 UTC [1275199] 031_recovery_conflict.pl LOG: statement: BEGIN;
2023-08-06 17:21:06.994 UTC [1275199] 031_recovery_conflict.pl LOG: statement: DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM test_recovery_conflict_table1;
2023-08-06 17:21:07.025 UTC [1275199] 031_recovery_conflict.pl LOG: statement: FETCH FORWARD FROM test_recovery_conflict_cursor;
2023-08-06 17:21:07.030 UTC [1275199] 031_recovery_conflict.pl LOG: statement: ;
2023-08-06 17:21:07.165 UTC [1274938] LOG: recovery still waiting after 13.616 ms: recovery conflict on buffer pin
2023-08-06 17:21:07.165 UTC [1274938] CONTEXT: WAL redo at 0/3429C88 for Heap2/PRUNE: snapshotConflictHorizon: 0, nredirected: 0, ndead: 1, nunused: 0, redirected: [], dead: [21], unused: []; blkref #0: rel 1663/16385/16386, blk 0
2023-08-06 17:21:07.209 UTC [1275199] 031_recovery_conflict.pl FATAL: terminating connection due to conflict with recovery
2023-08-06 17:21:07.209 UTC [1275199] 031_recovery_conflict.pl DETAIL: User was holding shared buffer pin for too long.
2023-08-06 17:21:07.209 UTC [1275199] 031_recovery_conflict.pl HINT: In a moment you should be able to reconnect to the database and repeat your command.
2023-08-06 17:21:07.211 UTC [1274938] LOG: recovery finished waiting after 59.737 ms: recovery conflict on buffer pin
2023-08-06 17:21:07.211 UTC [1274938] CONTEXT: WAL redo at 0/3429C88 for Heap2/PRUNE: snapshotConflictHorizon: 0, nredirected: 0, ndead: 1, nunused: 0, redirected: [], dead: [21], unused: []; blkref #0: rel 1663/16385/16386, blk 0
2023-08-06 17:21:08.456 UTC [1275250] 031_recovery_conflict.pl LOG: statement: SELECT confl_bufferpin FROM pg_stat_database_conflicts WHERE datname='test_db';
2023-08-06 17:21:09.237 UTC [1275248] 031_recovery_conflict.pl LOG: statement: BEGIN;
2023-08-06 17:21:09.243 UTC [1275248] 031_recovery_conflict.pl LOG: statement: DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM test_recovery_conflict_table1;
2023-08-06 17:21:09.249 UTC [1275248] 031_recovery_conflict.pl LOG: statement: FETCH FORWARD FROM test_recovery_conflict_cursor;
2023-08-06 17:21:09.249 UTC [1275248] 031_recovery_conflict.pl LOG: statement: ;
2023-08-06 17:21:10.024 UTC [1274938] LOG: recovery still waiting after 11.501 ms: recovery conflict on snapshot
2023-08-06 17:21:10.024 UTC [1274938] DETAIL: Conflicting process: 1275248.
2023-08-06 17:21:10.024 UTC [1274938] CONTEXT: WAL redo at 0/3432A18 for Heap2/PRUNE: snapshotConflictHorizon: 738, nredirected: 18, ndead: 0, nunused: 0, redirected: [23->41, 24->42, 25->43, 26->44, 27->45, 28->46, 29->47, 30->48, 31->49, 32->50, 33->51, 34->52, 35->53, 36->54, 37->55, 38->56, 39->57, 40->58], dead: [], unused: []; blkref #0: rel 1663/16385/16386, blk 0
2023-08-06 17:21:10.093 UTC [1275248] 031_recovery_conflict.pl FATAL: terminating connection due to conflict with recovery
2023-08-06 17:21:10.093 UTC [1275248] 031_recovery_conflict.pl DETAIL: User query might have needed to see row versions that must be removed.
2023-08-06 17:21:10.093 UTC [1275248] 031_recovery_conflict.pl HINT: In a moment you should be able to reconnect to the database and repeat your command.
2023-08-06 17:21:10.098 UTC [1274938] LOG: recovery finished waiting after 86.119 ms: recovery conflict on snapshot
2023-08-06 17:21:10.098 UTC [1274938] CONTEXT: WAL redo at 0/3432A18 for Heap2/PRUNE: snapshotConflictHorizon: 738, nredirected: 18, ndead: 0, nunused: 0, redirected: [23->41, 24->42, 25->43, 26->44, 27->45, 28->46, 29->47, 30->48, 31->49, 32->50, 33->51, 34->52, 35->53, 36->54, 37->55, 38->56, 39->57, 40->58], dead: [], unused: []; blkref #0: rel 1663/16385/16386, blk 0
2023-08-06 17:21:10.238 UTC [1275290] 031_recovery_conflict.pl LOG: statement: SELECT confl_snapshot FROM pg_stat_database_conflicts WHERE datname='test_db';
2023-08-06 17:21:10.250 UTC [1275288] 031_recovery_conflict.pl LOG: statement: BEGIN;
2023-08-06 17:21:10.255 UTC [1275288] 031_recovery_conflict.pl LOG: statement: LOCK TABLE test_recovery_conflict_table1 IN ACCESS SHARE MODE;
2023-08-06 17:21:10.260 UTC [1275288] 031_recovery_conflict.pl LOG: statement: SELECT 1;
2023-08-06 17:21:10.263 UTC [1275288] 031_recovery_conflict.pl LOG: statement: ;
2023-08-06 17:21:10.586 UTC [1274938] LOG: recovery still waiting after 11.288 ms: recovery conflict on lock
2023-08-06 17:21:10.586 UTC [1274938] DETAIL: Conflicting process: 1275288.
2023-08-06 17:21:10.586 UTC [1274938] CONTEXT: WAL redo at 0/3432BF8 for Standby/LOCK: xid 739 db 16385 rel 16386
2023-08-06 17:21:10.625 UTC [1275288] 031_recovery_conflict.pl FATAL: terminating connection due to conflict with recovery
2023-08-06 17:21:10.625 UTC [1275288] 031_recovery_conflict.pl DETAIL: User was holding a relation lock for too long.
2023-08-06 17:21:10.625 UTC [1275288] 031_recovery_conflict.pl HINT: In a moment you should be able to reconnect to the database and repeat your command.
2023-08-06 17:21:10.661 UTC [1274938] LOG: recovery finished waiting after 85.925 ms: recovery conflict on lock
2023-08-06 17:21:10.661 UTC [1274938] CONTEXT: WAL redo at 0/3432BF8 for Standby/LOCK: xid 739 db 16385 rel 16386
2023-08-06 17:21:11.058 UTC [1275322] 031_recovery_conflict.pl LOG: statement: SELECT confl_lock FROM pg_stat_database_conflicts WHERE datname='test_db';
2023-08-06 17:21:11.149 UTC [1275319] 031_recovery_conflict.pl LOG: statement: BEGIN;
2023-08-06 17:21:11.152 UTC [1275319] 031_recovery_conflict.pl LOG: statement: SET work_mem = '64kB';
2023-08-06 17:21:11.155 UTC [1275319] 031_recovery_conflict.pl LOG: statement: DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT count(*) FROM generate_series(1,6000);
2023-08-06 17:21:11.199 UTC [1275319] 031_recovery_conflict.pl LOG: statement: FETCH FORWARD FROM test_recovery_conflict_cursor;
2023-08-06 17:21:11.232 UTC [1275319] 031_recovery_conflict.pl LOG: statement: ;
2023-08-06 17:21:11.532 UTC [1274938] LOG: could not remove directory "pg_tblspc/16384/PG_17_202307261/pgsql_tmp": Directory not empty
2023-08-06 17:21:11.532 UTC [1274938] CONTEXT: WAL redo at 0/3433168 for Tablespace/DROP: 16384
2023-08-06 17:21:11.535 UTC [1274938] LOG: could not remove directory "pg_tblspc/16384/PG_17_202307261": Directory not empty
2023-08-06 17:21:11.535 UTC [1274938] CONTEXT: WAL redo at 0/3433168 for Tablespace/DROP: 16384
2023-08-06 17:21:11.546 UTC [1274938] LOG: recovery still waiting after 10.877 ms: recovery conflict on tablespace
2023-08-06 17:21:11.546 UTC [1274938] DETAIL: Conflicting process: 1275319.
2023-08-06 17:21:11.546 UTC [1274938] CONTEXT: WAL redo at 0/3433168 for Tablespace/DROP: 16384
2023-08-06 17:21:11.588 UTC [1275319] 031_recovery_conflict.pl FATAL: terminating connection due to conflict with recovery
2023-08-06 17:21:11.588 UTC [1275319] 031_recovery_conflict.pl DETAIL: User was or might have been using tablespace that must be dropped.
2023-08-06 17:21:11.588 UTC [1275319] 031_recovery_conflict.pl HINT: In a moment you should be able to reconnect to the database and repeat your command.
2023-08-06 17:21:11.614 UTC [1275319] 031_recovery_conflict.pl LOG: temporary file: path "pg_tblspc/16384/PG_17_202307261/pgsql_tmp/pgsql_tmp1275319.0", size 84000
2023-08-06 17:21:11.615 UTC [1274938] LOG: recovery finished waiting after 79.143 ms: recovery conflict on tablespace
2023-08-06 17:21:11.615 UTC [1274938] CONTEXT: WAL redo at 0/3433168 for Tablespace/DROP: 16384
2023-08-06 17:21:11.798 UTC [1275362] 031_recovery_conflict.pl LOG: statement: SELECT confl_tablespace FROM pg_stat_database_conflicts WHERE datname='test_db';
2023-08-06 17:21:12.060 UTC [1274900] LOG: received fast shutdown request
2023-08-06 17:21:12.060 UTC [1274900] LOG: aborting any active transactions
2023-08-06 17:21:12.060 UTC [1274942] FATAL: terminating walreceiver process due to administrator command
2023-08-06 17:21:12.064 UTC [1275360] 031_recovery_conflict.pl FATAL: terminating connection due to administrator command
2023-08-06 17:21:12.106 UTC [1274936] LOG: shutting down
2023-08-06 17:21:12.131 UTC [1274900] LOG: database system is shut down
2023-08-06 17:21:14.807 UTC [1275376] LOG: starting PostgreSQL 17devel (Ubuntu 17~~devel-1) on s390x-ibm-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit
2023-08-06 17:21:14.807 UTC [1275376] LOG: listening on Unix socket "/tmp/N8KG4BCXAf/.s.PGSQL.53739"
2023-08-06 17:21:15.133 UTC [1275421] LOG: database system was shut down in recovery at 2023-08-06 17:21:12 UTC
2023-08-06 17:21:15.133 UTC [1275421] LOG: entering standby mode
2023-08-06 17:21:15.333 UTC [1275421] LOG: redo starts at 0/2000028
2023-08-06 17:21:15.992 UTC [1275421] LOG: could not stat file "pg_tblspc/16384": No such file or directory
2023-08-06 17:21:15.992 UTC [1275421] CONTEXT: WAL redo at 0/3433168 for Tablespace/DROP: 16384
2023-08-06 17:21:15.992 UTC [1275421] LOG: consistent recovery state reached at 0/34331D0
2023-08-06 17:21:16.007 UTC [1275421] LOG: invalid record length at 0/34331D0: expected at least 24, got 0
2023-08-06 17:21:16.007 UTC [1275376] LOG: database system is ready to accept read-only connections
2023-08-06 17:21:16.100 UTC [1275429] LOG: started streaming WAL from primary at 0/3000000 on timeline 1
2023-08-06 17:21:19.206 UTC [1275437] 031_recovery_conflict.pl LOG: statement: BEGIN;
2023-08-06 17:21:19.206 UTC [1275437] 031_recovery_conflict.pl LOG: statement: DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT a FROM test_recovery_conflict_table1;
2023-08-06 17:21:19.337 UTC [1275437] 031_recovery_conflict.pl LOG: statement: FETCH FORWARD FROM test_recovery_conflict_cursor;
2023-08-06 17:21:19.345 UTC [1275437] 031_recovery_conflict.pl LOG: statement: SELECT * FROM test_recovery_conflict_table2;
2023-08-06 17:21:20.067 UTC [1275482] 031_recovery_conflict.pl LOG: statement: SELECT 'waiting' FROM pg_locks WHERE locktype = 'relation' AND NOT granted;
2023-08-06 17:21:21.357 UTC [1275437] 031_recovery_conflict.pl ERROR: canceling statement due to conflict with recovery at character 15
2023-08-06 17:21:21.357 UTC [1275437] 031_recovery_conflict.pl DETAIL: User transaction caused buffer deadlock with recovery.
2023-08-06 17:21:21.357 UTC [1275437] 031_recovery_conflict.pl STATEMENT: SELECT * FROM test_recovery_conflict_table2;
2023-08-06 17:21:21.664 UTC [1275511] 031_recovery_conflict.pl LOG: statement: SELECT confl_deadlock FROM pg_stat_database_conflicts WHERE datname='test_db';
2023-08-06 17:21:21.843 UTC [1275376] LOG: received fast shutdown request
2023-08-06 17:21:21.843 UTC [1275376] LOG: aborting any active transactions
2023-08-06 17:21:21.847 UTC [1275429] FATAL: terminating walreceiver process due to administrator command
2023-08-06 17:21:21.851 UTC [1275509] 031_recovery_conflict.pl FATAL: terminating connection due to administrator command
2023-08-06 17:21:21.855 UTC [1275419] LOG: shutting down
2023-08-06 17:21:21.857 UTC [1275376] LOG: database system is shut down
2023-08-06 17:21:21.907 UTC [1275517] LOG: starting PostgreSQL 17devel (Ubuntu 17~~devel-1) on s390x-ibm-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit
2023-08-06 17:21:21.907 UTC [1275517] LOG: listening on Unix socket "/tmp/N8KG4BCXAf/.s.PGSQL.53739"
2023-08-06 17:21:21.910 UTC [1275520] LOG: database system was shut down in recovery at 2023-08-06 17:21:21 UTC
2023-08-06 17:21:21.910 UTC [1275520] LOG: entering standby mode
2023-08-06 17:21:21.911 UTC [1275520] LOG: redo starts at 0/2000028
2023-08-06 17:21:22.193 UTC [1275520] LOG: could not stat file "pg_tblspc/16384": No such file or directory
2023-08-06 17:21:22.193 UTC [1275520] CONTEXT: WAL redo at 0/3433168 for Tablespace/DROP: 16384
2023-08-06 17:21:22.193 UTC [1275520] LOG: consistent recovery state reached at 0/343C368
2023-08-06 17:21:22.193 UTC [1275520] LOG: invalid record length at 0/343C368: expected at least 24, got 0
2023-08-06 17:21:22.198 UTC [1275517] LOG: database system is ready to accept read-only connections
2023-08-06 17:21:22.578 UTC [1275544] LOG: started streaming WAL from primary at 0/3000000 on timeline 1
2023-08-06 17:21:22.824 UTC [1275558] 031_recovery_conflict.pl LOG: statement: SELECT conflicts FROM pg_stat_database WHERE datname='test_db';
2023-08-06 17:21:24.078 UTC [1275555] 031_recovery_conflict.pl FATAL: unrecognized conflict mode: 7
2023-08-06 17:24:31.466 UTC [1275544] FATAL: could not receive data from WAL stream: server closed the connection unexpectedly
	This probably means the server terminated abnormally
	before or while processing the request.
2023-08-06 17:24:31.532 UTC [1275517] LOG: received immediate shutdown request
2023-08-06 17:24:31.608 UTC [1275517] LOG: database system is shut down
# Checking port 53738
# Found port 53738
Name: primary
Data directory: /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata
Backup directory: /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/backup
Archive directory: /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/archives
Connection string: port=53738 host=/tmp/N8KG4BCXAf
Log file: /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/log/031_recovery_conflict_primary.log
# Running: initdb -D /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata -A trust -N
The files belonging to this database system will be owned by user "myon".
This user must also own the server process.

The database cluster will be initialized with this locale configuration:
  provider:    libc
  LC_COLLATE:  en_US.utf8
  LC_CTYPE:    en_US.utf8
  LC_MESSAGES: C
  LC_MONETARY: en_US.utf8
  LC_NUMERIC:  en_US.utf8
  LC_TIME:     en_US.utf8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok

Sync to disk skipped.
The data directory might become corrupt if the operating system crashes.

Success. You can now start the database server using:

    pg_ctl -D /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata -l logfile start

# Running: /home/myon/postgresql/pg/postgresql/build/src/test/recovery/../../../src/test/regress/pg_regress --config-auth /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata
### Starting node "primary"
# Running: pg_ctl -w -D /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata -l /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/log/031_recovery_conflict_primary.log -o --cluster-name=primary start
waiting for server to start.... done
server started
# Postmaster PID for node "primary" is 1274721
# Taking pg_basebackup my_backup from node "primary"
# Running: pg_basebackup -D /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/backup/my_backup -h /tmp/N8KG4BCXAf -p 53738 --checkpoint fast --no-sync
# Backup finished
# Checking port 53739
# Found port 53739
Name: standby
Data directory: /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/pgdata
Backup directory: /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/backup
Archive directory: /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/archives
Connection string: port=53739 host=/tmp/N8KG4BCXAf
Log file: /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/log/031_recovery_conflict_standby.log
# Initializing node "standby" from backup "my_backup" of node "primary"
### Enabling streaming replication for node "standby"
### Starting node "standby"
# Running: pg_ctl -w -D /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/pgdata -l /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/log/031_recovery_conflict_standby.log -o --cluster-name=standby start
waiting for server to start...... done
server started
# Postmaster PID for node "standby" is 1274900
Waiting for replication conn standby's replay_lsn to pass 0/3429BC8 on primary
done
Waiting for replication conn standby's replay_lsn to pass 0/3429C88 on primary
done
[17:21:06.981](60.437s) # issuing query via background psql:
# BEGIN;
# DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM test_recovery_conflict_table1;
# FETCH FORWARD FROM test_recovery_conflict_cursor;
[17:21:07.128](0.147s) ok 1 - buffer pin conflict: cursor with conflicting pin established
Waiting for replication conn standby's replay_lsn to pass 0/3430000 on primary
done
[17:21:08.233](1.105s) ok 2 - buffer pin conflict: logfile contains terminated connection due to recovery conflict
[17:21:08.857](0.624s) ok 3 - buffer pin conflict: stats show conflict on standby
Waiting for replication conn standby's replay_lsn to pass 0/34324E0 on primary
done
[17:21:09.235](0.378s) # issuing query via background psql:
# BEGIN;
# DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM test_recovery_conflict_table1;
# FETCH FORWARD FROM test_recovery_conflict_cursor;
#
[17:21:09.250](0.015s) ok 4 - snapshot conflict: cursor with conflicting snapshot established
Waiting for replication conn standby's replay_lsn to pass 0/3432BF8 on primary
done
[17:21:10.182](0.933s) ok 5 - snapshot conflict: logfile contains terminated connection due to recovery conflict
[17:21:10.244](0.061s) ok 6 - snapshot conflict: stats show conflict on standby
[17:21:10.244](0.000s) # issuing query via background psql:
# BEGIN;
# LOCK TABLE test_recovery_conflict_table1 IN ACCESS SHARE MODE;
# SELECT 1;
#
[17:21:10.268](0.024s) ok 7 - lock conflict: conflicting lock acquired
Waiting for replication conn standby's replay_lsn to pass 0/3433130 on primary
done
[17:21:10.969](0.701s) ok 8 - lock conflict: logfile contains terminated connection due to recovery conflict
[17:21:11.147](0.178s) ok 9 - lock conflict: stats show conflict on standby
[17:21:11.147](0.000s) # issuing query via background psql:
# BEGIN;
# SET work_mem = '64kB';
# DECLARE test_recovery_conflict_cursor CURSOR FOR
# SELECT count(*) FROM generate_series(1,6000);
# FETCH FORWARD FROM test_recovery_conflict_cursor;
#
[17:21:11.233](0.086s) ok 10 - tablespace conflict: cursor with conflicting temp file established
Waiting for replication conn standby's replay_lsn to pass 0/34331D0 on primary
done
[17:21:11.735](0.502s) ok 11 - tablespace conflict: logfile contains terminated connection due to recovery conflict
[17:21:11.802](0.067s) ok 12 - tablespace conflict: stats show conflict on standby
### Restarting node "standby"
# Running: pg_ctl -w -D /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/pgdata -l /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/log/031_recovery_conflict_standby.log restart
waiting for server to shut down.... done
server stopped
waiting for server to start....... done
server started
# Postmaster PID for node "standby" is 1275376
Waiting for replication conn standby's replay_lsn to pass 0/3435EF0 on primary
done
[17:21:19.342](7.541s) ok 13 - startup deadlock: cursor holding conflicting pin, also waiting for lock, established
[17:21:20.960](1.618s) ok 14 - startup deadlock: lock acquisition is waiting
Waiting for replication conn standby's replay_lsn to pass 0/3436000 on primary
done
[17:21:21.606](0.646s) ok 15 - startup deadlock: logfile contains terminated connection due to recovery conflict
[17:21:21.672](0.066s) ok 16 - startup deadlock: stats show conflict on standby
### Restarting node "standby"
# Running: pg_ctl -w -D /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/pgdata -l /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/log/031_recovery_conflict_standby.log restart
waiting for server to shut down.... done
server stopped
waiting for server to start.... done
server started
# Postmaster PID for node "standby" is 1275517
[17:21:22.905](1.233s) ok 17 - 5 recovery conflicts shown in pg_stat_database
Waiting for replication conn standby's replay_lsn to pass 0/343C740 on primary
done
timed out waiting for match: (?^:User was connected to a database that must be dropped) at t/031_recovery_conflict.pl line 318.
# Postmaster PID for node "primary" is 1274721
### Stopping node "primary" using mode immediate
# Running: pg_ctl -D /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata -m immediate stop
waiting for server to shut down..... done
server stopped
# No postmaster PID for node "primary"
# Postmaster PID for node "standby" is 1275517
### Stopping node "standby" using mode immediate
# Running: pg_ctl -D /home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby"
[17:24:31.752](188.847s) # Tests were run but no plan was declared and done_testing() was not seen.
[17:24:31.800](0.047s) # Looks like your test exited with 255 just after 17.