I received a question about this error.
Just for the record, in case someone encounters the same issue: it has been
fixed in ff9f111bce24.
TL;DR: update your instance :)
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=ff9f111bce24
Fix WAL replay in presence of an incomplete record
Ph
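(A quick way to check whether a given instance already carries the backpatched
fix is to compare its minor version against the release notes of your major
branch. A minimal sketch, assuming psql can reach the server locally:

    # Show the exact server version; compare against the release notes
    # for your branch to confirm the fix is included.
    psql -c "SHOW server_version;"

    # Or check the installed binaries from the shell:
    postgres --version
)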
On 5/6/21 7:37 AM, Kyotaro Horiguchi wrote:
At Sun, 2 May 2021 22:43:44 +0200, Adrien Nayrat
wrote in
> I also dumped 00000001000000AA000000A1 on the secondary and it
> contains all the records until AA/A1004018.
>
> It is really weird, I don't understand how the secondary can miss the
> last 2 records of A0? It seems it did not receive
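(The dump mentioned above can be produced with pg_waldump. A minimal sketch,
assuming the segment sits in the standby's pg_wal directory; the path is
hypothetical:

    # Print every record in the segment; the last LSN shown tells you
    # how far the standby's copy of the WAL actually goes.
    pg_waldump /var/lib/postgresql/13/main/pg_wal/00000001000000AA000000A1
)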
Oh, I forgot to say that I was able to recover the secondary by copying
00000001000000AA000000A0 from the archives into pg_wal. The secondary was then
able to finish recovery, start streaming replication, and fetch subsequent WAL.
I wondered why there was a CHECKPOINT_SHUTDOWN record. I dig
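(The recovery step described above boils down to restoring the complete
archived segment over the standby's incomplete copy. A rough sketch with
hypothetical paths, assuming the standby is stopped first:

    # Stop the standby, restore the complete segment from the WAL archive,
    # then start it again so recovery can re-read the missing records.
    pg_ctl -D /var/lib/postgresql/13/main stop
    cp /path/to/archive/00000001000000AA000000A0 \
       /var/lib/postgresql/13/main/pg_wal/
    pg_ctl -D /var/lib/postgresql/13/main start
)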
On 03/05/2021 10:43, Laurenz Albe wrote:
On Sun, 2021-05-02 at 22:43 +0200, Adrien Nayrat wrote:
> LOG: started streaming WAL from primary at AA/A1000000 on timeline 1
> FATAL: could not receive data from WAL stream: ERROR: requested starting
> point AA/A1000000 is ahead of the WAL flush position of this server
> AA/A0FFFBE8
You ar
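(To see how far apart the two servers are when this error shows up, the flush
and receive/replay positions can be compared with the standard monitoring
functions. A minimal sketch; hostnames are hypothetical:

    # On the primary: the flush position the walsender can serve up to.
    psql -h primary -c "SELECT pg_current_wal_flush_lsn();"

    # On the standby: what it has received and what it has replayed.
    psql -h standby -c \
      "SELECT pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn();"
)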
Hello,
I encountered a similar issue with pg 13.
TL;DR: The secondary did not receive a WAL record (CHECKPOINT_SHUTDOWN), which
left its local WAL corrupted, and it failed when it tried to replay it.
For a personal project I have a primary and a secondary with streaming
replication and replication_slo
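(For context, that setup amounts to a physical replication slot on the primary
plus a standby pointing at it. A minimal sketch of the configuration; slot
name and connection string are hypothetical:

    # On the primary: create a physical replication slot.
    psql -c "SELECT pg_create_physical_replication_slot('standby1_slot');"

    # On the standby (postgresql.conf), point streaming replication at it:
    #   primary_conninfo  = 'host=primary user=replicator'
    #   primary_slot_name = 'standby1_slot'
    # and make sure standby.signal exists in the data directory.
)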
Hello,
I encountered a problem on my replicas after the primary crashed due to lack
of disk space.
Afterwards I had a constant flow of "invalid contrecord" log messages and
replication ceased working.
The only way I found to make it work again was to completely restart the
replica.
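(For the record, "completely restart" here means a full stop/start of the
standby; a minimal sketch with a hypothetical data directory:

    # Restart the standby so the walreceiver reconnects and recovery
    # resumes from the last valid restart point.
    pg_ctl -D /var/lib/postgresql/13/main restart
)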
The logs:
Jun