On Thu, Sep 1, 2022 at 2:01 AM Justin Pryzby <pry...@telsasoft.com> wrote:
> < 2022-08-31 08:44:10.495 CDT >LOG: checkpoint starting: end-of-recovery immediate wait
> < 2022-08-31 08:44:10.609 CDT >LOG: request to flush past end of generated WAL; request 1201/1CAF84F0, current position 1201/1CADB730
> < 2022-08-31 08:44:10.609 CDT >CONTEXT: writing block 0 of relation base/16881/2840_vm
> < 2022-08-31 08:44:10.609 CDT >ERROR: xlog flush request 1201/1CAF84F0 is not satisfied --- flushed only to 1201/1CADB730
> < 2022-08-31 08:44:10.609 CDT >CONTEXT: writing block 0 of relation base/16881/2840_vm
> < 2022-08-31 08:44:10.609 CDT >FATAL: checkpoint request failed
>
> I was able to start it with -c recovery_prefetch=no, so it seems like prefetch tried to do too much. The VM runs centos7 under qemu.
> I'm making a copy of the data dir in case it's needed.
Hmm, a page with an LSN set 118208 bytes past the end of the WAL (1201/1CAF84F0 - 1201/1CADB730 = 0x1CDC0 = 118208). It's a vm fork page, which recovery prefetch should ignore completely. Did you happen to take a copy before the successful recovery? After the successful recovery, what LSN does that page have, and can you find the WAL records that reference it, with e.g. pg_waldump -R 1663/16881/2840 -F vm? Have you turned full_page_writes off (perhaps this is on ZFS)?
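
In case it's useful, here's roughly what I had in mind. The 1663/16881/2840 relation spec comes from the path in your log; the second part assumes the pageinspect extension is installed and uses $PGDATA as a stand-in for your data directory -- adjust as needed:

    # dump any WAL records that touch the vm fork of that relation
    pg_waldump -p $PGDATA/pg_wal -R 1663/16881/2840 -F vm

    -- after recovery: map the filenode back to a relation (0 = database's
    -- default tablespace) and read the LSN stamped on block 0 of its vm fork
    SELECT pg_filenode_relation(0, 2840);
    SELECT lsn FROM page_header(get_raw_page(pg_filenode_relation(0, 2840)::text, 'vm', 0));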