Hello,

I have a huge table with 141456059 records in a PostgreSQL 10.18 database.
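
For reference, the dump is invoked roughly like this (the table and database
names are anonymized here, the same way the table name is redacted in the
log below):

  pg_dump -t my_big_table mydb > my_big_table.sql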

When I try to run pg_dump on that table, the backend process handling the
COPY segfaults and the server log shows this:

2021-12-22 14:08:03.437 UTC [15267] LOG:  server process (PID 25854) was
terminated by signal 11: Segmentation fault
2021-12-22 14:08:03.437 UTC [15267] DETAIL:  Failed process was running:
COPY ********** TO stdout;
2021-12-22 14:08:03.437 UTC [15267] LOG:  terminating any other active
server processes
2021-12-22 14:08:03.438 UTC [15267] LOG:  archiver process (PID 16034)
exited with exit code 2
2021-12-22 14:08:04.196 UTC [15267] LOG:  all server processes terminated;
reinitializing
2021-12-22 14:08:05.785 UTC [25867] LOG:  database system was interrupted
while in recovery at log time 2021-12-22 14:02:29 UTC
2021-12-22 14:08:05.785 UTC [25867] HINT:  If this has occurred more than
once some data might be corrupted and you might need to choose an earlier
recovery target.

In the Linux kernel log I only see this:

Dec 22 14:08:03 kernel: postmaster[25854]: segfault at 14be000 ip
00007f828fabb5f9 sp 00007fffe43538b8 error 6 in libc-2.17.so
[7f828f96d000+1c2000]
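
(I haven't captured a core dump yet. If a backtrace would help, I believe I
can enable core dumps and inspect one roughly like this, assuming the stock
binary path on this machine:

  ulimit -c unlimited    # in the shell that starts postgres, before restart
  gdb /usr/pgsql-10/bin/postgres /path/to/core    # then "bt" for a backtrace

and post the result.)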

I'm guessing I'm hitting some (memory?) limit. Is there anything I can do
to prevent this? And shouldn't PostgreSQL behave differently instead of
crashing the whole server?
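
In the meantime I'm thinking of bisecting the table by hand to find the rows
that trigger the crash, along these lines (my_big_table and its id column
are placeholders for my actual schema):

  COPY (SELECT * FROM my_big_table WHERE id BETWEEN 1 AND 10000000) TO STDOUT;

If only certain ranges crash the backend, that would point at specific rows
or corrupted pages rather than an overall limit.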
-- 
Paulo Silva <paulo...@gmail.com>
