Yes, of course we can solve this by restoring from backup.
But if the database volume is large, say 100TB or more, a full restore is
far too expensive just because a tiny clog file is corrupted.
Regards,
Jet
Daniel Gustafsson
Human errors, disk errors, or even cosmic rays ...
Regards,
Jet
Andrey Borodin
S1:
When the database was shut down normally and a clog file is missing, the
database cannot restart. And if you put a zero-filled clog file in its
place, the database starts, but committed transactions may be lost.
S2:
When the database crashed and a clog file is missing, the database will
run crash recovery on restart, and everything is OK.
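The "transactions lost" outcome in S1 follows directly from how the clog is laid out. As a rough sketch (constants taken from PostgreSQL's clog.c, assuming the default 8 kB block size), each transaction status occupies 2 bits, so a zero-filled replacement segment reports every xid as in-progress rather than committed:

```python
# Sketch of how the clog (pg_xact) maps an xid to a 2-bit status,
# modeled on src/backend/access/transam/clog.c. Assumes BLCKSZ = 8192.
CLOG_BITS_PER_XACT = 2                                # 2 bits per transaction
CLOG_XACTS_PER_BYTE = 4
BLCKSZ = 8192
CLOG_XACTS_PER_PAGE = BLCKSZ * CLOG_XACTS_PER_BYTE    # 32768 xids per page

# Status codes from clog.h
TRANSACTION_STATUS_IN_PROGRESS = 0x00
TRANSACTION_STATUS_COMMITTED = 0x01
TRANSACTION_STATUS_ABORTED = 0x02

def clog_lookup(clog: bytearray, xid: int) -> int:
    """Return the 2-bit status stored for xid in a raw clog segment."""
    byte_off = ((xid % CLOG_XACTS_PER_PAGE) // CLOG_XACTS_PER_BYTE
                + (xid // CLOG_XACTS_PER_PAGE) * BLCKSZ)
    shift = (xid % CLOG_XACTS_PER_BYTE) * CLOG_BITS_PER_XACT
    return (clog[byte_off] >> shift) & 0x03

# A zero-filled segment answers IN_PROGRESS for every xid, so tuples
# whose commits were recorded in the lost file appear uncommitted.
zeroed = bytearray(BLCKSZ)
print(clog_lookup(zeroed, 742))   # -> 0 (TRANSACTION_STATUS_IN_PROGRESS)
```

This is only a model of the on-disk addressing, not the server's actual access path (which goes through the SLRU buffer layer).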
So I t
Thanks, Tom.
But I think we could provide a better experience. Consider the example
below:
[jet@halodev-jet-01 data]$ psql
psql (16.6)
Type "help" for help.
postgres=# CREATE TABLE a_test (n INT);
CREATE TABLE
postgres=# INSERT INTO a_test VALUES (1);
INSERT 0 1
postgres=# 2024-12-23 16
But think about this scenario: some tuples are INSERTed and the COMMIT
also succeeds.
After a while, a system error occurs and, unfortunately, corrupts the
clog file.
So we have to restore the database from backup just because of one tiny
corrupted clog file.
Is there any chance t
Yes, I think you're right. The tuple's HEAP_XMIN_COMMITTED hint bit will
be set during the visibility check, but don't you think that's a little
weird, or may cause some confusion?
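The hint bit mentioned above lives in each tuple header's t_infomask; the flag values below are from PostgreSQL's htup_details.h. Once the bit is set, visibility checks stop consulting the clog for that tuple's xmin, which is why a later clog corruption may go unnoticed for already-hinted tuples. A minimal sketch of the check:

```python
# Hint-bit flags from src/include/access/htup_details.h.
HEAP_XMIN_COMMITTED = 0x0100   # inserting xid is known committed
HEAP_XMIN_INVALID = 0x0200     # inserting xid is known aborted

def xmin_hinted_committed(t_infomask: int) -> bool:
    """True once a visibility check has stamped the tuple as committed;
    from then on the clog is no longer consulted for its xmin."""
    return bool(t_infomask & HEAP_XMIN_COMMITTED)

# A freshly inserted tuple has no hint bits, so visibility must ask the clog.
print(xmin_hinted_committed(0x0000))   # -> False
# After the first successful visibility check the bit is set.
print(xmin_hinted_committed(0x0100))   # -> True
```

On a live server the same bits can be inspected with the pageinspect extension's heap_page_items() function (t_infomask column).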
Thanks,
Jet
Junwang Zhao
Hi there,
I noticed some slightly strange clog behaviour.
I create a test table, say a_test, which contains only an INT column:
postgres=# CREATE TABLE a_test (n INT);
CREATE TABLE
and then insert one tuple:
postgres=# INSERT INTO a_test VALUES (1);
INSERT 0 1
An