Re: Additional Help in vetting my steps for Postgres DB upgrade from Ver 13.X to ver 15.X

2024-12-16 Thread Bharani SV-forum
Team
I am getting the following error.
pg_dump: error: error reading large object 2113418:
pg_dump: error: could not open large object 3391830: 
I tried this command (DB name = abcefg):
ALTER DATABASE abcefgd SET lo_compat_privileges=on;

and reran, and once again I am getting the same error
while using psql/pg_dump from the old server running 13.18 [OS = 
Amazon Linux release 2 (Karoo)].
We will use pg_dump and pg_restore to restore into the new VM with the new OS [OS = 
Amazon Linux 2023] and the new DB binary, pgsql ver 15.09.
We were told by the AWS team that in the new VM's OS [OS = Amazon Linux 2023], 
pgsql ver 13.16 is not supported.
I cross checked:
SELECT oid, count(*) FROM pg_largeobject_metadata GROUP BY oid ORDER BY oid;
Rows = 4260170
Can you suggest anything?
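One check I have in mind (only a sketch, not yet run; it assumes connecting with psql to the same DB as the role that runs pg_dump) is to look at the ownership and ACLs of the two large objects named in the errors, since "could not open large object" is often a privilege/ownership issue:

```
-- Who owns the failing large objects and what is granted on them?
-- A NULL lomacl means only the owner (and superusers) can read them.
SELECT oid, lomowner::regrole AS owner, lomacl
FROM pg_largeobject_metadata
WHERE oid IN (2113418, 3391830);
```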

On Wednesday, December 11, 2024 at 03:57:31 PM EST, Adrian Klaver 
 wrote:  
 
 On 12/11/24 11:12, Bharani SV-forum wrote:
> Team
> As suggested from old server, post shutdown of DB, I did OS level dump 
> of PG_DATA folder and had restored in the new server.

If you follow the process shown here:

https://www.postgresql.org/message-id/CAKAnmmKZdhnhdNRd3OgDyEco9OPkT%3DqA_TeWMFMRvUM9pXauKg%40mail.gmail.com

You would not have to do the below.

> 
> Any idea on how to install the older binary postgres 13.18 ( OS=Amazon 
> Linux 2023.6.20241121) under a dedicated folder suffixed as the 
> following e.g.) /usr/pgsql1318
> 
> System Admin had already installed newer version pgsql 15.08 binaries in 
> the  new server (OS= Amazon Linux 2023.6.20241121) in the folder "/usr/bin/"
> 
> We were quoted, OS = Amazon Linux 2023.6.20241121 does not support 
> postgres ver 15.10 (Community edition) under its AWS-EC2.

That does not reflect well on Amazon Linux, that it is missing two 
critical bug releases.

> 
> Regards
> 
> 
> 

-- 
Adrian Klaver
adrian.kla...@aklaver.com



  

Re: Additional Help in vetting my steps for Postgres DB upgrade from Ver 13.X to ver 15.X

2024-12-16 Thread Adrian Klaver

On 12/16/24 13:19, Bharani SV-forum wrote:

Team
I am getting the following error.

pg_dump: error: error reading large object 2113418:

pg_dump: error: could not open large object 3391830:


What user are you running pg_dump as?

What version of pg_dump?



I tried to give this command DB name = abcefg

ALTER DATABASE abcefgd SET lo_compat_privileges=on;

and reran, and once again I am getting the same error

while using psql/pg_dump from the old server running 13.18 [ 
OS = Amazon Linux release 2 (Karoo) ].


It is either psql or pg_dump. psql is the CLI client for the Postgres 
server. If you are using psql as an alias for Postgres(sql), don't; it 
only adds confusion.




We will use pg_dump and pg_restore to restore into the new VM with the new OS [OS = 
Amazon Linux 2023] and the new DB binary, pgsql ver 15.09.


We were told by the AWS team that in the new VM's OS [OS = Amazon Linux 
2023], pgsql ver 13.16 is not supported


Not sure why? It is still a community-supported version and will be through 
November 2025.




*I cross checked*
SELECT oid, count(*)  FROM pg_largeobject_metadata group by oid order by 
oid ;

Rows =  4260170 rows

Can you suggest anything?


On Wednesday, December 11, 2024 at 03:57:31 PM EST, Adrian Klaver 
 wrote:



On 12/11/24 11:12, Bharani SV-forum wrote:
 > Team
 > As suggested from old server, post shutdown of DB, I did OS level dump
 > of PG_DATA folder and had restored in the new server.

If you follow the process shown here:

https://www.postgresql.org/message-id/CAKAnmmKZdhnhdNRd3OgDyEco9OPkT%3DqA_TeWMFMRvUM9pXauKg%40mail.gmail.com
 


You would not have to do the below.

 >
 > Any idea on how to install the older binary postgres 13.18 ( OS=Amazon
 > Linux 2023.6.20241121) under a dedicated folder suffixed as the
 > following e.g.) /usr/pgsql1318
 >
 > System Admin had already installed newer version pgsql 15.08 binaries in
 > the  new server (OS= Amazon Linux 2023.6.20241121) in the folder 
"/usr/bin/"

 >
 > We were quoted, OS = Amazon Linux 2023.6.20241121 does not support
 > postgres ver 15.10 (Community edition) under its AWS-EC2.

That does not reflect well on Amazon Linux, that it is missing two
critical bug releases.


 >
 > Regards
 >
 >
 >

--
Adrian Klaver
adrian.kla...@aklaver.com 





--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Additional Help in vetting my steps for Postgres DB upgrade from Ver 13.X to ver 15.X

2024-12-16 Thread Bharani SV-forum
a) user = postgres
b) pg_dump version = /usr/bin/pg_dump -V
pg_dump (PostgreSQL) 13.16
c)
DB version
select version();
                                                  version
--------------------------------------------------------------------------------
 PostgreSQL 13.16 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 
(Red Hat 4.8.5-44), 64-bit
We use this script for backup:
pg_dump -Fp -p 5432 -U "$USERNAME" "$DATABASE" 

using username = postgres
For one of the DBs (ver 13.16), it worked fine: pg_dump from ver 13.16 on the old 
VM, then restoring in the new VM with the new OS and new DB binary 15.09, after 
creating a dummy (application-related) DB and restoring the pg_dump from the old VM.
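(If it helps narrow things down, a sketch of the same backup in custom format, with assumed file names: a custom-format archive lets pg_restore report errors per object and continue, or restore selected items via a list.)

```
pg_dump -Fc -p 5432 -U "$USERNAME" "$DATABASE" -f "$DATABASE".dump
pg_restore -p 5432 -U "$USERNAME" -d "$DATABASE" --no-owner "$DATABASE".dump
```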
 On Monday, December 16, 2024 at 05:19:31 PM EST, Adrian Klaver 
 wrote:  
 
 On 12/16/24 13:19, Bharani SV-forum wrote:
> Team
> I am getting the following error.
> 
> pg_dump: error: error reading large object 2113418:
> 
> pg_dump: error: could not open large object 3391830:

What user are you running pg_dump as?

What version of pg_dump?

> 
> I tried to give this command DB name = abcefg
> 
> ALTER DATABASE abcefgd SET lo_compat_privileges=on;
> 
> and reran and once again , i am getting the same error
> 
> while doing using psql/pg_dump from old version server running 13.18 [ 
> OS = Amazon Linux release 2 (Karoo) ].

It is either psql or pg_dump. psql is the CLI client for the Postgres 
server. If you are using psql as an alias for Postgres(sql), don't; it 
only adds confusion.

> 
> Will be pg_dump and pg_restore to restore in the new VM with new OS [OS= 
> amazon linux 2023] and new DB bin pgsql ver 15.09.
> 
> We were told by AWS team, in the new VM tagged OS [OS= amazon linux 
> 2023] , pgsql Ver 13.16 is not supported

Not sure why? It is still a community-supported version and will be through 
November 2025.

> 
> *I cross checked*
> SELECT oid, count(*)  FROM pg_largeobject_metadata group by oid order by 
> oid ;
> Rows =  4260170 rows
> 
> Can you suggest
> 
> 
> On Wednesday, December 11, 2024 at 03:57:31 PM EST, Adrian Klaver 
>  wrote:
> 
> 
> On 12/11/24 11:12, Bharani SV-forum wrote:
>  > Team
>  > As suggested from old server, post shutdown of DB, I did OS level dump
>  > of PG_DATA folder and had restored in the new server.
> 
> If you follow the process shown here:
> 
> https://www.postgresql.org/message-id/CAKAnmmKZdhnhdNRd3OgDyEco9OPkT%3DqA_TeWMFMRvUM9pXauKg%40mail.gmail.com
>  
> 
> 
> You would not have to do the below.
> 
>  >
>  > Any idea on how to install the older binary postgres 13.18 ( OS=Amazon
>  > Linux 2023.6.20241121) under a dedicated folder suffixed as the
>  > following e.g.) /usr/pgsql1318
>  >
>  > System Admin had already installed newer version pgsql 15.08 binaries in
>  > the  new server (OS= Amazon Linux 2023.6.20241121) in the folder 
> "/usr/bin/"
>  >
>  > We were quoted, OS = Amazon Linux 2023.6.20241121 does not support
>  > postgres ver 15.10 (Community edition) under its AWS-EC2.
> 
> That does not reflect well on Amazon Linux, that it is missing two
> critical bug releases.
> 
> 
>  >
>  > Regards
>  >
>  >
>  >
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com 
> 
> 
> 

-- 
Adrian Klaver
adrian.kla...@aklaver.com



  

Re: Additional Help in vetting my steps for Postgres DB upgrade from Ver 13.X to ver 15.X

2024-12-16 Thread Adrian Klaver

On 12/16/24 14:30, Bharani SV-forum wrote:

*a) *
*user = *
postgres

b)
*pg_dump version = *
/usr/bin/pg_dump -V

pg_dump (PostgreSQL) 13.16

c)

*DB version*

select version () ;
                                                  version
--
  PostgreSQL 13.16 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 
20150623 (Red Hat 4.8.5-44), 64-bit


*use this script for backup*

pg_dump -Fp -p 5432 -U "$USERNAME" "$DATABASE"

using username = postgres

for one of the DB (ver 13.16), it worked fine by doing oldvm = pg_dump 
from ver 13.16 and
later restoring in new VM with new OS and new db binary 15.09, post 
creating dummy db (appln related) and restoring the pg_dump from oldvm .




That's nice, but the issue is the case that did not work.

What process were you running that caused the error?


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Additional Help in vetting my steps for Postgres DB upgrade from Ver 13.X to ver 15.X

2024-12-16 Thread Bharani SV-forum
Team
Being a dev server, I noticed we hadn't performed the analyze/vacuum process.
I re-triggered VACUUM FULL and ANALYZE for all the application-related DBs
and reran the backup.
No issues appeared during the backup.
I have not yet performed the restoration into pgsql ver 15.09 in the new VM with the different OS.
Thank you for guiding me.

On Monday, December 16, 2024 at 05:49:28 PM EST, Adrian Klaver 
 wrote:  
 
 On 12/16/24 14:30, Bharani SV-forum wrote:
> *a) *
> *user = *
> postgres
> 
> b)
> *pg_dump version = *
> /usr/bin/pg_dump -V
> 
> pg_dump (PostgreSQL) 13.16
> 
> c)
> 
> *DB version*
> 
> select version () ;
>                                                   version
> --
>   PostgreSQL 13.16 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 
> 20150623 (Red Hat 4.8.5-44), 64-bit
> 
> *use this script for backup*
> 
> pg_dump -Fp -p 5432 -U "$USERNAME" "$DATABASE"
> 
> using username = postgres
> 
> for one of the DB (ver 13.16), it worked fine by doing oldvm = pg_dump 
> from ver 13.16 and
> later restoring in new VM with new OS and new db binary 15.09, post 
> creating dummy db (appln related) and restoring the pg_dump from oldvm .
> 

That's nice, but the issue is the case that did not work.

What process were you running that caused the error?


-- 
Adrian Klaver
adrian.kla...@aklaver.com



  

Re: Credcheck- credcheck.max_auth_failure

2024-12-16 Thread Ron Johnson
On Mon, Dec 16, 2024 at 8:10 AM Greg Sabino Mullane 
wrote:

> On Mon, Dec 16, 2024 at 5:32 AM 張宸瑋  wrote:
>
>> We have both regular accounts and system accounts. For regular accounts,
>> we still require password complexity and the lockout functionality after
>> multiple failed login attempts.
>>
>
> Again, what is the threat model here?
>

I would not be surprised if the "threat model" is security auditors.


> Most people have their password in a .pgpass file or similar, so it seems
> this only adds complexity and annoyance without any real benefit.
>

Mostly, people *do not* log into our PG instances. 99% of connections are
from application service accounts via JDBC.

-- 
Death to , and butter sauce.
Don't boil me, I'm still alive.
 lobster!


Re: Credcheck- credcheck.max_auth_failure

2024-12-16 Thread Ron Johnson
Local (socket-based) connections are typically peer-authenticated (meaning
that authentication is handled by Linux pam).

Thus, if someone enters too many wrong passwords for a superuser account,
you *should* still be able to locally connect to PG.

Better test it, though.
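A quick sketch of such a test (the socket directory is an assumption and varies by distro): connect over the local Unix socket as the postgres OS user, which goes through peer authentication and never touches the account's password:

sudo -u postgres psql -h /var/run/postgresql -d postgres -c 'SELECT current_user;'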

On Mon, Dec 16, 2024 at 5:32 AM 張宸瑋  wrote:

> We have both regular accounts and system accounts. For regular accounts,
> we still require password complexity and the lockout functionality after
> multiple failed login attempts. However, for system accounts, due to
> information security regulations, password complexity is also required. The
> issue is that system accounts are used for system integration, and if the
> account gets locked, it may affect system services, which could lead to
> problems. To prevent this, we would like to exclude system accounts from
> being affected by the credcheck.max_auth_failure parameter.
>
>
> Peter J. Holzer 於 2024年12月14日 週六,上午4:24寫道:
>
>> On 2024-12-11 13:43:38 -0500, Ron Johnson wrote:
>> > On Wed, Dec 11, 2024 at 12:57 PM Greg Sabino Mullane <
>> htamf...@gmail.com>
>> > wrote:
>> >
>> > On Wed, Dec 11, 2024 at 5:46 AM 張宸瑋  wrote:
>> >
>> > In the use of the Credcheck suite, the parameter
>> > "credcheck.max_auth_failure = '3'" is set in the
>> postgresql.conf file
>> > to limit users from entering incorrect passwords more than
>> three times,
>> > after which their account will be locked.
>> >
>> >
>> > Won't that allow absolutely anyone to lock out anyone else,
>> including
>> > admins/superusers? Sounds like a bad idea to me.
>> >
>> >
>> > Isn't this a pretty common password setting?
>>
>> Yes, but that doesn't mean it's a good idea.
>>
>> Actually, let me tease that apart a bit.
>>
>> It is very common for the setting to exist (probably just about any OS
>> and many applications, too), but much less common for it to be turned on.
>>
>> There are good reasons for that.
>>
>> Limiting the number of failed attempts makes a lot of sense for debit
>> cards: The PINs are short enough that a person could bruteforce all
>> combinations and that typos are uncommon. So multiple failed attempts
>> probably mean that the card was stolen. There is also no way to DOS
>> somebody, since you need the card before you can enter the PIN.
>>
>> It may have made a bit of sense in the 1980s, when most people had short
>> and easily guessable passwords and hosts were typically only accessible
>> from directly connected terminals and not from the internet.
>>
>> But it really doesn't make much sense now: Passwords should be so long
>> that brute-forcing them via login attempts is completely futile. Either
>> the attacker knows the password (then the limit doesn't help), or they
>> won't guess it in a million attempts (so the limit doesn't help either).
>> OTOH, the limit gives an attacker a very simple way to deny the service to
>> the legitimate user: Just enter a bogus password three times and boom -
>> account locked. (That threat can be mitigated by applying the limit per
>> IP address - but the attacker may have a botnet with a million nodes,
>> making the limit ineffective.)
>>
>> hp
>>
>> --
>>_  | Peter J. Holzer| Story must make more sense than reality.
>> |_|_) ||
>> | |   | h...@hjp.at |-- Charles Stross, "Creative writing
>> __/   | http://www.hjp.at/ |   challenge!"
>>
>

-- 
Death to , and butter sauce.
Don't boil me, I'm still alive.
 lobster!


Re: Credcheck- credcheck.max_auth_failure

2024-12-16 Thread Ron Johnson
On Mon, Dec 16, 2024 at 10:19 AM Peter J. Holzer  wrote:

> On 2024-12-16 09:17:25 -0500, Ron Johnson wrote:
> > Local (socket-based) connections are typically peer-authenticated
> > (meaning that authentication is handled by Linux pam).
>  ^^^
> Is it? I haven't checked the source code, but this doesn't seem
> plausible. You can get the uid of a socket peer directly from the
> kernel, which can be converted to a user name via getpwuid, and the
> mapping to postgresql roles is done via pg_ident.conf. I see no role for
> PAM in that path.
>

https://www.postgresql.org/docs/16/auth-peer.html

"
The peer authentication method works by obtaining the client's operating
system user name from the kernel and using it as the allowed database user
name (with optional user name mapping). This method is only supported on
local connections.
[snip]
Peer authentication is only available on operating systems providing the
getpeereid() function, the SO_PEERCRED socket parameter, or similar
mechanisms. Currently that includes Linux, most flavors of BSD including
macOS, and Solaris.
"

That means pam (and presumably also ldap and sssd), since there must be an
OS user with the same name, and OS authentication is handled by pam, ldap
and sssd.

$ grep peer "$PGDATA"/pg_hba.conf
local   all all   peer


>
> > Thus, if someone enters too many wrong passwords for a superuser
> > account, you should still be able to locally connect to PG.
>
> True. But the client may not be on the same machine.
>
> hp
>
> --
>_  | Peter J. Holzer| Story must make more sense than reality.
> |_|_) ||
> | |   | h...@hjp.at |-- Charles Stross, "Creative writing
> __/   | http://www.hjp.at/ |   challenge!"
>


-- 
Death to , and butter sauce.
Don't boil me, I'm still alive.
 lobster!


Re: Credcheck- credcheck.max_auth_failure

2024-12-16 Thread Greg Sabino Mullane
On Mon, Dec 16, 2024 at 5:32 AM 張宸瑋  wrote:

> We have both regular accounts and system accounts. For regular accounts,
> we still require password complexity and the lockout functionality after
> multiple failed login attempts.
>

Again, what is the threat model here? Most people have their password in a
.pgpass file or similar, so it seems this only adds complexity and
annoyance without any real benefit.

However, for system accounts, due to information security regulations,
> password complexity is also required.
>

Yes, this makes sense.


> The issue is that system accounts are used for system integration, and if
> the account gets locked, it may affect system services, which could lead to
> problems. To prevent this, we would like to exclude system accounts from
> being affected by the credcheck.max_auth_failure parameter.
>

I think we all understand that, but the extension as it exists now cannot
do that. And the obvious and easiest solution is to stop using the denial
of service feature, which I am hoping is NOT mandated by security
regulations.

Cheers,
Greg


Re: Credcheck- credcheck.max_auth_failure

2024-12-16 Thread Peter J. Holzer
On 2024-12-16 09:17:25 -0500, Ron Johnson wrote:
> Local (socket-based) connections are typically peer-authenticated
> (meaning that authentication is handled by Linux pam).
 ^^^
Is it? I haven't checked the source code, but this doesn't seem
plausible. You can get the uid of a socket peer directly from the
kernel, which can be converted to a user name via getpwuid, and the
mapping to postgresql roles is done via pg_ident.conf. I see no role for
PAM in that path.
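For illustration, a sketch of that path with hypothetical names: a pg_hba.conf line using peer authentication with a user name map, plus the matching pg_ident.conf entry.

# pg_hba.conf: local connections for this role use peer auth with a map
local   appdb   app_service   peer map=appmap

# pg_ident.conf: map the OS account "appuser" to the Postgres role "app_service"
# MAPNAME   SYSTEM-USERNAME   PG-USERNAME
appmap      appuser           app_service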

> Thus, if someone enters too many wrong passwords for a superuser
> account, you should still be able to locally connect to PG.

True. But the client may not be on the same machine.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"


signature.asc
Description: PGP signature


Re: Credcheck- credcheck.max_auth_failure

2024-12-16 Thread Peter J. Holzer
On 2024-12-16 18:32:34 +0800, 張宸瑋 wrote:
> We have both regular accounts and system accounts. For regular accounts, we
> still require password complexity and the lockout functionality after multiple
> failed login attempts. However, for system accounts, due to information
> security regulations, password complexity is also required. The issue is that
> system accounts are used for system integration, and if the account gets
> locked, it may affect system services, which could lead to problems. To 
> prevent
> this, we would like to exclude system accounts from being affected by the
> credcheck.max_auth_failure parameter.

Just in case it wasn't clear: My recommendation is to NOT use the
credcheck.max_auth_failure parameter for ANY account. It just causes
problems and doesn't really help. If you can't trust your users to
choose sufficiently strong passwords, use a second factor. Or maybe
replace passwords with some other method (public keys, FIDO, ...)
altogether (in fact, I'd do that for system accounts).
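One possible shape of that for a system account, sketched with hypothetical names (certificate authentication in pg_hba.conf, so no password is involved at all):

# The service account authenticates with an SSL client certificate whose
# CN matches the role name; no password is used for this connection.
hostssl   appdb   app_service   10.0.0.0/24   cert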

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"


signature.asc
Description: PGP signature


Re: Credcheck- credcheck.max_auth_failure

2024-12-16 Thread 張宸瑋
We have both regular accounts and system accounts. For regular accounts, we
still require password complexity and the lockout functionality after
multiple failed login attempts. However, for system accounts, due to
information security regulations, password complexity is also required. The
issue is that system accounts are used for system integration, and if the
account gets locked, it may affect system services, which could lead to
problems. To prevent this, we would like to exclude system accounts from
being affected by the credcheck.max_auth_failure parameter.


Peter J. Holzer 於 2024年12月14日 週六,上午4:24寫道:

> On 2024-12-11 13:43:38 -0500, Ron Johnson wrote:
> > On Wed, Dec 11, 2024 at 12:57 PM Greg Sabino Mullane  >
> > wrote:
> >
> > On Wed, Dec 11, 2024 at 5:46 AM 張宸瑋  wrote:
> >
> > In the use of the Credcheck suite, the parameter
> > "credcheck.max_auth_failure = '3'" is set in the postgresql.conf
> file
> > to limit users from entering incorrect passwords more than three
> times,
> > after which their account will be locked.
> >
> >
> > Won't that allow absolutely anyone to lock out anyone else, including
> > admins/superusers? Sounds like a bad idea to me.
> >
> >
> > Isn't this a pretty common password setting?
>
> Yes, but that doesn't mean it's a good idea.
>
> Actually, let me tease that apart a bit.
>
> It is very common for the setting to exist (probably just about any OS
> and many applications, too), but much less common for it to be turned on.
>
> There are good reasons for that.
>
> Limiting the number of failed attempts makes a lot of sense for debit
> cards: The PINs are short enough that a person could bruteforce all
> combinations and that typos are uncommon. So multiple failed attempts
> probably mean that the card was stolen. There is also no way to DOS
> somebody, since you need the card before you can enter the PIN.
>
> It may have made a bit of sense in the 1980s, when most people had short
> and easily guessable passwords and hosts were typically only accessible
> from directly connected terminals and not from the internet.
>
> But it really doesn't make much sense now: Passwords should be so long
> that brute-forcing them via login attempts is completely futile. Either
> the attacker knows the password (then the limit doesn't help), or they
> won't guess it in a million attempts (so the limit doesn't help either).
> OTOH, the limit gives an attacker a very simple way to deny the service to
> the legitimate user: Just enter a bogus password three times and boom -
> account locked. (That threat can be mitigated by applying the limit per
> IP address - but the attacker may have a botnet with a million nodes,
> making the limit ineffective.)
>
> hp
>
> --
>_  | Peter J. Holzer| Story must make more sense than reality.
> |_|_) ||
> | |   | h...@hjp.at |-- Charles Stross, "Creative writing
> __/   | http://www.hjp.at/ |   challenge!"
>


Disabling vacuum truncate for autovacuum

2024-12-16 Thread Will Storey
Hi!

I would like to disable vacuum's truncate behaviour for autovacuum.
Previously I had an outage due to its access exclusive lock when it was
replicated to a hot standby.

When that outage happened it was from a VACUUM call in a cronjob rather
than autovacuum. I now run such VACUUMs with TRUNCATE false which avoids
the issue for these. However I've realized that autovacuum could cause this
as well. This is of interest to me because I'm looking at tuning autovacuum
and getting rid of the cronjob, but I've realized relying on autovacuum
could be dangerous because of the truncates.

I believe the only way to disable this for autovacuum is by changing the
vacuum_truncate storage parameters on tables. (Ignoring the now removed
old_snapshot_threshold option). I'm thinking of altering all my tables to
turn it off. Is this a horrible idea? I expect I would need to monitor
tables for problematic growth, but that might be better than a surprise
outage. I suppose the growth could cause an outage too, but I'm thinking it
would be more controllable.
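For concreteness, the per-table form I mean (the table name is a placeholder):

ALTER TABLE some_table SET (vacuum_truncate = off);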

Would I need to disable the settings on catalog tables too? (To rule out
any possibility of it happening). Are there any other things I might be
missing?

I am also wondering if having an autovacuum setting to control it would be
a good idea for a feature. That would be simpler for me than altering all
my tables and help me avoid missing any (e.g. catalogs, new tables).

I might be worrying needlessly about this as maybe it is unlikely to
happen. I suppose it is workload dependent.

Thank you!

Will




How to deal with dangling files after aborted `pg_restore`?

2024-12-16 Thread Ivan Kurnosov
The scenario:

1. There is a postgresql 17 server running
2. Restore dump with `--single-transaction` flag
3. For whatever reason the server goes away (eg: we kill the process)
4. Now the `base` directory is filled with abandoned table files which
PostgreSQL knows nothing about

Playground:

Terminal 1:
Start server: `docker run --rm -it -e POSTGRES_HOST_AUTH_METHOD=trust -v
$PWD/postgres:/var/lib/postgresql/data postgres:17.2`

Terminal 2:
1. Start container with recent pg_restore: `docker run --rm -it -v
$PWD:/app -w /app postgres:17.2 bash`
2. Start dump recovery: `pg_restore -h 172.17.0.2 -U postgres -d postgres
--single-transaction -v -Fc --no-owner dump.sql`

Terminal 3:
1. Find what container is the server: `docker ps`
2. Kill it: `docker kill d7ecf6e66c1d`

Terminal 1:
Start the server again, with the same command

Terminal 3:
Check there are abandoned large files:
```
# ls -la /home/ivan/postgres/base/5

-rw--- 1 systemd-coredump systemd-coredump 342884352 Dec 17 18:58 16399
-rw--- 1 systemd-coredump systemd-coredump  11149312 Dec 17 18:58 16404
-rw--- 1 systemd-coredump systemd-coredump188416 Dec 17 18:58
16403_fsm
-rw--- 1 systemd-coredump systemd-coredump 686145536 Dec 17 18:58 16403
```

Terminal 2:
1. Confirm those OIDs are not accounted for in pg_class:
```
psql -h 172.17.0.2 -U postgres -c 'select * from pg_class'|grep 16404
psql -h 172.17.0.2 -U postgres -c 'select * from pg_class'|grep 16403
psql -h 172.17.0.2 -U postgres -c 'select * from pg_class'|grep 16399
```

Hence a question: am I doing something wrong? Is it expected behaviour? If
so, how would one recover from this scenario now WITHOUT dropping the entire
database?
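For reference, a sketch of a more direct comparison (illustrative only; run it inside the affected database, then diff against the filenames under base/<db oid>, ignoring _fsm/_vm and segment suffixes):

```
-- Filenodes the current database knows about; files not in this list
-- are candidates for being orphaned.
SELECT pg_relation_filenode(oid::regclass) AS filenode, relname
FROM pg_class
WHERE pg_relation_filenode(oid::regclass) IS NOT NULL
ORDER BY 1;
```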

-- 
With best regards, Ivan Kurnosov


Re: Disabling vacuum truncate for autovacuum

2024-12-16 Thread Laurenz Albe
On Mon, 2024-12-16 at 16:25 -0800, Will Storey wrote:
> I would like to disable vacuum's truncate behaviour for autovacuum.
> Previously I had an outage due to its access exclusive lock when it was
> replicated to a hot standby.
> 
> When that outage happened it was from a VACUUM call in a cronjob rather
> than autovacuum. I now run such VACUUMs with TRUNCATE false which avoids
> the issue for these. However I've realized that autovacuum could cause this
> as well.
>
> I believe the only way to disable this for autovacuum is by changing the
> vacuum_truncate storage parameters on tables. (Ignoring the now removed
> old_snapshot_threshold option).

Yes, you can only do that table by table.
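
A rough sketch of doing that in bulk, if you go that route (untested here; it
assumes the role running it may ALTER every listed table):

DO $$
DECLARE
    t regclass;
BEGIN
    FOR t IN
        SELECT c.oid::regclass
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relkind = 'r'
          AND n.nspname NOT IN ('pg_catalog', 'information_schema')
    LOOP
        -- disable vacuum truncation for this table
        EXECUTE format('ALTER TABLE %s SET (vacuum_truncate = off)', t);
    END LOOP;
END
$$;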


> I'm thinking of altering all my tables to
> turn it off. Is this a horrible idea? I expect I would need to monitor
> tables for problematic growth, but that might be better than a surprise
> outage. I suppose the growth could cause an outage too, but I'm thinking it
> would be more controllable.

I don't see a problem with disabling VACUUM truncation for normal workloads.
Some applications, like volatile queue tables, might need the feature, but
I'd assume that to be the exception.

> Would I need to disable the settings on catalog tables too? (To rule out
> any possibility of it happening). Are there any other things I might be
> missing?

Potentially yes.  But unless you are using temporary tables or create,
alter and drop lots of objects, that shouldn't be necessary.

> I am also wondering if having an autovacuum setting to control it would be
> a good idea for a feature.

I'm all for that.

Yours,
Laurenz Albe