Install pg_dump and pg_restore on UBI8 and UBI8-minimal

2022-01-10 Thread Torello Querci
Hello,

I'm trying to install pg_dump and pg_restore for PostgreSQL 12 on a Docker
image based on UBI8 and UBI8-minimal, but I can't find any resources on how to
do that.
Are there any packages available, or do I have to build them myself?

Thanks


CI/CD Boilerplate for extension binaries without source code ?

2022-01-10 Thread Markur Sens
Hi, 

For a gig, I’m developing a small Postgres extension, adding a few data types 
and some functions (some of them written in C and some of them written in 
plpython3).

Now, my client doesn’t want the source code to be visible and/or open, even 
internally to other teams; they just want a bunch of yum/deb/apt packages to be 
distributed to servers & users internally.
Docker is also an option, but I prefer to rely just on my Makefile (using PGXS) 
for now and produce deb/apt/yum packages.

Can you recommend any CI/CD boilerplate that can help with this? Especially the 
plpython3 part can be tricky, as the CREATE FUNCTION body obviously contains 
visible Python code and is included in the extension--0.1.sql script that goes 
under /share/postgresql/extension/.
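
For illustration, this is roughly what ends up in the install script (a
hypothetical function; the name and body are made up, not the real code):

```sql
-- my_ext--0.1.sql (hypothetical): installed verbatim under SHAREDIR/extension/,
-- so anyone who can read the package can read the Python body below.
CREATE FUNCTION my_ext_score(x double precision)
RETURNS double precision
LANGUAGE plpython3u
AS $$
    # the whole Python implementation travels as plain text in the .sql script
    return x * 2.0
$$;
```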

Most of the sophisticated extensions (PostGIS, Citus) I’ve looked at either have 
overly complex CI/CD processes or ship their source code with them. And the 
plpython part seems fairly unusual, I think.


Database stuck in a state where vacuum and vacuumdb is not working

2022-01-10 Thread jonathaan67
Hello.
My database server is stuck in a state where VACUUM and vacuumdb are not working.
Yesterday I saw this error:

> HINT: To avoid a database shutdown, execute a full-database VACUUM in 
> "production_services"
> WARNING: database "production_services" must be vacuumed within 10059807 
> transactions

I stopped all services using the database and tried to launch vacuumdb, which 
immediately failed with this error message:

> vacuumdb: error: processing of database "production_services" failed: ERROR: 
> invalid page in block 38919 of relation base/31893/272925691

I continued by manually executing VACUUM:

> SET zero_damaged_pages=on;
> VACUUM(FULL, VERBOSE, ANALYZE) queue;

It ran fine for some time, until the connection was terminated and I 
stopped/started the server; after that nothing is working. vacuumdb gives the 
same error about the invalid page, but manually running the VACUUM SQL command 
with zero_damaged_pages gives this error:

> ERROR: database is not accepting commands to avoid wraparound data loss in 
> database "production_services"
> HINT: Stop the postmaster and vacuum that database in single-user mode.
> You might also need to commit or roll back old prepared transactions, or drop 
> stale replication slots.
> SQL state: 54000

And the PostgreSQL log is now filling with this warning:

> WARNING: database "production_services" must be vacuumed within 99 
> transactions

So vacuumdb and VACUUM with zero_damaged_pages=on are not working anymore.

How can I get out of this state and force the database to accept an incoming 
VACUUM command while zeroing damaged pages? Is there any way to make the 
vacuumdb command line use "zero_damaged_pages=on"?
Some data loss on that table is acceptable and is not a problem.

Running "PostgreSQL 13.2, compiled by Visual C++ build 1914, 64-bit" on Windows 
10, Database size is over 1,5 TB, damaged table ~350 GB

Thank you!

Re: Database stuck in a state where vacuum and vacuumdb is not working

2022-01-10 Thread Laurenz Albe
On Mon, 2022-01-10 at 13:19 +, jonathaan67 wrote:
> My database server is stuck in a state where VACUUM and vacuumdb are not working.
> Yesterday I saw this error:
> 
> > HINT: To avoid a database shutdown, execute a full-database VACUUM in 
> > "production_services"
> > WARNING: database "production_services" must be vacuumed within 10059807 
> > transactions
> 
> I stopped all services using the database and tried to launch vacuumdb, which 
> immediately failed with this error message:
> 
> > vacuumdb: error: processing of database "production_services" failed: 
> > ERROR:  invalid page in block 38919 of relation base/31893/272925691
> 
> I continued by manually executing VACUUM:
> 
> > SET zero_damaged_pages=on;
> > VACUUM(FULL, VERBOSE, ANALYZE) queue;
> 
> It ran fine for some time, until the connection was terminated and I 
> stopped/started the server; after that nothing is working. vacuumdb gives the 
> same error about the invalid page, but manually running the VACUUM SQL command 
> with zero_damaged_pages gives this error:
> 
> > ERROR:  database is not accepting commands to avoid wraparound data loss in 
> > database "production_services"
> > HINT:  Stop the postmaster and vacuum that database in single-user mode.
> > You might also need to commit or roll back old prepared transactions, or 
> > drop stale replication slots.
> > SQL state: 54000
>  
> And the PostgreSQL log is now filling with this warning:
> 
> > WARNING:  database "production_services" must be vacuumed within 99 
> > transactions
> 
> So vacuumdb and VACUUM with zero_damaged_pages=on are not working anymore.
> 
> How can I get out of this state and force the database to accept an incoming 
> VACUUM command while zeroing damaged pages? Is there any way to make the 
> vacuumdb command line use "zero_damaged_pages=on"?
> Some data loss on that table is acceptable and is not a problem.
> 
> Running "PostgreSQL 13.2, compiled by Visual C++ build 1914, 64-bit" on 
> Windows 10, Database size is over 1,5 TB,
> damaged table ~350 GB

You should use single-user mode, like the error message recommends.
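
A rough sketch of that path (not verified on Windows; the data directory path
below is an assumption, so adjust it to your installation):

```sql
-- With the service stopped, start a standalone backend against the affected
-- database from a command prompt:
--   postgres --single -D "C:\pgdata" production_services
-- Then, at the backend> prompt:
SET zero_damaged_pages = on;  -- only because losing the damaged pages is acceptable here
VACUUM;                       -- a database-wide VACUUM advances the wraparound horizon
```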

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Install pg_dump and pg_restore on UBI8 and UBI8-minimal

2022-01-10 Thread Devrim Gündüz

Hi,

On Mon, 2022-01-10 at 09:02 +0100, Torello Querci wrote:
> 
> I'm trying to install pg_dump and pg_restore for PostgreSQL 12 on a
> Docker image based on UBI8 and UBI8-minimal, but I can't find any
> resources on how to do that.
> Are there any packages available, or do I have to build them myself?

I once built v13 and v14 RPMs for UBI, but have not tested them yet:

https://download.postgresql.org/pub/repos/yum/testing/13/redhat/rpm13-rhel-8-nosystemd-x86_64/

https://download.postgresql.org/pub/repos/yum/testing/14/redhat/rpm14-rhel-8-nosystemd-x86_64/

v13's and v14's pg_dump and pg_restore will work against v12, so you
may want to try these :-)

Please let me know if you see any problems.

Regards,
-- 
Devrim Gündüz
Open Source Solution Architect, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR




DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread Dominique Devienne
I'm trying to DROP a ROLE that has 4 schemas:
* 2 smallish ones (1 or 2 dozen tables each),
* 2 largish ones (250 tables, totalling around 4000 columns each).

And of course there are various indexes, constraints, etc... on each schema.

This fails with:

DDL Error: DROP OWNED BY "Epos-PNS (a73e1fbdcc5b4460ba55703a0d6b6ad7)"
CASCADE: #53200: ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.

And please note that there could be dozens, even hundreds, of largish schemas
associated with the dropped ROLE (2 + N), not just the 2+2 it fails with here.

So how can I determine a max_locks_per_transaction, when the number of
tables is basically unbounded?

From a PostgreSQL newbie's perspective, this feels like a serious limitation
of DROP OWNED BY, if it is limited by how many schemas/tables it has to DROP.

Will I need to DROP each largish schema individually???

Thanks for any guidance. --DD

PS: Also note that in this case, the SCHEMAS are mostly empty (just ~ 200
rows per schema).
  But in production, there could be thousands / millions of rows per SCHEMA.


Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread Tom Lane
Dominique Devienne  writes:
> I'm trying to DROP a ROLE that has 4 schemas:
> * 2 smallish ones (1 or 2 dozen tables each),
> * 2 largish ones (250 tables, totalling around 4000 columns each).

> And of course there are various indexes, constraints, etc... on each schema.

You're going to need a lock per dropped relation.  The number of
columns or rows doesn't enter into it, but the number of indexes does.

> DDL Error: DROP OWNED BY "Epos-PNS (a73e1fbdcc5b4460ba55703a0d6b6ad7)"
> CASCADE: #53200: ERROR:  out of shared memory
> HINT:  You might need to increase max_locks_per_transaction.

I'd not have expected that when dropping 500-or-so tables, but maybe
you have a lot of indexes per table?

> And please note that there could be dozens even hundreds of largish schemas
> associated to the dropped ROLE (2 + N), not just the 2+2 it fails with here.

There's not a lot of penalty to increasing max_locks_per_transaction,
but no you can't make it "unbounded".

regards, tom lane




Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread Dominique Devienne
I accidentally replied to Tom only. Here's my response again.

Apparently, I'm using the default max_locks_per_transaction:
```
ddevienne=> show max_locks_per_transaction ;
 max_locks_per_transaction
---
 64
(1 row)
```

Given max_locks_per_transaction * (max_connections + max_prepared_transactions)
from https://www.postgresql.org/docs/current/runtime-config-locks.html, and
max_conn being 100, that's not many locks.
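
Spelling the arithmetic out with the defaults, just to check my understanding
(the values below are the stock defaults, which match what I see here):

```sql
-- Shared lock-table capacity =
--   max_locks_per_transaction * (max_connections + max_prepared_transactions)
-- With the defaults that is 64 * (100 + 0) = 6400 object locks for the whole cluster.
SHOW max_locks_per_transaction;    -- 64
SHOW max_connections;              -- 100
SHOW max_prepared_transactions;    -- 0
```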

Tom wrote "relation" for the number of locks necessary for DROP OWNED BY.
What does it mean in this context? relation = table?

Given there are only 64 locks per conn by default, how can this work with
over 100 tables?
I'm confused... --DD

On Mon, Jan 10, 2022 at 7:06 PM Dominique Devienne 
wrote:

> On Mon, Jan 10, 2022 at 6:39 PM Tom Lane  wrote:
>
>> Dominique Devienne  writes:
>> > I'm trying to DROP a ROLE that has 4 schemas:
>> > * 2 smallish ones (1 or 2 dozen tables each),
>> > * 2 largish ones (250 tables, totalling around 4000 columns each).
>>
>> > And of course there are various indexes, constraints, etc... on each
>> schema.
>>
>> You're going to need a lock per dropped relation.  The number of
>> columns or rows doesn't enter into it, but the number of indexes does.
>>
>
> Here are the current stats of the largish schema:
>
> const size_t expected_table_count = 244;
> const size_t expected_index_count = 409;
> const size_t expected_unique_index_count = 181;
> const size_t expected_cnstr_count = 989;
> const size_t expected_pk_cnstr_count = 243;
> const size_t expected_fk_cnstr_count = 506;
> const size_t expected_check_cnstr_count = 64;
> const size_t expected_unique_cnstr_count = 176;
>
> > DDL Error: DROP OWNED BY "Epos-PNS (a73e1fbdcc5b4460ba55703a0d6b6ad7)"
>> > CASCADE: #53200: ERROR:  out of shared memory
>> > HINT:  You might need to increase max_locks_per_transaction.
>>
>> I'd not have expected that when dropping 500-or-so tables, but maybe
>> you have a lot of indexes per table?
>>
>
> See above for the total.
> Since we use ON DELETE CASCADE and FKs, and coming from Oracle, I do index
> all my FKs...
>
>
>> > And please note that there could be dozens even hundreds of largish
>> schemas
>> > associated to the dropped ROLE (2 + N), not just the 2+2 it fails with
>> here.
>>
>> There's not a lot of penalty to increasing max_locks_per_transaction,
>> but no you can't make it "unbounded".
>>
>
> Is the HINT valid? How do I determine the current value, and change it?
> Could it be some other problem?
>

> Dropping each largish schema individually is certainly possible, but again,
> coming from Oracle, I'm used to making transactions as big as they logically
> need to be, with few physical limitations.
> The action is to drop all schemas of that instance of the "system", which has
> 2+N schemas. So the fact that I'd need to use several transactions to work
> around max_locks_per_transaction is a bummer... --DD
>


Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread Adrian Klaver

On 1/10/22 11:12, Dominique Devienne wrote:

I accidentally replied to Tom only. Here's my response again.

Apparently, I'm using the default max_locks_per_transaction:
```
ddevienne=> show max_locks_per_transaction ;
  max_locks_per_transaction
---
  64
(1 row)
```





Tom wrote "relation" for the number of locks necessary for DROP OWNED BY.
What does it mean in this context? relation = table?


https://www.postgresql.org/docs/14/glossary.html#GLOSSARY-RELATION




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread Alvaro Herrera
On 2022-Jan-10, Dominique Devienne wrote:

> Given max_locks_per_transaction * (max_connections + max_prepared_transactions)
> from https://www.postgresql.org/docs/current/runtime-config-locks.html, and
> max_conn being 100, that's not many locks.

6400 locks, to be precise.  So if your schemas have on average 10 tables
each with 3 indexes per table, you could drop at most 160 schemas in one
go (but only if you're lucky.)

> Given there's only 64 locks per conn by default, how can this work with
> over 100 tables?
> I'm confused... --DD

That value indicates the maximum number of locks that can be taken
across all sessions at a time.  You can have a single session take that
number of locks, or all sessions take 64 locks each.

If you really have many more relations that need to be dropped, you
could try to issue "DROP SCHEMA ... CASCADE" for each schema to drop.
It's a lot less convenient than DROP OWNED BY, but it doesn't require
taking as many locks simultaneously.
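
A sketch of how you could generate those statements with psql's \gexec (this
assumes the schemas you want gone are exactly the ones owned by that role):

```sql
-- Emit one DROP SCHEMA per schema owned by the role, then let \gexec run each
-- generated statement separately (one transaction each in autocommit mode),
-- so the lock table only has to hold one schema's worth of locks at a time.
SELECT format('DROP SCHEMA %I CASCADE', n.nspname)
FROM pg_namespace AS n
JOIN pg_roles AS r ON r.oid = n.nspowner
WHERE r.rolname = 'Epos-PNS (a73e1fbdcc5b4460ba55703a0d6b6ad7)'
\gexec
```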

-- 
Álvaro Herrera   39°49'30"S 73°17'W  —  https://www.EnterpriseDB.com/
"Los trabajadores menos efectivos son sistematicamente llevados al lugar
donde pueden hacer el menor daño posible: gerencia."  (El principio Dilbert)




Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread David G. Johnston
On Mon, Jan 10, 2022 at 12:09 PM Dominique Devienne 
wrote:

> Given max_locks_per_transaction * (max_connections + max_prepared_transactions)
> from https://www.postgresql.org/docs/current/runtime-config-locks.html, and
> max_conn being 100, that's not many locks.
>
> Tom wrote "relation" for the number of locks necessary for DROP OWNED BY.
> What does it mean in this context? relation = table?
>
I'm confused here a bit as well.  The items being talked about are tables
and indexes, both of which manifest as files on the filesystem.  But not
all relations do (e.g., views).  But if this isn't tied to the filesystem
then I would expect that other object types, especially functions, would
require locking as well, but those are decidedly not relations.


> How do I determine the current value, and change it?
>
https://www.postgresql.org/docs/current/runtime-config.html
(you already used the SHOW command to determine the current value)

David J.


Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread Adrian Klaver

On 1/10/22 13:08, David G. Johnston wrote:
On Mon, Jan 10, 2022 at 12:09 PM Dominique Devienne wrote:

Given max_locks_per_transaction * (max_connections + max_prepared_transactions)
from https://www.postgresql.org/docs/current/runtime-config-locks.html,
and max_conn being 100, that's not many locks.

Tom wrote "relation" for the number of locks necessary for DROP OWNED BY.
What does it mean in this context? relation = table?



I'm confused here a bit as well.  The items being talked about are 
tables and indexes, both of which manifest as files on the filesystem.  
But not all relations do (e.g., views).  But if this isn't tied to the 
filesystem then I would expect that other object types, especially 
functions, would require locking as well, but those are decidedly not 
relations.


Pretty sure this is related to:

https://www.postgresql.org/docs/current/explicit-locking.html

In other words, to locks on data access.




How do I determine the current value, and change it?


https://www.postgresql.org/docs/current/runtime-config.html 


(you already used the SHOW command to determine the current value)

David J.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread Tom Lane
"David G. Johnston"  writes:
> On Mon, Jan 10, 2022 at 12:09 PM Dominique Devienne 
> wrote:
>> Tom wrote "relation" for the number of locks necessary for DROP OWNED BY.
>> What does it mean in this context? relation = table?

> I'm confused here a bit as well.  The items being talked about are tables
> and indexes, both of which manifest as files on the filesystem.  But not
> all relations do (e.g., views).  But if this isn't tied to the filesystem
> then I would expect that other object types, especially functions, would
> require locking as well, but those are decidedly not relations.

I was wrong actually --- I wrote that thinking that we acquire exclusive
lock when dropping a relation (where relation may be defined as "something
with a pg_class entry").  That's true, but these days we acquire a lock
when deleting *any* cataloged database object.  So you'd also need a lock
for each schema, function, etc that was due to get dropped.  This is
basically to avoid problems in case of concurrent drop commands.

It's still true that the size of a relation in columns or rows is not
relevant here.

regards, tom lane




Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread Dominique Devienne
On Mon, Jan 10, 2022 at 10:29 PM Tom Lane  wrote:

> "David G. Johnston"  writes:
> > On Mon, Jan 10, 2022 at 12:09 PM Dominique Devienne  >
> > wrote:
> >> Tom wrote "relation" for the number of locks necessary for DROP OWNED
> BY.
> >> What does it mean in this context? relation = table?
>
> > I'm confused here a bit as well.  The items being talked about are tables
> > and indexes, both of which manifest as files on the filesystem.  But not
> > all relations do (e.g., views).  But if this isn't tied to the filesystem
> > then I would expect that other object types, especially functions, would
> > require locking as well, but those are decidedly not relations.
>
> I was wrong actually --- I wrote that thinking that we acquire exclusive
> lock when dropping a relation (where relation may be defined as "something
> with a pg_class entry").  That's true, but these days we acquire a lock
> when deleting *any* cataloged database object.  So you'd also need a lock
> for each schema, function, etc that was due to get dropped.  This is
> basically to avoid problems in case of concurrent drop commands.
>
> It's still true that the size of a relation in columns or rows is not
> relevant here.
>

Given that Tom mentions max_locks_per_transaction can be safely increased,
and given the stats I mentioned in this thread, what would a "reasonable"
max_locks_per_transaction
be in my case? By reasonable, I mean "as large as possible w/o being too
large"...

Obviously 64*100 is not quite large enough to be safe in this case. I'd
appreciate some advice. TIA, --DD
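
A back-of-the-envelope attempt based on the stats above (please correct me if
the math is off; the final number is just a placeholder):

```sql
-- Each largish schema is ~244 tables + ~409 indexes + ~989 constraints, so budget
-- very roughly 1,700 locks per schema.  To drop N such schemas in one transaction:
--   max_locks_per_transaction >= 1700 * N / (max_connections + max_prepared_transactions)
-- e.g. N = 100 schemas with max_connections = 100 gives ~1700, rounded up below.
ALTER SYSTEM SET max_locks_per_transaction = 2048;  -- placeholder value; needs a server restart
```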


Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread Adrian Klaver

On 1/10/22 13:58, Dominique Devienne wrote:



Given that Tom mentions max_locks_per_transaction can be safely increased,
and given the stats I mentioned in this thread, what would a 
"reasonable" max_locks_per_transaction
be in my case? By reasonable, I mean "as large as possible w/o being too 
large"...


Obviously 64*100 is not quite large enough to be safe in this case. I'd 
appreciate some advise. TIA, --DD


I think at this point it might be a good idea to explore what the use case is.

In your OP the error occurred here:

DROP OWNED BY "Epos-PNS (a73e1fbdcc5b4460ba55703a0d6b6ad7)" CASCADE

The CASCADE allows the command to recurse to objects that may not be 
owned by the specified role. Is that what you want?


What is the purpose of the process: clearing out a given schema, dropping 
objects owned only by a given role, or something else?



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: md5 issues Postgres14 on OL7

2022-01-10 Thread Michael Paquier
On Sat, Jan 08, 2022 at 02:00:16PM -0500, Tom Lane wrote:
> This is looking pretty solid to me.  Just a couple of nitpicks:
> 
> * In most places you initialize variables holding error strings to NULL:
> 
> + const char *logdetail = NULL;
> 
> but there are three or so spots that don't, eg PerformRadiusTransaction.
> They should be consistent.  (Even if there's no actual bug, I'd be
> unsurprised to see Coverity carp about the inconsistency.)

Hmm.  I have spotted five of them, with one in passwordcheck.

> * The comments for md5_crypt_verify and plain_crypt_verify claim that
> the error string is "optionally" stored, but I don't see anything
> optional about it.  To me, "optional" would imply coding like
> 
>   if (logdetail)
>   *logdetail = errstr;
> 
> which we don't have here, and I don't think we need it.  But the
> comments need adjustment.  (They were wrong before too, but no
> time like the present to clean them up.)

Makes sense.

> * I'd be inclined to just drop the existing comments like
> 
> -  * We do not bother setting logdetail for any pg_md5_encrypt failure
> -  * below: the only possible error is out-of-memory, which is unlikely, 
> and
> -  * if it did happen adding a psprintf call would only make things worse.
> 
> rather than modify them.   Neither the premise nor the conclusion
> of these comments is accurate anymore.  (I think that the psprintf
> they are talking about is the one that will happen inside elog.c
> to construct an errdetail string.  Yeah, there's some risk there,
> but I think it's minimal because of the fact that we preallocate
> some space in ErrorContext.)

Okay, that's fine by me.

> Other than those things, I think v3 is good to go.

I have done an extra pass over all that, and the result seemed fine to
me, so I applied it.  I have changed the non-OpenSSL code path of pgcrypto
to deal with that in 14 (it does not exist on HEAD).  Thanks a lot for
the successive reviews!

The patch was invasive enough already, but we could do more here:
- Add the same facility for HMAC.  That's not worth doing on REL_14_STABLE
based on the existing set of callers, but I'd like to do something
about it on HEAD, as that could be helpful in the future.
- The error handling related to checksum_helper.c and backup_manifest.c
could be improved further.  These refer only to scenarios unlikely to
happen in the field, so I have left that out.
--
Michael

