Re: Autovacuum on Partitioned Tables

2022-11-01 Thread Ron

On 10/31/22 23:05, Tom Lane wrote:
[snip]

TBH, if you've got 50m rows, I'm not sure you need partitions at all.


Big rows (i.e. document storage tables with bytea or xml fields) can make 
databases explode in size even with only 50M rows.


(Yes, I know the arguments against it, but it works quite well when the 
database is in a cloud instance.  Worries about backup times, at least, are 
eliminated.)


--
Angular momentum makes the world go 'round.




Delete a table automatic?

2022-11-01 Thread 黄宁
I now have two tables named A and B. Table B is calculated based on the
data of table A. I wonder if table B can be automatically deleted when
table A is deleted?


Re: PQconsumeinput() may close the fd

2022-11-01 Thread Vasily Kulikov
21.10.2022, 17:40, "Tom Lane" :

> Laurenz Albe  writes:
> > On Fri, 2022-10-21 at 07:27 +0300, Vasily Kulikov wrote:
> > > The problem is that sometimes PQconsumeInput() may close the connection
>
> What I'd suggest doing is checking for PQstatus(conn) == CONNECTION_BAD,
> or else directly rechecking PQsocket() each time, rather than assuming
> the socket is still there.

I've chosen this one. Thanks for the help!
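A minimal sketch of the approach Tom suggested, in libpq C (the helper name and event-loop scaffolding are illustrative, not from the thread; it needs libpq-fe.h and a live connection to run):

```c
/* Sketch: after PQconsumeInput() fails, recheck the connection status
 * instead of assuming the previously cached socket fd is still valid. */
#include <libpq-fe.h>
#include <stdio.h>

/* Hypothetical helper, called when the socket polls readable.
 * Returns the (possibly changed) socket fd, or -1 if the
 * connection is gone and the caller must stop polling it. */
int handle_readable(PGconn *conn)
{
    if (!PQconsumeInput(conn)) {
        if (PQstatus(conn) == CONNECTION_BAD) {
            /* libpq may have closed the fd behind our back */
            fprintf(stderr, "connection lost: %s", PQerrorMessage(conn));
            return -1;
        }
    }
    /* Re-fetch the socket on every iteration rather than caching it */
    return PQsocket(conn);
}
```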




Re: Delete a table automatic?

2022-11-01 Thread jian he
On Tue, Nov 1, 2022 at 2:33 PM 黄宁  wrote:

> I now have two tables named A and B. Table B is calculated based on the
> data of table A. I wonder if table B can be automatically deleted when
> table A is deleted?
>

Your question is not very specific.
You can use https://dbfiddle.uk/btGcOH30 to showcase your specific
problem/question.

you can use DROP TABLE CASCADE.
DROP TABLE manual:
https://www.postgresql.org/docs/current/sql-droptable.html

-- 
 I recommend David Deutsch's <>

  Jian


Re: Delete a table automatic?

2022-11-01 Thread Rob Sargent

On 11/1/22 03:31, jian he wrote:



On Tue, Nov 1, 2022 at 2:33 PM 黄宁  wrote:

I now have two tables named A and B. Table B is calculated based
on the data of table A. I wonder if table B can be automatically
deleted when table A is deleted?


Your question is not very specific.
You can use https://dbfiddle.uk/btGcOH30 to showcase your specific 
problem/question.


you can use DROP TABLE CASCADE.
DROP TABLE manual: 
https://www.postgresql.org/docs/current/sql-droptable.html




Only If B has a foreign key reference to A

--
 I recommend David Deutsch's <>

Jian




Re: Delete a table automatic?

2022-11-01 Thread Peter J. Holzer
On 2022-11-01 07:41:14 -0600, Rob Sargent wrote:
> On 11/1/22 03:31, jian he wrote:
> 
> On Tue, Nov 1, 2022 at 2:33 PM 黄宁  wrote:
> 
> I now have two tables named A and B. Table B is calculated based on 
> the
> data of table A. I wonder if table B can be automatically deleted when
> table A is deleted?
[...]
> you can use DROP TABLE CASCADE.
> DROP TABLE manual: https://www.postgresql.org/docs/current/sql-droptable.html
> 
> 
> 
> Only If B has a foreign key reference to A

And even then it only drops the constraint, not the table (or the data):

hjp=> create table a (id serial primary key, t text);
CREATE TABLE
hjp=> create table b (id serial primary key, a int references a, t text);
CREATE TABLE
hjp=> \d a
                                 Table "public.a"
╔════════╤═════════╤═══════════╤══════════╤═══════════════════════════════╗
║ Column │  Type   │ Collation │ Nullable │            Default            ║
╟────────┼─────────┼───────────┼──────────┼───────────────────────────────╢
║ id     │ integer │           │ not null │ nextval('a_id_seq'::regclass) ║
║ t      │ text    │           │          │                               ║
╚════════╧═════════╧═══════════╧══════════╧═══════════════════════════════╝
Indexes:
    "a_pkey" PRIMARY KEY, btree (id)
Referenced by:
    TABLE "b" CONSTRAINT "b_a_fkey" FOREIGN KEY (a) REFERENCES a(id)

hjp=> \d b
                                 Table "public.b"
╔════════╤═════════╤═══════════╤══════════╤═══════════════════════════════╗
║ Column │  Type   │ Collation │ Nullable │            Default            ║
╟────────┼─────────┼───────────┼──────────┼───────────────────────────────╢
║ id     │ integer │           │ not null │ nextval('b_id_seq'::regclass) ║
║ a      │ integer │           │          │                               ║
║ t      │ text    │           │          │                               ║
╚════════╧═════════╧═══════════╧══════════╧═══════════════════════════════╝
Indexes:
    "b_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
    "b_a_fkey" FOREIGN KEY (a) REFERENCES a(id)

[some inserts later]

hjp=> select * from b;
╔════╤═══╤══════╗
║ id │ a │  t   ║
╟────┼───┼──────╢
║  1 │ 1 │ foo1 ║
║  2 │ 1 │ foo2 ║
║  3 │ 2 │ bar1 ║
╚════╧═══╧══════╝
(3 rows)

hjp=> drop table a cascade;
NOTICE:  drop cascades to constraint b_a_fkey on table b
DROP TABLE

hjp=> \d b
                                 Table "public.b"
╔════════╤═════════╤═══════════╤══════════╤═══════════════════════════════╗
║ Column │  Type   │ Collation │ Nullable │            Default            ║
╟────────┼─────────┼───────────┼──────────┼───────────────────────────────╢
║ id     │ integer │           │ not null │ nextval('b_id_seq'::regclass) ║
║ a      │ integer │           │          │                               ║
║ t      │ text    │           │          │                               ║
╚════════╧═════════╧═══════════╧══════════╧═══════════════════════════════╝
Indexes:
    "b_pkey" PRIMARY KEY, btree (id)

As you can see, the table is still there, but the foreign key constraint
is gone.

hjp=> select * from b;
╔════╤═══╤══════╗
║ id │ a │  t   ║
╟────┼───┼──────╢
║  1 │ 1 │ foo1 ║
║  2 │ 1 │ foo2 ║
║  3 │ 2 │ bar1 ║
╚════╧═══╧══════╝
(3 rows)

And the data in the table is also unchanged.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Delete a table automatic?

2022-11-01 Thread Adrian Klaver

On 10/31/22 18:44, 黄宁 wrote:
I now have two tables named A and B. Table B is calculated based on the 
data of table A. I wonder if table B can be automatically deleted when 
table A is deleted?



The only thing I can think of is a sql_drop event trigger:

https://www.postgresql.org/docs/current/functions-event-triggers.html#PG-EVENT-TRIGGER-SQL-DROP-FUNCTIONS

9.29.2. Processing Objects Dropped by a DDL Command

Though this will not be specific to one table.
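A minimal sketch of such a sql_drop event trigger, assuming the tables are literally named "a" and "b" (those names and the function name drop_b_with_a are illustrative, not from the thread); as noted above, the trigger fires for every DROP in the database, so the function has to filter for the table it cares about:

```sql
-- Hypothetical sketch: drop table b whenever table a is dropped.
CREATE FUNCTION drop_b_with_a() RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
    obj record;
BEGIN
    -- Inspect every object dropped by the triggering DDL command
    FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects() LOOP
        IF obj.object_type = 'table' AND obj.object_name = 'a' THEN
            EXECUTE 'DROP TABLE IF EXISTS b';
        END IF;
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER drop_b_trigger ON sql_drop
    EXECUTE FUNCTION drop_b_with_a();
```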

--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Delete a table automatic?

2022-11-01 Thread MuraliPD@GMail
Hi,

I too agree with Adrian Klaver, the trigger is the only option


Thanks,

V Muralidharan
+91 9940302900

On Tue, 1 Nov 2022, 19:46 Peter J. Holzer,  wrote:

> On 2022-11-01 07:41:14 -0600, Rob Sargent wrote:
> > On 11/1/22 03:31, jian he wrote:
> >
> > On Tue, Nov 1, 2022 at 2:33 PM 黄宁  wrote:
> >
> > I now have two tables named A and B. Table B is calculated based
> on the
> > data of table A. I wonder if table B can be automatically
> deleted when
> > table A is deleted?


Re: Autovacuum on Partitioned Tables

2022-11-01 Thread Ryan Ruenroeng
Thanks all for your responses. We have a couple of tables, each with
50-70 million rows currently, but they are expected to grow. Partitioning seems
to be a better long-term strategy: queries against these tables, even using their
existing indexes, leave them basically unusable (long run times).

FYI, we are currently on PG v11.16

Let's assume we do go with my current proposal because it is in line with
how we are planning to drop partitions that stop seeing activity in the
future:

[snip] from Ron's post above:
Autovacuum will handle it.

I still have some doubts based on this:

"Tuples changed in partitions and inheritance children do not trigger
analyze on the parent table. If the parent table is empty or rarely
changed, it may never be processed by autovacuum, and the statistics
for the inheritance tree as a whole won't be collected. It is
necessary to run ANALYZE on the parent table manually in order to
keep the statistics up to date." [Link]

Q1: Will we at least need to call Analyze via a cron job on the parent
table to ensure that the statistics are up to date for autovacuum to
catch the tables?

From reading the documentation that a few of you have pointed me to,
I'm led to believe that the parent table is the "Partition" table. The
children tables are treated by the autovacuum as tables.

Q2: Autovacuum will act on the partitions/children to the parent
table. *Is that a correct statement?*

--
It's good to know that the query optimizer will improve with partitions on
versions 12+. Thank you.

Best,
Ryan

Ryan N Ruenroeng (He/His)
(717) 578-3929 | rruenro...@gmail.com | Madison, WI



On Tue, Nov 1, 2022 at 2:54 AM Ron  wrote:

> On 10/31/22 23:05, Tom Lane wrote:
> [snip]
> > TBH, if you've got 50m rows, I'm not sure you need partitions at all.
>
> Big rows (i.e. document storage tables with bytea or xml fields) can make
> databases explode in size even with only 50M rows.
>
> (Yes, I know the arguments against it, but it works quite well when the
> database is in a cloud instance.  Worries about backup times, at least,
> are
> eliminated.)
>
> --
> Angular momentum makes the world go 'round.
>
>
>


Re: Delete a table automatic?

2022-11-01 Thread Tom Lane
Adrian Klaver  writes:
> On 10/31/22 18:44, 黄宁 wrote:
>> I now have two tables named A and B. Table B is calculated based on the 
>> data of table A. I wonder if table B can be automatically deleted when 
>> table A is deleted?

> The only thing I can think of is a sql_drop event trigger:

I wonder whether the OP's purposes wouldn't be better served by
making B a view or materialized view, instead of a separate table.
Then the cascaded-drop behavior would be automatic.

regards, tom lane




Re: Autovacuum on Partitioned Tables

2022-11-01 Thread Ron

On 11/1/22 10:11, Ryan Ruenroeng wrote:
Thanks all for your responses. We have a couple of tables, each with
50-70 million rows currently, but they are expected to grow. Partitioning seems
to be a better long-term strategy: queries against these tables, even using their
existing indexes, leave them basically unusable (long run times).


FYI, we are currently on PG v11.16

Let's assume we do go with my current proposal because it is inline with 
how we are planning to drop partitions that stop seeing activity in the 
future:


[snip] from Ron's post above:
Autovacuum will handle it.

I still have some doubts based on this:
"Tuples changed in partitions and inheritance children do not trigger
analyze on the parent table. If the parent table is empty or rarely
changed, it may never be processed by autovacuum, and the statistics for
the inheritance tree as a whole won't be collected. It is necessary to run
ANALYZE on the parent table manually in order to keep the statistics up to
date." [Link]
Q1: Will we at least need to call Analyze via a cron job on the parent 
table to ensure that the statistics are up to date for autovacuum to catch 
the tables?


Tuning auto-analyze variables is supposed to obviate the need for manual
analyzing.  I do manual analyzing anyway in a cron
job: "This set of tables has had more than X amount of changes, so I'll
explicitly analyze them."
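A minimal sketch of such a cron job (the database and table names are illustrative, not from the thread); on v11 autovacuum never analyzes the partitioned parent itself, so the manual ANALYZE keeps the inheritance-tree statistics current:

```
# Hypothetical crontab entry: analyze the partitioned parent nightly.
0 3 * * * psql -d mydb -c 'ANALYZE my_parent;'
```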


Once a week, I do bloat calculations and vacuum based on that.

From reading the documentation that a few of you have pointed me to, I'm 
led to believe that the parent table is the "Partition" table.


"\d" will show you exactly what you need to know.


The children tables are treated by the autovacuum as tables


Child tables *are* tables.

Q2: Autovacuum will act on the partitions/children to the parent table. 
*Is that a correct statement?*


Parents are meta-tables.  There's nothing to vacuum or analyze.


--
It's good to know that the query optimizer will improve with partitions on 
versions 12+. Thank you.


Best,
Ryan

Ryan N Ruenroeng (He/His)
(717) 578-3929 | rruenro...@gmail.com | Madison, WI



On Tue, Nov 1, 2022 at 2:54 AM Ron  wrote:

On 10/31/22 23:05, Tom Lane wrote:
[snip]
> TBH, if you've got 50m rows, I'm not sure you need partitions at all.

Big rows (i.e. document storage tables with bytea or xml fields) can make
databases explode in size even with only 50M rows.

(Yes, I know the arguments against it, but it works quite well when the
database is in a cloud instance.  Worries about backup times, at
least, are
eliminated.)

-- 
Angular momentum makes the world go 'round.





--
Angular momentum makes the world go 'round.

SSL/TLS encryption without

2022-11-01 Thread Ron

AWS RDS Postgresql 12.11

sides=> select * from pg_stat_ssl where pid = 362;
-[ RECORD 1 ]-+----------------------------
pid           | 362
ssl           | t
version       | TLSv1.2
cipher        | ECDHE-RSA-AES256-GCM-SHA384
bits          | 256
compression   | f
client_dn     |
client_serial |
issuer_dn     |

I've got 85 connections where ssl='t'. How can connections be encrypted 
using SSL/TLS without a client certificate?


--
Angular momentum makes the world go 'round.




Aw: Re: Putting the O/S user for "local" "peer" authentication in the "postgres" group vs chmod'ing the "pg*.conf" files to be readable by "all"

2022-11-01 Thread Karsten Hilbert
> The client user should *never* read the PostgreSQL configuration files, so if 
> changing
> the permissions (which you should *never* do) has an effect, you must be 
> doing something
> very strange, like trying to start the database server with the wrong user.

It smells of trying to *embed* PostgreSQL?

But that would not go with the account of multi-tenancy that's been presented.

Karsten




Re: SSL/TLS encryption without

2022-11-01 Thread Peter J. Holzer
On 2022-11-01 11:13:01 -0500, Ron wrote:
> sides=> select * from pg_stat_ssl where pid = 362;
> -[ RECORD 1 ]-+----------------------------
> pid           | 362
> ssl           | t
> version       | TLSv1.2
> cipher        | ECDHE-RSA-AES256-GCM-SHA384
> bits          | 256
> compression   | f
> client_dn     |
> client_serial |
> issuer_dn     |
> 
> I've got 85 connections where ssl='t'. How can connections be encrypted
> using SSL/TLS without a client certificate?

Given that you probably never generated a client certificate for your
browser: How could HTTPS work?

The certificates are used for authentication, not for encryption. The
public key(s) included in the certificates may be used during key
exchange, but there are key exchange algorithms which don't need that
(in fact I think ECDHE is one of them) at all, and even those that do
need only one key, so it is sufficient that only the server has a
certificate.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Putting the O/S user for "local" "peer" authentication in the "postgres" group vs chmod'ing the "pg*.conf" files to be readable by "all"

2022-11-01 Thread Bryn Llewellyn
> david.g.johns...@gmail.com wrote:
> 
>> b...@yugabyte.com wrote:
>> 
>> It would seem proper to put any user who you want to set up for "local", 
>> "peer" authentication into the "postgres" group
> 
> Did you really mean to write that?
> 
> The postgres o/s user should be able to login using peer.  It is a one-way 
> idea though.  Wanting to login using peer says nothing about whether the user 
> getting that capability should be allowed to mess with the running server in 
> the operating system.
> 
> As for the rest, all I see is that you are using an opinionated package 
> manager to install software whose opinions you don't agree with.  Maybe there 
> is some buggy behavior with respect to shared o/s db administration among 
> users in a common group...you haven't demonstrated that one way or the other 
> here.  I think it is pointless to have the o/s admin and postgres bootstrap 
> user be anything but postgres and this whole thing is counter-productive.  
> But if you are going down to first principles maybe you should install from 
> source and build your own "package" from that.

I meant only to ask a question—and not to pre-judge anything. I should have 
thought more carefully about its wording. (I'm thinking specifically of a PG 
installation on Linux—and only that.)

About "opinionated package manager", I created my installation by following the 
steps described here:

https://wiki.postgresql.org/wiki/Apt

My aim is simply to conform to recommended practice. When I've said (like I 
believe that I have) that I didn't get a chance, during the flow, to specify 
, my intention was only to note this fact and to imply that what 
I got without any intervention coincided with the recommended practice. This 
suits me.

I'm going to need to use a term to denote the O/S user that very often ends up 
with the name "postgres". I don't know what the official term is. I'll simply 
say "postgres" here. I haven't yet found an overview of the set of files that 
jointly implement a freshly installed PG system. But, of course, I've noticed 
where critical directories are. For example:

/usr/lib/postgresql/11/bin

Lots of familiar names, like "initdb" are found there. They all have 
owner/group "root". And they're all executable by "all". The point has been 
made that its not who owns them that matters but, rather, what the effective 
user ID is at execution time. I can therefore ignore these for the purpose of 
my present question.

In contrast, on (my) "data_directory" (I have just one) here:

/var/lib/postgresql/11/main

I see that every file has owner/group "postgres/postgres". The "owner" has all 
privileges. And each of "group" and "all" have no privileges. This doesn't seem 
at all surprising. But it does indicate that whatever privileges some critical 
file has (anywhere), the set has been determined by design.

There are several "*.conf" files in the same directory as the "config_file". 
For me, it's this:

/etc/postgresql/11/main

These, too, all have owner/group "postgres/postgres" — and "postgres" has all 
meaningful privileges (presumably "execute" means nothing for a text config 
file). But "group" has only "read" on them all and "all" has no privileges 
except (I believe) for "postgresql.conf"—which is readable by all.

I mentioned that I'll presently re-do my PG installation from scratch. Then 
I'll record the permissions that all the config files are set up with and know 
for sure. I looked in this section:

20.2. File Locations
https://www.postgresql.org/docs/current/runtime-config-file-locations.html 


But "permission" isn't found on the page.

Anyway, it's only natural to assume that privileges on all of the 
"postgres/postgres" files have been determined in accordance with a deliberate 
design. The docs tell me to edit certain of these files to achieve certain 
intended effects. I've done this—by authorizing as the "postgres" O/S user. So 
there's an inevitable risk (at least for me) that I might have changed some 
permissions accidentally.

I've observed that, for my present use case (enable "local", "peer" 
authentication for an O/S user other than "postgres"), the authorization attempt 
fails unless that user can read one critical config file. I just did a careful 
test with the brand-new O/S user "bob". Here's what "id" shows:

uid=1003(bob) gid=1003(bob) groups=1003(bob)
uid=1001(postgres) gid=1001(postgres) 
groups=1001(postgres),27(sudo),114(ssl-cert)


I know that I've been told off for allowing "sudo" for "postgres". I'm only 
experimenting on my laptop. But I want to be able to stop the server, delete 
the datafiles, create a new cluster, and then start that using a single script. 
I can't use "pg_ctl stop/start" because it expects to find its config files on 
the data directory. (That's a different story. And I'm not ready to ask about 
that yet.) So I use "sudo systemctl stop/start postgresql" because this

Re: Putting the O/S user for "local" "peer" authentication in the "postgres" group vs chmod'ing the "pg*.conf" files to be readable by "all"

2022-11-01 Thread David G. Johnston
On Tue, Nov 1, 2022 at 1:20 PM Bryn Llewellyn  wrote:

>
> About "opinionated package manager", I created my installation by
> following the steps described here:
>
> https://wiki.postgresql.org/wiki/Apt
>
>
Yes, apt is opinionated.  And of important note here - its opinions, and
the supplemental wrapper scripts it implements to make those opinions
works, are outside the scope of what the official PostgreSQL documentation
is going to cover.  (This is what the email you linked to is calling
pg_wrapper)


>
> My experiment shows that my attempt to authorize  as "bob" using "local",
> "peer" authorization fails when every config file is readable only by
> "owner" and "group". And it shows that a sufficient fix is to make just
> "postgresql.conf" readable by "all" (as I believe is the intention). Of
> course, an alternative fix (in the sense that it would work) would be to
> have "postgresql.conf" not readable by all but to put "bob" in the
> "postgres" group.
>
> All this leads to an obvious question:
>
> «
> Given that all of the config files have been made readable by "group" (in
> contrast to the regime for the data files), what is the intention of this
> design? In other words, when is it proper to put an O/S user in the
> "postgres" group? After all, if the answer is "never" then no privileges on
> "postgres/postgres" files would ever have been granted to "group".
> »
>
>
I think the intent of the design is for the custom Debian wrapper scripts
to be able to read the configuration files for the named version "11" and
configuration "main" to find out where certain things like the socket file
are being written to.  The argument being the configuration files don't
actually contain secret data so reading shouldn't be an issue and can be
useful.  Obviously the same does not apply to data files.  On that basis it
would indeed make more sense to grant read to "all" rather than try and add
users to "postgres" to make the reading of the configuration files work.

David J.


Re: Putting the O/S user for "local" "peer" authentication in the "postgres" group vs chmod'ing the "pg*.conf" files to be readable by "all"

2022-11-01 Thread David G. Johnston
On Tue, Nov 1, 2022 at 1:39 PM David G. Johnston 
wrote:

> On Tue, Nov 1, 2022 at 1:20 PM Bryn Llewellyn  wrote:
>
>>
>> All this leads to an obvious question:
>>
>> «
>> Given that all of the config files have been made readable by "group"
>> (in contrast to the regime for the data files), what is the intention of
>> this design? In other words, when is it proper to put an O/S user in the
>> "postgres" group? After all, if the answer is "never" then no privileges on
>> "postgres/postgres" files would ever have been granted to "group".
>> »
>>
>>
> I think the intent of the design is for the custom Debian wrapper scripts
> to be able to read the configuration files for the named version "11" and
> configuration "main" to find out where certain things like the socket file
> are being written to.  The argument being the configuration files don't
> actually contain secret data so reading shouldn't be an issue and can be
> useful.  Obviously the same does not apply to data files.  On that basis it
> would indeed make more sense to grant read to "all" rather than try and add
> users to "postgres" to make the reading of the configuration files work.
>
>
Also, per the initdb documentation:

For security reasons the new cluster created by initdb
will only be accessible by the cluster user by default.  The
--allow-group-access option allows any user in the same
group as the cluster owner to read files in the cluster.  This is useful
for performing backups as a non-privileged user.
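A sketch of what that looks like in practice (the data directory path is illustrative):

```
# Hypothetical sketch: create a cluster whose files are group-readable,
# so a non-privileged user in the cluster owner's group can read them.
initdb --allow-group-access -D /var/lib/postgresql/11/main
# Files are then created with mode 0640 and directories with 0750,
# instead of the default 0600/0700.
```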
David J.


Re: Putting the O/S user for "local" "peer" authentication in the "postgres" group vs chmod'ing the "pg*.conf" files to be readable by "all"

2022-11-01 Thread Adrian Klaver

On 11/1/22 13:20, Bryn Llewellyn wrote:

david.g.johns...@gmail.com  wrote:









I know that I've been told off for allowing "sudo" for "postgres". I'm 
only experimenting on my laptop. But I want to be able to stop the 
server, delete the datafiles, create a new cluster, and then start that 
using a single script. I can't use "pg_ctl stop/start" because it 
expects to find its config files on the data directory. (That's a 
different story. And I'm not ready to ask about that yet.) So I use 
"sudo systemctl stop/start postgresql" because this method looks in the 
right place for the config files.


If you are going to use the Debian/Ubuntu packaging then you will need 
to follow its "rules".


See here:

https://wiki.debian.org/PostgreSql

https://ubuntu.com/server/docs/databases-postgresql

Also do:

man pg_wrapper

Your best bet is to systemctl to start stop Postgres, but if you want to 
use pg_ctl then you need to use the wrapped version. As example:


pg_lsclusters
Ver Cluster Port Status Owner    Data directory              Log file
14  main    5432 online postgres /var/lib/postgresql/14/main /var/log/postgresql/postgresql-14-main.log
15  main    5434 down   postgres /var/lib/postgresql/15/main /var/log/postgresql/postgresql-15-main.log


sudo pg_ctlcluster stop 14/main

pg_lsclusters
Ver Cluster Port Status Owner    Data directory              Log file
14  main    5432 down   postgres /var/lib/postgresql/14/main /var/log/postgresql/postgresql-14-main.log
15  main    5434 down   postgres /var/lib/postgresql/15/main /var/log/postgresql/postgresql-15-main.log


sudo pg_ctlcluster start 14/main

 pg_lsclusters
Ver Cluster Port Status Owner    Data directory              Log file
14  main    5432 online postgres /var/lib/postgresql/14/main /var/log/postgresql/postgresql-14-main.log
15  main    5434 down   postgres /var/lib/postgresql/15/main /var/log/postgresql/postgresql-15-main.log



There are more pg_*cluster* wrapped commands:

pg_archivecleanup   pg_backupcluster   pg_basebackup      pg_buildext
pg_config           pg_conftool        pg_createcluster   pg_ctlcluster
pg_dropcluster      pg_dump            pg_dumpall         pg_isready
pg_lsclusters       pg_receivewal      pg_receivexlog     pg_recvlogical
pg_renamecluster    pg_restore         pg_restorecluster  pg_updatedicts
pg_upgradecluster   pg_virtualenv






Here's what my "pg_hba.conf" has:

local   all   bob   peer

My experiment shows that my attempt to authorize  as "bob" using 
"local", "peer" authorization fails when every config file is readable 
only by "owner" and "group". And it shows that a sufficient fix is to 
make just "postgresql.conf" readable by "all" (as I believe is the 
intention). Of course, an alternative fix (in the sense that it would 
work) would be to have "postgresql.conf" not readable by all but to put 
"bob" in the "postgres" group.


All this leads to an obvious question:

«
Given that all of the config files have been made readable by "group" 
(in contrast to the regime for the data files), what is the intention of 
this design? In other words, when is it proper to put an O/S user in the 
"postgres" group? After all, if the answer is "never" then no privileges 
on "postgres/postgres" files would ever have been granted to "group".
»









--
Adrian Klaver
adrian.kla...@aklaver.com





Migrating postgres client(libpq) library from 12.10 to 14(Need some information)

2022-11-01 Thread M Tarkeshwar Rao
Hi All,

We are migrating our product from the PostgreSQL client library libpq 12.10 to 
libpq 14. This is a major upgrade of the client library for us.
I am quite new to this migration activity. Can you please guide me or share 
the standard steps for this migration?

Which areas should I look at in detail so that I can avoid any bugs in our 
production environment?

Regards
Tarkeshwar