Re: unable to install postgresql 13.4

2022-11-03 Thread shashidhar Reddy
Hello Julien,

Actually production databases are on version 13.4 and the team needs the
same version to test something on it, is there a way to install the same
version?

On Thu, Nov 3, 2022 at 12:29 PM Julien Rouhaud  wrote:

> Hi,
>
> On Thu, Nov 03, 2022 at 12:18:05PM +0530, shashidhar Reddy wrote:
> >
> > I need to install postgresql 13.4 on development server , but getting
> below
> > errors, please help to resolve this
> >
> > sudo apt-get install postgresql-13.4
> >
> > Reading package lists... Done
> > Building dependency tree
> > Reading state information... Done
> > E: Unable to locate package postgresql-13.4
> > E: Couldn't find any package by glob 'postgresql-13.4'
> > E: Couldn't find any package by regex 'postgresql-13.4'
> >
> > Note: This is on ubuntu and postgresql 11 is already there on the
> machine.
>
> You need to specify the major version but not the minor version.  The
> package
> will take care of installing the latest version, and updating it with the
> rest
> of the system, so you should write:
>
> sudo apt-get install postgresql-13
>
> You can refer to the pgdg apt documentation for more details:
> https://wiki.postgresql.org/wiki/Apt.
>
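[Editor's note: the pgdg wiki page referenced above boils down to adding the repository and installing the major-version package. A minimal sketch, assuming an Ubuntu "focal" release — substitute the output of `lsb_release -cs` for your release:]

```shell
# Add the PGDG repository (per the wiki page above); "focal" is an assumption.
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt focal-pgdg main" \
  > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
# Install by major version only; apt picks the latest 13.x minor release.
sudo apt-get install postgresql-13
```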


-- 
Shashidhar


Re: unable to install postgresql 13.4

2022-11-03 Thread Julien Rouhaud
On Thu, Nov 03, 2022 at 12:38:53PM +0530, shashidhar Reddy wrote:
> Hello Julien,
> 
> Actually production databases are on version 13.4 and the team needs the
> same version to test something on it, is there a way to install the same
> version?

Ah I see.  You can try to use https://apt-archive.postgresql.org/ it may
contain the version you want.  Otherwise you may have to compile it from source
code.
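[Editor's note: a hypothetical sketch of installing a pinned minor version from the apt archive mentioned above. The repository line and the exact version string are assumptions — verify both against the archive site's own instructions and `apt-cache madison` output:]

```shell
# Add the archive repository; suite name is an assumption for Ubuntu focal.
sudo sh -c 'echo "deb https://apt-archive.postgresql.org/pub/repos/apt focal-pgdg-archive main" \
  > /etc/apt/sources.list.d/pgdg-archive.list'
sudo apt-get update
apt-cache madison postgresql-13     # list every archived 13.x build
# Pin server and client to the same (assumed) 13.4 build string:
sudo apt-get install postgresql-13=13.4-1.pgdg20.04+1 \
                     postgresql-client-13=13.4-1.pgdg20.04+1
```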




Re: unable to install postgresql 13.4

2022-11-03 Thread shashidhar Reddy
Thank you Julien!

On Thu, Nov 3, 2022 at 12:52 PM Julien Rouhaud  wrote:

> On Thu, Nov 03, 2022 at 12:38:53PM +0530, shashidhar Reddy wrote:
> > Hello Julien,
> >
> > Actually production databases are on version 13.4 and the team needs the
> > same version to test something on it, is there a way to install the same
> > version?
>
> Ah I see.  You can try to use https://apt-archive.postgresql.org/ it may
> contain the version you want.  Otherwise you may have to compile it from
> source
> code.
>


-- 
Shashidhar


Re: Reducing bandwidth usage of database replication

2022-11-03 Thread Sascha Zenglein
It is, unfortunately, too much for our requirements.
I was able to increase wal_receiver_status_interval so that the publisher only 
contacts the subscriber every 30 seconds instead of every 10.
But I am not able to increase it further.
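[Editor's note: wal_receiver_status_interval controls how often the subscriber's apply worker reports status back to the publisher; raising it far beyond the publisher's wal_sender_timeout (default 60s) will make the publisher drop the connection, which is the likely practical limit here. A sketch, with illustrative values, that stretches both settings together:]

```shell
# On the subscriber: report status less often (value is illustrative).
psql -c "ALTER SYSTEM SET wal_receiver_status_interval = '120s';"
# On the publisher: tolerate the longer silence before timing out.
psql -c "ALTER SYSTEM SET wal_sender_timeout = '300s';"
# Apply without a restart:
psql -c "SELECT pg_reload_conf();"
```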


Sascha Zenglein

Produktentwicklung







Gessler GmbH
Gutenbergring 14
63110 Rodgau
Deutschland

Tel.:  +49 6106 8709 693
Fax:  +49 6106 8709 50

E-Mail: zengl...@gessler.de
Web: www.gessler.de 




Geschaeftsfuehrer: Helmut Gessler, Dipl.-Ing. Marcus Gessler
Gerichtsstand: Offenbach/Main, Amtsgericht Offenbach HRB 20857
USt.-IdNr.: DE 113 551 141
Sparkasse Dieburg, IBAN DE94 5085 2651 0057 0025 03, SWIFT HELADEF1DIE



This e-mail may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail.
Any unauthorised copying, disclosure or distribution of the material of this 
e-mail is strictly forbidden.

From: Ron 
Sent: Wednesday, November 2, 2022 17:29
To: pgsql-general@lists.postgresql.org 
Subject: Re: Reducing bandwidth usage of database replication

On 11/2/22 09:56, Sascha Zenglein wrote:
Hi all,

I want to use the postgres-native logical replication to have multiple clients 
receive and send data to a central database.
Real-time is far less important than network usage, and with my current test 
setup it appears both instances communicate frequently if a subscription is 
active, even if nothing is happening.

Is there a good way to reduce data usage, for example by limiting the amount of 
keep-alive messages? One database will likely be idle most of the time.

I estimated the current solution to idle at around 1.4MiB per day. Ideally it 
would use less than 100KiB a day.

1.4MiB/day is 17 bytes per second.  That's not too much.

--
Angular momentum makes the world go 'round.


Postgres to edb AS, need conversion?

2022-11-03 Thread milist ujang
Hi all,

Does anyone have experience, or can someone from EDB offer guidance, on changing
the engine from open-source Postgres to EDB AS? Do I need to convert the data, or
is it simply plug-and-play? Or do I need to run a tool to add the catalog tables
owned by EDB AS?


-- 
regards

ujang jaenudin | DBA Consultant (Freelancer)
http://ora62.wordpress.com
http://id.linkedin.com/pub/ujang-jaenudin/12/64/bab


Would postgresql add a pg_sleep_backend() method

2022-11-03 Thread lz ma
PostgreSQL already has pg_cancel_backend and pg_terminate_backend, which will stop
a backend.
Sometimes the system is busy (e.g. disk I/O) for a long time, but I don't want
to stop each backend.
If Postgres had a sleep method, I could make some PIDs just sleep for a while, so
disk load might descend
and the most important backends could run faster.
Before:                     After pg_sleep_backend():
Backend1 -disk busy 99%     Backend1 -disk busy 99%
Backend2 -disk busy 99%     pg_sleep_backend(1, 10min)  Backend2 -disk busy 0%
Backend3 -disk busy 99%     pg_sleep_backend(1, 11min)  Backend3 -disk busy 0%
Backend4 -disk busy 99%     pg_sleep_backend(1, 12min)  Backend4 -disk busy 0%
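[Editor's note: core PostgreSQL has no such function. A risky OS-level workaround sometimes used is to pause the backend's process with SIGSTOP — sketched below with an illustrative pid. Be aware a stopped backend still holds its locks and keeps its snapshot open, so it can block other sessions and hold back vacuum:]

```shell
# Find the busy backend's process id:
psql -c "SELECT pid, query FROM pg_stat_activity WHERE state = 'active';"
kill -STOP 12345     # suspend that backend's OS process (pid is illustrative)
sleep 600            # let the disk recover
kill -CONT 12345     # resume the backend
```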






Sent from Mail for Windows



Re: Postgres to edb AS, need conversion?

2022-11-03 Thread Abdul Sayeed
After migration you need to execute the command below from the SQL prompt:

vacuum (verbose, analyze);  -- updates planner statistics

The rest is good.


Regards,
Abdul Sayeed

On Thu, Nov 3, 2022 at 3:15 PM milist ujang  wrote:

> Hi all,
>
> Anyone have experiences or someone from edb can guid about change engine
> from postgres open source to edb as, di I need to convert the data or
> simply plug-and-play? Or need run a tool to add catalog table owned by edb
> as?
>
>
> --
> regards
>
> ujang jaenudin | DBA Consultant (Freelancer)
> http://ora62.wordpress.com
> http://id.linkedin.com/pub/ujang-jaenudin/12/64/bab
>


-- 
Thanks & Regards,
Abdul Sayeed
PostgreSQL DBA
Postgres Professional Certified
Skype: abdul.sayeed24


Re: unable to install postgresql 13.4

2022-11-03 Thread Ron

On 11/3/22 01:48, shashidhar Reddy wrote:

Hello,

I need to install postgresql 13.4 on development server , but getting 
below errors, please help to resolve this


sudo apt-get install postgresql-13.4

Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package postgresql-13.4
E: Couldn't find any package by glob 'postgresql-13.4'
E: Couldn't find any package by regex 'postgresql-13.4'

Note: This is on ubuntu and postgresql 11 is already there on the machine.


Did you specify the minor version when installing postgresql 11, or did you 
install "postgresql-11"?


--
Angular momentum makes the world go 'round.




pg_restore error on function

2022-11-03 Thread Post Gresql

Hello

I first successfully ran

pg_dump -U postgres -b -E UTF8 -f my_dump --format=custom -n my_schema 
--no-owner -v my_db

but then

pg_restore --single-transaction -v -U postgres -O -e -d my_other_db my_dump

failed with

pg_restore: connecting to database for restore
pg_restore: creating FUNCTION "my_schema.update_b()"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 7142; 1255 8618854 FUNCTION 
update_b() previous_owner
pg_restore: [archiver (db)] could not execute query: ERROR:  schema "my_schema" 
does not exist
Command was: CREATE FUNCTION "my_schema".update_b() RETURNS void
LANGUAGE plpgsql
AS $$
BEGIN
PERFORM pg_advisory_xact_lock(1);

INSERT INTO ...;
END;
$$;


This is PG version 10.19

Is there a solution for this (apart from upgrading to a newer PG version)?

The only workaround I can think of is to dump in plain text and then edit the 
dump file.




Re: pg_restore error on function

2022-11-03 Thread Adrian Klaver

On 11/3/22 07:28, Post Gresql wrote:

Hello

I first successfully ran

pg_dump -U postgres -b -E UTF8 -f my_dump --format=custom -n my_schema 
--no-owner -v my_db


but then

pg_restore --single-transaction -v -U postgres -O -e -d my_other_db my_dump

failed with

pg_restore: connecting to database for restore
pg_restore: creating FUNCTION "my_schema.update_b()"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 7142; 1255 8618854 
FUNCTION update_b() previous_owner
pg_restore: [archiver (db)] could not execute query: ERROR:  schema 
my_schema" does not exist

     Command was: CREATE FUNCTION "my_schema".update_b() RETURNS void
     LANGUAGE plpgsql
     AS $$
     BEGIN
     PERFORM pg_advisory_xact_lock(1);

     INSERT INTO ...;
     END;
     $$;


This is PG version 10.19

Is there a solution for this (apart from upgrading to a newer PG version)?


Was there another error before this?

Did the -v  show the schema my_schema being created?



The only workaround I can think of is to dump in plain text and then 
edit the dump file.





--
Adrian Klaver
adrian.kla...@aklaver.com





Re: pg_restore error on function

2022-11-03 Thread Ron

On 11/3/22 09:28, Post Gresql wrote:

Hello

I first successfully ran

pg_dump -U postgres -b -E UTF8 -f my_dump --format=custom -n my_schema 
--no-owner -v my_db


but then

pg_restore --single-transaction -v -U postgres -O -e -d my_other_db my_dump

failed with

pg_restore: connecting to database for restore
pg_restore: creating FUNCTION "my_schema.update_b()"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 7142; 1255 8618854 
FUNCTION update_b() previous_owner
pg_restore: [archiver (db)] could not execute query: ERROR: schema 
my_schema" does not exist

    Command was: CREATE FUNCTION "my_schema".update_b() RETURNS void
    LANGUAGE plpgsql
    AS $$
    BEGIN
    PERFORM pg_advisory_xact_lock(1);

    INSERT INTO ...;
    END;
    $$;


This is PG version 10.19

Is there a solution for this (apart from upgrading to a newer PG version)?


What was the *complete* pg_dump command?

--
Angular momentum makes the world go 'round.

Re: pg_restore error on function

2022-11-03 Thread Adrian Klaver

On 11/3/22 07:45, Ron wrote:

On 11/3/22 09:28, Post Gresql wrote:

Hello

I first successfully ran

pg_dump -U postgres -b -E UTF8 -f my_dump --format=custom -n my_schema 
--no-owner -v my_db





What was the *complete* pg_dump command?


I'm going to say the above.



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Reducing bandwidth usage of database replication

2022-11-03 Thread Ben Chobot

Sascha Zenglein wrote on 11/2/22 7:56 AM:

Hi all,

I want to use the postgres-native logical replication to have multiple 
clients receive and send data to a central database.
Real-time is far less important than network usage, and with my 
current test setup it appears both instances communicate frequently if 
a subscription is active, even if nothing is happening.


Is there a good way to reduce data usage, for example by limiting the 
amount of keep-alive messages? One database will likely be idle most 
of the time.


I estimated the current solution to idle at around 1.4MiB per day. 
Ideally it would use less than 100KiB a day.


I'm also open for other solutions if anything comes to mind!


It sounds like you are trying to use logical replication to give 
yourself a multi-master database setup, and that you've squeezed as much 
optimization as you can from logical replication and found it to be 
unworkable. If that's a fair assessment, you might look into something 
like bucardo instead. I haven't done the network comparison but it is a 
different solution that might meet your goals.


Re: pg_restore error on function

2022-11-03 Thread Post Gresql



On 2022-11-03 15:57, Adrian Klaver wrote:

On 11/3/22 07:45, Ron wrote:

On 11/3/22 09:28, Post Gresql wrote:

Hello

I first successfully ran

pg_dump -U postgres -b -E UTF8 -f my_dump --format=custom -n my_schema 
--no-owner -v my_db




What was the *complete* pg_dump command?


I'm going to say the above.


That is the complete command (actual names redacted)






Re: pg_restore error on function

2022-11-03 Thread Post Gresql



On 2022-11-03 15:43, Adrian Klaver wrote:

On 11/3/22 07:28, Post Gresql wrote:

Hello

I first successfully ran

pg_dump -U postgres -b -E UTF8 -f my_dump --format=custom -n my_schema 
--no-owner -v my_db

but then

pg_restore --single-transaction -v -U postgres -O -e -d my_other_db my_dump

failed with

pg_restore: connecting to database for restore
pg_restore: creating FUNCTION "my_schema.update_b()"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 7142; 1255 8618854 FUNCTION 
update_b() previous_owner
pg_restore: [archiver (db)] could not execute query: ERROR: schema my_schema" 
does not exist
 Command was: CREATE FUNCTION "my_schema".update_b() RETURNS void
 LANGUAGE plpgsql
 AS $$
 BEGIN
 PERFORM pg_advisory_xact_lock(1);

 INSERT INTO ...;
 END;
 $$;


This is PG version 10.19

Is there a solution for this (apart from upgrading to a newer PG version)?


Was there another error before this?

Did the -v  show the schema my_schema being created?



No there were no other output apart from what I showed above.

No, no mention of creating the schema.


Btw. it worked fine when I used plain text output and moved the part of 
creating the function to near the end of the dump file.

Could it be that the

create function  .

in the dump file does not implicitly create the schema as a

create table .

would?






The only workaround I can think of is to dump in plain text and then edit the 
dump file.









Re: pg_restore error on function

2022-11-03 Thread David G. Johnston
On Thu, Nov 3, 2022 at 8:39 AM Post Gresql  wrote:

>
> On 2022-11-03 15:43, Adrian Klaver wrote:
> > On 11/3/22 07:28, Post Gresql wrote:
> >> Hello
> >>
> >> I first successfully ran
> >>
> >> pg_dump -U postgres -b -E UTF8 -f my_dump --format=custom -n my_schema
> --no-owner -v my_db
> >>
> >> but then
> >>
> >> pg_restore --single-transaction -v -U postgres -O -e -d my_other_db
> my_dump
> >>
> >> failed with
> >>
> >> pg_restore: connecting to database for restore
> >> pg_restore: creating FUNCTION "my_schema.update_b()"
> >> pg_restore: [archiver (db)] Error while PROCESSING TOC:
> >> pg_restore: [archiver (db)] Error from TOC entry 7142; 1255 8618854
> FUNCTION update_b() previous_owner
> >> pg_restore: [archiver (db)] could not execute query: ERROR: schema
> my_schema" does not exist
> >>  Command was: CREATE FUNCTION "my_schema".update_b() RETURNS void
> >>  LANGUAGE plpgsql
> >>  AS $$
> >>  BEGIN
> >>  PERFORM pg_advisory_xact_lock(1);
> >>
> >>  INSERT INTO ...;
> >>  END;
> >>  $$;
> >>
> >>
> >> This is PG version 10.19
> >>
> >> Is there a solution for this (apart from upgrading to a newer PG
> version)?
> >
> > Was there another error before this?
> >
> > Did the -v  show the schema my_schema being created?
>
>
> No there were no other output apart from what I showed above.
>
> No, no mention of creating the schema.
>
>
> Btw. it worked fine when I used plain text output and moved the part of
> creating the function to near the end of the dump file.
>
> Could it be that the
>
> create function  .
>
> in the dump file does not implicitly create the schema as a
>
> create table .
>
> would?
>
>
I don't know where you got the idea that a schema is implicitly created via
create table...a schema is never implicitly created.
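[Editor's note: a quick psql session illustrates that no command creates a schema implicitly; the schema name is made up:]

```shell
# Referencing a non-existent schema fails for CREATE TABLE just as for functions:
psql -c "CREATE TABLE no_such_schema.t (i int);"
# ERROR:  schema "no_such_schema" does not exist

# The schema must be created explicitly first:
psql -c "CREATE SCHEMA no_such_schema;"
psql -c "CREATE TABLE no_such_schema.t (i int);"   # now succeeds
```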

It is unlikely, though possible (casts were just fixed in 10.20), that
you've discovered a dependency tracking bug.  At this point you would need
to produce and show a self-contained test case to provide further help.
The information you have shown so far indeed looks problematic but
insufficient to diagnose further.

pg_dump -n schema is documented to dump both the schema and its objects so
pg_restore should be restoring both, and should be restoring schemas before
trying to restore most anything else.

You could try playing with pg_restore -L and try to get a sequence that
works.
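[Editor's note: a sketch of the -l/-L workflow mentioned above; the filenames are illustrative:]

```shell
# Dump the archive's table of contents to an editable list file:
pg_restore -l my_dump > my_dump.list
# Edit my_dump.list: reorder entries, or disable one with a leading ';'
# Then restore using the edited list to control order and selection:
pg_restore -L my_dump.list --single-transaction -U postgres -O -d my_other_db my_dump
```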

David J.


Re: pg_restore error on function

2022-11-03 Thread Adrian Klaver

On 11/3/22 08:38, Post Gresql wrote:


On 2022-11-03 15:43, Adrian Klaver wrote:

On 11/3/22 07:28, Post Gresql wrote:

Hello

I first successfully ran

pg_dump -U postgres -b -E UTF8 -f my_dump --format=custom -n 
my_schema --no-owner -v my_db


but then

pg_restore --single-transaction -v -U postgres -O -e -d my_other_db 
my_dump


failed with

pg_restore: connecting to database for restore
pg_restore: creating FUNCTION "my_schema.update_b()"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 7142; 1255 8618854 
FUNCTION update_b() previous_owner
pg_restore: [archiver (db)] could not execute query: ERROR: schema 
my_schema" does not exist

 Command was: CREATE FUNCTION "my_schema".update_b() RETURNS void
 LANGUAGE plpgsql
 AS $$
 BEGIN
 PERFORM pg_advisory_xact_lock(1);

 INSERT INTO ...;
 END;
 $$;


This is PG version 10.19

Is there a solution for this (apart from upgrading to a newer PG 
version)?


Was there another error before this?

Did the -v  show the schema my_schema being created?



No there were no other output apart from what I showed above.


The -v to pg_restore should output the progress:

https://www.postgresql.org/docs/current/app-pgrestore.html

-v
--verbose

Specifies verbose mode. This will cause pg_restore to output 
detailed object comments and start/stop times to the output file, and 
progress messages to standard error. Repeating the option causes 
additional debug-level messages to appear on standard error.



Something like:

pg_dump -d test -U postgres -Fc -n test -f test_sch.out

pg_restore -d t -U postgres -v test_sch.out
pg_restore: connecting to database for restore
pg_restore: creating SCHEMA "test"
pg_restore: creating TABLE "test.c2"
pg_restore: creating TABLE "test.supplies"
pg_restore: creating SEQUENCE "test.supplies_id_seq"
pg_restore: creating SEQUENCE OWNED BY "test.supplies_id_seq"
pg_restore: creating TABLE "test.up_test"
pg_restore: creating DEFAULT "test.supplies id"
pg_restore: processing data for table "test.c2"
pg_restore: processing data for table "test.supplies"
pg_restore: processing data for table "test.up_test"
pg_restore: executing SEQUENCE SET supplies_id_seq
pg_restore: creating CONSTRAINT "test.c2 pk1"
pg_restore: creating CONSTRAINT "test.supplies supplies_pkey"



No, no mention of creating the schema.


As you see the schema was created first.




Btw. it worked fine when I used plain text output and moved the part of 
creating the function to near the end of the dump file.


Could it be that the

create function  .


Well the schema should have already been created.

Assuming this:

pg_restore: connecting to database for restore
pg_restore: creating FUNCTION "my_schema.update_b()"

was not the result of an edit then something is not working correctly.

What is my_schema.update_b() doing?




in the dump file does not implicitly create the schema as a

create table .

would?






The only workaround I can think of is to dump in plain text and then 
edit the dump file.







--
Adrian Klaver
adrian.kla...@aklaver.com





Re: pg_restore error on function

2022-11-03 Thread Tom Lane
"David G. Johnston"  writes:
> On Thu, Nov 3, 2022 at 8:39 AM Post Gresql  wrote:
>> Could it be that the
>> create function  .
>> in the dump file does not implicitly create the schema as a
>> create table .
>> would?

> I don't know where you got the idea that a schema is implicitly created via
> create table...a schema is never implicitly created.

Indeed.

> It is unlikely, though possible (casts were just fixed in 10.20), that
> you've discovered a dependency tracking bug.  At this point you would need
> to produce and show a self-contained test case to provide further help.
> The information you have shown so far indeed looks problematic but
> insufficient to diagnose further.

Yeah.  The explanation would seem to be that pg_dump didn't dump
"CREATE SCHEMA my_schema", or that pg_restore chose to skip that
object, or that one of them mistakenly thought that the CREATE
FUNCTION should be issued first.  None of those things should happen
AFAICS, and none of them does happen in a simple test case, so
there's something going on here that we've not been shown.

regards, tom lane




Re: pg_restore error on function

2022-11-03 Thread Ron

On 11/3/22 09:57, Adrian Klaver wrote:

On 11/3/22 07:45, Ron wrote:

On 11/3/22 09:28, Post Gresql wrote:

Hello

I first successfully ran

pg_dump -U postgres -b -E UTF8 -f my_dump --format=custom -n my_schema 
--no-owner -v my_db





What was the *complete* pg_dump command?


I'm going to say the above.


I'm embarrassed.

--
Angular momentum makes the world go 'round.




Some questions about Postgres

2022-11-03 Thread Siddharth Jain
Hi all,

I am new to Postgres. I read an old article comparing MySQL to Postgres and
wanted to get answers to following questions. All questions are w.r.t.
Postgres 14:

- do postgres secondary indexes point directly to tuples on disk?
- does postgres use a separate process for each connection?
- does postgres use any application cache similar to innodb_buffer_pool?

Thanks a lot. If you can provide references to documentation, that would be
great.


Re: Some questions about Postgres

2022-11-03 Thread Christophe Pettus



> On Nov 3, 2022, at 10:38, Siddharth Jain  wrote:
> I read an old article comparing MySQL to Postgres and wanted to get answers 
> to following questions. 

Given your questions, I suspect that you read the technical note from Uber 
about why they switched back to MySQL from PostgreSQL.  There are quite a few 
responses out there to it, including:

https://thebuild.com/presentations/uber-perconalive-2017.pdf

To answer the specific questions:

1. Yes, all PostgreSQL indexes point directly to tuples on disk.  They do not 
point to a primary key (which a table might not even have).

2. Yes, each backend connection to PostgreSQL is a separate process.  The usual 
way of mitigating any performance issues this might introduce is to use a 
pooler such as pgbouncer.

3. This is equivalent to PostgreSQL's shared buffers.
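[Editor's note: points 2 and 3 can be checked from psql; a small sketch, assuming a default local connection:]

```shell
# Each connection runs in its own OS process; this returns that process's pid:
psql -c "SELECT pg_backend_pid();"
# PostgreSQL's counterpart to innodb_buffer_pool:
psql -c "SHOW shared_buffers;"
```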



Re: Some questions about Postgres

2022-11-03 Thread Siddharth Jain
Thanks Christophe. Yes that is the article and I read the response as well.

On Thu, Nov 3, 2022 at 10:42 AM Christophe Pettus  wrote:

>
>
> > On Nov 3, 2022, at 10:38, Siddharth Jain  wrote:
> > I read an old article comparing MySQL to Postgres and wanted to get
> answers to following questions.
>
> Given your questions, I suspect that you read the technical note from Uber
> about why they switched back to MySQL from PostgreSQL.  There are quite a
> few responses out there to it, including:
>
> https://thebuild.com/presentations/uber-perconalive-2017.pdf
>
> To answer the specific questions:
>
> 1. Yes, all PostgreSQL indexes point directly to tuples on disk.  They do
> not point to a primary key (which a table might not even have).
>
> 2. Yes, each backend connection to PostgreSQL is a separate process.  The
> usual way of mitigating any performance issues this might introduce is to
> use a pooler such as pgbouncer.
>
> 3. This is equivalent to PostgreSQL's shared buffers.


shutdown Postgres (standby) host causing timeout on other servers in replication

2022-11-03 Thread Joanna Xu
Hi All,

We have PostgreSQL master-slave replication configuration with 4 servers.
After shutting down one standby server, we found below errors on all other 3 
servers and the applications couldn't access the database due to the timeout.
DAO-2 datasource connection failure:Time out waiting for a DefaultDataSource
ERROR SQLException code: 0 SQLState: null

What could be the cause of the errors?  Would anything be helpful to prevent it 
from happening?

Thank you for the help!

Cheers,
Joanna Xu
DB Specialist | Amdocs OPEN NETWORK

+1 613 595 5234 (office)
+1 613 899 6178 (mobile)
+1 877 943 3772 (24hr support line)

This email and the information contained herein is proprietary and confidential 
and subject to the Amdocs Email Terms of Service, which you may review at 
https://www.amdocs.com/about/email-terms-of-service 



Re: Putting the O/S user for "local" "peer" authentication in the "postgres" group vs chmod'ing the "pg*.conf" files to be readable by "all"

2022-11-03 Thread Bryn Llewellyn
> adrian.kla...@aklaver.com wrote:
> 
>> b...@yugabyte.com wrote:
>> 
>> Adrian gave me this link:
>> https://ubuntu.com/server/docs/databases-postgresql
>> Of course I'd read that right at the outset. The subtext is loud and clear. 
>> You need to do some things as the "postgres" user and some things as "root". 
>> That's why I enabled "sudo" for "postgres" (just as the code examples on 
>> that page imply).
> 
> You don't need to do that. Just use sudo as what ever user you log in as. For 
> example:
> 
> aklaver@arkansas:~$ sudo pg_ctlcluster stop 14/main
> [sudo] password for aklaver:
> 
> Again very simple explanation, the OS postgres user is just created to run 
> the server. It does not even have a home directory.

I should make it clear that my VM has four "ordinary" users. Two are present on 
first use when VM creation finishes: the system administrator (called 
"parallels") and, of course, "root". The "parallels" user is enabled for 
"sudo". Installing PG brings "postgres". (I created mine before hand, and it 
was unperturbed by the PG installation. This is my private laptop. And it suits 
me to give it a home directory and to customize its ".bashrc". But that's so 
that I can look around with the minimum of fuss.) Finally, there's my 
"clstr_mgr" O/S user that acts as the authorization vehicle for my "clstr$mgr" 
cluster-role. Each of "postgres" and "clstr_mgr" is in its own singleton 
group—and in no other groups. And neither is set up for "sudo".)

The doc explains how to edit (at least) these: the "config_file", the 
"hba_file", and the "ident_file". You edit them to achieve certain supported 
outcomes. I mentioned that, during my "for real" installation (from a 
re-established freshly created Ubuntu VM) I noted how the config files were set 
up:

-rw-r--r-- 1 postgres postgres   143 Nov  1 15:48 
/etc/postgresql/11/main/pg_ctl.conf
-rw-r----- 1 postgres postgres  4686 Nov  1 15:48 
/etc/postgresql/11/main/pg_hba.conf
-rw-r----- 1 postgres postgres  1636 Nov  1 15:48 
/etc/postgresql/11/main/pg_ident.conf
-rw-r--r-- 1 postgres postgres 24321 Nov  1 15:48 
/etc/postgresql/11/main/postgresql.conf
-rw-r--r-- 1 postgres postgres   317 Nov  1 15:48 
/etc/postgresql/11/main/start.conf

So only "postgres" can edit the files that must be so edited.

Apparently, an unwritten rule says that one must never end up so that "whoami" 
shows "postgres". I see that I can, then, always do, for example, this (from 
"parallels"):

sudo -u postgres vi pg_hba.conf

And, given that one major theme in our recent mutual, extended, exchanges is 
that I want to use "local", "peer" authentication for the cluster-role 
"postgres" via the O/S user with the same name, I see that I can always run all 
the SQL scripts that I want, using this authentication, like this:

sudo -u postgres psql -f my_script.sql

With this in mind, I re-wrote my "clean start" script thus:

#!/bin/bash

# do this as ANY user that can do "sudo" (and this includes "root" itself)

sudo pg_ctlcluster stop 11/main
sudo rm -Rf /var/lib/postgresql/11/main

sudo -u postgres initdb \
  -U postgres --encoding UTF8 --locale=C --lc-collate=C --lc-ctype=en_US.UTF-8 \
  -A md5 --pwfile=my_password_file \
  -D /var/lib/postgresql/11/main

sudo pg_ctlcluster start 11/main

sudo -u postgres psql -f my_script.sql

It certainly works. And when it's finished, this:

sudo -u postgres psql -c " select name, setting from pg_settings where category 
= 'File Locations'; "

shows that my files are where they ought to be. Then, further tests show that 
"local", "peer" authorization works as intended for my "clstr$mgr" role and 
that I can connect from client machines. So all is good. 

Why, though, is "pg_ctlcluster stop/start... " preferred over "systemctl 
stop/start postgresql"? Both are quick enough. And I can't see any difference 
in effect. Moreover, the latter is what 
https://ubuntu.com/server/docs/databases-postgresql recommends.

Notice that I'm still using the off-limits "initdb" here. (I wired it up with a 
link on "/usr/bin".) Is there any reason to change that and to try to work out 
how to use what David (in an email soon after Adrian's) suggested?

sudo pg_dropcluster --stop 11 main
sudo pg_createcluster 11 main
sudo pg_ctlcluster start 11/main

I tried it. But it "did not work". I don't have the energy to describe the 
errors that were reported and the nasty outcomes that I got when I tried to use 
the new cluster. There's no useful doc for that approach and I've already 
established that Internet search gets me nowhere. So I'm inclined not to use it.

Rather, I want, now, simply to declare victory with the script that I showed 
and return to ordinary productive work.



Re: Putting the O/S user for "local" "peer" authentication in the "postgres" group vs chmod'ing the "pg*.conf" files to be readable by "all"

2022-11-03 Thread Bryn Llewellyn
> david.g.johns...@gmail.com wrote:
> 
> Some repetition of what Adrian just posted ahead...
> 
>> b...@yugabyte.com wrote:
>> 
>> How can it be that the PG doc itself leads you by the hand to a regime where 
>> you need to use undocumented features?
> 
> The documentation tries to make clear that if you use third-party packaging 
> to install PostgreSQL (which most people should) that the documentation for 
> the packaging should describe this layer where PostgreSQL and the operating 
> system intersect.  You even quoted it: "follow the instructions for the 
> specific platform.", though reading that now I think something along the 
> lines of:
> 
>  "Additionally, while reading the next chapter, Server Setup and Operation, 
> is recommended if you are using a binary package the setup and operational 
> environment it creates is likely to be somewhat different than what is 
> described in this documentation.  Please read the documentation for the 
> packages you install to learn how it behaves and what additional 
> platform-specific features it provides."
> 
> Actually, not sure on the best approach here, since the Server Setup chapter 
> already says:
> 
> https://www.postgresql.org/docs/current/runtime.html
> 
> "The directions in this chapter assume that you are working with plain 
> PostgreSQL without any additional infrastructure, for example a copy that you 
> built from source according to the directions in the preceding chapters. If 
> you are working with a pre-packaged or vendor-supplied version of PostgreSQL, 
> it is likely that the packager has made special provisions for installing and 
> starting the database server according to your system's conventions. Consult 
> the package-level documentation for details."
> 
> However, that appears below-the-fold after a decent sized table of contents.
> 
> Changing anything now feels like an over-reaction to a single incident, but I 
> sympathize with the general confusion all this causes, and the fact it is 
> only in the recent past that we've made this first attempt to rectify the 
> situation by adding these comments.  A second-pass based upon this encounter 
> seems at least reasonable.  Whether I or others end up deciding it is worth 
> proposing a patch remains to be seen.

Thanks for your explanations, David. I believe that my point about how all this 
seems to me is well taken. I might concede that the Debian/Ubuntu packaging 
provides adequate reference doc by implementing its "man" pages. But I haven't 
found anything like a user guide that explains *why* ordinarily documented PG 
features have been hidden from sight (but not removed) and how (if the 
Debian/Ubuntu alternatives are just wrappers for the native PG) one might do 
that wrapping by hand. Doing this would demonstrate what benefits the wrapping 
brings.

Anyway, I now have a working PG system and useful notes. When, presently, I 
make a second VM for PG 15 (I prefer separate VMs over having both versions in 
the same VM) it should all go quickly and smoothly.

I have no reason to describe to anybody else how to install and configure 
PG—and I certainly won't do this.

My interest in being able to re-establish the pristine cluster starting state 
reliably and quickly is to support my own productivity. I'll presently have SQL 
scripts that establish the "multitenancy by self-imposed discipline" scheme 
that I've referred to from any arbitrary state of population of a cluster. I 
don't intend my scheme to co-exist with other schemes. And I don't expect there 
to be any real use cases for starting with an arbitrarily populated cluster and 
taking it to a state that conforms with my scheme. Rather, all this is about 
demonstrating how to establish the scheme on the assumption (but not 
requirement) that one starts with a brand-new cluster that will be dedicated to 
the approach that I've sketched.

I'm looking forward to returning to that project and putting all that we've 
been discussing here behind me.









Re: Putting the O/S user for "local" "peer" authentication in the "postgres" group vs chmod'ing the "pg*.conf" files to be readable by "all"

2022-11-03 Thread Adrian Klaver

On 11/3/22 14:49, Bryn Llewellyn wrote:

I should make it clear that my VM has four "ordinary" users. Two are 
present on first use when VM creation finishes: the system administrator 
(called "parallels") and, of course, "root". The "parallels" user is 
enabled for "sudo". Installing PG brings "postgres". (I created mine 
beforehand, and it was unperturbed by the PG installation. This is my 
private laptop. And it suits me to give it a home directory and to 
customize its ".bashrc". But that's so that I can look around with the 
minimum of fuss.) Finally, there's my "clstr_mgr" O/S user that acts as 
the authorization vehicle for my "clstr$mgr" cluster-role. Each of 
"postgres" and "clstr_mgr" is in its own singleton group—and in no other 
groups. And neither is set up for "sudo".)





So only "postgres" can edit the files that must be so edited.


That is not true:

aklaver@arkansas:~$ whoami
aklaver


aklaver@arkansas:~$ sudo vi /etc/postgresql/14/main/pg_hba.conf
[sudo] password for aklaver:

which opens pg_hba.conf for editing.



Apparently, an unwritten rule says that one must never end up so that 
"whoami" shows "postgres". I see that I can, then, always do, for 
example, this (from "parallels"):


sudo -u postgres vi pg_hba.conf

And, given that one major theme in our recent mutual, extended, 
exchanges is that I want to use "local", "peer" authentication for the 
cluster-role "postgres" via the O/S user with the same name, I see that 
I can always run all the SQL scripts that I want, using this 
authentication, like this:


You want to use local peer with OS user postgres, that is not a 
requirement. You could set up Postgres to log in the db user postgres by 
any of the other auth means and do the below without sudo(ing) to OS 
user postgres.




sudo -u postgres psql -f my_script.sql

With this in mind, I re-wrote my "clean start" script thus:

#!/bin/bash

# do this as ANY user that can do "sudo" (and this includes "root" itself)

sudo pg_ctlcluster stop 11/main
sudo rm -Rf /var/lib/postgresql/11/main

sudo -u postgres initdb \
   -U postgres --encoding UTF8 --locale=C --lc-collate=C --lc-ctype=en_US.UTF-8 \
   -A md5 --pwfile=my_password_file \
   -D /var/lib/postgresql/11/main

sudo pg_ctlcluster start 11/main

sudo -u postgres psql -f my_script.sql
It certainly works. And when it's finished, this:

sudo -u postgres psql -c " select name, setting from pg_settings where category = 'File Locations'; "

shows that my files are where they ought to be. Then, further tests show 
that "local", "peer" authorization works as intended for my "clstr$mgr" 
role and that I can connect from client machines. So all is good.


Why, though, is "pg_ctlcluster stop/start..." preferred over "systemctl 
stop/start postgresql"? Both are quick enough. And I can't see any 
difference in effect. Moreover, the latter is what 
https://ubuntu.com/server/docs/databases-postgresql recommends.


It isn't you where using pg_ctl and in the Debian/Ubuntu packaging the 
better option for that is pg_ctlcluster. I generally use the systemd 
scripts to start/stop Postgres instances, though when I do pg_lsclusters 
I tend to fall into using pg_ctlcluster as the cluster info is right there.




Notice that I'm still using the off-limits "initdb" here. (I wired it up 
with a link on "/usr/bin".) Is there any reason to change that and to 
try to work out how to use what David (in an email soon after Adrian's) 
suggested?


Yes and since that is basically coloring outside the lines, then that 
leads to the below blowing up.




sudo pg_dropcluster --stop 11 main
sudo pg_createcluster 11 main
sudo pg_ctlcluster start 11/main

I tried it. But it "did not work". I don't have the energy to describe 
the errors that were reported and the nasty outcomes that I got when I 
tried to use the new cluster. There's no useful doc for that approach 
and I've already established that Internet search gets me nowhere. So 
I'm inclined not to use it.


Per the saying, "In a ham and eggs breakfast the chicken is involved but 
the pig is committed", right now you are involved in the Debian/Ubuntu 
process, not committed. Until you commit you will not get the results you 
want.




Rather, I want, now, simply to declare victory with the script that I 
showed and return to ordinary productive work.




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: shutdown Postgres (standby) host causing timeout on other servers in replication

2022-11-03 Thread Tom Lane
Joanna Xu  writes:
> We have PostgreSQL master-slave replication configuration with 4 servers.
> After shutting down one standby server, we found below errors on all other 3 
> servers and the applications couldn't access the database due to the timeout.
> DAO-2 datasource connection failure:Time out waiting for a 
> DefaultDataSource
> ERROR SQLException code: 0 SQLState: null

There's no such error message, indeed nothing about "DefaultDataSource"
at all, in the community Postgres code.  I guess it's coming from some
extension or maybe some client-side replication logic.  I suggest
finding where it's coming from and then asking the appropriate authors.

regards, tom lane




Re: Putting the O/S user for "local" "peer" authentication in the "postgres" group vs chmod'ing the "pg*.conf" files to be readable by "all"

2022-11-03 Thread Peter J. Holzer
On 2022-11-03 15:37:07 -0700, Adrian Klaver wrote:
> On 11/3/22 14:49, Bryn Llewellyn wrote:
> > So only "postgres" can edit the files that must be so edited.
> 
> That is not true:
> 
> aklaver@arkansas:~$ whoami
> aklaver
> 
> 
> aklaver@arkansas:~$ sudo vi /etc/postgresql/14/main/pg_hba.conf
> [sudo] password for aklaver:
> 
> which opens pg_hba.conf for editing.

Well, yes. Root can edit the file, too. But root can edit anything[1].

hp

[1] Except ... lots of stuff, actually.

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: shutdown Postgres (standby) host causing timeout on other servers in replication

2022-11-03 Thread Ron

On 11/3/22 15:55, Joanna Xu wrote:


Hi All,

We have PostgreSQL master-slave replication configuration with 4 servers.

After shutting down one standby server, we found below errors on all other 
3 servers and the applications couldn’t access the database due to the 
timeout.


DAO-2 datasource connection failure:Time out waiting for a 
DefaultDataSource


ERROR SQLException code: 0 SQLState: null

What could be the cause of the errors? Would anything be helpful to prevent 
it from happening?




Synchronous or asynchronous replication?

Go to the database server and look in *its* logs.

--
Angular momentum makes the world go 'round.

Re: Putting the O/S user for "local" "peer" authentication in the "postgres" group vs chmod'ing the "pg*.conf" files to be readable by "all"

2022-11-03 Thread Bryn Llewellyn
> adrian.kla...@aklaver.com wrote:
> 
>> b...@yugabyte.com wrote:
>> 
>> So only "postgres" can edit the files that must be so edited.
> 
> That is not true [sudo vi some-file] which opens [it for editing].


By all means. I didn't bother to spell that out;

> It isn't you where using pg_ctl and in the Debian/Ubuntu packaging the better 
> option for that is pg_ctlcluster. I generally use the systemd scripts to 
> start/stop Postgres instances, though when I do pg_lsclusters I tend to fall 
> into using pg_ctlcluster as the cluster info is right there.


Can't parse this. Sorry.

>> Notice that I'm still using the off-limits "initdb" here. (I wired it up 
>> with a link on "/usr/bin".) Is there any reason to change that and to try to 
>> work out how to use what David (in an email soon after Adrian's) suggested?
> 
> Yes and since that is basically coloring outside the lines, then that leads 
> to ... blowing up ... Per the saying, "In a ham and eggs breakfast the 
> chicken is involved but the pig is committed", right now you are involved in 
> the Debian/Ubuntu process but not committed. Until you commit you will not 
> get the results you want.

Fair enough. I started again from the state where my "config_file", my 
"hba_file", and my "ident_file" are all customized as I want them to be but 
where I hadn't yet tried to trash my cluster and re-create it. Then I tried 
with "pg_dropcluster --stop" and "pg_createcluster". I discovered immediately 
that this approach (in contrast to the "initdb" approach) blows away all the 
*.conf files and recreates them in canonical form—just as the "man" page says. 
This is a bit of a nuisance. But it's negotiable. I installed my files as I 
want them in a safe place, outside of the entire PG world, and then used this 
script:

sudo pg_dropcluster --stop 11 main

sudo pg_createcluster 11 main \
  -e UTF8 --locale=C --lc-collate=C --lc-ctype=en_US.UTF-8 \
  -d /var/lib/postgresql/11/main \
  > /dev/null

sudo cp /etc/ybmt-code/pg-and-yb-config-files/*.conf \
/etc/postgresql/11/main

sudo chown postgres /etc/postgresql/11/main/*.conf
sudo chgrp postgres /etc/postgresql/11/main/*.conf
sudo chmod 644  /etc/postgresql/11/main/*.conf
sudo chmod 640  /etc/postgresql/11/main/pg_hba.conf
sudo chmod 640  /etc/postgresql/11/main/pg_ident.conf

sudo pg_ctlcluster start 11/main

sudo -u postgres psql -c " select name, setting from pg_settings where category 
= 'File Locations'; "

sudo -u postgres psql -f 
/etc/ybmt-code/cluster-mgmt/01-initialize-brand-new-YB-or-PG-clstr/00.sql

It worked without error and had the intended effect. My old approach with the 
uncommitted chicken used to take ~3 sec. This new approach takes ~5 sec. The 
difference is completely unimportant.

For various reasons, I need the non-standard "--lc-collate=C" choice. I could 
doubtless leave all the other options unspecified. But that's the style 
discussion we had the other day—and I prefer, here, to self-doc my choices.

I discovered that using, say, the "initdb" approach, then the "kosher" 
approach, and then the "initdb" approach brought all sorts of errors. That's 
the beauty of using a VM and file backups (or snapshots). I suppose this is to 
be expected.

Can I declare victory, now, with the approach that I showed above?

I'm impatient to get back to my real project.

p.s. Is my pessimism justified, i.e. that there simply exists no plain-English 
user guide for this whole Debian/Ubuntu apparatus? Or is it, rather, that my 
search skills are just too feeble?



autovacuum hung on simple tables

2022-11-03 Thread senor
Hi All,

I'm still trying to get a better understanding of the autovacuum process. 
This is a different Postgres installation than in my previous posts, and it is 
confusing me in new ways.
Still 11.4 running on CentOS 7 and 8 nvme in software raid

This issue started with postgres "...not accepting commands to avoid 
wraparound...".
On this server I was able to stop all access to DB and dedicate resources to 
only postgres. I thought I could allow autovacuum to do its thing with a ton of 
workers. 

I think everything boils down to 2 questions:
1. Can autovacuum or manual vacuum be coerced into dealing with the oldest 
tables first?
1a. Where might I find advice on configuring Postgres resources for maximum 
CPU & memory use during maintenance? In other words, the quickest path out of 
"not accepting commands" land, besides increasing autovacuum_freeze_max_age.
2. What can cause autovacuum to stall? Could an associated toast table or 
index be the cause?

It appeared that autovacuum was not choosing the tables with the oldest xmin so 
I produced an ordered list of oldest tables with:
SELECT oid::regclass, age(relfrozenxid)
FROM pg_class
WHERE relkind IN ('r', 't', 'm')
AND age(relfrozenxid) > 20
ORDER BY 2 DESC

The list contained over 6000 tables from pg_toast. They all belonged to daily 
reports tables. The reports are created daily and not touched again.

Most of the autovacuums that did start seem to be hung. Never completing even 
on the simplest tables. 
The newest 2 autovacuums in the list are completing about one every couple 
seconds.
CPU and disk IO are nearly idle.
An example table is shown here:

phantom=# select
phantom-#   pg_size_pretty(pg_total_relation_size(relid)) as total_size,
phantom-#   pg_size_pretty(pg_relation_size(relid, 'main')) as 
relation_size_main,
phantom-#   pg_size_pretty(pg_relation_size(relid, 'fsm')) as 
relation_size_fsm,
phantom-#   pg_size_pretty(pg_relation_size(relid, 'vm')) as 
relation_size_vm,
phantom-#   pg_size_pretty(pg_relation_size(relid, 'init')) as 
relation_size_init,
phantom-#   pg_size_pretty(pg_table_size(relid)) as table_size,
phantom-#   pg_size_pretty(pg_total_relation_size(relid) - 
pg_relation_size(relid)) as external_size
phantom-#  from
phantom-#   pg_catalog.pg_statio_user_tables
phantom-# where
phantom-#   relname like 'report_user_439';
 total_size | relation_size_main | relation_size_fsm | relation_size_vm | relation_size_init | table_size | external_size
------------+--------------------+-------------------+------------------+--------------------+------------+---------------
 80 kB      | 8192 bytes         | 24 kB             | 8192 bytes       | 0 bytes            | 48 kB      | 72 kB
(1 row)


I scripted a vacuum loop using the oldest table list. It's extremely slow but 
it was making better progress than autovacuum was.
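The loop is roughly this shape (the age threshold and the database name here are illustrative, not the exact values I used):

```shell
#!/bin/bash
# Vacuum-freeze tables oldest relfrozenxid first, driven by the same
# ordered query as above. The 200000000 threshold and the database
# name "phantom" are illustrative.
psql -d phantom -At -c "
  SELECT oid::regclass
  FROM pg_class
  WHERE relkind IN ('r', 't', 'm')
    AND age(relfrozenxid) > 200000000
  ORDER BY age(relfrozenxid) DESC;" |
while IFS= read -r tbl; do
  echo "freezing $tbl"
  psql -d phantom -c "VACUUM (FREEZE, VERBOSE) $tbl;"
done
```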

Using ps I saw that there were as many worker processes as defined with 
autovacuum_max_workers, but pg_stat_activity consistently showed 19. I killed 
the script thinking there might be a conflict. I saw no difference after 30 
minutes so I restarted the script. I never saw anything in pg_stat_progress_vacuum.

vacuum settings:
name |  setting  
-+---
 autovacuum  | on
 autovacuum_analyze_scale_factor | 0.1   
 autovacuum_analyze_threshold| 50
 autovacuum_freeze_max_age   | 2 
 autovacuum_max_workers  | 40
 autovacuum_multixact_freeze_max_age | 4 
 autovacuum_naptime  | 4 
 autovacuum_vacuum_cost_delay| 0 
 autovacuum_vacuum_cost_limit| 5000  
 autovacuum_vacuum_scale_factor  | 0.2   
 autovacuum_vacuum_threshold | 50
 autovacuum_work_mem | -1
 log_autovacuum_min_duration | 0 
 vacuum_cleanup_index_scale_factor   | 0.1   
 vacuum_cost_delay   | 0 
 vacuum_cost_limit   | 200   
 vacuum_cost_page_dirty  | 20
 vacuum_cost_page_hit| 1 
 vacuum_cost_page_miss   | 10
 vacuum_defer_cleanup_age| 0 
 vacuum_freeze_min_age   | 5000  
 vacuum_freeze_table_age | 15000 
 vacuum_multixact_freeze_min_age | 500   
 vacuum_multixact_freeze_table_age   | 15000 

I'm now thinking that autovacuum getting hung up is what caused the issue to 
begin with. I see nothing but the successful vacuums from the script and my own 
fat-fingering commands in the postgres logs (set at info).

Any hints are appreciated.
Senor



Re: Postgres to edb AS, need conversion?

2022-11-03 Thread milist ujang
So, do I have to perform conversion/migration?


On Thursday, November 3, 2022, Abdul Sayeed  wrote:

> After migration you need to execute below command from SQL Prompt:
>
>
>
> vacuum (verbose, analyze);  --> to update statistics.
>
> Rest is good.
>
>
> Regards,
> Abdul Sayeed
>

>
> --
> Thanks & Regards,
> Abdul Sayeed
> PostgreSQL DBA
> Postgres Professional Certified
> Skype: abdul.sayeed24
>


-- 
regards

ujang jaenudin | Self-Employed, DBA Consultant
http://id.linkedin.com/pub/ujang-jaenudin/12/64/bab


Re: Postgres to edb AS, need conversion?

2022-11-03 Thread Ron

On 11/3/22 04:44, milist ujang wrote:

Hi all,

Does anyone have experience, or can someone from EDB give guidance, about 
changing engine from open-source Postgres to EDB AS? Do I need to convert the 
data, or is it simply plug-and-play? Or do I need to run a tool to add catalog 
tables owned by EDB AS?


You'd better ask EDB.

--
Angular momentum makes the world go 'round.




Re: Putting the O/S user for "local" "peer" authentication in the "postgres" group vs chmod'ing the "pg*.conf" files to be readable by "all"

2022-11-03 Thread Adrian Klaver

On 11/3/22 18:50, Bryn Llewellyn wrote:

adrian.kla...@aklaver.com  wrote:





It isn't you where using pg_ctl and in the Debian/Ubuntu packaging the 
better option for that is pg_ctlcluster. I generally use the systemd 
scripts to start/stop Postgres instances, though when I do 
pg_lsclusters I tend to fall into using pg_ctlcluster as the cluster 
info is right there.


Can't parse this. Sorry.


Short version, use what works for you.


Fair enough. I started again from the state where my "config_file", my 
"hba_file", and my "ident_file" are all customized as I want them to be 
but where I hadn't yet tried to trash my cluster and re-create it. Then 
I tried with "pg_dropcluster --stop" and "pg_createcluster". I 
discovered immediately that this approach (in contrast to the "initdb" 


initdb will not work on a directory with existing files, so this:

sudo -u postgres initdb \
  -U postgres --encoding UTF8 --locale=C --lc-collate=C --lc-ctype=en_US.UTF-8 \
  -A md5 --pwfile=my_password_file \
  -D /var/lib/postgresql/11/main

Will only work if /var/lib/postgresql/11/main is empty, so none of your 
customized files will be there. You will have to copy them in just as 
you do below.


As example:

postgres@maura:/usr/local/pgsql15> bin/initdb -D data/
The files belonging to this database system will be owned by user 
"postgres".

This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

initdb: error: directory "data" exists but is not empty
initdb: hint: If you want to create a new database system, either remove 
or empty the directory "data" or run initdb with an argument other than 
"data".



FYI, you might want to take a look at the includes mechanism:

https://www.postgresql.org/docs/15/config-setting.html#CONFIG-SETTING-CONFIGURATION-FILE
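For example (a sketch; the directive names are real, the file name "custom.conf" is made up):

```
# At the bottom of postgresql.conf:
include_dir = 'conf.d'       # reads every *.conf in that directory, in filename order
include_if_exists = '/etc/ybmt-code/pg-and-yb-config-files/custom.conf'  # skipped silently if absent
```

That way your customizations survive a regenerated postgresql.conf as long as the include lines are re-added.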


approach) blows away all the *.conf files and recreates them in 
canonical form—just as the "man" page says. This is a bit of a nuisance. 
But it's negotiable. I installed my files as I want them in a safe 
place, outside of the entire PG world, and then used this script:

sudo pg_dropcluster --stop 11 main


sudo pg_createcluster 11 main \
  -e UTF8 --locale=C --lc-collate=C --lc-ctype=en_US.UTF-8 \
  -d /var/lib/postgresql/11/main \
  > /dev/null

sudo cp /etc/ybmt-code/pg-and-yb-config-files/*.conf \
/etc/postgresql/11/main

sudo chown postgres /etc/postgresql/11/main/*.conf
sudo chgrp postgres /etc/postgresql/11/main/*.conf
sudo chmod 644  /etc/postgresql/11/main/*.conf
sudo chmod 640  /etc/postgresql/11/main/pg_hba.conf
sudo chmod 640  /etc/postgresql/11/main/pg_ident.conf

sudo pg_ctlcluster start 11/main

sudo -u postgres psql -c " select name, setting from pg_settings where 
category = 'File Locations'; "


sudo -u postgres psql -f 
/etc/ybmt-code/cluster-mgmt/01-initialize-brand-new-YB-or-PG-clstr/00.sql


It worked without error and had the intended effect. My old approach 
with the uncommitted chicken used to take ~3 sec. This new approach 
takes ~5 sec. The difference is completely unimportant.


For various reasons, I need the non-standard "--lc-collate=C" choice. I 
could doubtless leave all the other options unspecified. But that's the 
style discussion we had the other day—and I prefer, here, to self-doc my 
choices.


I discovered that using, say, the "initdb" approach, then the "kosher" 
approach, and then the "initdb" approach brought all sorts of errors. 
That's the beauty of using a VM and file backups (or snapshots). I 
suppose this is to be expected.


Hence commitment instead of involvement.



*Can I declare victory, now, with the approach that I showed above?*


You are setting the goals not us, that is your decision.

p.s. Is my pessimism justified, i.e. that there simply exists no plain-English 
user guide for this whole Debian/Ubuntu apparatus? Or is it, rather, that my 
search skills are just too feeble?


Not that I know of.

--
Adrian Klaver
adrian.kla...@aklaver.com





Unable to use pg_verify_checksums

2022-11-03 Thread shashidhar Reddy
Hello,

I am using Postgres version 11. In the process of upgrading using pg_upgrade,
I am trying to run checksums on version 11 but am getting this error:

pg_verify_checksums: data checksums are not enabled in cluster

When trying to enable them: pg_verify_checksums: unrecognized option '--enable'

I need help on this.
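(In version 11, pg_verify_checksums can only verify; checksums must be chosen at cluster creation time with initdb --data-checksums. The --enable switch appeared in version 12, where the tool was renamed to pg_checksums and can toggle checksums on a cleanly stopped cluster. Illustrative commands, with made-up paths:)

```shell
# Version 11: checksums are fixed at cluster creation time.
initdb --data-checksums -D /var/lib/postgresql/11/main

# Version 12+: pg_checksums (the renamed tool) can enable them on a
# cleanly stopped cluster. Paths and cluster names are illustrative.
pg_ctlcluster stop 12/main
pg_checksums --enable -D /var/lib/postgresql/12/main
pg_ctlcluster start 12/main
```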

-- 
Shashidhar