get last timestamp of table ddl

2021-11-24 Thread Luca Ferrari
Hi all,
I think I already know the answer, however I came across this table in
Oracle 

that has two columns that triggered my attention: CREATED and
LAST_DDL_TIME.
Apart from being dates (in the Oracle way), they store the time of
creation and last modification to the table structure.
I don't have any particular use case except for blaming someone for a
change in the database structure; however, I'm curious: is there a
smarter way to achieve this in PostgreSQL than simply relying on logs
and commit timestamps?

Luca




How to set alias data type?

2021-11-24 Thread Shaozhong SHI
select 'Total' as Total generates a result with Total as a column name
of unknown type.

When trying to cast
select 'Total' as Total:: text

It simply does not work.

Regards,

David


Re: get last timestamp of table ddl

2021-11-24 Thread hubert depesz lubaczewski
On Wed, Nov 24, 2021 at 02:53:24PM +0100, Luca Ferrari wrote:
> Hi all,
> I think I already know the answer, however I came across this table in
> Oracle 
> 
> that has two columns that triggered my attention: CREATED and
> LAST_DDL_TIME.
> Apart from being dates (in the Oracle way), they store the time of
> creation and last modification to the table structure.
> I don't have any particular use case except from blaming someone about
> a change in the database structure, however I'm curious: is there a
> smarter way to achieve this in PostgreSQL than simply relying on logs
> and commit timestamps?

You could add event triggers to achieve similar functionality.
https://www.depesz.com/2012/07/29/waiting-for-9-3-event-triggers/
and
https://www.postgresql.org/docs/current/sql-createeventtrigger.html
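A minimal sketch of such an event trigger (the audit table and function
names here are illustrative, not from the original posts):

```sql
-- Hypothetical audit table to record who ran DDL and when.
CREATE TABLE ddl_history (
    executed_at     timestamptz NOT NULL DEFAULT now(),
    username        text        NOT NULL DEFAULT current_user,
    command_tag     text,
    object_identity text
);

-- Event trigger function: log every DDL command as it completes.
CREATE OR REPLACE FUNCTION log_ddl() RETURNS event_trigger AS $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
        INSERT INTO ddl_history (command_tag, object_identity)
        VALUES (r.command_tag, r.object_identity);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER track_ddl ON ddl_command_end
    EXECUTE FUNCTION log_ddl();
```

With this in place, the latest DDL time for a table can be read back with a
simple MAX(executed_at) query against ddl_history.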

depesz




Re: How to set alias data type?

2021-11-24 Thread hubert depesz lubaczewski
On Wed, Nov 24, 2021 at 01:57:06PM +, Shaozhong SHI wrote:
> select 'Total' as Total generate result that set Total as a column name
> with unknown type
> 
> When trying to cast
> select 'Total' as Total:: text

You need to cast value, and not name.

select 'Total'::text as Total;
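Note also that an unquoted alias is folded to lower case; double-quoting
preserves the capitalization (a small illustration):

```sql
SELECT 'Total'::text AS Total;    -- column name comes back as "total"
SELECT 'Total'::text AS "Total";  -- quoted alias preserves the capital T
```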

Best regards,

depesz





Re: get last timestamp of table ddl

2021-11-24 Thread Achilleas Mantzios

On 24/11/21 3:53 p.m., Luca Ferrari wrote:

Hi all,
I think I already know the answer, however I came across this table in
Oracle 

that has two columns that triggered my attention: CREATED and
LAST_DDL_TIME.

That would be handy .. :(

Apart from being dates (in the Oracle way), they store the time of
creation and last modification to the table structure.
I don't have any particular use case except from blaming someone about
a change in the database structure, however I'm curious: is there a
smarter way to achieve this in PostgreSQL than simply relying on logs
and commit timestamps?

You mean like trying to correlate pg_class.xmin with some timestamp via 
track_commit_timestamp or other means?
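With track_commit_timestamp = on (set before the DDL ran, and requiring a
restart), such a correlation might be sketched like this; 'my_table' is a
placeholder, and the function returns NULL for transactions committed while
the setting was off:

```sql
-- Approximate "last catalog change" for a table by looking up the
-- commit timestamp of the transaction that last touched its pg_class row.
SELECT c.relname,
       pg_xact_commit_timestamp(c.xmin) AS last_catalog_change
FROM pg_class c
WHERE c.relname = 'my_table';
```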


Luca





--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt





RPM package issue

2021-11-24 Thread Aleš Zelený
Hello all,

I've found on the wiki ( https://wiki.postgresql.org/wiki/Packagers ) that
the packagers list is closed, so the first question is: how do I properly
report a package issue?

In particular, I've realized that in the repository list
/etc/yum.repos.d/pgdg-redhat-all.repo provided by
https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
the pgdg14-updates-debuginfo repository definition is missing.

Tested on CentOS7 VM:

[root@centos7-base ~]# yum -y repolist --enablerepo=pgdg*
...
repo id                                     repo name                                                           status
base/7/x86_64                               CentOS-7 - Base                                                     10 072
epel/x86_64                                 Extra Packages for Enterprise Linux 7 - x86_64                      13 687
extras/7/x86_64                             CentOS-7 - Extras                                                      500
pgdg-common/7/x86_64                        PostgreSQL common RPMs for RHEL/CentOS 7 - x86_64                      342
pgdg-common-srpm-testing/7/x86_64           PostgreSQL common testing SRPMs for RHEL/CentOS 7 - x86_64             189
pgdg-common-testing/7/x86_64                PostgreSQL common testing RPMs for RHEL/CentOS 7 - x86_64              342
pgdg-source-common/7/x86_64                 PostgreSQL 12 for RHEL/CentOS 7 - x86_64 - Source                      189
pgdg10/7/x86_64                             PostgreSQL 10 for RHEL/CentOS 7 - x86_64                             1 003
pgdg10-source/7/x86_64                      PostgreSQL 10 for RHEL/CentOS 7 - x86_64 - Source                      519
pgdg10-updates-debuginfo/7/x86_64           PostgreSQL 10 for RHEL/CentOS 7 - x86_64 - Debuginfo                   405
pgdg11/7/x86_64                             PostgreSQL 11 for RHEL/CentOS 7 - x86_64                             1 111
pgdg11-source/7/x86_64                      PostgreSQL 11 for RHEL/CentOS 7 - x86_64 - Source                      535
pgdg11-source-updates-testing/7/x86_64      PostgreSQL 11 for RHEL/CentOS 7 - x86_64 - Source update testing         0
pgdg11-updates-debuginfo/7/x86_64           PostgreSQL 11 for RHEL/CentOS 7 - x86_64 - Debuginfo                   411
pgdg11-updates-testing/7/x86_64             PostgreSQL 11 for RHEL/CentOS 7 - x86_64 - Updates testing               6
pgdg11-updates-testing-debuginfo/7/x86_64   PostgreSQL 11 for RHEL/CentOS 7 - x86_64 - Debuginfo                     1
pgdg12/7/x86_64                             PostgreSQL 12 for RHEL/CentOS 7 - x86_64                               693
pgdg12-source/7/x86_64                      PostgreSQL 12 for RHEL/CentOS 7 - x86_64 - Source                      379
pgdg12-source-updates-testing/7/x86_64      PostgreSQL 12 for RHEL/CentOS 7 - x86_64 - Source update testing         0
pgdg12-updates-debuginfo/7/x86_64           PostgreSQL 12 for RHEL/CentOS 7 - x86_64 - Debuginfo                   283
pgdg12-updates-testing/7/x86_64             PostgreSQL 12 for RHEL/CentOS 7 - x86_64 - Updates testing               0
pgdg12-updates-testing-debuginfo/7/x86_64   PostgreSQL 12 for RHEL/CentOS 7 - x86_64 - Debuginfo                     0
pgdg13/7/x86_64                             PostgreSQL 13 for RHEL/CentOS 7 - x86_64                               436
pgdg13-source-updates-testing/7/x86_64      PostgreSQL 13 for RHEL/CentOS 7 - x86_64 - Source updates testing        0
pgdg13-updates-debuginfo/7/x86_64           PostgreSQL 13 for RHEL/CentOS 7 - x86_64 - Debuginfo                   176
pgdg13-updates-testing/7/x86_64             PostgreSQL 13 for RHEL/CentOS 7 - x86_64 - Updates testing               0
pgdg13-updates-testing-debuginfo/7/x86_64   PostgreSQL 13 for RHEL/CentOS 7 - x86_64 - Debuginfo                     0
pgdg14/7/x86_64                             PostgreSQL 14 for RHEL/CentOS 7 - x86_64                               173
pgdg14-source-updates-testing/7/x86_64      PostgreSQL 14 for RHEL/CentOS 7 - x86_64 - Source updates testing        0
pgdg14-updates-testing/7/x86_64             PostgreSQL 14 for RHEL/CentOS 7 - x86_64 - Updates testing               1
pgdg14-updates-testing-debuginfo/7/x86_64   PostgreSQL 14 for RHEL/CentOS 7 - x86_64 - Debuginfo                     1
pgdg15-updates-testing/7/x86_64             PostgreSQL 15 for RHEL/CentOS 7 - x86_64 - Updates testing              77
pgdg96/7/x86_64                             PostgreSQL 9.6 for RHEL/CentOS 7 - x86_64                              959
pgdg96-source/7/x86_64                      PostgreSQL 9.6 for RHEL/CentOS 7 - x86_64 - Source                     481
pgdg96-updates-debuginfo/7/x86_64           PostgreSQL 9.6 for RHEL/CentOS 7 - x86_64 - Debuginfo                  394
updates/7/x86_64                            CentOS-7 - Updates                                                   2 963
repolist: 36 328

[root@centos7-base ~]# yum -y info postgresql14-debuginfo
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.karneval.cz
 * epel: mirror.karneval.cz
 * extras: mirror.karneval.cz
 * updates: mirror.karneval.cz
Error: No matching Packages to list


Adding the missing repository section:

# Debuginfo/debugsource packages for stable repos

Re: RPM package issue

2021-11-24 Thread Adrian Klaver

On 11/24/21 08:47, Aleš Zelený wrote:

Hello all,

I've found on the wiki ( https://wiki.postgresql.org/wiki/Packagers ) that
the packagers list is closed, so the first question is how to properly
report a package issue?


https://www.postgresql.org/download/linux/redhat/ -->:

https://yum.postgresql.org/ -->:

https://yum.postgresql.org/contact/

Or issue tracker:

https://redmine.postgresql.org/projects/pgrpms/

though you will need to set up/have a Postgres community account to access it.



Particularly I've realized, that in the repository list
/etc/yum.repos.d/pgdg-redhat-all.repo provided by
https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm

pgdg14-updates-debuginfo repository definition is missing.



Kind regards Ales Zeleny



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: get last timestamp of table ddl

2021-11-24 Thread Luca Ferrari
On Wed, Nov 24, 2021 at 3:09 PM hubert depesz lubaczewski
 wrote:
> You could add event triggers to achieve similar functionality.
> https://www.depesz.com/2012/07/29/waiting-for-9-3-event-triggers/
> and
> https://www.postgresql.org/docs/current/sql-createeventtrigger.html


Thanks, I was aware of event triggers, and of course it is pretty much
the same approach I see (quite often) in "userland" to track (audit)
changes in data: I mean a table trigger that logs some data change.
However, I was curious about ways that are already built in.

Luca




Re: get last timestamp of table ddl

2021-11-24 Thread Luca Ferrari
On Wed, Nov 24, 2021 at 3:30 PM Achilleas Mantzios
 wrote:
> You mean like trying to correlate pg_class.xmin with some timestamp via 
> track_commit_timestamp or other means?

Pretty much yes: pg_class and pg_attribute come to mind.

Luca




Re: RPM package issue

2021-11-24 Thread Adrian Klaver

On 11/24/21 08:47, Aleš Zelený wrote:

Hello all,

I've found on the wiki ( https://wiki.postgresql.org/wiki/Packagers ) that
the packagers list is closed, so the first question is how to properly
report a package issue?


Particularly I've realized, that in the repository list
/etc/yum.repos.d/pgdg-redhat-all.repo provided by
https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm

pgdg14-updates-debuginfo repository definition is missing.


This was reported and fixed?:

https://redmine.postgresql.org/issues/6849

Again, you will need a community account to access it.






Kind regards Ales Zeleny



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: RPM package issue

2021-11-24 Thread Aleš Zelený
Hi Adrian,

thanks for the hint!

Ales

On Wed 24. 11. 2021 at 18:16 Adrian Klaver 
wrote:

> On 11/24/21 08:47, Aleš Zelený wrote:
> > Hello all,
> >
> > I've found on the wiki ( https://wiki.postgresql.org/wiki/Packagers
> >  ) that the packagers list
> > Is closed, so the first question is how to properly report a package
> issue?
>
> https://www.postgresql.org/download/linux/redhat/ -->:
>
> https://yum.postgresql.org/ -->:
>
> https://yum.postgresql.org/contact/
>
> Or issue tracker:
>
> https://redmine.postgresql.org/projects/pgrpms/
>
> though you will need to setup/have a Postgres community account to access.
>
> >
> > Particularly I've realized, that in the repository list
> > /etc/yum.repos.d/pgdg-redhat-all.repo provided by
> >
> https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
> > <
> https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm>
>
> >
> > pgdg14-updates-debuginfo repository definition is missing.
>
> > Kind regards Ales Zeleny
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Inserts and bad performance

2021-11-24 Thread Godfrin, Philippe E
Greetings
I am inserting a large number of rows: 5, 10, 15 million. The python code 
commits every 5000 inserts. The table has partitioned children.

At first, when there were a low number of rows inserted, the inserts would run 
at a good clip - 30 - 50K inserts per second. Now, after inserting oh say 1.5 
Billion rows, the insert rate has dropped to around 5000 inserts per second. I 
dropped the unique index , rebuilt the other indexes and no change. The 
instance is 16 vcpu and 64GB ram.

I'm perplexed, I can't seem to find any reason for the slowdown...
Thanks,
pg

Phil Godfrin | Database Administration
NOV
NOV US | Engineering Data
9720 Beechnut St | Houston, Texas 77036
M  281.825.2311
E   philippe.godf...@nov.com



Re: Inserts and bad performance

2021-11-24 Thread Kenneth Marshall
On Wed, Nov 24, 2021 at 07:15:31PM +, Godfrin, Philippe E wrote:
> Greetings
> I am inserting a large number of rows, 5,10, 15 million. The python code 
> commits every 5000 inserts. The table has partitioned children.
> 
> At first, when there were a low number of rows inserted, the inserts would 
> run at a good clip - 30 - 50K inserts per second. Now, after inserting oh say 
> 1.5 Billion rows, the insert rate has dropped to around 5000 inserts per 
> second. I dropped the unique index , rebuilt the other indexes and no change. 
> The instance is 16 vcpu and 64GB ram.
> 
> I'm perplexed, I can't see to find any reason for the slow down...
> Thanks,
> pg

Hi,

With not much information, it may be I/O related. CPU and RAM cannot fix
that once items need to be written to disk. Are there any errors in the
logs, or are the CPUs maxed out?

Regards,
Ken




RE: [EXTERNAL] Re: Inserts and bad performance

2021-11-24 Thread Godfrin, Philippe E
My apologies for the dearth of details. No on both the CPU and errors. But I do 
believe it is I/O related; I just can't find it.
I thought maybe it was index page splitting, so I altered the unique index with 
fillfactor=40 and reindexed. No change.
I then dropped the unique index. No change.
I thought maybe it was checkpoint timeouts, but there was no correlation.
Oddly enough, other jobs running concurrently, which are also inserting (most 
likely into different partitions), are running about 2x faster than the rest.

I'm rather perplexed.
pg


-Original Message-
From: Kenneth Marshall  
Sent: Wednesday, November 24, 2021 1:20 PM
To: Godfrin, Philippe E 
Cc: pgsql-general@lists.postgresql.org
Subject: [EXTERNAL] Re: Inserts and bad performance

On Wed, Nov 24, 2021 at 07:15:31PM +, Godfrin, Philippe E wrote:
> Greetings
> I am inserting a large number of rows, 5,10, 15 million. The python code 
> commits every 5000 inserts. The table has partitioned children.
> 
> At first, when there were a low number of rows inserted, the inserts would 
> run at a good clip - 30 - 50K inserts per second. Now, after inserting oh say 
> 1.5 Billion rows, the insert rate has dropped to around 5000 inserts per 
> second. I dropped the unique index , rebuilt the other indexes and no change. 
> The instance is 16 vcpu and 64GB ram.
> 
> I'm perplexed, I can't see to find any reason for the slow down...
> Thanks,
> pg

Hi,

With not much information, it may be I/O related. CPU and RAM cannot fix that 
once items need to be written to disk. Are there any errors in the logs or CPUs 
maxxed out?

Regards,
Ken






Re: Inserts and bad performance

2021-11-24 Thread Tom Lane
"Godfrin, Philippe E"  writes:
> I am inserting a large number of rows, 5,10, 15 million. The python code 
> commits every 5000 inserts. The table has partitioned children.
> At first, when there were a low number of rows inserted, the inserts would 
> run at a good clip - 30 - 50K inserts per second. Now, after inserting oh say 
> 1.5 Billion rows, the insert rate has dropped to around 5000 inserts per 
> second. I dropped the unique index , rebuilt the other indexes and no change. 
> The instance is 16 vcpu and 64GB ram.

Can you drop the indexes and not rebuild them till after the bulk load is
done?  Once the indexes exceed available RAM, insert performance is going
to fall off a cliff, except maybe for indexes that are receiving purely
sequential inserts (so that only the right end of the index gets touched).

Also see

https://www.postgresql.org/docs/current/populate.html
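The pattern described above might be sketched as follows; the table, index,
and file names are illustrative, not taken from the thread:

```sql
-- Drop secondary indexes before the bulk load...
DROP INDEX IF EXISTS my_table_col_idx;

-- ...load the data in one streaming pass...
COPY my_table FROM '/path/to/data.csv' WITH (FORMAT csv);

-- ...then rebuild the index once, after all rows are in.
-- (CONCURRENTLY avoids blocking readers, but must run outside a transaction.)
CREATE INDEX CONCURRENTLY my_table_col_idx ON my_table (col);
```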

regards, tom lane




Re: Inserts and bad performance

2021-11-24 Thread Gavin Roy
On Wed, Nov 24, 2021 at 2:15 PM Godfrin, Philippe E <
philippe.godf...@nov.com> wrote:

> Greetings
>
> I am inserting a large number of rows, 5,10, 15 million. The python code
> commits every 5000 inserts. The table has partitioned children.
>

On the Python client side, if you're using psycopg, you should consider
using using COPY instead of INSERT if you're not:

https://www.psycopg.org/psycopg3/docs/basic/copy.html#copy

And if using psycopg2, execute_batch might be of value:

https://www.psycopg.org/docs/extras.html?highlight=insert#psycopg2.extras.execute_batch

Regards,

Gavin


RE: [EXTERNAL] Re: Inserts and bad performance

2021-11-24 Thread Godfrin, Philippe E
Hi Tom. Good point about the index paging out of the buffer. I did that and no 
change. I do have shared buffers at 40GB, so there's a good bit there, but I 
also did all those things on the page you referred to, except for using COPY. 
At this point the data has not been scrubbed, so I'm trapping data errors and 
duplicates. I am curious though, as a sidebar, why COPY is considered faster 
than inserts. I was unable to get COPY faster than around 25K inserts a second 
(pretty fast anyway). Frankly, initially I was running 3 concurrent insert jobs 
and getting 90K ins/sec! But after a certain number of records, the speed just 
dropped off.

pg

From: Tom Lane 
Sent: Wednesday, November 24, 2021 1:32 PM
To: Godfrin, Philippe E 
Cc: pgsql-general@lists.postgresql.org
Subject: [EXTERNAL] Re: Inserts and bad performance

"Godfrin, Philippe E" <philippe.godf...@nov.com> writes:
> I am inserting a large number of rows, 5,10, 15 million. The python code 
> commits every 5000 inserts. The table has partitioned children.
> At first, when there were a low number of rows inserted, the inserts would 
> run at a good clip - 30 - 50K inserts per second. Now, after inserting oh say 
> 1.5 Billion rows, the insert rate has dropped to around 5000 inserts per 
> second. I dropped the unique index , rebuilt the other indexes and no change. 
> The instance is 16 vcpu and 64GB ram.

Can you drop the indexes and not rebuild them till after the bulk load is
done? Once the indexes exceed available RAM, insert performance is going
to fall off a cliff, except maybe for indexes that are receiving purely
sequential inserts (so that only the right end of the index gets touched).

Also see

https://www.postgresql.org/docs/current/populate.html

regards, tom lane


RE: [EXTERNAL] Re: Inserts and bad performance

2021-11-24 Thread Godfrin, Philippe E
The notion of using COPY in blocks and asynchronously is very interesting.

From: Gavin Roy 
Sent: Wednesday, November 24, 2021 1:50 PM
To: Godfrin, Philippe E 
Cc: pgsql-general@lists.postgresql.org
Subject: [EXTERNAL] Re: Inserts and bad performance





On Wed, Nov 24, 2021 at 2:15 PM Godfrin, Philippe E 
<philippe.godf...@nov.com> wrote:
Greetings
I am inserting a large number of rows, 5,10, 15 million. The python code 
commits every 5000 inserts. The table has partitioned children.

On the Python client side, if you're using psycopg, you should consider using 
using COPY instead of INSERT if you're not:

https://www.psycopg.org/psycopg3/docs/basic/copy.html#copy

And if using psycopg2, execute_batch might be of value:

https://www.psycopg.org/docs/extras.html?highlight=insert#psycopg2.extras.execute_batch

Regards,

Gavin




Re: [EXTERNAL] Re: Inserts and bad performance

2021-11-24 Thread Michael Lewis
How many partitions? How many rows do they have when performance is slowing
considerably? Does this table get many updates or is it insert only? What
version of PostgreSQL? Are the inserts randomly distributed among the
partitions or targeting one or a few partitions? Are you able to capture an
example and run it in a transaction with explain (analyze, buffers,
verbose) and then rollback?


*Michael Lewis  |  Database Engineer*
*Entrata*


Re: [EXTERNAL] Re: Inserts and bad performance

2021-11-24 Thread David Rowley
On Thu, 25 Nov 2021 at 08:59, Godfrin, Philippe E
 wrote:
> Hi Tom. Good point about the index paging out of the buffer. I did that and 
> no change. I do have the shared buffers at 40GB, so there’s a good bit there, 
> but I also did all those things on the page you referred, except for using 
> copy. At this point the data has not been scrubbed, so I’m trapping data 
> errors and duplicates. I am curios though, as sidebar, why copy is considered 
> faster than inserts. I was unable to get COPY faster than around 25K inserts 
> a second (pretty fast anyway). Frankly, initially I was running 3 concurrent 
> insert jobs and getting 90K ins/sec ! but after a certain number of records, 
> the speed just dropped off.

EXPLAIN (ANALYZE, BUFFERS) works with INSERTs. You just need to be
aware that using ANALYZE will perform the actual insert too. So you
might want to use BEGIN; and ROLLBACK; if it's not data that you want
to keep.

SET track_io_timing = on; might help you too.
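Concretely, that could look like the following sketch (my_table and the
values are placeholders):

```sql
-- Show per-node buffer and I/O timing for an insert, then discard it.
SET track_io_timing = on;

BEGIN;
EXPLAIN (ANALYZE, BUFFERS, VERBOSE)
    INSERT INTO my_table VALUES (1, 'example');
ROLLBACK;  -- the insert actually ran under ANALYZE, so roll it back
```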

David




Re: Max connections reached without max connections reached

2021-11-24 Thread James Sewell
>
> The hypothesis I'm thinking of is that incoming sessions are being blocked
> somewhere before they can acquire a ProcArray entry; if so, they'd not
> show up in either pg_stat_activity or pg_locks.  What we have to look for
> then is evidence of somebody holding a strong lock on a shared relation.
> Try "select * from pg_locks where locktype = 'relation' and database = 0".
>
>
That doesn't show anything when the issue is happening.

Strace shows the startup processes looping like so:

13:51:15.510850 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358,
tv_nsec=733559666}) = 0
13:51:15.511254 futex(0x7ee891405238, FUTEX_WAKE, 1) = 1
13:51:15.511422 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358,
tv_nsec=734037545}) = 0
13:51:15.511690 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
13:51:15.520470 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358,
tv_nsec=743136450}) = 0
13:51:15.520772 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358,
tv_nsec=743376746}) = 0
13:51:15.521005 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
13:51:15.539309 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358,
tv_nsec=761920081}) = 0
13:51:15.539508 futex(0x7ee891406bb8, FUTEX_WAKE, 1) = 1
13:51:15.539690 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358,
tv_nsec=762298276}) = 0
13:51:15.539882 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
13:51:15.547311 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358,
tv_nsec=769929341}) = 0
13:51:15.547757 futex(0x7ee89140a738, FUTEX_WAKE, 1) = 1
13:51:15.547931 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358,
tv_nsec=770524120}) = 0
13:51:15.548089 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
13:51:15.551482 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = -1 EAGAIN
(Resource temporarily unavailable)
13:51:15.551950 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
13:51:15.593631 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = -1 EAGAIN
(Resource temporarily unavailable)
13:51:15.593807 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
13:51:15.626167 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
13:51:15.627590 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358,
tv_nsec=850204434}) = 0
13:51:15.627817 futex(0x7ee8914033b8, FUTEX_WAKE, 1) = 1
13:51:15.627987 open("pg_subtrans/A100", O_RDWR) = 16
13:51:15.628270 lseek(16, 131072, SEEK_SET) = 131072
13:51:15.628461 read(16,
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
8192) = 8192
13:51:15.628677 close(16)   = 0
13:51:15.628911 open("pg_subtrans/A0E1", O_RDWR) = 16
13:51:15.629132 lseek(16, 122880, SEEK_SET) = 122880
13:51:15.629309 read(16,
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
8192) = 8192
13:51:15.629503 close(16)   = 0
13:51:15.629685 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358,
tv_nsec=852272645}) = 0
13:51:15.629863 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
13:51:15.656694 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358,
tv_nsec=879314944}) = 0
13:51:15.656923 futex(0x7ee8914125b8, FUTEX_WAKE, 1) = 1
13:51:15.657129 open("pg_subtrans/A0EE", O_RDWR) = 16
13:51:15.657385 lseek(16, 73728, SEEK_SET) = 73728
13:51:15.657638 read(16,
"\0\0\0\0\0H\356\240\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\5H\356\240\0\0\0\0"...,
8192) = 8192
13:51:15.657907 close(16)   = 0
13:51:15.658141 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358,
tv_nsec=880753952}) = 0
13:51:15.658346 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
13:51:15.689705 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358,
tv_nsec=912331058}) = 0
13:51:15.689950 futex(0x7ee89140a0b8, FUTEX_WAKE, 1) = 1
13:51:15.690182 futex(0x7ee8914118b8, FUTEX_WAKE, 1) = 1
13:51:15.690420 open("pg_subtrans/A0EE", O_RDWR) = 16
13:51:15.690742 lseek(16, 73728, SEEK_SET) = 73728
13:51:15.691037 read(16,
"\0\0\0\0\0H\356\240\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\5H\356\240\0\0\0\0"...,
8192) = 8192
13:51:15.691303 close(16)   = 0
13:51:15.691582 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358,
tv_nsec=914202777}) = 0
13:51:15.691837 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
13:51:15.713319 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
13:51:15.716423 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358,
tv_nsec=939067321}) = 0
13:51:15.716737 futex(0x7ee89140a0b8, FUTEX_WAKE, 1) = 1
13:51:15.717010 futex(0x7ee89140f3b8, FUTEX_WAKE, 1) = 1
13:51:15.717286 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358,
tv_nsec=939899218}) = 0
13:51:15.717578 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
13:51:15.722270 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
13:51:15.746391 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358,
tv_nsec=969020657}) = 0
13:51:15.746664 futex(0x7ee89140fa38, FUTEX_WAKE, 1) = 1
13:51:15.746877 open("pg_subtrans/A0EE", O_RDWR) = 16


While gdb shows similar to:

(gdb) bt
#0  0x7f18fb9deb3b in do_futex_wait.constprop.1 () from /lib64/libpthread.so.0
#1  0x7f18fb9debcf in __new_sem_wait_slow.constprop.0 () from /lib64/libpthread.so.0
#2  0x7f18fb9dec6b in sem_wait@@GLIBC_2.

Re: Max connections reached without max connections reached

2021-11-24 Thread Dilip Kumar
On Thu, Nov 25, 2021 at 8:58 AM James Sewell  wrote:
>>
>> The hypothesis I'm thinking of is that incoming sessions are being blocked
>> somewhere before they can acquire a ProcArray entry; if so, they'd not
>> show up in either pg_stat_activity or pg_locks.  What we have to look for
>> then is evidence of somebody holding a strong lock on a shared relation.
>> Try "select * from pg_locks where locktype = 'relation' and database = 0".
>>
>
> That doesn't show anything when the issue is happening.
>
> Strace shows the startup processes looping like so:
>
> 13:51:15.510850 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> tv_nsec=733559666}) = 0
> 13:51:15.511254 futex(0x7ee891405238, FUTEX_WAKE, 1) = 1
> 13:51:15.511422 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> tv_nsec=734037545}) = 0
> 13:51:15.511690 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> 13:51:15.520470 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> tv_nsec=743136450}) = 0
> 13:51:15.520772 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> tv_nsec=743376746}) = 0
> 13:51:15.521005 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> 13:51:15.539309 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> tv_nsec=761920081}) = 0
> 13:51:15.539508 futex(0x7ee891406bb8, FUTEX_WAKE, 1) = 1
> 13:51:15.539690 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> tv_nsec=762298276}) = 0
> 13:51:15.539882 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> 13:51:15.547311 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> tv_nsec=769929341}) = 0
> 13:51:15.547757 futex(0x7ee89140a738, FUTEX_WAKE, 1) = 1
> 13:51:15.547931 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> tv_nsec=770524120}) = 0
> 13:51:15.548089 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> 13:51:15.551482 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = -1 EAGAIN 
> (Resource temporarily unavailable)
> 13:51:15.551950 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> 13:51:15.593631 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = -1 EAGAIN 
> (Resource temporarily unavailable)
> 13:51:15.593807 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> 13:51:15.626167 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> 13:51:15.627590 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> tv_nsec=850204434}) = 0
> 13:51:15.627817 futex(0x7ee8914033b8, FUTEX_WAKE, 1) = 1
> 13:51:15.627987 open("pg_subtrans/A100", O_RDWR) = 16
> 13:51:15.628270 lseek(16, 131072, SEEK_SET) = 131072
> 13:51:15.628461 read(16, 
> "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192) 
> = 8192
> 13:51:15.628677 close(16)   = 0
> 13:51:15.628911 open("pg_subtrans/A0E1", O_RDWR) = 16
> 13:51:15.629132 lseek(16, 122880, SEEK_SET) = 122880
> 13:51:15.629309 read(16, 
> "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192) 
> = 8192
> 13:51:15.629503 close(16)   = 0
> 13:51:15.629685 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> tv_nsec=852272645}) = 0
> 13:51:15.629863 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> 13:51:15.656694 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> tv_nsec=879314944}) = 0
> 13:51:15.656923 futex(0x7ee8914125b8, FUTEX_WAKE, 1) = 1
> 13:51:15.657129 open("pg_subtrans/A0EE", O_RDWR) = 16
> 13:51:15.657385 lseek(16, 73728, SEEK_SET) = 73728
> 13:51:15.657638 read(16, 
> "\0\0\0\0\0H\356\240\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\5H\356\240\0\0\0\0"..., 
> 8192) = 8192
> 13:51:15.657907 close(16)   = 0
> 13:51:15.658141 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> tv_nsec=880753952}) = 0
> 13:51:15.658346 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> 13:51:15.689705 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> tv_nsec=912331058}) = 0
> 13:51:15.689950 futex(0x7ee89140a0b8, FUTEX_WAKE, 1) = 1
> 13:51:15.690182 futex(0x7ee8914118b8, FUTEX_WAKE, 1) = 1
> 13:51:15.690420 open("pg_subtrans/A0EE", O_RDWR) = 16
> 13:51:15.690742 lseek(16, 73728, SEEK_SET) = 73728
> 13:51:15.691037 read(16, 
> "\0\0\0\0\0H\356\240\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\5H\356\240\0\0\0\0"..., 
> 8192) = 8192
> 13:51:15.691303 close(16)   = 0
> 13:51:15.691582 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> tv_nsec=914202777}) = 0
> 13:51:15.691837 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> 13:51:15.713319 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> 13:51:15.716423 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> tv_nsec=939067321}) = 0
> 13:51:15.716737 futex(0x7ee89140a0b8, FUTEX_WAKE, 1) = 1
> 13:51:15.717010 futex(0x7ee89140f3b8, FUTEX_WAKE, 1) = 1
> 13:51:15.717286 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> tv_nsec=939899218}) = 0
> 13:51:15.717578 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> 13:51:15.722270 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> 13:51:15.746391 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> tv_nsec=969020657}) = 0
> 13:51:15.746664 futex(0x7ee89140fa38, FUTEX_WAKE, 1) = 1
> 13:51:15.746877 open("pg_subtrans/A0EE", O_RDWR) = 16
>
>
> While gdb shows si

Re: Max connections reached without max connections reached

2021-11-24 Thread Dilip Kumar
On Thu, Nov 25, 2021 at 9:50 AM Dilip Kumar  wrote:
>
> On Thu, Nov 25, 2021 at 8:58 AM James Sewell  
> wrote:
> >>
> >> The hypothesis I'm thinking of is that incoming sessions are being blocked
> >> somewhere before they can acquire a ProcArray entry; if so, they'd not
> >> show up in either pg_stat_activity or pg_locks.  What we have to look for
> >> then is evidence of somebody holding a strong lock on a shared relation.
> >> Try "select * from pg_locks where locktype = 'relation' and database = 0".
> >>
> >
> > That doesn't show anything when the issue is happening.
> >
> > Strace shows the startup processes looping like so:
> >
> > 13:51:15.510850 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> > tv_nsec=733559666}) = 0
> > 13:51:15.511254 futex(0x7ee891405238, FUTEX_WAKE, 1) = 1
> > 13:51:15.511422 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> > tv_nsec=734037545}) = 0
> > 13:51:15.511690 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> > 13:51:15.520470 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> > tv_nsec=743136450}) = 0
> > 13:51:15.520772 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> > tv_nsec=743376746}) = 0
> > 13:51:15.521005 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> > 13:51:15.539309 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> > tv_nsec=761920081}) = 0
> > 13:51:15.539508 futex(0x7ee891406bb8, FUTEX_WAKE, 1) = 1
> > 13:51:15.539690 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> > tv_nsec=762298276}) = 0
> > 13:51:15.539882 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> > 13:51:15.547311 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> > tv_nsec=769929341}) = 0
> > 13:51:15.547757 futex(0x7ee89140a738, FUTEX_WAKE, 1) = 1
> > 13:51:15.547931 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> > tv_nsec=770524120}) = 0
> > 13:51:15.548089 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> > 13:51:15.551482 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = -1 EAGAIN 
> > (Resource temporarily unavailable)
> > 13:51:15.551950 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> > 13:51:15.593631 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = -1 EAGAIN 
> > (Resource temporarily unavailable)
> > 13:51:15.593807 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> > 13:51:15.626167 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> > 13:51:15.627590 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> > tv_nsec=850204434}) = 0
> > 13:51:15.627817 futex(0x7ee8914033b8, FUTEX_WAKE, 1) = 1
> > 13:51:15.627987 open("pg_subtrans/A100", O_RDWR) = 16
> > 13:51:15.628270 lseek(16, 131072, SEEK_SET) = 131072
> > 13:51:15.628461 read(16, 
> > "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 
> > 8192) = 8192
> > 13:51:15.628677 close(16)   = 0
> > 13:51:15.628911 open("pg_subtrans/A0E1", O_RDWR) = 16
> > 13:51:15.629132 lseek(16, 122880, SEEK_SET) = 122880
> > 13:51:15.629309 read(16, 
> > "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 
> > 8192) = 8192
> > 13:51:15.629503 close(16)   = 0
> > 13:51:15.629685 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> > tv_nsec=852272645}) = 0
> > 13:51:15.629863 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> > 13:51:15.656694 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> > tv_nsec=879314944}) = 0
> > 13:51:15.656923 futex(0x7ee8914125b8, FUTEX_WAKE, 1) = 1
> > 13:51:15.657129 open("pg_subtrans/A0EE", O_RDWR) = 16
> > 13:51:15.657385 lseek(16, 73728, SEEK_SET) = 73728
> > 13:51:15.657638 read(16, 
> > "\0\0\0\0\0H\356\240\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\5H\356\240\0\0\0\0"...,
> >  8192) = 8192
> > 13:51:15.657907 close(16)   = 0
> > 13:51:15.658141 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> > tv_nsec=880753952}) = 0
> > 13:51:15.658346 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> > 13:51:15.689705 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> > tv_nsec=912331058}) = 0
> > 13:51:15.689950 futex(0x7ee89140a0b8, FUTEX_WAKE, 1) = 1
> > 13:51:15.690182 futex(0x7ee8914118b8, FUTEX_WAKE, 1) = 1
> > 13:51:15.690420 open("pg_subtrans/A0EE", O_RDWR) = 16
> > 13:51:15.690742 lseek(16, 73728, SEEK_SET) = 73728
> > 13:51:15.691037 read(16, 
> > "\0\0\0\0\0H\356\240\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\5H\356\240\0\0\0\0"...,
> >  8192) = 8192
> > 13:51:15.691303 close(16)   = 0
> > 13:51:15.691582 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> > tv_nsec=914202777}) = 0
> > 13:51:15.691837 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> > 13:51:15.713319 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> > 13:51:15.716423 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> > tv_nsec=939067321}) = 0
> > 13:51:15.716737 futex(0x7ee89140a0b8, FUTEX_WAKE, 1) = 1
> > 13:51:15.717010 futex(0x7ee89140f3b8, FUTEX_WAKE, 1) = 1
> > 13:51:15.717286 clock_gettime(CLOCK_MONOTONIC, {tv_sec=19439358, 
> > tv_nsec=939899218}) = 0
> > 13:51:15.717578 futex(0x7ee891405938, FUTEX_WAIT, 0, NULL) = 0
> > 13:51:15.722270 futex(0x7ee891405938, FUTEX_WAIT,

Re: Max connections reached without max connections reached

2021-11-24 Thread Dilip Kumar
On Thu, Nov 25, 2021 at 9:50 AM Dilip Kumar  wrote:

> > Does that shed any light?
>
> It seems like some of the processes are taking a long time or are stuck
> while reading/writing SLRU pages, and because of that, while creating a
> new connection, the backend process is not able to check the transaction
> status (from pg_xlog) of the pg_class tuple and gets stuck/takes a
> long time in startup.

Correction: I mean pg_xact (pg_clog in older versions), not pg_xlog :)
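As an aside, on PostgreSQL 10 and later the commit status of an individual transaction can be checked directly from SQL, which is a quicker sanity check than digging through pg_xact files. A minimal sketch (the transaction ID is made up for illustration):

```sql
-- txid_status() reports 'committed', 'aborted', or 'in progress' for a
-- recent transaction ID (available since PostgreSQL 10).
-- The txid value here is hypothetical; use one from txid_current() or
-- from the xmin of the tuple you are investigating.
SELECT txid_status(1234);
```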

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: Inserts and bad performance

2021-11-24 Thread Ron

On 11/24/21 1:15 PM, Godfrin, Philippe E wrote:
[snip]

I dropped the unique index, rebuilt the other indexes, and no change.


IMNSHO, this is the worst possible approach.  Drop everything *except* the 
unique index, and then (if possible) sort the input file by the unique 
key.   That'll increase buffered IO; otherwise, you're bopping all around 
the filesystem.


Using a bulk loader (such as COPY) if possible would also increase speed.
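A rough sketch of the advice above — table and file names are hypothetical, and the sort key is assumed to be the first CSV column:

```sql
-- Hypothetical sketch: sort the input file by the unique key first
-- (e.g. with `sort -t, -k1,1 input.csv > sorted.csv` in the shell),
-- then bulk-load it in a single pass instead of row-by-row INSERTs.
-- Sorted input keeps index insertions localized instead of scattered
-- across the whole index.
COPY my_table FROM '/path/to/sorted.csv' WITH (FORMAT csv);
```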

--
Angular momentum makes the world go 'round.


Re: Max connections reached without max connections reached

2021-11-24 Thread James Sewell
> > It seems like some of the processes are taking a long time or are stuck
> > while reading/writing SLRU pages, and because of that, while creating a
> > new connection, the backend process is not able to check the transaction
> > status (from pg_xlog) of the pg_class tuple and gets stuck/takes a
> > long time in startup.
>
> My next question would be whether this particular process shown in the
> stack is stuck forever, or whether it is finally able to make a
> connection?  I want to understand whether this is just due to slow I/O
> or something else.
>

They don't ever seem to clear (more and more show up), and IO doesn't seem
to be deadlocked at that time.

It's a really odd one!



Re: Error message while trying to connect from PGAdmin 4

2021-11-24 Thread sivapostg...@yahoo.com
Hello
PostgreSQL 11.11,  PGAdmin 4.27, Windows 10 Pro 20H2
It was working fine till yesterday (24.11.2021).  When we tried to connect from 
pgAdmin, it refused to list the servers.  Thinking some corruption had 
happened, we tried to add a server and got the following error message

What could be the reason for this error?  Any steps to diagnose this issue?
When we try to connect to the same PG server from other machines, it works 
fine.   This issue happens only when we try to connect from the same machine 
where PG is installed.
Happiness Always
BKR Sivaprakash


Re: Max connections reached without max connections reached

2021-11-24 Thread Dilip Kumar
On Thu, Nov 25, 2021 at 10:53 AM James Sewell  wrote:
>
>
>> > It seems like some of the processes are taking a long time or are stuck
>> > while reading/writing SLRU pages, and because of that, while creating a
>> > new connection, the backend process is not able to check the transaction
>> > status (from pg_xlog) of the pg_class tuple and gets stuck/takes a
>> > long time in startup.
>>
>> My next question would be whether this particular process shown in the
>> stack is stuck forever, or whether it is finally able to make a
>> connection?  I want to understand whether this is just due to slow I/O
>> or something else.
>
>
> They don't ever seem to clear (more and more show up), and IO doesn't seem
> to be deadlocked at that time.

How did you verify that? Maybe some process started an IO and got stuck
there. Can we check pg_stat_activity to see whether there is some process
whose wait event shows SLRURead/SLRUWrite and that is not coming out of
that state?
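
A minimal form of that check might look like this (column names as in
PostgreSQL 9.6 and later, where these wait events were introduced):

```sql
-- Sketch of the suggested check: list backends whose current wait event
-- is SLRU I/O.  Run it while the problem is occurring; repeated hits on
-- the same pid would suggest a backend stuck in SLRURead/SLRUWrite.
SELECT pid, state, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE wait_event IN ('SLRURead', 'SLRUWrite');
```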

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com