LISTEN/NOTIFY ultra slow speed

2022-03-14 Thread Арсен Арутюнян (Arsen Arutyunyan)

    Hello! I'm having trouble with LISTEN/NOTIFY speed. I currently have about
100-200 clients who are actively messaging, but recently I started to notice a
drop in speed. Changing the log_min_duration_statement parameter did not show
any slow queries in the log.
    As it turned out, the pg_notify(text, text) function, or the
NOTIFY text, 'text' query, takes an extremely long time (0.7-1.5 sec),
but select pg_notification_queue_usage() always returns 0.
Please tell me what the problem is. Where did I make a mistake?
What needs to be done to speed up adding messages to the queue?
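For reference, the per-statement latency can be measured on the client with psql's \timing; a minimal sketch, assuming a local server and a hypothetical channel name:

```shell
# Measure NOTIFY latency client-side with psql's \timing.
# Connection settings and the channel name are hypothetical; adjust as needed.
CHANNEL=test_channel
psql "host=127.0.0.1 dbname=mydb user=postgres" <<SQL
\timing on
LISTEN $CHANNEL;
NOTIFY $CHANNEL, 'ping';
SELECT pg_notification_queue_usage();
SQL
```

\timing prints the elapsed wall-clock time after each statement, which separates server-side NOTIFY cost from anything the application layer adds.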
 
--
Arsen Arutyunyan

Restoring using PG_DUMP in PG12 results in

2022-03-14 Thread Shukla, Pranjal
Hello,
We tried importing into an empty database in PG 12 from a dump that was
created in PG 10. The import was successful, but we got a message that an error
was ignored. We again imported with the -e option and the following message was
printed:


pg_restore: while PROCESSING TOC:

pg_restore: from TOC entry 6; 2615 2200 SCHEMA public postgres

pg_restore: error: could not execute query: ERROR:  schema "public" already 
exists

Command was: CREATE SCHEMA public;

There are some blogs pointing to the fact that the error is because of
different versions of the client utilities, namely different versions of the
pg_dump and pg_restore commands on the source and destination. We use the
following commands for pg_dump and pg_restore.

pg_dump -U postgres -h 127.0.0.1 --verbose --port=5432 -j 5 -Fd mydb 
--no-synchronized-snapshots -f /var/mydata/dbbackup

pg_restore -W -h 127.0.0.1 -p 5432 -U postgres -d mydb -v 
"/var/mydata/dbbackup"

To mitigate the same, we took a plain SQL (.sql) dump of the above database in
PG10 and restored it into an empty database in PG12; it worked. Using this
process, we migrated our database from PG10 to 12.
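The plain-SQL path described above can be sketched as follows (the dump file name is hypothetical):

```shell
# Dump to a plain .sql script on the PG10 side, then replay it with psql
# into an empty PG12 database. The dump file location is hypothetical.
DUMP_FILE=/var/mydata/mydb.sql
pg_dump -U postgres -h 127.0.0.1 -p 5432 mydb > "$DUMP_FILE"
psql -U postgres -h 127.0.0.1 -p 5432 -d mydb -f "$DUMP_FILE"
```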

To confirm whether the problem is really because of the difference in version, 
we performed 2 exercises, let’s call them approaches. Let me narrate them below.

Approach 1:

  1.  Again took a directory-format (-Fd) backup of the mydb database using
pg_dump in PG12 (the same database had been brought into PG 12 using the SQL
import).
  2.  Created another database named mydb_temp.
  3.  Restored the dump taken in step 1 using pg_restore into the temp
database; it worked.

Conclusion of Approach 1:

  1.  pg_dump and pg_restore from 10 to 12 gave the error "schema public
already exists".
  2.  pg_dump and pg_restore from 12 to 12 did not give any error.

Approach 2:

  1.  Took a dump of the mydb PG10 database FROM the PG12 machine using the
command below. Note the IP here: instead of localhost, we gave the IP of the
machine with the PG10 installation.
  2.  FROM the machine with PG12 binaries: pg_dump -U postgres -h 172.27.2.131
--verbose --port=5432 -j 5 -Fd mydb --no-synchronized-snapshots -f
/var/mydata/dbbackup
  3.  Restored in an empty database in PG 12 using command: pg_restore -W -h 
127.0.0.1 -p 5432 -U postgres -d mydb -v 
"/var/mydata/dbbackup"

Conclusion of Approach 2:

  1.  Both backup and restore through the same version of the utilities worked.

Question to you all:

  1.  Have you faced this scenario?
  2.  Of the approaches mentioned above, which one do you think we should opt
for while doing a migration from PG 10 to 12 (on different machines)?


PS: Below is the command we use to create database:

CREATE DATABASE mydb WITH ENCODING 'UTF8' LC_COLLATE 'C' LC_CTYPE 'C' TEMPLATE 
template0;


Thanks & Regards
Pranjal Shukla


Re: Restoring using PG_DUMP in PG12 results in

2022-03-14 Thread Adrian Klaver

On 3/14/22 06:39, Shukla, Pranjal wrote:

Hello,

> We tried importing into an empty database in PG 12 from the dump that
> was created in PG 10. Import was successful but we got a message that
> an error was ignored. We again imported with -e option and the following
> message was printed:
>
> pg_restore: while PROCESSING TOC:
>
> pg_restore: from TOC entry 6; 2615 2200 SCHEMA public postgres
>
> pg_restore: error: could not execute query: ERROR:  schema "public"
> already exists


Basically an informational error that tells you that the restore was 
trying to create an object that already exists, in this case the public 
schema. Not really an issue.
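If the message is unwanted, one option (not discussed in the thread, but a standard pg_restore feature) is to have the restore drop pre-existing objects first; a sketch using the paths from the original post:

```shell
# --clean drops each object before recreating it; --if-exists suppresses
# errors for objects (such as the public schema) that are not there yet.
BACKUP_DIR=/var/mydata/dbbackup
pg_restore --clean --if-exists -U postgres -h 127.0.0.1 -p 5432 \
    -d mydb --verbose "$BACKUP_DIR"
```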


> Command was: CREATE SCHEMA public;
>
> There are some blogs pointing to the fact that the error is because
> of different versions of the client utilities, namely different versions
> of the pg_dump and pg_restore commands on the source and destination.
> We use the following commands for pg_dump and pg_restore.

> 1. Have you faced this scenario?
> 2. Of the approaches mentioned above, which one do you think we should
> opt for while doing migration from PG 10 to 12 (in different machines)?


Honestly, I could not follow what you did. Best practice when moving from an 
older version to a newer version is to use the newer version's pg_dump 
against the older version, and then use the newer version's pg_restore or 
psql (for text dumps) to restore.
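A sketch of that best practice, using the hosts from this thread (the PG12 binary path is an assumption and varies by packaging):

```shell
PG12_BIN=/usr/lib/postgresql/12/bin   # assumed location of the PG12 binaries
OLD_HOST=172.27.2.131                 # the PG10 server from the thread
DUMP_DIR=/var/mydata/dbbackup

# Dump the PG10 database with the *PG12* pg_dump...
"$PG12_BIN/pg_dump" -U postgres -h "$OLD_HOST" -p 5432 \
    -Fd -j 5 --verbose -f "$DUMP_DIR" mydb

# ...and restore it into the local PG12 server with the matching pg_restore.
"$PG12_BIN/pg_restore" -U postgres -h 127.0.0.1 -p 5432 \
    -d mydb --verbose "$DUMP_DIR"
```

Using the newer pg_dump against the older server is supported; the dump then comes out in a form the matching pg_restore understands fully.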




> PS: Below is the command we use to create database:
>
> CREATE DATABASE mydb WITH ENCODING 'UTF8' LC_COLLATE 'C' LC_CTYPE 'C'
> TEMPLATE template0;


> Thanks & Regards
>
> Pranjal Shukla




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: LISTEN/NOTIFY ultra slow speed

2022-03-14 Thread Tom Lane
Арсен Арутюнян (Arsen Arutyunyan)  writes:
>     Hello! I'm having trouble with LISTEN/NOTIFY speed. Now I have about 
> 100-200 (few?) clients who are actively messaging. But recently I started to 
> notice a drop in speed. Changing the log_min_duration_statement parameter did 
> not show slow queries in the log.
>     As it turned out, the pg_notify(text, text) function or the
> NOTIFY text, 'text' query takes an extremely long time (0.7-1.5 sec),
>     but the select pg_notification_queue_usage() query always returns 0.

What PG version is this exactly?  What is in $PGDATA/pg_notify/ ?

regards, tom lane
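For anyone gathering the details Tom asks for, a sketch (assumes psql can connect as a user allowed to read PGDATA):

```shell
# Exact server version.
psql -c 'SELECT version();'

# Locate PGDATA, then list the notification queue segment files in pg_notify/.
NOTIFY_DIR=pg_notify                      # fixed subdirectory name under PGDATA
PGDATA=$(psql -tA -c 'SHOW data_directory;')
ls -l "$PGDATA/$NOTIFY_DIR/"

# Fill fraction of the NOTIFY queue (0 means empty).
psql -c 'SELECT pg_notification_queue_usage();'
```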




Re: foreign key on delete cascade order?

2022-03-14 Thread George Woodring


iGLASS Networks
www.iglass.net


On Fri, Mar 11, 2022 at 10:03 PM Michael Lewis  wrote:

> Could you set lock_timeout, lock table explicitly for SHARE UPDATE
> EXCLUSIVE (pretty sure that would be the proper level), then retry if it
> fails because a delete is already going on?


I don't think I can take a lock on the table; there are multiple processes
that update the status table. While the processes do not overlap, I think
locking the table would cause them to block each other.
I think we would just have to retry the delete after the deadlock, which is
currently done manually by refreshing the web page.  The update never seems
to be interrupted, probably because it is longer-running and starts before the
random delete.
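That manual refresh could be automated with a small retry wrapper; a sketch with a hypothetical table and key, where run_delete stands in for the real statement:

```shell
# run_delete is a placeholder for the real delete; table and key are
# hypothetical. lock_timeout bounds how long it waits on a conflicting lock.
run_delete() {
  psql -d mydb -v ON_ERROR_STOP=1 \
       -c "SET lock_timeout = '2s'" \
       -c "DELETE FROM mach WHERE mach_id = 42"
}

# Re-run the delete until it succeeds, giving up after 5 attempts.
retry_delete() {
  attempts=1
  while ! run_delete; do
    [ "$attempts" -ge 5 ] && return 1   # give up after 5 tries
    attempts=$((attempts + 1))
    sleep 1                             # small backoff before retrying
  done
  return 0
}
```

A deadlocked victim's transaction is rolled back cleanly, so simply re-issuing the statement is safe here.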


> Also, are you confident that before you call 'begin' to do the update, you
> are not already in a transaction which might have some lock on row(s) in
> mach, or one of the other tables involved?
>

  For the update processes, all other tables are read-only reference
tables; I don't think they would be locked.

 Thank you for your help.
George


Re: Indexes that would span partitions.

2022-03-14 Thread Michael Lewis
How many partitions do you expect to have? Why are you partitioning on that
column? Do you need to drop old data all at the same time? How many
rows in each partition, or in total, do you expect to have?
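Worth noting alongside these questions: since PostgreSQL 11, an index defined on a partitioned parent is created on each partition automatically, so partitions need not be indexed one by one. A sketch with a hypothetical table:

```shell
PARENT_TABLE=events   # hypothetical parent table name, used below
psql <<SQL
-- Hypothetical range-partitioned table.
CREATE TABLE $PARENT_TABLE (ts timestamptz NOT NULL, payload text)
    PARTITION BY RANGE (ts);
CREATE TABLE ${PARENT_TABLE}_2022_03 PARTITION OF $PARENT_TABLE
    FOR VALUES FROM ('2022-03-01') TO ('2022-04-01');

-- One statement: the index is also built on the existing partition,
-- and on any partition created or attached later (PostgreSQL 11+).
CREATE INDEX ON $PARENT_TABLE (ts);
SQL
```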