Re: How to explicitly lock and unlock tables in pgsql?

2022-03-17 Thread Laurenz Albe
On Wed, 2022-03-16 at 20:30 +, Shaozhong SHI wrote:
> Table locks present a barrier for progressing queries.
> 
> How to explicitly lock and unlock tables in pgsql, so that we can guarantee 
> the progress of running scripts?

You cannot unlock tables except by ending the transaction which took the lock.

The first thing you should do is to make sure that all your database 
transactions are short.
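
A quick way to spot offenders, as a sketch (adjust the threshold to taste):

SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE xact_start < current_timestamp - INTERVAL '5 minutes';

Anything that shows up here has been holding its locks for that whole time.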

Also, you should never explicitly lock tables.  Table locks are taken automatically
by the SQL statements you are executing.
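
For illustration, a minimal sketch (the table name is a placeholder) of what an
explicit lock looks like if you take one anyway; there is no UNLOCK TABLE, the
lock is only released when the transaction ends:

BEGIN;
LOCK TABLE mytable IN ACCESS EXCLUSIVE MODE;
-- concurrent access to "mytable" blocks here until we finish
COMMIT;   -- or ROLLBACK; either one releases the lock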

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





PSQL bug?

2022-03-17 Thread Dominique Devienne
Made a typo, using } instead of ) to test a weird table name, and got
disconnected. Normal? --DD

ddevienne=> create table "t |" ( id int};
ddevienne(> );
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
ddevienne=> create table "t |" ( id int);
ddevienne=> drop table "t |";
ddevienne=> \! psql -V
psql (PostgreSQL) 12.1
ddevienne=> select version();
 version
-
 PostgreSQL 12.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5
20150623 (Red Hat 4.8.5-39), 64-bit
(1 row)




Re: PSQL bug?

2022-03-17 Thread Pavel Stehule
Hi

On Thu, 17 Mar 2022 at 11:35, Dominique Devienne 
wrote:

> Made a typo, using } instead of ) to test a weird table name, and got
> disconnected. Normal? --DD
>
> ddevienne=> create table "t |" ( id int};
> ddevienne(> );
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.
> ddevienne=> create table "t |" ( id int);
> ddevienne=> drop table "t |";
> ddevienne=> \! psql -V
> psql (PostgreSQL) 12.1
> ddevienne=> select version();
>  version
>
> -
>  PostgreSQL 12.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5
> 20150623 (Red Hat 4.8.5-39), 64-bit
> (1 row)
>

I don't see any problem on master


Re: PSQL bug?

2022-03-17 Thread Dominique Devienne
On Thu, Mar 17, 2022 at 11:38 AM Pavel Stehule  wrote:
>> Made a typo, [...], and got disconnected. Normal? --DD
> I don't see any problem on master

OK, good. Must have been fixed already then. Or was a fluke. Thanks
for checking. --DD




Re: PSQL bug?

2022-03-17 Thread Josef Šimánek
On Thu, 17 Mar 2022 at 12:06, Dominique Devienne
 wrote:
>
> On Thu, Mar 17, 2022 at 11:38 AM Pavel Stehule  
> wrote:
> >> Made a typo, [...], and got disconnected. Normal? --DD
> > I don't see any problem on master
>
> OK, good. Must have been fixed already then. Or was a fluke. Thanks
> for checking. --DD

I tried to reproduce with 14.2 client and latest 14, 13, 12 and 11
servers without luck.

>




High Postgres postmaster CPU when My Background Worker is loaded.

2022-03-17 Thread GoLang Developer
Hi,

Please help me troubleshoot an issue I am currently facing. I have tried
various methods I could think of, without success.

I am using PostgreSQL and Go in my project. I have created a PostgreSQL
background worker in Go. This background worker listens on a LISTEN/NOTIFY
channel in Postgres and writes the data it receives to a file.

The trigger (which supplies data to the channel), the background worker registration
and the background worker's main function are all in one .so file. The core
background process logic is in another helper .so file. The background worker's
main function loads the helper .so file using dlopen and executes the core
logic from it.
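
For context, the trigger side is roughly of the following shape (only a sketch,
with placeholder table and function names, not the actual code):

CREATE OR REPLACE FUNCTION notify_change() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('mystream', row_to_json(NEW)::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER notify_change_trg
    AFTER INSERT OR UPDATE ON my_table
    FOR EACH ROW EXECUTE FUNCTION notify_change();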

*Issue faced:*

It works fine on Windows, but on Linux, where I tried it with Postgres 13.6,
there is a problem. It runs as intended for a while, but after about 2-3 hours
the PostgreSQL postmaster's CPU utilization shoots up and it seems to get
stuck. The CPU spike is very sudden (less than 5 minutes; until it starts
climbing, everything is normal). I am then not able to establish a connection
to Postgres from the psql client. In the log the following message keeps
repeating: WARNING: worker took too long to start; canceled.

I tried commenting out various areas of my code, adding sleeps at different
places in the core processing loop, and even disabling the trigger, but the
issue still occurs.

*Software and libraries used:*

go version go1.17.5 linux/amd64,

Postgres 13.6, in Ubuntu

LibPq: https://github.com/lib/pq,

*My core processing loop looks like this:*

maxReconn := time.Minute
// NOTE: the connection-string argument was garbled in the original mail; conninfo
// is a placeholder. minReconn, EventCallBackFn, signalChan, pmStatusChan,
// lastConnChkTime and err come from surrounding code that is not shown here.
listener := pq.NewListener(conninfo, minReconn, maxReconn, EventCallBackFn) // lib/pq used here
defer listener.UnlistenAll()
if err = listener.Listen("mystream"); err != nil {
    panic(err)
}
var itemsProcessedSinceLastSleep int = 0
for {
    select {
    case signal := <-signalChan:
        PgLog(PG_LOG, "Exiting loop due to termination signal : %d", signal)
        return 1
    case pgstatus := <-pmStatusChan:
        PgLog(PG_LOG, "Exiting loop as postmaster is not running : %d ", pgstatus)
        return 1
    case data := <-listener.Notify:
        itemsProcessedSinceLastSleep = itemsProcessedSinceLastSleep + 1
        if itemsProcessedSinceLastSleep >= 1000 {
            time.Sleep(time.Millisecond * 10)
            itemsProcessedSinceLastSleep = 0
        }
        ProcessChangeReceivedAtStream(data) // this performs the data processing
    case <-time.After(10 * time.Second):
        time.Sleep(100 * time.Millisecond)
        var cEpoch = time.Now().Unix()
        if cEpoch-lastConnChkTime > 1800 {
            lastConnChkTime = cEpoch
            if err := listener.Ping(); err != nil {
                PgLog(PG_LOG, "Seems to be a problem with connection")
            }
        }
    default:
        time.Sleep(time.Millisecond * 100)
    }
}

Thank you,
Regards,
Golang Developer


Query on WAL Optimization and Streaming Replication

2022-03-17 Thread Shukla, Pranjal
During upgrades of our application, we generally shut down all secondary servers
that are being stream-replicated from the primary servers. This is to maintain a
copy of the database on other servers should we wish to revert (of course we also
take DB backups before starting the activity). After the application upgrade is
done, when we start the secondaries, the replication is often broken and we need
to set it up again using pg_basebackup. How do we ensure that a secondary is able
to resume replication without needing a base backup again?

Below are some of the WAL related settings:


shared_buffers = 48GB

wal_level = replica

max_prepared_transactions = 200

max_wal_senders = 5

wal_keep_segments = 32

hot_standby = ON

effective_cache_size = 144GB

work_mem = 1GB

maintenance_work_mem = 2GB

wal_buffers = 16MB

min_wal_size = 1GB

max_wal_size = 2GB

Thanks & Regards
Pranjal Shukla


Re: Query on WAL Optimization and Streaming Replication

2022-03-17 Thread Laurenz Albe
On Thu, 2022-03-17 at 12:36 +, Shukla, Pranjal wrote:
> During upgrades of our application, we generally shutdown all Secondary servers
> which are getting stream replicated from Primary Servers. This is to maintain
> a copy of database on other servers should
> we wish to revert (of course we take DB Backups too before starting the 
> activity).
> After the application upgrade is done, when we start the secondary, often the
> replication is broken, and we need to
> again setup using pg_basebackup. How do we ensure that secondary is able to
> resume the replication without the need of base back up again?

There are three ways:

1. have a WAL archive and configure "restore_command" on the standby

2. set "wal_keep_size" on the primary high enough

3. use a replication slot
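
For example, option 3 could look like this (a minimal sketch; the slot name is
a placeholder):

-- on the primary:
SELECT pg_create_physical_replication_slot('standby1_slot');

-- on the standby, in postgresql.conf (or recovery.conf on old versions):
primary_slot_name = 'standby1_slot'

Keep in mind that a slot retains WAL while the standby is down, so monitor
pg_replication_slots to make sure the primary's disk does not fill up.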

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Query on WAL Optimization and Streaming Replication

2022-03-17 Thread Laurenz Albe
On Thu, 2022-03-17 at 14:05 +, Shukla, Pranjal wrote:
> From the configuration we have, does it mean that the primary will retain 32 
> WAL's
> of 1 GB each and then start evicting the first WAL as soon as the last one 
> gets filled?
> In layman's term, 32GB is huge amount of data and I don't think that much 
> changes
> during upgrades. In fact the total size of our database is 56 GB. Is my 
> understanding
> correct?
> 
> wal_keep_segments = 32

No, it means that the last 32 WAL segments of size 16MB are retained,
that is 0.5 GB.

"max_wal_size" is not the size of a WAL segment.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Query on WAL Optimization and Streaming Replication

2022-03-17 Thread Shukla, Pranjal
Hi Laurenz,
From the configuration we have, does it mean that the primary will retain 32
WALs of 1 GB each and then start evicting the first WAL as soon as the last
one gets filled? In layman's terms, 32 GB is a huge amount of data, and I don't
think that much changes during upgrades. In fact, the total size of our database
is 56 GB. Is my understanding correct?

shared_buffers = 48GB
wal_level = replica
max_prepared_transactions = 200
max_wal_senders = 5
wal_keep_segments = 32
hot_standby = ON
effective_cache_size = 144GB
work_mem = 1GB
maintenance_work_mem = 2GB
wal_buffers = 16MB
min_wal_size = 1GB
max_wal_size = 2GB

Thanks & Regards
Pranjal Shukla

On 3/17/22, 6:50 PM, "Laurenz Albe"  wrote:

On Thu, 2022-03-17 at 12:36 +, Shukla, Pranjal wrote:
> During upgrades of our application, we generally shutdown all Secondary servers
> which are getting stream replicated from Primary Servers. This is to maintain
> a copy of database on other servers should
> we wish to revert (of course we take DB Backups too before starting the activity).
> After the application upgrade is done, when we start the secondary, often the
> replication is broken, and we need to
> again setup using pg_basebackup. How do we ensure that secondary is able to
> resume the replication without the need of base back up again?

There are three ways:

1. have a WAL archive and configure "restore_command" on the standby

2. set "wal_keep_size" on the primary high enough

3. use a replication slot

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




Re: Query on WAL Optimization and Streaming Replication

2022-03-17 Thread Shukla, Pranjal
Ok,
Then what will increase the size of the WAL?

Sent from my iPhone

> On 17-Mar-2022, at 8:08 PM, Laurenz Albe  wrote:
> On Thu, 2022-03-17 at 14:05 +, Shukla, Pranjal wrote:
>> From the configuration we have, does it mean that the primary will retain 32 
>> WAL's
>> of 1 GB each and then start evicting the first WAL as soon as the last one 
>> gets filled?
>> In layman's term, 32GB is huge amount of data and I don't think that much 
>> changes
>> during upgrades. In fact the total size of our database is 56 GB. Is my 
>> understanding
>> correct?
>> 
>> wal_keep_segments = 32
> 
> No, it means that the last 32 WAL segments of size 16MB are retained,
> that is 0.5 GB.
> 
> "max_wal_size" is not the size of a WAL segment.
> 
> Yours,
> Laurenz Albe
> -- 
> Cybertec | https://www.cybertec-postgresql.com


Re[5]: LISTEN/NOTIFY ultra slow speed

2022-03-17 Thread Арсен Арутюнян

I want to tell you how the problem was solved: I had to move to Postgres 13.
As can be seen from the async.c code, the implementation of the LISTEN/NOTIFY
mechanism has undergone major changes since version 13. The transfer of the
tables and database data from version 10 to version 13 went without problems.
But there were serious difficulties getting the application to work with the
database: it turned out that since version 12 the pg_attrdef.adsrc column no
longer exists, and I had to patch and rebuild Qt 5.9.6.
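
For anyone hitting the same thing: the usual replacement for the removed
pg_attrdef.adsrc column is pg_get_expr() over adbin, roughly like this
(a sketch; the table name is a placeholder):

-- before version 12:
SELECT adsrc FROM pg_attrdef WHERE adrelid = 'my_table'::regclass;
-- version 12 and later:
SELECT pg_get_expr(adbin, adrelid) FROM pg_attrdef WHERE adrelid = 'my_table'::regclass;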

result
 
rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"49064562"}');
Time: 3.069 ms
rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"49064562"}');
Time: 2.490 ms
rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"49064562"}');
Time: 2.932 ms
rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"49064562"}');
Time: 2.455 ms
rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"49064562"}');
Time: 2.572 ms
rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"49064562"}');
Time: 2.443 ms
rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"49064562"}');
Time: 2.145 ms
rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"49064562"}');
Time: 2.581 ms
rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"49064562"}');
Time: 47.704 ms
rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"49064562"}');
Time: 2.631 ms
rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"49064562"}');
Time: 2.427 ms
rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"49064562"}');
Time: 2.647 ms
 
 
--
Arsen Arutyunyan
 
  
>Tuesday, 15 March 2022, 23:48 +03:00, from Арсен Арутюнян :
> 
>I decided to rebuild pg 10 first.
>I commented out the lines of code with duplicate checking
>(/src/backend/commands/async.c)
> 
>//    if (AsyncExistsPendingNotify(channel, payload))
>//        return;
> 
>and launched this version of pg into production.
>This was the only way I could figure out whether there was a significant difference
>in speed. The results turned out to be very interesting: the query speed roughly
>doubled. But unfortunately it is still far from the
>desired level.
> 
>before the code changes
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48117898"}');
>Time: 974.119 ms
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48117898"}');
>Time: 859.855 ms
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48117898"}');
>Time: 695.741 ms
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48117898"}');
>Time: 77.001 ms
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48117898"}');
>Time: 475.590 ms
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48117898"}');
>Time: 973.449 ms
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48117898"}');
>Time: 1039.825 ms (00:01.040)
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48117898"}');
>Time: 1233.576 ms (00:01.234)
> 
> 
>after the code changes
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48119697"}');
>Time: 431.337 ms
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48119697"}');
>Time: 460.794 ms
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48119697"}');
>Time: 498.042 ms
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48119697"}');
>Time: 604.931 ms
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48119697"}');
>Time: 438.763 ms
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48119697"}');
>Time: 587.827 ms
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48119697"}');
>Time: 621.079 ms
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48119697"}');
>Time: 549.264 ms
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48119697"}');
>Time: 447.218 ms
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48119697"}');
>Time: 526.980 ms
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48119697"}');
>Time: 417.556 ms
>
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48117898"}');
>Time: 185.374 ms
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"MsgId":"48117898"}');
>Time: 554.529 ms
>rmdatabase=# select pg_notify('rmdb_client_message','{"Command":239,"

Re: PSQL bug?

2022-03-17 Thread Kyotaro Horiguchi
At Thu, 17 Mar 2022 11:38:30 +0100, Dominique Devienne  
wrote in 
> Made a typo, using } instead of ) to test a weird table name, and got
> disconnected. Normal? --DD
> 
> ddevienne=> create table "t |" ( id int};
> ddevienne(> );
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.

The last line "Attempting reset: Succeeded." means that the connection
is once disconnected before the second input line then reconnected by
the input.

Specifically I can reproduce the same result by the following steps.

> ddevienne=> create table "t |" ( id int};
$ pg_ctl restart
> ddevienne(> );
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center