SQL help to build a result with custom aliased bool column

2019-04-08 Thread Arup Rakshit
I have two tables, Company and Feature. They are connected via a join table called 
CompanyFeature. I want to build a result set that has id, name, and a custom boolean 
column. This boolean column says whether the feature is present for the company or not.

Company table:

| id | name | 
|----|------|
| 1  | c1   | 
| 2  | c2   | 
| 3  | c3   | 

Feature table:

| id | name | 
|----|------|
| 1  | f1   | 
| 2  | f2   | 
| 3  | f3   | 

Company Feature table:

| id | feature_id | company_id | 
|----|------------|------------|
| 1  | 2  | 1  | 
| 2  | 1  | 1  | 
| 3  | 3  | 2  | 

The result should look like this for company `c1`:

| id | name | active | 
|----|------|--------|
| 1  | f1   | t  | 
| 2  | f2   | t  | 
| 3  | f3   | f  | 

I tried something like:

SELECT
    features.id,
    features.name,
    CASE WHEN company_features.company_id = 1 THEN TRUE ELSE FALSE END AS active
FROM
    features
    LEFT JOIN company_features ON company_features.feature_id = features.id

It works. But is there any better way to achieve this?


Thanks,

Arup Rakshit
a...@zeit.io







Re: SQL help to build a result with custom aliased bool column

2019-04-08 Thread Szymon Lipiński
Hey,
you could just use

SELECT
features.id,
features.name,
company_features.company_id = 1 as active
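
(A sketch of the full statement, reusing the FROM/LEFT JOIN part of the original query unchanged:)

SELECT
    features.id,
    features.name,
    company_features.company_id = 1 AS active
FROM
    features
    LEFT JOIN company_features ON company_features.feature_id = features.id;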

regards,
Szymon

On Mon, 8 Apr 2019 at 09:55, Arup Rakshit  wrote:

> I have 2 tables Company and Feature. They are connected via a join table
> called CompanyFeature. I want to build a result set where it will have id,
> name and a custom boolean column. This boolean column is there to say if
> the feature is present for the company or not.
>
> Company table:
>
> | id | name |
> ||--|
> | 1  | c1   |
> | 2  | c2   |
> | 3  | c3   |
>
> Feature table:
>
> | id | name |
> ||--|
> | 1  | f1   |
> | 2  | f2   |
> | 3  | f3   |
>
> Company Feature table:
>
> | id | feature_id | company_id |
> ||||
> | 1  | 2  | 1  |
> | 2  | 1  | 1  |
> | 3  | 3  | 2  |
>
> The result should look like for company `c1`:
>
> | id | name | active |
> ||--||
> | 1  | f1   | t  |
> | 2  | f2   | t  |
> | 3  | f3   | f  |
>
> I tried something like:
>
> SELECT
> features.id,
> features.name,
> CASE WHEN company_features.company_id = 1 THEN
> TRUE
> ELSE
> FALSE
> END AS active
> FROM
> features
> LEFT JOIN company_features ON company_features.feature_id =
> features.id
>
> It works. But is there any better way to achieve this?
>
>
> Thanks,
>
> Arup Rakshit
> a...@zeit.io
>
>
>
>
>
>


Re: SQL help to build a result with custom aliased bool column

2019-04-08 Thread Arup Rakshit
I knew there would be a more compact way. Thanks for showing it. One thing I would 
still like to handle is making sure the column contains only true/false. Right now 
it sometimes shows NULL. How can I fix this?

id|name|active|
--|----|------|
 1|f1  |true  |
 2|f2  |true  |
 3|f3  |false |
 4|f4  |  |
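
(As the follow-up below shows, one way to force a plain true/false is to wrap the comparison in coalesce(); a sketch against the same tables:)

SELECT
    features.id,
    features.name,
    coalesce(company_features.company_id = 1, false) AS active
FROM
    features
    LEFT JOIN company_features ON company_features.feature_id = features.id;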


Thanks,

Arup Rakshit
a...@zeit.io



> On 08-Apr-2019, at 3:28 PM, Szymon Lipiński  wrote:
> 
> Hey,
> you could just use
> 
> SELECT
> features.id,
> features.name,
> company_features.company_id = 1 as active
> 
> regards,
> Szymon
> 
> On Mon, 8 Apr 2019 at 09:55, Arup Rakshit  wrote:
> I have 2 tables Company and Feature. They are connected via a join table 
> called CompanyFeature. I want to build a result set where it will have id, 
> name and a custom boolean column. This boolean column is there to say if the 
> feature is present for the company or not.
> 
> Company table:
> 
> | id | name | 
> ||--| 
> | 1  | c1   | 
> | 2  | c2   | 
> | 3  | c3   | 
> 
> Feature table:
> 
> | id | name | 
> ||--| 
> | 1  | f1   | 
> | 2  | f2   | 
> | 3  | f3   | 
> 
> Company Feature table:
> 
> | id | feature_id | company_id | 
> |||| 
> | 1  | 2  | 1  | 
> | 2  | 1  | 1  | 
> | 3  | 3  | 2  | 
> 
> The result should look like for company `c1`:
> 
> | id | name | active | 
> ||--|| 
> | 1  | f1   | t  | 
> | 2  | f2   | t  | 
> | 3  | f3   | f  | 
> 
> I tried something like:
> 
> SELECT
> features.id,
> features.name,
> CASE WHEN company_features.company_id = 1 THEN
> TRUE
> ELSE
> FALSE
> END AS active
> FROM
> features
> LEFT JOIN company_features ON company_features.feature_id = 
> features.id
> 
> It works. But is there any better way to achieve this?
> 
> 
> Thanks,
> 
> Arup Rakshit
> a...@zeit.io
> 
> 
> 
> 
> 





Re: SQL help to build a result with custom aliased bool column

2019-04-08 Thread Arup Rakshit
I am still having some bugs. I am getting duplicates in the result set.

psql (11.0, server 10.5)
Type "help" for help.

aruprakshit=# select * from features;
 id | name 
----+------
  1 | f1
  2 | f2
  3 | f3
  4 | f4
(4 rows)

aruprakshit=# select * from company;
 id | name 
----+------
  1 | c1
  2 | c2
(2 rows)

aruprakshit=# select * from company_features;
 id | company_id | feature_id 
----+------------+------------
  1 |  1 |  1
  2 |  1 |  2
  3 |  2 |  3
  4 |  1 |  3
(4 rows)

aruprakshit=# SELECT
aruprakshit-# features.id,
aruprakshit-# features.name,
aruprakshit-# coalesce(company_features.company_id = 1, false) AS active
aruprakshit-# FROM
aruprakshit-# features
aruprakshit-# LEFT JOIN company_features ON features.id = 
company_features.feature_id;
 id | name | active 
----+------+--------
  1 | f1   | t
  2 | f2   | t
  3 | f3   | f
  3 | f3   | t
  4 | f4   | f
(5 rows)


I should get only 4 rows in the result, as the total number of features is 4.

Thanks,

Arup Rakshit
a...@zeit.io



> On 08-Apr-2019, at 3:28 PM, Szymon Lipiński  wrote:
> 
> Hey,
> you could just use
> 
> SELECT
> features.id,
> features.name,
> company_features.company_id = 1 as active
> 
> regards,
> Szymon
> 
> On Mon, 8 Apr 2019 at 09:55, Arup Rakshit  wrote:
> I have 2 tables Company and Feature. They are connected via a join table 
> called CompanyFeature. I want to build a result set where it will have id, 
> name and a custom boolean column. This boolean column is there to say if the 
> feature is present for the company or not.
> 
> Company table:
> 
> | id | name | 
> ||--| 
> | 1  | c1   | 
> | 2  | c2   | 
> | 3  | c3   | 
> 
> Feature table:
> 
> | id | name | 
> ||--| 
> | 1  | f1   | 
> | 2  | f2   | 
> | 3  | f3   | 
> 
> Company Feature table:
> 
> | id | feature_id | company_id | 
> |||| 
> | 1  | 2  | 1  | 
> | 2  | 1  | 1  | 
> | 3  | 3  | 2  | 
> 
> The result should look like for company `c1`:
> 
> | id | name | active | 
> ||--|| 
> | 1  | f1   | t  | 
> | 2  | f2   | t  | 
> | 3  | f3   | f  | 
> 
> I tried something like:
> 
> SELECT
> features.id,
> features.name,
> CASE WHEN company_features.company_id = 1 THEN
> TRUE
> ELSE
> FALSE
> END AS active
> FROM
> features
> LEFT JOIN company_features ON company_features.feature_id = 
> features.id
> 
> It works. But is there any better way to achieve this?
> 
> 
> Thanks,
> 
> Arup Rakshit
> a...@zeit.io
> 
> 
> 
> 
> 





Re: SQL help to build a result with custom aliased bool column

2019-04-08 Thread mariusz
On Mon, 8 Apr 2019 15:32:36 +0530
Arup Rakshit  wrote:

hi,

> I am still having some bugs. I am getting duplicate in the result set.
> 
> psql (11.0, server 10.5)
> Type "help" for help.
> 
> aruprakshit=# select * from features;
>  id | name 
> +--
>   1 | f1
>   2 | f2
>   3 | f3
>   4 | f4
> (4 rows)
> 
> aruprakshit=# select * from company;
>  id | name 
> +--
>   1 | c1
>   2 | c2
> (2 rows)
> 
> aruprakshit=# select * from company_features;
>  id | company_id | feature_id 
> ++
>   1 |  1 |  1
>   2 |  1 |  2
>   3 |  2 |  3
>   4 |  1 |  3
> (4 rows)
> 
> aruprakshit=# SELECT
> aruprakshit-# features.id,
> aruprakshit-# features.name,
> aruprakshit-# coalesce(company_features.company_id = 1, false) AS
> active aruprakshit-# FROM
> aruprakshit-# features
> aruprakshit-# LEFT JOIN company_features ON features.id =
> company_features.feature_id; id | name | active 
> +--+
>   1 | f1   | t
>   2 | f2   | t
>   3 | f3   | f
>   3 | f3   | t
>   4 | f4   | f
> (5 rows)

> 
> I should get in the results only 3, as total number of features are 3.

Not only dups, but you also read too much (not an issue with such a small
number of tuples, but...).

What you really need is all features and the subset of company_features
with company_id = 1 for that specific example, not the whole
company_features table.

something like

SELECT f.id, f.name, cf.id IS NOT NULL AS active
FROM features f
 LEFT OUTER JOIN company_features cf
   ON cf.company_id = 1 AND cf.feature_id = f.id

would be enough,

or something like

SELECT f.id, f.name,
       EXISTS (SELECT 0 FROM company_features cf
               WHERE cf.company_id = 1 AND cf.feature_id = f.id) AS active
FROM features f

or a couple more ways to achieve what you want for a given company (id = 1
here).

Bear in mind that with a large number of companies and a proper index on
company_features, the optimizer can limit company_features as necessary,
while your examples read everything anyway and mangle the output to get
the proper result (with dups and bugs, and also not optimal).
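
(Such an index might look like the sketch below, assuming the column names used in the examples above:)

CREATE INDEX company_features_company_id_feature_id_idx
    ON company_features (company_id, feature_id);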

Furthermore, I see some inconsistencies with naming, like the tables
(relations) Company, Feature, CompanyFeature in your first mail and
features, company_features further down.
I wrote the above examples as in your last query, but honestly I would not
really agree with such naming. For me the relations (yes, relation as in the
math background, more than a table of objects) would be company, feature,
company_feature; you may prefer something else, but try to make it
consistent.

regards, mariusz

> 
> Thanks,
> 
> Arup Rakshit
> a...@zeit.io
> 
> 
> 
> > On 08-Apr-2019, at 3:28 PM, Szymon Lipiński 
> > wrote:
> > 
> > Hey,
> > you could just use
> > 
> > SELECT
> > features.id,
> > features.name,
> > company_features.company_id = 1 as active
> > 
> > regards,
> > Szymon
> > 
> > On Mon, 8 Apr 2019 at 09:55, Arup Rakshit  wrote:
> > I have 2 tables Company and Feature. They are connected via a join
> > table called CompanyFeature. I want to build a result set where it
> > will have id, name and a custom boolean column. This boolean column
> > is there to say if the feature is present for the company or not.
> > 
> > Company table:
> > 
> > | id | name | 
> > ||--| 
> > | 1  | c1   | 
> > | 2  | c2   | 
> > | 3  | c3   | 
> > 
> > Feature table:
> > 
> > | id | name | 
> > ||--| 
> > | 1  | f1   | 
> > | 2  | f2   | 
> > | 3  | f3   | 
> > 
> > Company Feature table:
> > 
> > | id | feature_id | company_id | 
> > |||| 
> > | 1  | 2  | 1  | 
> > | 2  | 1  | 1  | 
> > | 3  | 3  | 2  | 
> > 
> > The result should look like for company `c1`:
> > 
> > | id | name | active | 
> > ||--|| 
> > | 1  | f1   | t  | 
> > | 2  | f2   | t  | 
> > | 3  | f3   | f  | 
> > 
> > I tried something like:
> > 
> > SELECT
> > features.id,
> > features.name,
> > CASE WHEN company_features.company_id = 1 THEN
> > TRUE
> > ELSE
> > FALSE
> > END AS active
> > FROM
> > features
> > LEFT JOIN company_features ON company_features.feature_id =
> > features.id
> > 
> > It works. But is there any better way to achieve this?
> > 
> > 
> > Thanks,
> > 
> > Arup Rakshit
> > a...@zeit.io
> > 
> > 
> > 
> > 
> > 
> 
> 
> 





Re: How to use full-text search URL parser to filter query results by domain name?

2019-04-08 Thread Arthur Zakirov

On 07.04.2019 07:06, Jess Wren wrote:
However, I can't figure out how I would integrate this into the above 
query to filter out duplicate domains from the results. And because this 
is documented under "testing and debugging text search", I don't know if 
this use of `ts_parse()` is even related to how the URL parser is intended 
to be used in practice.


How would I use the "host" parser in my query above to return one row 
per domain? Also, how would I appropriately index the "links" table for 
"host" and "url" token lookup?


I think it is normal to use ts_parse(). And I suppose you might use 
window functions.


For example, you have table links:

=# create table links (score int, link text);
=# insert into links values
  (1, 'http://www.foo.com/bar'),
  (2, 'http://www.foo.com/foo'),
  (2, 'http://www.bar.com/foo'),
  (1, 'http://www.bar.com/bar');

You can use the following query:

=# with l as (
  select score, token, link,
rank() over (partition by token order by score) as rank
  from links,
lateral ts_parse('default', link)
  where tokid = 6)
select score, token, link from l where rank = 1;
 score |    token    |          link
-------+-------------+------------------------
     1 | www.bar.com | http://www.bar.com/bar
     1 | www.foo.com | http://www.foo.com/bar

It is just the idea, probably the query might be simpler.
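
(One possibly simpler variant, untested and assuming the same links table, would use DISTINCT ON instead of a window function:)

=# SELECT DISTINCT ON (token)
     score, token, link
   FROM links,
     LATERAL ts_parse('default', link)
   WHERE tokid = 6
   ORDER BY token, score;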

--
Arthur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company




Re: 10.2: high cpu usage on update statement

2019-04-08 Thread Kevin Wilkinson

Resolved. Sorry for not posting the resolution earlier.

It was a good puzzler. It turns out the PostgreSQL server used 
network-attached disks, and the updated table had no index for the 
columns used in the update. So the update required a sequential scan of 
the table over the network; thus the high CPU usage for updating a single row.
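
(For example, an index on the column(s) used to locate the row would let the update use an index scan instead of a sequential scan over the network; a sketch with hypothetical table/column names:)

-- hypothetical names, for illustration only
CREATE INDEX my_table_lookup_idx ON my_table (lookup_col);
-- EXPLAIN UPDATE my_table SET payload = '...' WHERE lookup_col = 42
-- should then show an index scan rather than a sequential scan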


kevin

On 4/7/2019 11:41 PM, Laurenz Albe wrote:

Kevin Wilkinson wrote:

on 10.2, we're seeing very high cpu usage when doing an update statement
on a relatively small table (1GB). one of the updated columns is text,
about 1k bytes. there are four threads doing similar updates
concurrently to the same table (but different rows). each thread does an
update about every two seconds, i.e., the tables gets updated every 1/2
second. the stack trace below shows the process stuck in reading the
update results. this seems very odd. has anyone seen something similar?
this is a modest server of 8 cores, all of which are 90% busy.

Try to profile the server ("perf" on Linux) to see where the time is spent.

Are there any foreign key constraints pointing to the table being updated?
Then make sure that either no key column is updates or that the foreign
keys are indexed.

Yours,
Laurenz Albe





Getting error while running the pg_basebackup through PGBOUNCER

2019-04-08 Thread Raghavendra Rao J S V
Hi All,

We are using PgBouncer (a connection pooling mechanism). PgBouncer uses port
5433.

The Postgres database port number is 6433. Using port 5433, PgBouncer
connects to the Postgres database on port 6433.

PgBouncer is establishing connections properly, but when I try to
run pg_basebackup through port 5433 (the PgBouncer port) we receive the
error below. Please guide me.


 /opt/postgres/9.2/bin/pg_basebackup -p 5433 -U postgres -P -v -x
--format=tar --gzip --compress=1 --pgdata=- -D /opt/rao

*pg_basebackup: could not connect to server: ERROR:  Unsupported startup
parameter: replication*


-- 
Regards,
Raghavendra Rao J S V


RE: Getting error while running the pg_basebackup through PGBOUNCER

2019-04-08 Thread Scot Kreienkamp
Replication and several other admin type operations must connect directly to 
PG.  They are not supported through PGBouncer.
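
(So for the command from the original post, pointing pg_basebackup at the Postgres port rather than the PgBouncer port should avoid this error; the same invocation with only the port changed:)

 /opt/postgres/9.2/bin/pg_basebackup -p 6433 -U postgres -P -v -x --format=tar --gzip --compress=1 --pgdata=- -D /opt/rao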

From: Raghavendra Rao J S V [mailto:raghavendra...@gmail.com]
Sent: Monday, April 8, 2019 9:21 AM
To: pgsql-general@lists.postgresql.org
Subject: Getting error while running the pg_basebackup through PGBOUNCER


Hi All,

We are using PGBOUNCER(connection pool mechanisam). PGBOUNCER uses port 5433.

Postgres database port number is 6433. By using port 5433 PGBOUNCER is 
connecting to postgres port 6433 database.

Now PGBOUNCER is establishing the connections properly but when I try to run 
the pg_basebackup through port 5433(PGBOUNCER port) we are receiving below 
error. Please guide me.


 /opt/postgres/9.2/bin/pg_basebackup -p 5433 -U postgres -P -v -x --format=tar 
--gzip --compress=1 --pgdata=- -D /opt/rao

pg_basebackup: could not connect to server: ERROR:  Unsupported startup 
parameter: replication


--
Regards,
Raghavendra Rao J S V


Re: Getting error while running the pg_basebackup through PGBOUNCER

2019-04-08 Thread Pavel Stehule
po 8. 4. 2019 v 15:42 odesílatel Raghavendra Rao J S V <
raghavendra...@gmail.com> napsal:

> Hi All,
>
> We are using PGBOUNCER(connection pool mechanisam). PGBOUNCER uses port
> 5433.
>
> Postgres database port number is 6433. By using port 5433 PGBOUNCER is
> connecting to postgres port 6433 database.
>
> Now PGBOUNCER is establishing the connections properly but when I try to
> run the pg_basebackup through port 5433(PGBOUNCER port) we are receiving
> below error. Please guide me.
>
>
>  /opt/postgres/9.2/bin/pg_basebackup -p 5433 -U postgres -P -v -x
> --format=tar --gzip --compress=1 --pgdata=- -D /opt/rao
>
> *pg_basebackup: could not connect to server: ERROR:  Unsupported startup
> parameter: replication*
>

It looks like pgbouncer cannot forward the connection parameter "replication".

Why do you need pgbouncer for pg_basebackup? It does not make much sense.

Pavel


>
> --
> Regards,
> Raghavendra Rao J S V
>
>


Re: SQL help to build a result with custom aliased bool column

2019-04-08 Thread Arup Rakshit
Hi,

Thanks for showing different ways to achieve the goal. So what would be the 
optimal way to solve this? I have a composite index on the company_id and 
feature_id columns of the project_features table.

I do Ruby on Rails development, where table names are always plural by 
convention. The tables I created above are in a different schema, to ask the 
question with sample data and to test the query output. So they are a little 
inconsistent, yes, you are right.

Thanks,

Arup Rakshit
a...@zeit.io



> On 08-Apr-2019, at 4:36 PM, mariusz  wrote:
> 
> bear in mind that with a large number of companies and proper index on
> company_features the optimizer could limit company_features as
> necessary, while your examples read everything anyway and mangle output
> to get proper result (with dups and bugs, but also not optimal)





RE: Getting error while running the pg_basebackup through PGBOUNCER

2019-04-08 Thread Scot Kreienkamp
Basically, anything that is not written as a SQL query should connect directly to 
PG.  PGBouncer is really only meant for SQL-query-type connections.

From: Raghavendra Rao J S V [mailto:raghavendra...@gmail.com]
Sent: Monday, April 8, 2019 10:19 AM
To: Scot Kreienkamp 
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Getting error while running the pg_basebackup through PGBOUNCER


Thank you very much for your prompt response.

Could you explain other admin type operations, which are not supported by 
pgbouncer?

Regards,
Raghavendra Rao.

On Mon, 8 Apr 2019 at 19:16, Scot Kreienkamp 
mailto:scot.kreienk...@la-z-boy.com>> wrote:
Replication and several other admin type operations must connect directly to 
PG.  They are not supported through PGBouncer.

From: Raghavendra Rao J S V 
[mailto:raghavendra...@gmail.com]
Sent: Monday, April 8, 2019 9:21 AM
To: 
pgsql-general@lists.postgresql.org
Subject: Getting error while running the pg_basebackup through PGBOUNCER


Hi All,

We are using PGBOUNCER(connection pool mechanisam). PGBOUNCER uses port 5433.

Postgres database port number is 6433. By using port 5433 PGBOUNCER is 
connecting to postgres port 6433 database.

Now PGBOUNCER is establishing the connections properly but when I try to run 
the pg_basebackup through port 5433(PGBOUNCER port) we are receiving below 
error. Please guide me.


 /opt/postgres/9.2/bin/pg_basebackup -p 5433 -U postgres -P -v -x --format=tar 
--gzip --compress=1 --pgdata=- -D /opt/rao

pg_basebackup: could not connect to server: ERROR:  Unsupported startup 
parameter: replication


--
Regards,
Raghavendra Rao J S V


--
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Re: Fwd: Postgresql with nextcloud in Windows Server

2019-04-08 Thread Adrian Klaver

On 4/7/19 9:53 PM, 김준형 wrote:

Sorry for the late reply, but my server has been working without problems for a while.

 > What problem occurs?
 > Where is the Windows server?

The problem is that the Windows server doesn't accept new connections or 
non-admin connections.

Only the already-connected admin connection stays alive.
The Windows server is not close to my workspace, so I use the Windows server 
over a remote connection.


 > Not sure you know that if you cannot connect to the Windows server?

I know I cannot connect to the Windows server, but sometimes I get an admin 
remote connection when the connection problem occurs.

At that time I tried to stop the PostgreSQL server, but it didn't work.


Finally, here are the logs I got.

1) The Windows system logs

1096 Process closing UDP socket with local port number 64347 is running 
longer than expected. The local port number can not be used until the 
close operation is completed. This problem is usually caused by 
malfunctioning network drivers. Make sure you have the latest updates 
for third-party networking software, including Windows and NIC drivers, 
firewalls, and other security products.


I rarely use Windows anymore so I am not sure how to interpret the 
above. Some searching found that it is often helpful to look in the 
Event Log over the same time period.  Postgres does use UDP, so it may 
be related; I am just not sure how.


More below.



2) PostgreSQL logs

...
2019-04-06 04:34:03.984 KST [129704] LOG:  connection received: 
host=128.1.99.51 port=40602
2019-04-06 04:34:03.990 KST [129704] LOG:  connection authorized: 
user=oc_abmiadmin database=abminext
2019-04-06 04:34:05.117 KST [129720] LOG:  connection received: 
host=128.1.99.51 port=40604
2019-04-06 04:34:05.125 KST [129720] LOG:  connection authorized: 
user=oc_abmiadmin database=abminext
2019-04-06 04:34:05.125 KST [129720] FATAL:  remaining connection slots 
are reserved for non-replication superuser connections
2019-04-06 04:34:05.171 KST [129736] LOG:  connection received: 
host=128.1.99.51 port=40606
2019-04-06 04:34:05.179 KST [129736] LOG:  connection authorized: 
user=oc_abmiadmin database=abminext
2019-04-06 04:34:05.179 KST [129736] FATAL:  remaining connection slots 
are reserved for non-replication superuser connections

...

P.S. PostgreSQL max_connections = 1200. Is there a problem here?


Yeah, it looks like you are maxing out the connections. The overhead of 
maintaining 1200 connections is probably a contributing factor to your 
issues. So:


1) Why is 1200 connections set?

2) Have you looked at connection poolers?

3) When the server is running properly you should monitor pg_stat_activity:

https://www.postgresql.org/docs/11/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW

to see what is connecting to the server and what the connections are doing.
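
(For example, a quick way to summarize who is connected and in what state; a sketch using the standard pg_stat_activity columns:)

SELECT usename, datname, state, count(*)
FROM pg_stat_activity
GROUP BY usename, datname, state
ORDER BY count(*) DESC;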





On Tue, Apr 2, 2019 at 11:30 PM, Adrian Klaver wrote:


On 4/1/19 10:03 PM, 김준형 wrote:
 >
 > From: 김준형 
 > Date: Tue, Apr 2, 2019 at 2:02 PM
 > Subject: Re: Postgresql with nextcloud in Windows Server
 > To: Adrian Klaver 
 >
 >
 > Thanks for your reply and I hope this answers can help your questions
 >
 > 1) The nextcloud is running on PostgreSQL server. Cloud system needs
 > PostgreSQL server.

Well I got that backwards. Forget that Nextcloud is an ownCloud fork.

 >
 > 2) Nextcloud system try to connect PostgreSQL server all time.
 > 2019-03-27 20:46:59.396 LOG:  connection received:
host=xxx.xxx.xxx.xxx
 > port=
 > 2019-03-27 20:46:59.403 LOG:  connection authorized: user=user_name
 > database=db_name
 > 2019-03-27 20:46:59.463 LOG:  disconnection: session time:
0:00:00.067
 > user=user_name database=db_name host=xxx.xxx.xxx.xxx port=
 > this connections repeat almost per 10sec.
 > Other clients well... use this PostgreSQL but not so much.(almost 30
 > people use this PostgreSQL include nextcloud system users)

There is nothing unusual about the above, just shows a client
successfully connecting and then disconnecting. I set up an ownCloud
server years ago as a test and as I remember it was constantly talking
to the Postgres server. In postgresql.conf you could set
log_statement =
'all' to see what is actually being done by the client(s). Just be
aware
this will generate a lot of logs so you probably want to do this for
short period only.

 >
 >
 > 3) Yes. log files doesn't shows problems clearly. I just checked log
 > files and saw difference when server couldn't connected.
 > 2019-03-27 20:46:59.396 LOG:  connection received:
host=xxx.xxx.xxx.xxx
 > port=
 > 2019-03-27 20:46:59.403 LOG:  connection authorized: user=user_name
 > database=db_name
 > 

Re: Getting error while running the pg_basebackup through PGBOUNCER

2019-04-08 Thread Raghavendra Rao J S V
Thank you very much for your prompt response.

Could you explain which other admin-type operations are not supported by
pgbouncer?

Regards,
Raghavendra Rao.

On Mon, 8 Apr 2019 at 19:16, Scot Kreienkamp 
wrote:

> Replication and several other admin type operations must connect directly
> to PG.  They are not supported through PGBouncer.
>
>
>
> *From:* Raghavendra Rao J S V [mailto:raghavendra...@gmail.com]
> *Sent:* Monday, April 8, 2019 9:21 AM
> *To:* pgsql-general@lists.postgresql.org
> *Subject:* Getting error while running the pg_basebackup through PGBOUNCER
>
>
>
>
>
> Hi All,
>
>
>
> We are using PGBOUNCER(connection pool mechanisam). PGBOUNCER uses port
> 5433.
>
>
>
> Postgres database port number is 6433. By using port 5433 PGBOUNCER is
> connecting to postgres port 6433 database.
>
>
>
> Now PGBOUNCER is establishing the connections properly but when I try to
> run the pg_basebackup through port 5433(PGBOUNCER port) we are receiving
> below error. Please guide me.
>
>
>
>
>
>  /opt/postgres/9.2/bin/pg_basebackup -p 5433 -U postgres -P -v -x
> --format=tar --gzip --compress=1 --pgdata=- -D /opt/rao
>
>
>
> *pg_basebackup: could not connect to server: ERROR:  Unsupported startup
> parameter: replication*
>
>
>
>
>
> --
>
> Regards,
> Raghavendra Rao J S V
>


-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Re: SQL help to build a result with custom aliased bool column

2019-04-08 Thread mariusz
On Mon, 8 Apr 2019 19:21:37 +0530
Arup Rakshit  wrote:

> Hi,
> 
> Thanks for showing different ways to achieve the goal. So what should
> be the optimal way to solve this. I have an composite index using
> company_id and feature_id columns for project_features table.

There are even more ways to do that simple task. I can imagine some fancy
ways, including lateral joins, a CTE returning the subset of company_features
to produce the positive results and be reused to produce the set difference
for the negative results, etc., but too fancy isn't good. The simpler the
better.

Those already mentioned should be enough. Since you need all features
and all company_features for a given company id, there won't be anything
much better.

It is enough to limit company_features to the company_id, which we already
do in the join condition, and for big tables the optimizer can use your index.

We can probably assume there won't be so many companies and so many
features as to make a really big table of three-id tuples that would make
the optimizer even consider using an index, but it is a good habit to think
about how we can help the optimizer filter out unnecessary data sooner
rather than later.
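
(For example, a sketch of verifying that with EXPLAIN, using the left-join form from earlier in the thread and assuming the composite index you mention exists:)

EXPLAIN (ANALYZE, BUFFERS)
SELECT f.id, f.name, cf.id IS NOT NULL AS active
FROM features f
  LEFT JOIN company_features cf
    ON cf.company_id = 1 AND cf.feature_id = f.id;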

regards, mariusz

> I do ruby on rails development, where table names are plural always
> by convention. The tables I created above in different schema to ask
> question with sample data and test the query output. So they are
> little inconsistent, yes you are right.
> 
> Thanks,
> 
> Arup Rakshit
> a...@zeit.io
> 
> 
> 
> > On 08-Apr-2019, at 4:36 PM, mariusz  wrote:
> > 
> > bear in mind that with a large number of companies and proper index
> > on company_features the optimizer could limit company_features as
> > necessary, while your examples read everything anyway and mangle
> > output to get proper result (with dups and bugs, but also not
> > optimal)
> 
> 
> 





Re: Unable to Vacuum Large Defragmented Table

2019-04-08 Thread Pavel Stehule
po 8. 4. 2019 v 17:22 odesílatel Igal Sapir  napsal:

> Pavel,
>
> On Sun, Apr 7, 2019 at 11:22 PM Pavel Stehule 
> wrote:
>
>>
>> po 8. 4. 2019 v 7:57 odesílatel Igal Sapir  napsal:
>>
>>> David,
>>>
>>> On Sun, Apr 7, 2019 at 8:11 PM David Rowley <
>>> david.row...@2ndquadrant.com> wrote:
>>>
 On Mon, 8 Apr 2019 at 14:57, Igal Sapir  wrote:
 > However, I have now deleted about 50,000 rows more and the table has
 only 119,688 rows.  The pg_relation_size() still shows 31MB and
 pg_total_relation_size() still shows 84GB.
 >
 > It doesn't make sense that after deleting about 30% of the rows the
 values here do not change.

 deleting rows does not make the table any smaller, it just creates
 dead rows in the table.  VACUUM tries to release the space used by
 those dead rows and turns it back into free space.  Normal vacuum (not
 FULL) can only shrink the table if completely empty pages are found at
 the end of the table.

>>>
>>> ACK
>>>
>>>

 > Attempting to copy the data to a different table results in the out
 of disk error as well, so that is in line with your assessment.  But it
 actually just shows the problem.  The new table to which the data was
 copied (though failed due to out of disk) shows 0 rows, but
 pg_total_relation_size() for that table shows 27GB.  So now I have an
 "empty" table that takes 27GB of disk space.

 I think the best thing to do is perform a normal VACUUM on the table

>>>
>>> Running VACUUM on the newly created table cleared the 27GB so that's
>>> good (I was planning to run normal VACUUM but ran FULL).
>>>
>>
>> you can drop some indexes, then you can run vacuum full, and create
>> dropped indexes again.
>>
>
> The table does not have any indexes.  It is mostly an append-only table.
>
>
>>
>>
>>
>>>
 then CREATE EXTENSION IF NOT EXISTS pgstattuple; then do; SELECT *
 FROM pgstattuple('); and the same again on the toast table.
 If your table still contains many dead rows then perhaps an open
 transaction is stopping rows from being turned into free space.
>>>
>>>
>>> I am not sure how to read the below.  I see a lot of "free_space" but
>>> not "dead":
>>>
>>> -[ RECORD 1 ]--+
>>> ?column?   | primary
>>> table_len  | 32055296
>>> tuple_count| 120764
>>> tuple_len  | 9470882
>>> tuple_percent  | 29.55
>>> dead_tuple_count   | 0
>>> dead_tuple_len | 0
>>> dead_tuple_percent | 0
>>> free_space | 20713580
>>> free_percent   | 64.62
>>> -[ RECORD 2 ]--+
>>> ?column?   | toast
>>> table_len  | 88802156544
>>> tuple_count| 15126830
>>> tuple_len  | 30658625743
>>> tuple_percent  | 34.52
>>> dead_tuple_count   | 0
>>> dead_tuple_len | 0
>>> dead_tuple_percent | 0
>>> free_space | 57653329312
>>> free_percent   | 64.92
>>>
>>
>>
>> it say, so your table can be reduced about 60%
>>
>
> That's what I thought, and releasing 65% of 84GB would be major here, but
> unfortunately I am unable to release it because VACUUM FULL requires more
> space than I currently have available.
>
> Perhaps disabling the WAL, if possible, could help VACUUM FULL complete.
> Or some way to do an in-place VACUUM so that it doesn't write all the data
> to a new table.
>

Maybe this article could be interesting for you:

https://www.depesz.com/2013/06/21/bloat-removal-by-tuples-moving/
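
(Very roughly, the idea there is to force new versions of the rows that live near the end of the table to be written into free space earlier in the file, and then let a plain VACUUM truncate the now-empty trailing pages; a crude sketch with hypothetical table/column names, not the exact procedure from the article:)

-- hypothetical names; the article's tooling automates this much more carefully
UPDATE my_table SET some_col = some_col       -- no-op update rewrites the rows
WHERE (ctid::text::point)[0] >= 123456;       -- rows on high-numbered pages (hypothetical page number)
VACUUM my_table;                              -- plain VACUUM can then truncate empty trailing pages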





> Thank you,
>
> Igal
>
>
>
>>
>>
>>>
>>>
 Once pgstattuples reports that "tuple_len" from the table, its toast
 table
 and all its indexes has been reduced to an acceptable value then you
 should try a VACUUM FULL.  Remember that VACUUM FULL must also write
 WAL, so if WAL is on the same volume, then you'll need to consider
 space required for that when deciding how much data to remove from the
 table.

>>>
>>> WAL is on the same volume.  The PGDATA directory is mounted in a Docker
>>> container.
>>>
>>> Isn't there any way to do an in-place VACUUM or pause the WAL at the
>>> risk of losing some data if recovery is required?
>>>
>>> There is a catch-22 here.  I can't reclaim the disk space because that
>>> requires disk space.  Surely I'm not the first one to have encountered that
>>> problem with Postgres.
>>>
>>>

 > This is mostly transient data, so I don't mind deleting rows, but if
 some day this could happen in production then I have to know how to deal
 with it without losing all of the data.

 For the future, it would be better to delete more often than waiting
 until the table grows too large.  A normal VACUUM will turn space used
 by dead tuples back into free space, so if done often enough there
 won't be a need to vacuum full.

>>>
>>> ACK.  This issue came up while implementing a retention policy that will
>>> be enforced regularly.
>>>
>>> Thank you for all of your help,
>>>
>>> Igal
>>>
>>>
>>


Re: Unable to Vacuum Large Defragmented Table

2019-04-08 Thread Igal Sapir
Pavel,

On Sun, Apr 7, 2019 at 11:22 PM Pavel Stehule 
wrote:

>
> po 8. 4. 2019 v 7:57 odesílatel Igal Sapir  napsal:
>
>> David,
>>
>> On Sun, Apr 7, 2019 at 8:11 PM David Rowley 
>> wrote:
>>
>>> On Mon, 8 Apr 2019 at 14:57, Igal Sapir  wrote:
>>> > However, I have now deleted about 50,000 rows more and the table has
>>> only 119,688 rows.  The pg_relation_size() still shows 31MB and
>>> pg_total_relation_size() still shows 84GB.
>>> >
>>> > It doesn't make sense that after deleting about 30% of the rows the
>>> values here do not change.
>>>
>>> deleting rows does not make the table any smaller, it just creates
>>> dead rows in the table.  VACUUM tries to release the space used by
>>> those dead rows and turns it back into free space.  Normal vacuum (not
>>> FULL) can only shrink the table if completely empty pages are found at
>>> the end of the table.
>>>
>>
>> ACK
>>
>>
>>>
>>> > Attempting to copy the data to a different table results in the out of
>>> disk error as well, so that is in line with your assessment.  But it
>>> actually just shows the problem.  The new table to which the data was
>>> copied (though failed due to out of disk) shows 0 rows, but
>>> pg_total_relation_size() for that table shows 27GB.  So now I have an
>>> "empty" table that takes 27GB of disk space.
>>>
>>> I think the best thing to do is perform a normal VACUUM on the table
>>>
>>
>> Running VACUUM on the newly created table cleared the 27GB so that's good
>> (I was planning to run normal VACUUM but ran FULL).
>>
>
> you can drop some indexes, then you can run vacuum full, and create
> dropped indexes again.
>

The table does not have any indexes.  It is mostly an append-only table.


>
>
>
>>
>>> then CREATE EXTENSION IF NOT EXISTS pgstattuple; then do; SELECT *
>>> FROM pgstattuple('); and the same again on the toast table.
>>> If your table still contains many dead rows then perhaps an open
>>> transaction is stopping rows from being turned into free space.
>>
>>
>> I am not sure how to read the below.  I see a lot of "free_space" but not
>> "dead":
>>
>> -[ RECORD 1 ]--+
>> ?column?   | primary
>> table_len  | 32055296
>> tuple_count| 120764
>> tuple_len  | 9470882
>> tuple_percent  | 29.55
>> dead_tuple_count   | 0
>> dead_tuple_len | 0
>> dead_tuple_percent | 0
>> free_space | 20713580
>> free_percent   | 64.62
>> -[ RECORD 2 ]--+
>> ?column?   | toast
>> table_len  | 88802156544
>> tuple_count| 15126830
>> tuple_len  | 30658625743
>> tuple_percent  | 34.52
>> dead_tuple_count   | 0
>> dead_tuple_len | 0
>> dead_tuple_percent | 0
>> free_space | 57653329312
>> free_percent   | 64.92
>>
>
>
> it say, so your table can be reduced about 60%
>

That's what I thought, and releasing 65% of 84GB would be major here, but
unfortunately I am unable to release it because VACUUM FULL requires more
space than I currently have available.

Perhaps disabling the WAL, if possible, could help VACUUM FULL complete.
Or some way to do an in-place VACUUM so that it doesn't write all the data
to a new table.

Thank you,

Igal



>
>
>>
>>
>>> Once pgstattuples reports that "tuple_len" from the table, its toast
>>> table
>>> and all its indexes has been reduced to an acceptable value then you
>>> should try a VACUUM FULL.  Remember that VACUUM FULL must also write
>>> WAL, so if WAL is on the same volume, then you'll need to consider
>>> space required for that when deciding how much data to remove from the
>>> table.
>>>
>>
>> WAL is on the same volume.  The PGDATA directory is mounted in a Docker
>> container.
>>
>> Isn't there any way to do an in-place VACUUM or pause the WAL at the risk
>> of losing some data if recovery is required?
>>
>> There is a catch-22 here.  I can't reclaim the disk space because that
>> requires disk space.  Surely I'm not the first one to have encountered that
>> problem with Postgres.
>>
>>
>>>
>>> > This is mostly transient data, so I don't mind deleting rows, but if
>>> some day this could happen in production then I have to know how to deal
>>> with it without losing all of the data.
>>>
>>> For the future, it would be better to delete more often than waiting
>>> until the table grows too large.  A normal VACUUM will turn space used
>>> by dead tuples back into free space, so if done often enough there
>>> won't be a need to vacuum full.
>>>
>>
>> ACK.  This issue came up while implementing a retention policy that will
>> be enforced regularly.
>>
>> Thank you for all of your help,
>>
>> Igal
>>
>>
>


Re: Getting error while running the pg_basebackup through PGBOUNCER

2019-04-08 Thread Adrian Klaver

On 4/8/19 7:19 AM, Raghavendra Rao J S V wrote:

Thank you very much for your prompt response.

Could you explain other admin type operations, which are not supported 
by pgbouncer?


I would say anything you could not run through psql.



Regards,
Raghavendra Rao.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: CVE-2019-9193 about COPY FROM/TO PROGRAM

2019-04-08 Thread Robert Treat
On Fri, Apr 5, 2019 at 8:35 AM Jeff Janes  wrote:
> On Tue, Apr 2, 2019 at 11:31 AM Andres Freund  wrote:
>> On 2019-04-02 07:35:02 -0500, Brad Nicholson wrote:
>>
>> > A blog post would be nice, but it seems to me have something about this
>> > clearly in the manual would be best, assuming it's not there already.  I
>> > took a quick look, and couldn't find anything.
>>
>> https://www.postgresql.org/docs/devel/sql-copy.html
>>
>> "Note that the command is invoked by the shell, so if you need to pass
>> any arguments to shell command that come from an untrusted source, you
>> must be careful to strip or escape any special characters that might
>> have a special meaning for the shell. For security reasons, it is best
>> to use a fixed command string, or at least avoid passing any user input
>> in it."
>>
>> "Similarly, the command specified with PROGRAM is executed directly by
>> the server, not by the client application, must be executable by the
>> PostgreSQL user. COPY naming a file or command is only allowed to
>> database superusers or users who are granted one of the default roles
>> pg_read_server_files, pg_write_server_files, or
>> pg_execute_server_program, since it allows reading or writing any file
>> or running a program that the server has privileges to access."
>>
>> Those seem reasonable to me?
>
>
> Yes, but I think that the use of the phrase "default roles" here is 
> unfortunate.  I know it means that the role exists by default, but it is easy 
> to read that to mean they are granted by default.  They should probably be 
> called something like 'built-in roles' or 'system roles'.
>
> And even with the understanding that we are referring to existence, not grant 
> status, "default roles" is still not really correct. If it exists by default, 
> that means I can make it not exist by taking action.  But these roles cannot 
> be dropped.
>
> We don't have 'default functions' or 'default types' in the user-facing 
> documentation.  We shouldn't call these 'default roles'.
>

As someone who likes to break systems in interesting ways, I do find
it interesting that you can actually remove all superuser roles and/or
the superuser bit from all roles (not that I would recommend that to
anyone) but that these roles cannot be removed without some serious
heavy lifting.

Given that, I think I would tend to agree, describing them more
consistently as "system roles" is probably warranted.

Robert Treat
https://xzilla.net
https://credativ.com




Re: 9.6.11- could not truncate directory "pg_serial": apparent wraparound

2019-04-08 Thread Thomas Munro
On Sun, Apr 7, 2019 at 2:31 AM Pavel Suderevsky  wrote:
> Probably if you advise me what could cause "pg_serial": apparent wraparound 
> messages I would have more chances to handle all the performance issues.

9.6 has this code:

    /*
     * Give a warning if we're about to run out of SLRU pages.
     *
     * slru.c has a maximum of 64k segments, with 32 (SLRU_PAGES_PER_SEGMENT)
     * pages each. We need to store a 64-bit integer for each Xid, and with
     * default 8k block size, 65536*32 pages is only enough to cover 2^30
     * XIDs. If we're about to hit that limit and wrap around, warn the user.
     *
     * To avoid spamming the user, we only give one warning when we've used 1
     * billion XIDs, and stay silent until the situation is fixed and the
     * number of XIDs used falls below 800 million again.
     *
     * XXX: We have no safeguard to actually *prevent* the wrap-around,
     * though. All you get is a warning.
     */
    if (oldSerXidControl->warningIssued)
    {
        TransactionId lowWatermark;

        lowWatermark = tailXid + 800000000;
        if (lowWatermark < FirstNormalTransactionId)
            lowWatermark = FirstNormalTransactionId;
        if (TransactionIdPrecedes(xid, lowWatermark))
            oldSerXidControl->warningIssued = false;
    }
    else
    {
        TransactionId highWatermark;

        highWatermark = tailXid + 1000000000;
        if (highWatermark < FirstNormalTransactionId)
            highWatermark = FirstNormalTransactionId;
        if (TransactionIdFollows(xid, highWatermark))
        {
            oldSerXidControl->warningIssued = true;
            ereport(WARNING,
                    (errmsg("memory for serializable conflict tracking is nearly exhausted"),
                     errhint("There might be an idle transaction or a forgotten prepared transaction causing this.")));
        }
    }

Did you see that warning at some point before the later error?

I think if you saw that warning, and then later the error you
reported, it's probably just being prudent and avoiding the truncation
because it detects a potential wraparound.  If it actually does wrap
around, then there is a potential for
OldSerXidGetMinConflictCommitSeqNo() to report a too-recent minimum
conflict CSN for a given XID, and I'm not sure what consequence that
would have (without drinking a lot more coffee), but potentially some
kind of incorrect answer.  On server restart the problem fixes itself
because pg_serial is only used to spill state relating to transactions
running in this server lifetime.

I wonder if this condition required you to have a serializable
transaction running (or prepared) while you consume 2^30 AKA ~1
billion xids.  I think it is unreachable in v11+ because commit
e5eb4fa8 allowed for more SLRU pages to avoid this artificially early
wrap.

Gee, it'd be nice to use FullTransactionId for SERIALIZABLEXACT and
pg_serial in v13 and not to even have to think about wraparound here.
It's more doable here than elsewhere because the data on disk isn't
persistent across server restart, let alone pg_upgrade.  Let's see...
each segment file is 256kb and we need to be able to address 2^64 *
sizeof(SerCommitSequenceNumber), so you'd have segment files numbered
from 0 up to 2^49 - 1 (so you'd need slru.c to support 13 char
segment names and 64 bit segment numbers, whereas it currently has a
limit of 6 in SlruScanDirectory and uses int for segment number).
You'd be addressing them by FullTransactionId, but that's just the
index used to find entries -- the actual amount of data stored
wouldn't change, you'd just start seeing wider filenames, and all the
fragile modulo comparison truncation stuff would disappear from the
tree.
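
(Rough arithmetic behind the 13-character names, assuming 8 bytes per commit
sequence number as in the 9.6 code above: 2^64 entries * 8 bytes = 2^67 bytes
to address; at 256 kB = 2^18 bytes per segment that is 2^67 / 2^18 = 2^49
segments, and the largest segment number, 2^49 - 1, takes 13 hexadecimal
digits.)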

-- 
Thomas Munro
https://enterprisedb.com




Re: 9.6.11- could not truncate directory "pg_serial": apparent wraparound

2019-04-08 Thread Thomas Munro
On Tue, Apr 9, 2019 at 12:14 PM Thomas Munro  wrote:
> It's more doable here than elsewhere because the data on disk isn't
> persistent across server restart, let alone pg_upgrade.  Let's see...
> each segment file is 256kb and we need to be able to address 2^64 *
> sizeof(SerCommitSequenceNumber), so you'd have segment files numbered
> from 0 up to 1 (so you'd need slru.c to support 13 char
> segment names and 64 bit segment numbers, whereas it currently has a
> limit of 6 in SlruScanDirectory and uses int for segment number).

Come to think of it, even for the persistent ones, pg_upgrade could
rename them to a new scheme anyway as long as the segment size remains
the same.  I'd be inclined to ditch the current segment numbering
scheme and switch to one where the FullTransactionId of the first
entry in the segment is used for its name, so that admins can more
easily understand what the files under there correspond to.

-- 
Thomas Munro
https://enterprisedb.com




Re: Fwd: Postgresql with nextcloud in Windows Server

2019-04-08 Thread 김준형
Thanks for your answers.

1) The first time, I set that value to 200. I thought it was a connection
issue, so I increased max_connections to 1200.
After I saw this email, I found that max_connections is related to
shared_buffers.
My configuration sets shared_buffers = 192GB because the PostgreSQL
documentation says that setting shared_buffers to a quarter of memory is good.

2) How do I look into that? I searched for a PostgreSQL connection pooler
but didn't get a good answer.

3) I looked at pg_stat_activity but saw nothing special.
Sometimes Nextcloud appeared for a short time, like this.

(one pg_stat_activity row, condensed for readability: database abminext,
pid 41836, user oc_abmiadmin, backend_start 2019-04-09 10:35:38.527147+09,
wait_event_type/wait_event Client/ClientRead, state idle, backend_type
"client backend", running the query below)

 SELECT "remote", "share_token", "password", "mountpoint", "owner"
 FROM "oc_share_external"
 WHERE "user" = $1 AND "accepted" = $2


On Mon, Apr 8, 2019 at 11:46 PM, Adrian Klaver wrote:

> On 4/7/19 9:53 PM, 김준형 wrote:
> > Sorry for late but my server works without problem for a while.
> >
> >  > What problem occurs?
> >  > Where is the Windows server?
> >
> > Problem means Windows server doesn't accept new connection and non-admin
> > connection.
> > Only connected admin connection lives.
> > Windows server is not so close on my work space. So I use Windows server
> > with remote connection.
> >
> >  > Not sure you know that if you cannot connect to the Windows server?
> >
> > I know I cannot connect to the Windows server but sometimes I got remote
> > connection of admin when occurred connection problem.
> > That time, I had tried to stop the PostgreSQL server but It didn't work.
> >
> >
> > At last, I leave logs what I got.
> >
> > 1) The Windows system logs
> >
> > 1096 Process closing UDP socket with local port number 64347 is running
> > longer than expected. The local port number can not be used until the
> > close operation is completed. This problem is usually caused by
> > malfunctioning network drivers. Make sure you have the latest updates
> > for third-party networking software, including Windows and NIC drivers,
> > firewalls, and other security products.
>
> I rarely use Windows anymore so I am not sure how to interpret the
> above. Some searching found that it is often helpful to look in the
> Event Log over the same time period.  Postgres does use UDP, so it may
> be related I am just not sure how?
>
> More below.
>
> >
> > 2) PostgreSQL logs
> >
> > ...
> > 2019-04-06 04:34:03.984 KST [129704] LOG:  connection received:
> > host=128.1.99.51 port=40602
> > 2019-04-06 04:34:03.990 KST [129704] LOG:  connection authorized:
> > user=oc_abmiadmin database=abminext
> > 2019-04-06 04:34:05.117 KST [129720] LOG:  connection received:
> > host=128.1.99.51 port=40604
> > 2019-04-06 04:34:05.125 KST [129720] LOG:  connection authorized:
> > user=oc_abmiadmin database=abminext
> > 2019-04-06 04:34:05.125 KST [129720] FATAL:  remaining connection slots
> > are reserved for non-replication superuser connections
> > 2019-04-06 04:34:05.171 KST [129736] LOG:  connection received:
> > host=128.1.99.51 port=40606
> > 2019-04-06 04:34:05.179 KST [129736] LOG:  connection authorized:
> > user=oc_abmiadmin database=abminext
> > 2019-04-06 04:34:05.179 KST [129736] FATAL:  remaining connection slots
> > are reserved for non-replication superuser connections
> > ...
> >
> > p.s) PostgreSQL max_connections =1200. Is there problem in here?
>
> Yeah, it looks like you are maxing out the connections. The overhead of
> maintaining 1200 connections is probably a contributing factor to your
> issues. So:
>
> 1) Why is 1200 con