Query Tuning

2019-10-01 Thread Sonam Sharma
We have a query which is running slow; it's taking 26 seconds to complete.
We have run the analyzer as well and it's still taking the same time.

Is there any tool for query optimization, or any suggestions?

My query plan looks like this:

CTE Scan on approvalwflscreen  (cost=8736.21..8737.25 rows=52 width=1214)
  CTE constants
    ->  Result  (cost=0.00..0.01 rows=1 width=44)
  CTE approval
    ->  Sort  (cost=7793.89..7805.22 rows=4530 width=292)
          Sort Key: apv_1.t616_vbu_nbr, apv_1.t617_fnc_typ_cd, apv_1.t8071_cai_ivo_id, apv_1.t8071_add_dm
          ->  WindowAgg  (cost=0.00..7518.80 rows=4530 width=292)
                ->  Nested Loop  (cost=0.00..7450.85 rows=4530 width=72)
                      Join Filter: ((apv_1.t8118_apv_sts_cd IS NULL) OR (((apv_1.t8118_apv_sts_cd = con.dummy) OR (apv_1.t8118_apv_sts_cd = con.t8118_rejected) OR (apv_1.t8118_apv_sts_cd = con.t8118_approved) OR (apv_1.t8118_apv_sts_cd = con.t8118_pending)) AND ((apv_1.t8130_apv_job_lvl_cd = con.t8130_deflt) OR (apv_1.t8130_apv_job_lvl_cd = con.t8130_processor) OR (apv_1.t8130_apv_job_lvl_cd = con.t8130_assistant_mgr) OR (apv_1.t8130_apv_job_lvl_cd = con.t8130_manager) OR (apv_1.t8130_apv_job_lvl_cd = con.t8130_vp) OR (apv_1.t8130_apv_job_lvl_cd = con.t8130_read_only)) AND (SubPlan 2)))
                      ->  CTE Scan on constants con  (cost=0.00..0.02 rows=1 width=42)
                      ->  Seq Scan on t8119_cai_ivo_apv_wfl apv_1  (cost=0.00..268.18 rows=9818 width=72)
                      SubPlan 2
                        ->  Nested Loop  (cost=0.29..3913.17 rows=9507 width=0)
                              ->  Seq Scan on t8071_cai_ivo_hdr hdr  (cost=0.00..457.98 rows=9760 width=37)
                                    Filter: (ivo_sts_cd = ANY (ARRAY[con.dummy, con.t8070_rejct, con.t8070_pndap, con.t8070_aprvd, con.t8070_pndps, con.t8070_cmplt, con.t8070_rdpmt, con.t8070_stgap, con.t8070_cmeim, con.t8070_pndrv, con.t8070_delet, con.t8070_cncld]))
                              ->  Index Only Scan using t8119i0 on t8119_cai_ivo_apv_wfl apv  (cost=0.29..0.34 rows=1 width=37)
                                    Index Cond: ((t616_vbu_nbr = hdr.t616_vbu_nbr) AND (t617_fnc_typ_cd = hdr.t617_fnc_typ_cd) AND (t8071_cai_ivo_id = hdr.t8071_cai_ivo_id) AND (t8071_add_dm = hdr.t8071_add_dm))
  CTE maxapproval
    ->  Sort  (cost=149.09..150.22 rows=453 width=12)
          Sort Key: apv_2.joinkey
          ->  HashAggregate  (cost=124.58..129.11 rows=453 width=12)
                Group Key: apv_2.joinkey, apv_2.t8119_apv_seq_nbr
                ->  CTE Scan on approval apv_2  (cost=0.00..90.60 rows=4530 width=10)
  CTE header
    ->  Limit  (cost=508.37..649.77 rows=1 width=618)
          ->  Nested Loop  (cost=508.37..649.77 rows=1 width=618)
                Join Filter: ((hdr_1.ivo_sts_cd = con_1.dummy) OR (hdr_1.ivo_sts_cd = con_1.t8070_rejct) OR (hdr_1.ivo_sts_cd = con_1.t8070_pndap) OR (hdr_1.ivo_sts_cd = con_1.t8070_aprvd) OR (hdr_1.ivo_sts_cd = con_1.t8070_pndps) OR (hdr_1.ivo_sts_cd = con_1.t8070_cmplt) OR (hdr_1.ivo_sts_cd = con_1.t8070_rdpmt) OR (hdr_1.ivo_sts_cd = con_1.t8070_stgap) OR (hdr_1.ivo_sts_cd = con_1.t8070_cmeim) OR (hdr_1.ivo_sts_cd = con_1.t8070_pndrv) OR (hdr_1.ivo_sts_cd = con_1.t8070_delet) OR (hdr_1.ivo_sts_cd = con_1.t8070_cncld))
                ->  Hash Join  (cost=508.37..646.53 rows=1 width=126)
                      Hash Cond: ((apv_3.t616_vbu_nbr = hdr_1.t616_vbu_nbr) AND (apv_3.t617_fnc_typ_cd = hdr_1.t617_fnc_typ_cd) AND (apv_3.t8071_cai_ivo_id = hdr_1.t8071_cai_ivo_id) AND (apv_3.t8071_add_dm = hdr_1.t8071_add_dm))
                      ->  CTE Scan on approval apv_3  (cost=0.00..90.60 rows=4530 width=114)
                      ->  Hash  (cost=306.79..306.79 rows=10079 width=118)
                            ->  Seq Scan on t8071_cai_ivo_hdr hdr_1  (cost=0.00..306.79 rows=10079 width=118)
                ->  CTE Scan on constants con_1  (cost=0.00..0.02 rows=1 width=


Re: pg12 rc1 on CentOS8 depend python2

2019-10-01 Thread Christoph Berg
Re: Devrim Gündüz 2019-09-30 
<21705bb57210f01b559ec2f5de8550df586324e2.ca...@gunduz.org>
> I think postgresql-contrib-py3 is really the best idea at this point,
> otherwise I cannot see a clean way to make this without breaking
> existing installations.

Users of these (now contrib) modules need to have
postgresql12-plpython3 installed anyway, so it's unlikely you'd be
breaking anyone's installation.

> I cannot move those contrib modules under the plpython* subpackage -- I
> mean, I *can*, but then the installations will break on the next upgrade.
> OTOH, I can add Requires: -contrib for the plpython package, but then it
> will be the same problem.

There must be a way to move a file from one package to another?
(In .deb, this requires
Replaces: sourcedeb (<< newversion)
Conflicts: sourcedeb (<< newversion)
in the target package.)
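
Filled out, that recipe might look like the following in the target
package's debian/control stanza -- a sketch only, with hypothetical package
names and version:

Package: postgresql-12-plpython3
Replaces: postgresql-12-contrib (<< 12.0-2)
Conflicts: postgresql-12-contrib (<< 12.0-2)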

Christoph




Re: Query Tuning

2019-10-01 Thread Kyotaro Horiguchi
Hello.

At Tue, 1 Oct 2019 12:42:24 +0530, Sonam Sharma  wrote in 

> We have a query which is running slow; it's taking 26 seconds to complete.
> We have run the analyzer as well and it's still taking the same time.
> 
> Is there any tool for query optimization, or any suggestions?

EXPLAIN ANALYZE (not just EXPLAIN) would be that. In many cases
where a query takes an unexpectedly long time, the row estimate in
some nodes is largely different from the actual rows. That
leads to a wrong query plan. EXPLAIN ANALYZE may give you a clue
to that kind of problem.

The following output comes from plain EXPLAIN. I suppose that you
already have a similar output with an "(actual time=...)" clause
after the "(cost=...)" clause.
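
For illustration, one way to capture that richer output -- a sketch only,
with a hypothetical filter on one of the tables from the plan:

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM t8071_cai_ivo_hdr WHERE ivo_sts_cd = 'APRVD';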

> My query plan looks like this:
>
> CTE Scan on approvalwflscreen  (cost=8736.21..8737.25 rows=52 width=1214)
>   CTE constants
>     ->  Result  (cost=0.00..0.01 rows=1 width=44)
>   CTE approval
>     ->  Sort  (cost=7793.89..7805.22 rows=4530 width=292)

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




fetch time included in pg_stat_statements?

2019-10-01 Thread Ayub M
Does pg_stat_statements.total_time include the time it takes for all
fetches of a cursor query, or is it only the DB time taken to execute the
query?
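
For reference, a quick way to look at the column in question -- a sketch,
assuming the extension is installed (total_time is the pre-PG13 column name):

SELECT query, calls, total_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;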

-- 
Regards,
Ayub


Schema dump/restore not restoring grants on the schema

2019-10-01 Thread Mike Roest
Hi There,
   Just trying to find out if something is intended behaviour. When doing a
schema-filtered pg_dump, the created dump file includes the grants on that
specific schema (in our case a GRANT USAGE to an unprivileged user), but
doing a pg_restore with a -n <schema> does not restore that grant; however,
individual grants on objects within the filtered schema are restored. This
results in our unprivileged user not actually being able to access the
limited number of tables it should be able to, as the GRANT USAGE on the
schema itself is being lost.

example

in template1:
create database backuptest;
create database restoretest;
create role testuser with login password 'password';

in backuptest:
create schema testschema;
create table testschema.stuff (id integer not null);
grant usage on schema testschema to testuser;
grant insert,update,delete,select on testschema.stuff to testuser;

pg_dump -n testschema -d backuptest -U postgres -h localhost -F c -f
test.backup
pg_restore -U postgres -d restoretest -h localhost -n testschema test.backup

In backuptest:
backuptest=# \dn+
                         List of schemas
    Name    |  Owner   |  Access privileges   |      Description
------------+----------+----------------------+------------------------
 public     | postgres | postgres=UC/postgres+| standard public schema
            |          | =UC/postgres         |
 testschema | postgres | postgres=UC/postgres+|
            |          | testuser=U/postgres  |

In restoretest:
restoretest=# \dn+
                         List of schemas
    Name    |  Owner   |  Access privileges   |      Description
------------+----------+----------------------+------------------------
 public     | postgres | postgres=UC/postgres+| standard public schema
            |          | =UC/postgres         |
 testschema | postgres |                      |
(2 rows)

However, the table does have the grant in restoretest:
restoretest=# \z testschema.stuff
                                Access privileges
   Schema   | Name  | Type  |     Access privileges      | Column privileges | Policies
------------+-------+-------+----------------------------+-------------------+----------
 testschema | stuff | table | postgres=arwdDxt/postgres +|                   |
            |       |       | testuser=arwd/postgres     |                   |
(1 row)


This behaviour seems counterintuitive: unless I'm passing --no-acl on the
backup or restore, I would expect the grants on the schema to come along
as well.

We've observed this behaviour with the 9.5, 10, and 11 client tools.

Thanks

-- 
Data's inconvenient when people have opinions.


Re: "Failed to connect to Postgres database" : No usage specified for certificate (update)

2019-10-01 Thread Adrian Klaver

On 9/30/19 9:21 AM, Marco Ippolito wrote:

Hi Adrian,
important update.

After adding the following in fabric-ca-server-config.yaml:

ca:
   # Name of this CA
   name: fabric_ca
   # Key file (is only used to import a private key into BCCSP)
   keyfile: /etc/ssl/private/fabric_ca.key
   # Certificate file (default: ca-cert.pem)
   certfile: /etc/ssl/certs/fabric_ca.pem
   # Chain file
   chainfile:

Now I get this message:

(base) marco@pc:~/fabric/fabric-ca$ fabric-ca-server init -b admin:adminpw
2019/09/30 18:10:41 [INFO] Configuration file location: 
/home/marco/fabric/fabric-ca/fabric-ca-server-config.yaml

2019/09/30 18:10:41 [INFO] Server Version: 1.4.4
2019/09/30 18:10:41 [INFO] Server Levels: &{Identity:2 Affiliation:1 
Certificate:1 Credential:1 RAInfo:1 Nonce:1}

2019/09/30 18:10:41 [INFO] The CA key and certificate files already exist
2019/09/30 18:10:41 [INFO] Key file location: /etc/ssl/private/fabric_ca.key
2019/09/30 18:10:41 [INFO] Certificate file location: 
/etc/ssl/certs/fabric_ca.pem
2019/09/30 18:10:41 [FATAL] Initialization failure: Validation of 
certificate and key failed: Invalid certificate in file 
'/etc/ssl/certs/fabric_ca.pem': No usage specified for certificate


This is the start of /etc/ssl/certs/fabric_ca.pem:

-----BEGIN CERTIFICATE-----
MIIDlTCCAn2gAwIBAgIUCm243lybs0PNfAEdgbuw0chmjWkwDQYJKoZIhvcNAQEL

and this is its end:
xNItFJulgsA1
-----END CERTIFICATE-----

What does "No usage specified for certificate" mean?



I have no idea. Per my post upstream, I would test your Postgres setup
first, without bringing in the fabric server:


psql "host=localhost port=5433 dbname=fabmnet_ca user=postgres
sslmode=require"

Changing sslmode to whatever you need.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Schema dump/restore not restoring grants on the schema

2019-10-01 Thread Tom Lane
Mike Roest  writes:
>    Just trying to find out if something is intended behaviour. When doing a
> schema-filtered pg_dump, the created dump file includes the grants on that
> specific schema (in our case a GRANT USAGE to an unprivileged user), but
> doing a pg_restore with a -n <schema> does not restore that grant; however,
> individual grants on objects within the filtered schema are restored. This
> results in our unprivileged user not actually being able to access the
> limited number of tables it should be able to, as the GRANT USAGE on the
> schema itself is being lost.

Hm.  The pg_dump man page says

-n pattern
--schema=pattern

Dump only schemas matching pattern; this selects both the schema
itself, and all its contained objects.

while pg_restore says

-n schema
--schema=schema

Restore only objects that are in the named schema.

and the actual behavior seems consistent with that: pg_dump emits both
a CREATE SCHEMA command and GRANTs for it, while pg_restore emits
neither.

So I guess this is working as documented, but it does seem not very
nice that the two programs interpret the "same" switch differently.
I suppose the reasoning is lost in the mists of time :-(

Another thing that is not very nice is that pg_restore lacks the
ability to use patterns (wildcards) here.  Someday maybe somebody
will get around to fixing that.  I could see changing the definition
of -n to include the schema itself at the same time.

regards, tom lane




Re: Schema dump/restore not restoring grants on the schema

2019-10-01 Thread Mike Roest
Thanks for the reply Tom,
   We're going to look at removing the filtering on the pg_restore, which I
think should allow us to move forward since we have the pg_dump already
filtered.
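
For reference, that just means running the earlier restore without the -n
switch (same example dump file as above):

pg_restore -U postgres -d restoretest -h localhost test.backup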

--Mike


Support for SLES 15 and PostgreSQL 11.x

2019-10-01 Thread Steve Williams
Currently PostgreSQL 11.x can be installed on openSUSE 15 via the Postgres
channel.  When will SLES 15 itself be supported?


Thanks
Steve







Re: Schema dump/restore not restoring grants on the schema

2019-10-01 Thread Adrian Klaver

On 10/1/19 7:53 AM, Tom Lane wrote:

Mike Roest  writes:

Just trying to find out if something is intended behaviour. When doing a
schema-filtered pg_dump, the created dump file includes the grants on that
specific schema (in our case a GRANT USAGE to an unprivileged user), but
doing a pg_restore with a -n <schema> does not restore that grant; however,
individual grants on objects within the filtered schema are restored. This
results in our unprivileged user not actually being able to access the
limited number of tables it should be able to, as the GRANT USAGE on the
schema itself is being lost.


Hm.  The pg_dump man page says

-n pattern
--schema=pattern

 Dump only schemas matching pattern; this selects both the schema
 itself, and all its contained objects.

while pg_restore says

-n schema
--schema=schema

 Restore only objects that are in the named schema.

and the actual behavior seems consistent with that: pg_dump emits both
a CREATE SCHEMA command and GRANTs for it, while pg_restore emits
neither.

So I guess this is working as documented, but it does seem not very
nice that the two programs interpret the "same" switch differently.
I suppose the reasoning is lost in the mists of time :-(


Some fooling around on my part found:

pg_restore -d test -U postgres -n utility utility_schema.out

pg_restore: [archiver (db)] could not execute query: ERROR:  schema 
"utility" does not exist


test_(postgres)# create schema utility;
CREATE SCHEMA
test_(postgres)# \dn+ utility
   List of schemas
  Name   |  Owner   | Access privileges | Description
-+--+---+-
 utility | postgres |   |
(1 row)

pg_restore -d test -U postgres -n utility utility_schema.out

test_(postgres)# \dn+ utility
   List of schemas
  Name   |  Owner   | Access privileges | Description
-+--+---+-
 utility | postgres |   |
(1 row)

test_(postgres)# drop schema utility cascade;


pg_restore -d test -U postgres  utility_schema.out

test_(postgres)# \dn+ utility
 List of schemas
  Name   |  Owner   |   Access privileges   | Description
-+--+---+-
 utility | postgres | postgres=UC/postgres +|
 |  | production=U/postgres |
(1 row)


It looks to me like the -n argument on restore is for restoring the objects
into an existing schema. Leaving it off restores the schema and the objects.
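
Building on that, a hedged workaround for the original report would be to
pre-create the schema and its grant by hand before the filtered restore
(names taken from the earlier example):

create schema testschema;
grant usage on schema testschema to testuser;

pg_restore -U postgres -d restoretest -h localhost -n testschema test.backup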






Another thing that is not very nice is that pg_restore lacks the
ability to use patterns (wildcards) here.  Someday maybe somebody
will get around to fixing that.  I could see changing the definition
of -n to include the schema itself at the same time.

regards, tom lane





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Schema dump/restore not restoring grants on the schema

2019-10-01 Thread Adrian Klaver

On 10/1/19 8:49 AM, Mike Roest wrote:

Thanks for the reply Tom,
    We're going to look at removing the filtering on the pg_restore,
which I think should allow us to move forward since we have the pg_dump
already filtered.


It will. If you want to verify, do:

pg_restore -f testschema.txt test.backup

That will give you a plain text version of the restore.



--Mike




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Behaviour adding a column with and without a default (prior to PG11)

2019-10-01 Thread Joe Horsnell
Hi Tom,

Just thinking about this further, there are other areas where Postgres 
(correctly, IMO) deviates from the SQL spec and clarifies that in the docs.

For example,
https://www.postgresql.org/docs/10/datatype-numeric.html#DATATYPE-NUMERIC-DECIMAL
states that for a NUMERIC with no precision specified, "The SQL standard
requires a default scale of 0, i.e., coercion to integer precision. We find
this a bit useless," so instead Postgres treats it as arbitrary precision
(up to the implementation limit).

Here, the difference in behaviour (i.e. adding a nullable column with no
default and then changing the default, vs. adding a nullable column with a
default) is due to adherence to the spec, but there is no clarifying comment
in the docs saying so. Would you consider a documentation patch to clarify
this point?
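
For concreteness, a sketch of the two approaches on a hypothetical table t,
with the pre-PG11 behaviour described above:

-- Rewrites every row of the table to store the new value:
ALTER TABLE t ADD COLUMN c integer DEFAULT 0;

-- Avoids the rewrite: existing rows keep NULL in c, only the catalog
-- changes, and the default applies to rows inserted from now on:
ALTER TABLE t ADD COLUMN c integer;
ALTER TABLE t ALTER COLUMN c SET DEFAULT 0;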

Thanks again,

Joe.

PS. Apologies for the annoying disclaimer on my initial email -- I tried to
stop it!

On 30/09/2019, 22:19, "Tom Lane"  wrote:

Joe Horsnell  writes:
> The documentation (https://www.postgresql.org/docs/10/ddl-alter.html) 
clearly states that adding a column with a default requires updating all the 
rows in the table, to store the new column value (this is prior to PG11, 
obviously).

> Obviously the desired effect of adding the column default without 
rewriting all the rows in the table can be achieved by adding the nullable 
column first with no default, then changing the default, but out of curiosity; 
was there a specific technical reason for the behaviour described above, or was 
it a conscious design choice?

We read the SQL spec as requiring this behavior.

regards, tom lane





Users, Roles and Connection Pooling

2019-10-01 Thread Matt Andrews
Hey all,

Here’s a question I’ve been asking for a while and just can’t find an
answer to, so I thought I’d ask it here. The answer could be subjective,
but here goes...

When a web app connects to Postgres via a connection pooler, what is the
best way to manage privileges for the connecting user? Especially when
there is a complex hierarchy of privileges?

Should each signed-up user have their own role, which inherits from
whichever roles they are members of? Does this mean that the connection
pool role should then be granted to every user?

Does creating a Postgres role for every app user scale? Roles only have
names; is an email address a good name for individual app users?

Are there any resources out there that anyone knows of that address these
questions?

There's plenty of stuff out there on roles and permissions -- I've read a
lot of it -- but not much on dealing with individual app users through a
connection pool.

I’m thinking that any queries should start with SET ROLE ‘username’ and end
with RESET ROLE. Is this how it could work?
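
A minimal sketch of that pattern, assuming each app-user role has been
granted to the pool's login role so that it is allowed to SET ROLE to them
(the role names are hypothetical):

-- The pooler holds one connection, logged in as the pool role.
-- At the start of a request, adopt the end user's role (double-quoted,
-- since an email-style name needs quoting as an identifier):
SET ROLE "alice@example.com";

-- ... run the request's queries under that user's privileges ...

-- Before handing the connection back to the pool:
RESET ROLE;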

Any help would be greatly appreciated.


-- 
Matt Andrews

0400 990 131


Re: pg_receivexlog or archive_command

2019-10-01 Thread Stephen Frost
Greetings,

* Vikas Sharma (shavi...@gmail.com) wrote:
> I am wondering which one is the best way to archive the xlogs for Backup
> and Recovery - pg_receivexlog or archive_command.
> 
> pg_receivexlog seems best suited because the copied/archived file is
> streamed as it is being written to in xlog while archive_command only
> copies when the WAL is fully written to.

This really ends up depending on what your WAL volume is.  As mentioned,
you really don't want to write your own utility for archive_command,
since you really want to make sure that the WAL has actually been
archived and sync'd -- so definitely use an existing tool which does that
for you.  The downside of pg_receivewal when it comes to high WAL volume
is that a single-threaded process just simply can't keep up, while
something being called from archive_command can be parallelized.
Perhaps, down the road, there'll be a tool that could parallelize
streaming of WAL also, though it would be pretty tricky to get right,
and if you're doing that much WAL, is it really an issue that it's
already chunked up nicely for archive_command..?

Note that there is also archive_timeout which you can set, to make sure
that you don't go too long with writes on the primary that haven't been
sent to the archive and stored.  If your transactions are particularly
valuable, then having a synchronous standby setup (likely with two
replicas in a quorum-based sync setup) is probably the direction to
go in, so you can have a highly available environment.  Anything that's
async will mean you have a good chance of having some data loss if
things go wrong (even with pg_receivewal..).
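
As a hedged illustration of those two knobs in postgresql.conf (the standby
names are hypothetical):

# Switch WAL segments at least once a minute, so a mostly-idle primary
# still ships recent WAL to the archive:
archive_timeout = 60

# Quorum-based sync: any one of the two listed replicas must confirm
# before a commit returns:
synchronous_standby_names = 'ANY 1 (standby_a, standby_b)'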

Thanks,

Stephen




Re: pg_receivexlog or archive_command

2019-10-01 Thread Stephen Frost
Greetings,

* Peter Eisentraut (peter.eisentr...@2ndquadrant.com) wrote:
> On 2019-09-23 10:25, Vikas Sharma wrote:
> > I am wondering which one is the best way to archive the xlogs for Backup
> > and Recovery - pg_receivexlog or archive_command.
> 
> I recommend using pg_receivexlog.  It has two important advantages over
> archive_command:  1) You can have multiple instances of pg_receivexlog
> running and copying things to different places.  This is complicated to
> do correctly with archive_command.  2) pg_receivexlog will fsync the
> files it writes.  This is also complicated to do correctly with
> archive_command.

Yes, it definitely is difficult to write your own archive_command, in
general, so, please, just don't.  Use one of the existing tools that
have been well tested and written specifically to work with PG and to
provide the guarantees that an archive command should.
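
For completeness, a minimal sketch of the streaming approach (host, user,
slot name, and directory are hypothetical; the tool is named pg_receivewal
from PG 10 on, pg_receivexlog before that):

# Create the physical replication slot once:
pg_receivewal -h primary.example.com -U replicator --slot=archive_slot --create-slot

# Then stream WAL into the archive directory as it is written:
pg_receivewal -h primary.example.com -U replicator --slot=archive_slot -D /var/lib/pgwal-archive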

Thanks,

Stephen

