RE: Query Crashes PG 10.3 using partitions, works on 9.6

2018-03-28 Thread Kumar, Virendra
It would be nice if you could attach the explain plan; of course, EXPLAIN ANALYZE 
is not going to work if the server is crashing.


Regards,
Virendra

From: Cory Tucker [mailto:cory.tuc...@gmail.com]
Sent: Wednesday, March 28, 2018 5:49 PM
To: pgsql-gene...@postgresql.org
Subject: Query Crashes PG 10.3 using partitions, works on 9.6

Hey guys, I am in the middle of testing out a database migration from 9.6 to 
10.3.  We have a quasi-multi-tenant application and so are utilizing 
native partitions on some relations to help improve performance.

I was issuing a query on both databases to clean up some duplicates in 
preparation for applying new indexes.  On the 9.6 database, with all the data in 
one table, the query runs fine in about 6 min.  On 10.3, with a work_mem 
setting of 1GB, the query runs for about 7 minutes and then gets terminated with 
an out-of-memory error.  If I bump work_mem up fairly high (12GB out of 
52GB available), the server actually crashes.

On both databases the total dataset size is exactly the same, ~29M 
records.  The table looks like this:

              Table "candidate_person"
    Column    |           Type           | Collation | Nullable | Default
--------------+--------------------------+-----------+----------+---------
 created_at   | timestamp with time zone |           | not null | now()
 modified_at  | timestamp with time zone |           | not null | now()
 account_id   | bigint                   |           | not null |
 candidate_id | character varying(40)    | C         | not null |
 first_name   | text                     |           |          |
 middle_name  | text                     |           |          |
 last_name    | text                     |           |          |
 spouse       | boolean                  |           | not null | false
 suffix       | text                     |           |          |
 salutation   | text                     |           |          |
Partition key: LIST (account_id)

The only difference is that on 9.6 there obviously aren't any partitions, while 
on 10.3 there are ~250 partition tables.

I have attached the server log that shows the first out of memory and then the 
server crash and recovery.





RE: Query Crashes PG 10.3 using partitions, works on 9.6

2018-03-28 Thread Kumar, Virendra
Hi Cory,

You are running the query (a DELETE statement) as if the table were not 
partitioned, which is causing the server to crash.
Please run that query for each partition separately, in a loop with a dynamic 
query, and you should see the improvement; a sketch follows. It should be 
pretty quick.
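A minimal sketch of that loop, assuming PG 10 native partitions; the inner 
DELETE treats rows sharing account_id and candidate_id as duplicates, which is 
only an assumption, so substitute your actual cleanup statement:
--
DO $$
DECLARE
    part regclass;
BEGIN
    -- pg_inherits lists every partition of a natively partitioned table
    FOR part IN
        SELECT inhrelid::regclass
          FROM pg_inherits
         WHERE inhparent = 'candidate_person'::regclass
    LOOP
        -- delete all but one row of each duplicate group, partition by partition
        EXECUTE format(
            'DELETE FROM %s a USING %s b
              WHERE a.ctid < b.ctid
                AND a.account_id = b.account_id
                AND a.candidate_id = b.candidate_id', part, part);
    END LOOP;
END
$$;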

Regards,
Virendra.

From: Kumar, Virendra
Sent: Wednesday, March 28, 2018 5:57 PM
To: Cory Tucker; pgsql-gene...@postgresql.org
Subject: RE: Query Crashes PG 10.3 using partitions, works on 9.6

It would be nice if you could attach the explain plan; of course, EXPLAIN ANALYZE 
is not going to work if the server is crashing.


Regards,
Virendra

From: Cory Tucker [mailto:cory.tuc...@gmail.com]
Sent: Wednesday, March 28, 2018 5:49 PM
To: pgsql-gene...@postgresql.org
Subject: Query Crashes PG 10.3 using partitions, works on 9.6

Hey guys, I am in the middle of testing out a database migration from 9.6 to 
10.3.  We have a quasi-multi-tenant application and so are utilizing 
native partitions on some relations to help improve performance.

I was issuing a query on both databases to clean up some duplicates in 
preparation for applying new indexes.  On the 9.6 database, with all the data in 
one table, the query runs fine in about 6 min.  On 10.3, with a work_mem 
setting of 1GB, the query runs for about 7 minutes and then gets terminated with 
an out-of-memory error.  If I bump work_mem up fairly high (12GB out of 
52GB available), the server actually crashes.

On both databases the total dataset size is exactly the same, ~29M 
records.  The table looks like this:

              Table "candidate_person"
    Column    |           Type           | Collation | Nullable | Default
--------------+--------------------------+-----------+----------+---------
 created_at   | timestamp with time zone |           | not null | now()
 modified_at  | timestamp with time zone |           | not null | now()
 account_id   | bigint                   |           | not null |
 candidate_id | character varying(40)    | C         | not null |
 first_name   | text                     |           |          |
 middle_name  | text                     |           |          |
 last_name    | text                     |           |          |
 spouse       | boolean                  |           | not null | false
 suffix       | text                     |           |          |
 salutation   | text                     |           |          |
Partition key: LIST (account_id)

The only difference is that on 9.6 there obviously aren't any partitions, while 
on 10.3 there are ~250 partition tables.

I have attached the server log that shows the first out of memory and then the 
server crash and recovery.





LDAP Bind Password

2018-04-03 Thread Kumar, Virendra
Hi There,

Is anybody aware of how to encrypt the bind password for LDAP authentication in 
pg_hba.conf? Anonymous bind is disabled in our organization, so we have to use a 
bind ID and password, but keeping them as plaintext in pg_hba.conf defeats 
security purposes. We want to either encrypt it or authenticate without 
binding. Any insight into this is appreciated.

Regards,
Virendra





RE: LDAP Bind Password

2018-04-06 Thread Kumar, Virendra
Thank you Peter!
I figured that out.

Regards,
Virendra


-Original Message-
From: Peter Eisentraut [mailto:peter.eisentr...@2ndquadrant.com]
Sent: Wednesday, April 04, 2018 8:41 PM
To: Kumar, Virendra; pgsql-gene...@postgresql.org
Subject: Re: LDAP Bind Password

On 4/3/18 16:12, Kumar, Virendra wrote:
> Is anybody aware of how to encrypt the bind password for LDAP
> authentication in pg_hba.conf? Anonymous bind is disabled in our
> organization, so we have to use a bind ID and password, but keeping them
> as plaintext in pg_hba.conf defeats security purposes. We want to
> either encrypt it or authenticate without binding. Any insight into this is 
> appreciated.

You can use the "simple bind" method that is described in the documentation.  
That one doesn't involve a second bind step.
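For example, a pg_hba.conf line using simple bind might look like the following; 
the server binds directly as the connecting user, so no search account or stored 
bind password is needed (the hostname and DN parts are placeholders):
--
host    all    all    0.0.0.0/0    ldap ldapserver=ldap.example.com ldapprefix="cn=" ldapsuffix=", dc=example, dc=com"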

--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services







RE: Archiving Data to Another DB?

2018-04-11 Thread Kumar, Virendra
Does this apply to SELECT calls as well, or only to DML?
I am planning to use postgres_fdw, but if it goes one row at a time there 
will be a lot of round trips, defeating the purpose.
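For what it's worth, SELECTs through postgres_fdw are fetched from the remote 
side in batches via a cursor; the batch size is the fetch_size option (default 
100), which can be raised per server or per table. The server and table names 
below are placeholders:
--
ALTER SERVER archive_srv OPTIONS (ADD fetch_size '10000');
ALTER FOREIGN TABLE archived_orders OPTIONS (ADD fetch_size '50000');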

Regards,
Virendra.

From: Don Seiler [mailto:d...@seiler.us]
Sent: Wednesday, April 11, 2018 2:53 PM
To: Adrian Klaver
Cc: pgsql-gene...@postgresql.org
Subject: Re: Archiving Data to Another DB?

On Wed, Apr 11, 2018 at 1:38 PM, Don Seiler <d...@seiler.us> wrote:

Yeah, I saw the same with a 132-row insert. Now imagine that with a monthly 50 
million row insert or delete. :p Thanks for the confirmation!

I went back to look at the postgres logs on my dev server. These logs are 
rotated once they hit 2G in size. One typical log from the middle of my test 
last night hit that in 13 minutes and had over 5.2 million DELETE calls. There 
are quite a few logs like this. That would not be fun for the disk space on the 
log volume, either.

Don.
--
Don Seiler
www.seiler.us





Autovacuum holding exclusive lock on table

2018-05-08 Thread Kumar, Virendra
Hi,

I have a scenario where one of our developers created a table and loaded a huge 
amount of data, and since then autovacuum is not letting us run ALTER TABLE to 
disable it; even TRUNCATE and DROP are locked. Is there anything we can do to 
prevent this and at least disable autovacuum?
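One possible approach, sketched under the assumption that the blocker is a 
regular autovacuum worker (an anti-wraparound one would simply restart); the 
table name is a placeholder:
--
-- find and cancel the autovacuum worker holding the lock
SELECT pg_cancel_backend(pid)
  FROM pg_stat_activity
 WHERE query LIKE 'autovacuum:%bulk_loaded_table%';

-- then, before the launcher picks the table up again, disable autovacuum for it
ALTER TABLE bulk_loaded_table SET (autovacuum_enabled = false);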


Regards,
Virendra





psqlODBC

2018-09-04 Thread Kumar, Virendra
Hi Community,

We are facing a problem using the psqlODBC driver to connect to 
Postgres-XL. To whom should we redirect our queries?
Basically, when using load balancing across the cluster we are seeing the error 
below and the process is failing:
--
"AutomationException: Underlying DBMS error[ERROR: prepared statement 
\"sde_1535573518_38_9763483\" does not exist::SQLSTATE=26000]"

Regards,
Virendra





RE: psqlODBC

2018-09-04 Thread Kumar, Virendra
Thank you Adrian!

Hi ODBC Project Team,

Would you mind looking into this please?


Regards,
Virendra.

-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Tuesday, September 04, 2018 9:47 AM
To: Kumar, Virendra; pgsql-general@lists.postgresql.org
Subject: Re: psqlODBC

On 09/04/2018 06:38 AM, Kumar, Virendra wrote:
> Hi Community,
>
> We are facing a problem using the psqlODBC driver to connect to
> Postgres-XL. To whom should we redirect our queries?

https://www.postgresql.org/list/pgsql-odbc/

>
> Basically, when using load balancing across the cluster we are seeing the
> error below and the process is failing:
>
> --
>
> "AutomationException: Underlying DBMS error[ERROR: prepared statement
> \"sde_1535573518_38_9763483\" does not exist::SQLSTATE=26000]"
>
> Regards,
>
> Virendra
>



--
Adrian Klaver
adrian.kla...@aklaver.com







RE: Why the index is not used ?

2018-10-07 Thread Kumar, Virendra
You can consider encryption outside the DB, which is less of a worry for 
performance, and data at rest will still be encrypted.

Regards,
Virendra
-Original Message-
From: ROS Didier [mailto:didier@edf.fr]
Sent: Sunday, October 07, 2018 2:33 PM
To: fola...@peoplecall.com
Cc: pavel.steh...@gmail.com; pgsql-...@lists.postgresql.org; 
pgsql-performa...@lists.postgresql.org; pgsql-general@lists.postgresql.org
Subject: RE: Why the index is not used ?

Hi Francisco

Thank you for your remark.
You're right, but it's the only procedure I found to search on encrypted 
fields with good response times (using an index)!

Regarding access to the file system, our servers are in protected network 
areas; few people can connect to them.

It's not the best solution, but we have data encryption needs and good 
performance needs too. I do not know how to do it other than with the specified 
procedure.
If anyone has any proposals to put this in place, I'm interested.

Thanks in advance

Best Regards
Didier ROS

-Original Message-
From: fola...@peoplecall.com [mailto:fola...@peoplecall.com]
Sent: Sunday, October 7, 2018 5:58 PM
To: ROS Didier 
Cc: pavel.steh...@gmail.com; pgsql-...@lists.postgresql.org; 
pgsql-performa...@lists.postgresql.org; pgsql-general@lists.postgresql.org
Subject: Re: Why the index is not used ?

ROS:

On Sun, Oct 7, 2018 at 3:13 PM, ROS Didier  wrote:

> -INSERT INTO cartedecredit(username,cc)  SELECT 'individu ' || x.id, 
> pgp_sym_encrypt('test value ' || x.id, 'motdepasse','compress-algo=2, 
> cipher-algo=aes256') FROM generate_series(1,10) AS x(id);
> -CREATE INDEX idx_cartedecredit_cc02 ON 
> cartedecredit(pgp_sym_decrypt(cc, 'motdepasse','compress-algo=2, 
> cipher-algo=aes256'));

If my French is not too rusty, you are encrypting a credit card and then 
storing an UNENCRYPTED copy in the index. So getting it from the server is 
trivial for anyone with filesystem access.

Francisco Olarte.








Possible Values of Command Tag in PG Log file

2019-07-18 Thread Kumar, Virendra
Hello Team,

Can somebody point me to a document which lists all possible values of the 
command tag in the instance log of a PostgreSQL instance?

Regards,
Virendra





RE: Possible Values of Command Tag in PG Log file

2019-07-18 Thread Kumar, Virendra
Here it is:
--
https://www.postgresql.org/docs/10/protocol-message-formats.html

CommandComplete (B): the command-tag strings described in that section.

I loaded the log file (csv format) into a postgres_log table as per the 
document below:
https://www.postgresql.org/docs/10/runtime-config-logging.html
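A condensed sketch of the load step from that page (the postgres_log table DDL 
is given in full there; the file path below is a placeholder):
--
COPY postgres_log FROM '/pgdata/log/postgresql-2019-07-18_000000.csv' WITH csv;

With that loaded, I can see entries like: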

postgres=# select command_tag from postgres_log where command_tag is not null 
limit 5;
  command_tag
----------------
 authentication
 authentication
(5 rows)

Regards,
Virendra


-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Thursday, July 18, 2019 4:55 PM
To: Kumar, Virendra; pgsql-general@lists.postgresql.org
Subject: Re: Possible Values of Command Tag in PG Log file

On 7/18/19 12:58 PM, Kumar, Virendra wrote:
> Hello Team,
>
> Can somebody point me to a document which lists all possible values of the
> command tag in the instance log of a PostgreSQL instance?

Can you show an example of a command tag in the log?

>
> Regards,
>
> Virendra
>
>
> 
>


--
Adrian Klaver
adrian.kla...@aklaver.com







Running a Simple Update Statement Fails, Second Time Succeeds.

2019-09-03 Thread Kumar, Virendra
Here is a simple query I am running via a function call. The statement runs fine 
as plain SQL, but when put in a PL/pgSQL function it fails with the error below; 
when I run it a second time in the same session, it succeeds:
--
product_master_fdw=> \set VERBOSITY verbose
product_master_fdw=>
select sddf_update.tps_update_1(p_pres_id_in=>50241::integer,
          p_last_audit_update_dt_in=>'2019-09-03 12:44:21.356638'::timestamp,
          p_audit_update_user_name_tx_in=>'a...@xyz.com'::character varying,
          p_major_class_name_tx_in=>'TEST0826222'::character varying,
          p_effective_dt_in=>CURRENT_TIMESTAMP::timestamp
       );
ERROR:  XX000: cache lookup failed for type 196609
CONTEXT:  SQL statement "UPDATE product_history.external_sys_class_code_pres
   SET class_code_id =
         CASE WHEN p_class_code_id_in='0.1' THEN class_code_id ELSE p_class_code_id_in END,
       major_classification_cd =
         CASE WHEN p_major_classification_cd_in='0.1' THEN major_classification_cd ELSE p_major_classification_cd_in END,
       major_classification_name_tx =
         CASE WHEN p_major_class_name_tx_in='0.1' THEN major_classification_name_tx ELSE p_major_class_name_tx_in END,
       coverage_short_name_tx =
         CASE WHEN p_coverage_short_name_tx_in='0.1' THEN coverage_short_name_tx ELSE p_coverage_short_name_tx_in END,
       coverage_name_tx =
         CASE WHEN p_coverage_name_tx_in='0.1' THEN coverage_name_tx ELSE p_coverage_name_tx_in END,
       cdt_source_system_cd =
         CASE WHEN p_cdt_source_system_cd_in='0.1' THEN cdt_source_system_cd ELSE p_cdt_source_system_cd_in END,
       cdt_submitting_country_cd =
         CASE WHEN p_cdt_submitting_country_cd_in='0.1' THEN cdt_submitting_country_cd ELSE p_cdt_submitting_country_cd_in END,
       cdt_status_cd =
         CASE WHEN p_cdt_status_cd_in='0.1' THEN cdt_status_cd ELSE p_cdt_status_cd_in END,
       effective_dt =
         CASE WHEN p_effective_dt_in=TO_DATE('01/01/1600', 'mm/dd/yyyy') THEN effective_dt ELSE p_effective_dt_in END,
       expiration_dt =
         CASE WHEN p_expiration_dt_in=TO_DATE('01/01/1600', 'mm/dd/yyyy') THEN expiration_dt ELSE p_expiration_dt_in END,
       audit_insert_user_name_tx =
         CASE WHEN p_audit_insert_user_name_tx_in='0.1' THEN audit_insert_user_name_tx ELSE p_audit_insert_user_name_tx_in END,
       audit_update_dt = CURRENT_TIMESTAMP,
       audit_update_user_name_tx =
         CASE WHEN p_audit_update_user_name_tx_in='0.1' THEN audit_update_user_name_tx ELSE p_audit_update_user_name_tx_in END,
       latest_version_in =
         CASE WHEN p_latest_version_in_in='0' THEN latest_version_in ELSE p_latest_version_in_in END,
       delete_in =
         CASE WHEN p_delete_in_in='0' THEN delete_in ELSE p_delete_in_in END
 WHERE pres_id = p_pres_id_in
   AND audit_update_dt = p_last_audit_update_dt_in"
PL/pgSQL function px_co_pr_pres_pg.spt_update_1(bigint,timestamp without time zone,timestamp without time zone,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,timestamp without time zone) line 7 at SQL statement
LOCATION:  getTypeOutputInfo, lsyscache.c:2681


Regards,
Virendra





RE: Running a Simple Update Statement Fails, Second Time Succeeds.

2019-09-03 Thread Kumar, Virendra
The type ID doesn't seem to exist:

product_master_fdw=> select * from pg_type where oid=196609;
 typname | typnamespace | typowner | typlen | typbyval | typtype | typcategory | typispreferred | typisdefined | typdelim | typrelid | typelem | typarray | typinput | typoutput | typreceive | typsend | typmodin | typmodout | typanalyze | typalign | typstorage | typnotnull | typbasetype | typtypmod | typndims | typcollation | typdefaultbin | typdefault | typacl
---------+--------------+----------+--------+----------+---------+-------------+----------------+--------------+----------+----------+---------+----------+----------+-----------+------------+---------+----------+-----------+------------+----------+------------+------------+-------------+-----------+----------+--------------+---------------+------------+--------
(0 rows)

Regards,
Virendra

From: Kumar, Virendra
Sent: Tuesday, September 03, 2019 4:56 PM
To: pgsql-general@lists.postgresql.org
Subject: Running a Simple Update Statement Fails, Second Time Suceeds.

Here is a simple query I am running via a function call. The statement runs fine 
as plain SQL, but when put in a PL/pgSQL function it fails with the error below; 
when I run it a second time in the same session, it succeeds:
--
product_master_fdw=> \set VERBOSITY verbose
product_master_fdw=>
select sddf_update.tps_update_1(p_pres_id_in=>50241::integer,
          p_last_audit_update_dt_in=>'2019-09-03 12:44:21.356638'::timestamp,
          p_audit_update_user_name_tx_in=>'a...@xyz.com'::character varying,
          p_major_class_name_tx_in=>'TEST0826222'::character varying,
          p_effective_dt_in=>CURRENT_TIMESTAMP::timestamp
       );
ERROR:  XX000: cache lookup failed for type 196609
CONTEXT:  SQL statement "UPDATE product_history.external_sys_class_code_pres
   SET class_code_id =
         CASE WHEN p_class_code_id_in='0.1' THEN class_code_id ELSE p_class_code_id_in END,
       major_classification_cd =
         CASE WHEN p_major_classification_cd_in='0.1' THEN major_classification_cd ELSE p_major_classification_cd_in END,
       major_classification_name_tx =
         CASE WHEN p_major_class_name_tx_in='0.1' THEN major_classification_name_tx ELSE p_major_class_name_tx_in END,
       coverage_short_name_tx =
         CASE WHEN p_coverage_short_name_tx_in='0.1' THEN coverage_short_name_tx ELSE p_coverage_short_name_tx_in END,
       coverage_name_tx =
         CASE WHEN p_coverage_name_tx_in='0.1' THEN coverage_name_tx ELSE p_coverage_name_tx_in END,
       cdt_source_system_cd =
         CASE WHEN p_cdt_source_system_cd_in='0.1' THEN cdt_source_system_cd ELSE p_cdt_source_system_cd_in END,
       cdt_submitting_country_cd =
         CASE WHEN p_cdt_submitting_country_cd_in='0.1' THEN cdt_submitting_country_cd ELSE p_cdt_submitting_country_cd_in END,
       cdt_status_cd =
         CASE WHEN p_cdt_status_cd_in='0.1' THEN cdt_status_cd ELSE p_cdt_status_cd_in END,
       effective_dt =
         CASE WHEN p_effective_dt_in=TO_DATE('01/01/1600', 'mm/dd/yyyy') THEN effective_dt ELSE p_effective_dt_in END,
       expiration_dt =
         CASE WHEN p_expiration_dt_in=TO_DATE('01/01/1600', 'mm/dd/yyyy') THEN expiration_dt ELSE p_expiration_dt_in END,
       audit_insert_user_name_tx =
         CASE WHEN p_audit_insert_user_name_tx_in='0.1' THEN audit_insert_user_name_tx ELSE p_audit_insert_user_name_tx_in END,
       audit_update_dt = CURRENT_TIMESTAMP,
       audit_update_user_name_tx =
         CASE WHEN p_audit_update_user_name_tx_in='0.1' THEN audit_update_user_name_tx ELSE p_audit_update_user_name_tx_in END,
       latest_version_in =
         CASE WHEN p_latest_version_in_in='0' THEN latest_version_in ELSE p_latest_version_in_in END,
       delete_in =
         CASE WHEN p_delete_in_in='0' THEN delete_in ELSE p_delete_in_in END
 WHERE pres_id = p_pres_id_in
   AND audit_update_dt = p_last_audit_update_dt_in"
PL/pgSQL function px_co_pr_pres_pg.spt_update_1(bigint,timestamp without time zone,timestamp without time zone,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,timestamp without time zone) line 7 at SQL statement
LOCATION:  getTypeOutputInfo, lsyscache.c:2681


Regards,
Virendra

___

Re: Running a Simple Update Statement Fails, Second Time Succeeds.

2019-09-03 Thread Kumar, Virendra
Hi Tom,

Sincere apologies for that privacy notice in the email; it is company policy 
which I cannot skip.

Adrian,

The function is really simple. I’ll share the code as soon as I can.

Regards,
Virendra

> On Sep 3, 2019, at 5:41 PM, Tom Lane  wrote:
>
> Adrian Klaver  writes:
>>> On 9/3/19 1:56 PM, Kumar, Virendra wrote:
>>> Here is a simple query I am running via a function call. The statement
>>> runs fine as plain SQL, but when put in a PL/pgSQL function it fails with
>>> the error below; when I run it a second time in the same session, it succeeds:
>
>> We will need to see the function definition.
>
> Also, what PG version is that?  This looks a bit like bug #15913,
> but it might be something else.
>
>>> This message is intended only for the use of the addressee and may contain
>>> information that is PRIVILEGED AND CONFIDENTIAL.
>>>
>>> If you are not the intended recipient, you are hereby notified that any
>>> dissemination of this communication is strictly prohibited. If you have
>>> received this communication in error, please erase all copies of the message
>>> and its attachments and notify the sender immediately. Thank you.
>
> I rather wonder whether I'm even allowed to read this, let alone
> answer it.  You do realize that this sort of add-on is completely
> silly on a public mailing list?
>
>regards, tom lane





RE: Running a Simple Update Statement Fails, Second Time Succeeds.

2019-09-04 Thread Kumar, Virendra
The table definition is below:
--
product_master_fdw=# \d product_owner.external_sys_class_code_pres;
              Table "product_owner.external_sys_class_code_pres"
            Column            |            Type             | Collation | Nullable | Default
------------------------------+-----------------------------+-----------+----------+---------
 pres_id                      | bigint                      |           | not null |
 major_classification_cd      | character varying(10)       |           | not null |
 major_classification_name_tx | character varying(100)      |           | not null |
 coverage_short_name_tx       | character varying(50)       |           |          |
 coverage_name_tx             | character varying(100)      |           | not null |
 cdt_source_system_cd         | character varying(50)       |           | not null |
 cdt_status_cd                | character varying(50)       |           | not null |
 effective_dt                 | timestamp without time zone |           | not null |
 expiration_dt                | timestamp without time zone |           |          |
 audit_insert_dt              | timestamp without time zone |           | not null |
 audit_insert_user_name_tx    | character varying(50)       |           | not null |
 audit_update_dt              | timestamp without time zone |           |          |
 audit_update_user_name_tx    | character varying(50)       |           |          |
 latest_version_in            | character varying(1)        |           |          |
 delete_in                    | character varying(1)        |           |          |
 class_code_id                | character varying(50)       |           | not null |
 cdt_submitting_country_cd    | character varying(50)       |           | not null |
Indexes:
    "external_sys_class_code_pres_pkey" PRIMARY KEY, btree (pres_id)
    "pres_fk_cdt_submitting_ctry_cd" btree (cdt_submitting_country_cd)
Referenced by:
    TABLE "product_owner.cov_category_detail_class_prcc" CONSTRAINT "prcc_fk_pres_detail_class" FOREIGN KEY (pres_id) REFERENCES product_owner.external_sys_class_code_pres(pres_id) DEFERRABLE
    TABLE "product_owner.ext_class_prod_class_map_prcm" CONSTRAINT "prcm_fk_pres_class_code" FOREIGN KEY (pres_id) REFERENCES product_owner.external_sys_class_code_pres(pres_id) DEFERRABLE NOT VALID
    TABLE "product_owner.ext_class_prod_cvg_map_prcv" CONSTRAINT "prcv_fk_pres_id" FOREIGN KEY (pres_id) REFERENCES product_owner.external_sys_class_code_pres(pres_id) DEFERRABLE
    TABLE "product_owner.external_prod_dtl_presd" CONSTRAINT "presd_fk_pres_id" FOREIGN KEY (pres_id) REFERENCES product_owner.external_sys_class_code_pres(pres_id) DEFERRABLE



Regards,
Virendra
-Original Message-
From: Kumar, Virendra
Sent: Tuesday, September 03, 2019 6:09 PM
To: Tom Lane
Cc: Adrian Klaver; pgsql-general@lists.postgresql.org
Subject: Re: Running a Simple Update Statement Fails, Second Time Succeeds.

Hi Tom,

Sincere apologies for that privacy notice in the email; it is company policy 
which I cannot skip.

Adrian,

The function is really simple. I’ll share the code as soon as I can.

Regards,
Virendra

> On Sep 3, 2019, at 5:41 PM, Tom Lane  wrote:
>
> Adrian Klaver  writes:
>>> On 9/3/19 1:56 PM, Kumar, Virendra wrote:
>>> Here is a simple query I am running via a function call. The statement
>>> runs fine as plain SQL, but when put in a PL/pgSQL function it fails with
>>> the error below; when I run it a second time in the same session, it succeeds:
>
>> We will need to see the function definition.
>
> Also, what PG version is that?  This looks a bit like bug #15913,
> but it might be something else.
>
>>> This message is intended only for the use of the addressee and may contain
>>> information that is PRIVILEGED AND CONFIDENTIAL.
>>>
>>> If you are not the intended recipient, you are hereby notified that any
>>> dissemination of this communication is strictly prohibited. If you have
>>> received this communication in error, please erase all copies of the message
>>> and its attachments and notify the sender immediately. Thank you.
>
> I rather wonder whether I'm even allowed to read this, let alone
> answer it.  You do realize that this sort of add-on is completely
> silly on a public mailing list?
>
>regards, tom lane





RE: Running a Simple Update Statement Fails, Second Time Succeeds.

2019-09-11 Thread Kumar, Virendra


RE: why select count(*) consumes wal logs

2018-11-06 Thread Kumar, Virendra
I concur.
Why VACUUM when there are no updates or deletes?

Regards,
Virendra

From: Ron [mailto:ronljohnso...@gmail.com]
Sent: Tuesday, November 06, 2018 12:20 PM
To: pgsql-general@lists.postgresql.org
Subject: Re: why select count(*) consumes wal logs

On 11/06/2018 11:12 AM, Michael Nolan wrote:

On Tue, Nov 6, 2018 at 11:08 AM Ravi Krishna <srkrish...@aol.com> wrote:
PG 10.5

I loaded 133 million rows to a wide table (more than 100 cols) via COPY.

It's always a good idea after doing a large scale data load to do a vacuum 
analyze on the table (or the entire database.)


I understand the need to ANALYZE (to populate the histograms needed by the 
optimizer), but why VACUUM, which is recommended after updates and deletes?

Thanks
--
Angular momentum makes the world go 'round.



This message is intended only for the use of the addressee and may contain
information that is PRIVILEGED AND CONFIDENTIAL.

If you are not the intended recipient, you are hereby notified that any
dissemination of this communication is strictly prohibited. If you have
received this communication in error, please erase all copies of the message
and its attachments and notify the sender immediately. Thank you.


NL Join vs Merge Join - 5 hours vs 2 seconds

2018-12-17 Thread Kumar, Virendra
We are using Postgres-XL based on PostgreSQL 10.5 on RHEL7. We have an extreme 
case of one query running in 4 hours 45 minutes 33 seconds with a Nested 
Loop join vs 2.5 seconds when we disable enable_nestloop. The plan is given 
below. I would be happy to share more information if required.
--

Plan A - NL enabled:

--

Limit  (cost=78922.65..87812.39 rows=1 width=584) (actual time=53476.337..17133622.713 rows=353 loops=1)
  Buffers: shared hit=181422132, temp read=50909275 written=50909273
  ->  GroupAggregate  (cost=78922.65..87812.39 rows=1 width=584) (actual time=53476.334..17133622.035 rows=353 loops=1)
        Group Key: sh."row", sh.col
        Buffers: shared hit=181422132, temp read=50909275 written=50909273
        ->  Nested Loop  (cost=78922.65..87812.20 rows=1 width=536) (actual time=51768.638..17132518.985 rows=51562 loops=1)
              Join Filter: ((se.account_id = pe.account_id) AND (se.peril_id = pe.peril_id))
              Rows Removed by Join Filter: 2403022205
              Buffers: shared hit=181422132, temp read=50909275 written=50909273
              ->  GroupAggregate  (cost=72432.96..72433.14 rows=1 width=368) (actual time=51513.891..52815.004 rows=51567 loops=1)
                    Group Key: sh."row", sh.col, se.portfolio_id, se.peril_id, se.account_id
                    Buffers: shared hit=22235530, temp read=7878 written=7876
                    ->  Sort  (cost=72432.96..72432.96 rows=1 width=324) (actual time=51513.864..51790.397 rows=111621 loops=1)
                          Sort Key: sh."row", sh.col, se.peril_id, se.account_id
                          Sort Method: external merge  Disk: 36152kB
                          Buffers: shared hit=22235530, temp read=7878 written=7876
                          ->  Nested Loop Semi Join  (cost=34681.42..72432.95 rows=1 width=324) (actual time=238.805..51120.362 rows=111621 loops=1)
                                Join Filter: (a.account_id = se1.account_id)
                                Buffers: shared hit=22235530, temp read=3359 written=3353
                                ->  Gather  (cost=34681.00..72431.43 rows=1 width=348) (actual time=238.757..1085.453 rows=111621 loops=1)
                                      Workers Planned: 2
                                      Workers Launched: 2
                                      Buffers: shared hit=421611, temp read=3359 written=3353
                                      ->  Nested Loop  (cost=33681.00..71431.33 rows=1 width=348) (actual time=359.006..926.303 rows=37207 loops=3)
                                            Join Filter: (se.account_id = a.account_id)
                                            Buffers: shared hit=421611, temp read=3359 written=3353
                                            ->  Hash Join  (cost=33680.71..71298.94 rows=395 width=300) (actual time=358.953..656.923 rows=36828 loops=3)
                                                  Hash Cond: ((se.account_id = sh.account_id) AND (se.site_id = sh.site_id))
                                                  Buffers: shared hit=89803, temp read=3359 written=3353
                                                  ->  Parallel Bitmap Heap Scan on site_exposure_p113 se  (cost=2447.82..32716.49 rows=15266 width=276) (actual time=4.980..165.908 rows=36639 loops=3)
                                                        Recheck Cond: (shape && '010320E610010005006EDDDBF40F285FC03D719130D9532340BEE0DBF4BFCE53C03D719130D9532340BEE0DBF4BFCE53C0B2C127D5E26B42406EDDDBF40F285FC0B2C127D5E26B42406EDDDBF40F285FC03D719130D9532340'::geometry)
                                                        Filter: ((portfolio_id = 113) AND _st_intersects(shape, '010320E610010005006EDDDBF40F285FC03D719130D9532340BEE0DBF4BFCE53C03D719130D9532340BEE0DBF4BFCE53C0B2C127D5E26B42406EDDDBF40F285FC0B2C127D5E26B42406EDDDBF40F285FC03D719130D9532340'::geometry))
                                                        Heap Blocks: exact=2704
                                                        Buffers: shared hit=16522
                                                        ->  Bitmap Index Scan on site_exposure_p113_shape_idx  (cost=0.00..2438.66 rows=109917 width=0) (actual time=12.154..12.155 rows=109917 loops=1)
                                                              Index Cond: (shape && '010320E610010005006EDDDBF40F285FC03D719130D9532340BEE0DBF4BFCE53C03D719130D9532340BEE0DBF4BFCE53C0B2C127D5E26B42406EDDDBF40F285FC0B2C127D5E26B42406EDDDBF40F285FC03D719130D9532340'::geometry)
                                                              Buffers: shared hit=797
                                                  ->  Hash  (cost=28094.08..28094.08 rows=143721 width=28) (actual time=352.223..352.224 rows=144202 loops=3)

 

RE: NL Join vs Merge Join - 5 hours vs 2 seconds

2018-12-17 Thread Kumar, Virendra
Thank you, Tom, for the initial thoughts!

I tried the query with functional-dependency statistics, but it did not help. I 
don't see that we can create such statistics on a table join.
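For reference, a sketch of the kind of per-table functional-dependency 
statistics Tom suggests (PG 10 syntax; the table and column names are taken from 
the plan and may need adjusting):
--
CREATE STATISTICS site_exposure_p113_dep (dependencies)
    ON account_id, site_id, portfolio_id
    FROM site_exposure_p113;
ANALYZE site_exposure_p113;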

Regards,
Virendra


-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Monday, December 17, 2018 12:02 PM
To: Kumar, Virendra
Cc: pgsql-general@lists.postgresql.org
Subject: Re: NL Join vs Merge Join - 5 hours vs 2 seconds

"Kumar, Virendra"  writes:
> We are using Postgres-XL based on PostgreSQL 10.5 on RHEL7. We have an 
> extreme case of one query running in 4 hours 45 minutes 33 seconds with a 
> Nested Loop join vs 2.5 seconds when we disable enable_nestloop. The plan 
> is given below. I would be happy to share more information if required.

It looks like what's burning you is bad estimates at these joins:

>    ->  Nested Loop  (cost=33681.00..71431.33 rows=1 width=348) (actual time=359.006..926.303 rows=37207 loops=3)
>          Join Filter: (se.account_id = a.account_id)
>          ->  Hash Join  (cost=33680.71..71298.94 rows=395 width=300) (actual time=358.953..656.923 rows=36828 loops=3)
>                Hash Cond: ((se.account_id = sh.account_id) AND (se.site_id = sh.site_id))
...
>          ->  Index Scan using account_p113_account_id_idx on account_p113 a  (cost=0.29..0.32 rows=1 width=52) (actual time=0.003..0.005 rows=1 loops=110485)
>                Index Cond: (account_id = sh.account_id)
>                Filter: (portfolio_id = 113)

I'm guessing that account_id, site_id, and portfolio_id are all highly
correlated, but the planner doesn't know that and thinks the additional
conditions will remove way more rows than they actually do.

In PG10 and later, you can probably fix that by creating
functional-dependency statistics on those pairs of columns.

regards, tom lane






Multiple LDAP Servers for ldap Authentication

2018-12-20 Thread Kumar, Virendra
Hi,


I am going through the LDAP authentication documents for PostgreSQL and found 
that we can specify multiple LDAP servers, but I am not sure how. I have put two 
entries in double quotes like below:
--
host    all    all    0.0.0.0/0    ldap ldapserver="server1.com server2.com" ldapprefix=PROD01\

There is no complaint when reloading the pg_hba.conf file; however, when 
server1.com is down the connection just times out instead of trying server2.com. 
Does anybody have an idea how to configure this or how to make it work?

Regards,
Virendra





RE: Multiple LDAP Servers for ldap Authentication

2018-12-20 Thread Kumar, Virendra
This is what I see:
--
[postgres@usdf24v0131 ~]$ which postgres
/opt/postgres/10/bin/postgres
[postgres@usdf24v0131 ~]$ ldd /opt/postgres/10/bin/postgres
linux-vdso.so.1 =>  (0x7ffee3fe8000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x7f9a7f125000)
librt.so.1 => /lib64/librt.so.1 (0x7f9a7ef1d000)
libdl.so.2 => /lib64/libdl.so.2 (0x7f9a7ed19000)
libm.so.6 => /lib64/libm.so.6 (0x7f9a7ea17000)
libldap-2.4.so.2 => /lib64/libldap-2.4.so.2 (0x7f9a7e7c2000)
libc.so.6 => /lib64/libc.so.6 (0x7f9a7e3f5000)
/lib64/ld-linux-x86-64.so.2 (0x7f9a7f341000)
liblber-2.4.so.2 => /lib64/liblber-2.4.so.2 (0x7f9a7e1e6000)
libresolv.so.2 => /lib64/libresolv.so.2 (0x7f9a7dfcd000)
libsasl2.so.3 => /lib64/libsasl2.so.3 (0x7f9a7ddb)
libssl.so.10 => /lib64/libssl.so.10 (0x7f9a7db3e000)
libcrypto.so.10 => /lib64/libcrypto.so.10 (0x7f9a7d6dd000)
libssl3.so => /lib64/libssl3.so (0x7f9a7d48b000)
libsmime3.so => /lib64/libsmime3.so (0x7f9a7d264000)
libnss3.so => /lib64/libnss3.so (0x7f9a7cf37000)
libnssutil3.so => /lib64/libnssutil3.so (0x7f9a7cd08000)
libplds4.so => /lib64/libplds4.so (0x7f9a7cb04000)
libplc4.so => /lib64/libplc4.so (0x7f9a7c8ff000)
libnspr4.so => /lib64/libnspr4.so (0x7f9a7c6c1000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x7f9a7c48a000)
libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x7f9a7c23d000)
libkrb5.so.3 => /lib64/libkrb5.so.3 (0x7f9a7bf54000)
libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x7f9a7bd39000)
libcom_err.so.2 => /lib64/libcom_err.so.2 (0x7f9a7bb35000)
libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x7f9a7b926000)
libz.so.1 => /lib64/libz.so.1 (0x7f9a7b71)
libfreebl3.so => /lib64/libfreebl3.so (0x7f9a7b50d000)
libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x7f9a7b309000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x7f9a7b0e2000)
libpcre.so.1 => /lib64/libpcre.so.1 (0x7f9a7ae8)


Regards,
Virendra

-Original Message-
From: Alvaro Herrera [mailto:alvhe...@2ndquadrant.com]
Sent: Thursday, December 20, 2018 1:47 PM
To: Kumar, Virendra
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Multiple LDAP Servers for ldap Authentication

Hello Virendra

On 2018-Dec-20, Kumar, Virendra wrote:

> I am going through the LDAP authentication documents for PostgreSQL and found 
> that we can specify multiple LDAP servers, but I am not sure how. I have put 
> two entries in double quotes like below:
> --
> host    all    all    0.0.0.0/0    ldap 
> ldapserver="server1.com server2.com" ldapprefix=PROD01\
>
> There is no complaint when reloading the pg_hba.conf file; however, when 
> server1.com is down the connection just times out instead of trying server2.com. 
> Does anybody have an idea how to configure this or how to make it work?

Reading the server code, I find this rather surprising.  The ldapserver
string is passed directly to ldap_init(), which per its manual should
use the multiple servers:
  "The host parameter may contain a blank-separated list of hosts to try
   to connect to, and each host may optionally by of the form
   host:port."
 ( quoted from https://linux.die.net/man/3/ldap_init )

Maybe the library you have does not behave in the same way?  It would be
good to confirm that the LDAP library you have is OpenLDAP and not
something else.  Can you run an `ldd` on the postgres binary and see
what the output says about ldap?

--
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services






RE: Multiple LDAP Servers for ldap Authentication

2018-12-20 Thread Kumar, Virendra
Comma-separated doesn't work either.


Regards,
Virendra

-Original Message-
From: Alvaro Herrera [mailto:alvhe...@2ndquadrant.com]
Sent: Thursday, December 20, 2018 2:21 PM
To: Kumar, Virendra
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Multiple LDAP Servers for ldap Authentication

On 2018-Dec-20, Kumar, Virendra wrote:

> I am going through the LDAP authentication documents for PostgreSQL and found 
> that we can specify multiple LDAP servers, but I am not sure how. I have put 
> two entries in double quotes like below:
> --
> host    all    all    0.0.0.0/0    ldap 
> ldapserver="server1.com server2.com" ldapprefix=PROD01\

Does it work if you change the string this way?

ldapserver="server1.com, server2.com"

--
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services






RE: Multiple LDAP Servers for ldap Authentication

2018-12-20 Thread Kumar, Virendra
[postgres@xxx ~]$ rpm -qf /lib64/libldap-2.4.so.2
openldap-2.4.44-20.el7.x86_64


Regards,
Virendra
-Original Message-
From: Alvaro Herrera [mailto:alvhe...@2ndquadrant.com]
Sent: Thursday, December 20, 2018 2:01 PM
To: Kumar, Virendra
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Multiple LDAP Servers for ldap Authentication

On 2018-Dec-20, Kumar, Virendra wrote:

> This is what I see:
> --
> [postgres@usdf24v0131 ~]$ which postgres
> /opt/postgres/10/bin/postgres
> [postgres@usdf24v0131 ~]$ ldd /opt/postgres/10/bin/postgres
> linux-vdso.so.1 =>  (0x7ffee3fe8000)
> libpthread.so.0 => /lib64/libpthread.so.0 (0x7f9a7f125000)
> librt.so.1 => /lib64/librt.so.1 (0x7f9a7ef1d000)
> libdl.so.2 => /lib64/libdl.so.2 (0x7f9a7ed19000)
> libm.so.6 => /lib64/libm.so.6 (0x7f9a7ea17000)
> libldap-2.4.so.2 => /lib64/libldap-2.4.so.2 (0x7f9a7e7c2000)

I'm going to risk it and say that this is indeed OpenLDAP -- but you can
confirm by doing this:
rpm -qf /lib64/libldap-2.4.so.2

Anyway, little birdies tell me that OpenLDAP does not really live up to
its own documentation, because this feature does not in fact work.

--
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services






RE: Multiple LDAP Servers for ldap Authentication

2018-12-20 Thread Kumar, Virendra
Hi Alvaro,

I tried with a comma and a space, but it timed out one more time:
--
2018-12-20 14:34:56.070 CST [64618] FATAL:  canceling authentication due to timeout

Okay, so how would I configure OpenLDAP to try another LDAP server within x 
amount of time if the first one is not working?

Regards,
Virendra
-Original Message-
From: Alvaro Herrera [mailto:alvhe...@2ndquadrant.com]
Sent: Thursday, December 20, 2018 3:25 PM
To: Kumar, Virendra
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Multiple LDAP Servers for ldap Authentication

On 2018-Dec-20, Kumar, Virendra wrote:

> Comma-separated doesn't work either.

Please separate by a comma and a space, not just a comma.  My reading of
the OpenLDAP source code, and some quick experiments comparing failure
patterns, suggest that that exact combination may work.  (OpenLDAP is
not exactly well commented.)  I think one problem you may or may not hit
is the PostgreSQL authentication timeout expiring sooner than OpenLDAP
is willing to try the second server.

--
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services






RE: Multiple LDAP Servers for ldap Authentication

2018-12-20 Thread Kumar, Virendra
I figured it out; this is how it works:
--
host    all    all    0.0.0.0/0    ldap ldapserver=server1.com ldapserver=server2.com ldapprefix=PROD01\

So the documentation needs some updating.

Regards,
Virendra

-Original Message-
From: Alvaro Herrera [mailto:alvhe...@2ndquadrant.com]
Sent: Thursday, December 20, 2018 3:25 PM
To: Kumar, Virendra
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Multiple LDAP Servers for ldap Authentication

On 2018-Dec-20, Kumar, Virendra wrote:

> Comma-separated doesn't work either.

Please separate by a comma and a space, not just a comma.  My reading of
the OpenLDAP source code, and some quick experiments comparing failure
patterns, suggest that that exact combination may work.  (OpenLDAP is
not exactly well commented.)  I think one problem you may or may not hit
is the PostgreSQL authentication timeout expiring sooner than OpenLDAP
is willing to try the second server.

--
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services






RE: Multiple LDAP Servers for ldap Authentication

2018-12-21 Thread Kumar, Virendra
Hi Chris,

It is better explained if we include an example in the documentation. That will 
be self-explanatory.

Regards,
Virendra

-Original Message-
From: Christopher Browne [mailto:cbbro...@gmail.com]
Sent: Thursday, December 20, 2018 6:16 PM
To: Kumar, Virendra
Cc: Alvaro Herrera; pgsql-general@lists.postgresql.org
Subject: Re: Multiple LDAP Servers for ldap Authentication

On Thu, 20 Dec 2018 at 16:17, Kumar, Virendra  wrote:
>
> I figured it out; this is how it works:
> --
> host    all    all    0.0.0.0/0    ldap ldapserver=server1.com ldapserver=server2.com ldapprefix=PROD01\
>
> So the documentation needs some updating.

Looking at the docs (doc/src/sgml/client-auth.sgml), it appears to be correct, 
though it is somewhat ambiguous in that I'd expect "ldapserver=server1.com 
server2.com" to be OK rather than needing "ldapserver=server1.com 
ldapserver=server2.com" instead.  I'm not certain of a way to explain this better.

 <varlistentry>
  <term><literal>ldapserver</literal></term>
  <listitem>
   <para>
    Names or IP addresses of LDAP servers to connect to. Multiple
    servers may be specified, separated by spaces.
   </para>
  </listitem>
 </varlistentry>

Perhaps the attached?

--
When confronted by a difficult problem, solve it by reducing it to the 
question, "How would the Lone Ranger handle this?"





RE: Query planner / Analyse statistics bad estimate rows=1 with maximum statistics 10000 on PostgreSQL 10.2

2018-12-21 Thread Kumar, Virendra
Hi Mark,

Can you try setting the statistics target for the school_id column and analyzing 
the table, and see where that takes you? Something like:
--
ALTER TABLE common_student ALTER COLUMN school_id SET STATISTICS 10000;
ANALYZE common_student;

Regards,
Virendra

From: Mark [mailto:mwchamb...@gmail.com]
Sent: Friday, December 21, 2018 11:39 AM
To: pgsql-general@lists.postgresql.org
Subject: Query planner / Analyse statistics bad estimate rows=1 with maximum 
statistics 10000 on PostgreSQL 10.2

Hi All,

I'm seeing some very slow queries, and it looks like the query planner is 
deciding to do a 'Nested Loop Left Join', which is slow.  When I SET 
enable_nestloop=OFF for testing, it does a 'Hash Left Join', which is much faster.

I think I can see the cause of the problem in the examples below. I can't make 
sense of the statistics being written by ANALYSE or the planner's row estimates.

Can anyone help me understand:

- Why is the row estimate I get in the query below for school_id = 36 equal to 
1? (I expect it to be higher.)
- Why does '1' appear in the most_common_vals when it is actually the least 
common value?
- Why doesn't 36 appear in the most_common_vals (it is more common than 1)?
- Does the analyse output below mean that it only scanned 51538 of 65463 rows 
in the table? Is school_id 36 just being missed in the sample? (This happens 
when the analyse is repeated.)

Any help with understanding what's happening here would be much appreciated.

I hope I've provided enough information below.

Thanks,

Mark

db=> explain analyse select * from common_student  where school_id = 36 ;
                                                              QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using idx_common_student_sid on common_student  (cost=0.41..8.39 rows=1 width=385) (actual time=0.264..1.691 rows=1388 loops=1)
   Index Cond: (school_id = 36)
 Planning time: 0.087 ms
 Execution time: 2.706 ms
(4 rows)

db=> select tablename,attname,most_common_vals,histogram_bounds,n_distinct from pg_stats where attname='school_id' and tablename='common_student';
   tablename    |  attname  |                                                            most_common_vals                                                            | histogram_bounds | n_distinct
----------------+-----------+-----------------------------------------------------------------------------------------------------------------------------------------+------------------+------------
 common_student | school_id | {79,60,25,61,59,69,86,77,64,33,56,78,58,81,41,97,22,67,38,23,3,72,92,93,48,24,96,26,75,90,70,52,51,21,14,91,83,54,85,11,68,94,53,88,1} |                  |         45
(1 row)

db=> select count(distinct(school_id)) from common_student ;
 count
-------
    55
(1 row)

db=> alter table common_student alter column school_id set statistics 100000;
WARNING:  lowering statistics target to 10000
ALTER TABLE
db=> analyse verbose common_student(school_id);
INFO:  analyzing "public.common_student"
INFO:  "common_student": scanned 7322 of 7322 pages, containing 65463 live rows and 49026 dead rows; 51538 rows in sample, 65463 estimated total rows
ANALYZE
db=> select tablename,attname,most_common_vals,histogram_bounds,n_distinct from pg_stats where attname='school_id' and tablename='common_student';
   tablename    |  attname  |                                                            most_common_vals                                                            | histogram_bounds | n_distinct
----------------+-----------+-----------------------------------------------------------------------------------------------------------------------------------------+------------------+------------
 common_student | school_id | {79,60,25,61,59,69,86,77,64,33,56,78,58,81,41,97,22,67,38,23,3,72,92,93,48,24,96,26,75,90,70,52,51,21,14,91,83,54,85,11,68,94,53,88,1} |                  |         45
(1 row)

db=> explain analyse select * from common_student  where school_id = 36 ;
                                                              QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using idx_common_student_sid on common_student  (cost=0.41..8.39 rows=1 width=385) (actual time=0.542..4.022 rows=1388 loops=1)
   Index Cond: (school_id = 36)
 Planning time: 0.334 ms
 Execution time: 6.542 ms
(4 rows)

db=> select school_id, count(*) from common_student group by school_id order by count(*) limit 6 ;
 school_id | count
-----------+-------
         1 |    50
        88 |   161
        53 |   252
        94 |   422
        31 |   434
        68 |   454
(6 rows)

db=> select school_id, count(*) from common_student where school_id = 36 group by school_id ;
 school_id | count
-----------+-------
        36 |  1388
(1 row)

db=> explain 

Notification for Minor Release and Security Update

2019-04-11 Thread Kumar, Virendra
Team,

Is there a URL or mailing list we can subscribe to in order to be notified 
about minor version releases and security updates when they are released for 
public use?

Regards,
Virendra





User Details for PostgreSQL

2019-05-09 Thread Kumar, Virendra
Hello Team,

We are looking for some audit information about user creation. We need a few 
audit fields which we did not find in PostgreSQL, and would be happy if 
someone could help us find these details. Basically we need information about:

1.   User creation date

2.   Last Password change date

Do we have a way to get these values, or can somebody guide us on how we can 
store and retrieve them when creating users?

Regards,
Virendra






RE: User Details for PostgreSQL

2019-05-09 Thread Kumar, Virendra
Thanks Chris!
Since PostgreSQL still has to have those accounts even when we authenticate 
externally, we need to get at least the user creation date from the instance, 
as that information might differ between the instance and the external 
utility. Is there a possibility we can get it?

Most of our accounts are AD authenticated, but we have some (like postgres, 
the superuser!) which are local or peer authenticated. We want to control 
those as well, hence the requirement.
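
To illustrate the split we have in mind, a pg_hba.conf sketch (illustrative
lines only, not our actual file):

# postgres superuser stays local and peer-authenticated
local   all   postgres                 peer
# everything else authenticates against AD via GSSAPI/Kerberos
host    all   all       0.0.0.0/0      gss include_realm=0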

Regards,
Virendra

From: Christopher Browne [mailto:cbbro...@gmail.com]
Sent: Thursday, May 09, 2019 5:04 PM
To: Kumar, Virendra
Cc: pgsql-general@lists.postgresql.org
Subject: Re: User Details for PostgreSQL

On Thu, 9 May 2019 at 16:43, Kumar, Virendra <virendra.ku...@guycarp.com> wrote:
Hello Team,

We are looking for some audit information about user creation. We need a few 
audit fields which we did not find in PostgreSQL, and would be happy if 
someone could help us find these details. Basically we need information about:

1.   User creation date

2.   Last Password change date

Do we have a way to get these values, or can somebody guide us on how we can 
store and retrieve them when creating users?

Regards,
Virendra

Since there is a diversity of ways of managing this information, including 
outside the database, there is no way to assert a true-in-general mechanism for 
this.
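
If you do want the database itself to carry those dates, the closest thing is
to route role creation through a wrapper of your own and record them there; a
minimal sketch, with all names purely illustrative:

-- one row per role created this way; the system catalogs keep no timestamps
CREATE TABLE role_audit (
    rolname     text PRIMARY KEY,
    created_at  timestamptz NOT NULL DEFAULT now(),
    pw_changed  timestamptz NOT NULL DEFAULT now()
);

CREATE FUNCTION create_audited_user(p_name text, p_pass text)
RETURNS void LANGUAGE plpgsql SECURITY DEFINER AS $$
BEGIN
    EXECUTE format('CREATE ROLE %I LOGIN PASSWORD %L', p_name, p_pass);
    INSERT INTO role_audit (rolname) VALUES (p_name);
END;
$$;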

Indeed, if you are interested in managing such information particularly 
carefully, you may wish to use mechanisms such as PAM, Kerberos, LDAP, GSSAPI 
for this, in which case PostgreSQL may have no responsibility in the matter of 
managing passwords.  It is quite likely a good idea to use something like 
Kerberos if you have the concerns that you describe, and if so, the audit 
information you want would be collected from Kerberos, not PostgreSQL.

--
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"





Subpartitions in PGSQL 10

2018-01-08 Thread Kumar, Virendra
Team,

Can you please let us know if sub-partitions are supported in PostgreSQL 10.1 
(declarative partitioning)? If yes, can it be list-list partitioning? We have 
a situation where a table is very big, around 2 billion rows, and growing. We 
want to use partitioning but are not sure whether sub-partitions are 
available. The partition key we are looking at has around 8000 distinct 
values, so that would be 8000 partitions, and I think that is really too big a 
number of partitions. For your information, the RDBMS is not yet Postgres; we 
are evaluating it to see if it can support this. Please suggest.
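
For concreteness, this is the shape we mean - a list partition that is itself
list-partitioned (table and column names are made up for illustration):

CREATE TABLE exposure (region_id int, account_id int, payload text)
    PARTITION BY LIST (region_id);

-- with declarative partitioning, a partition may itself be partitioned
CREATE TABLE exposure_r1 PARTITION OF exposure
    FOR VALUES IN (1)
    PARTITION BY LIST (account_id);

CREATE TABLE exposure_r1_a10 PARTITION OF exposure_r1
    FOR VALUES IN (10);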

Regards,
Virendra





How Many Partitions are Good Performing

2018-01-08 Thread Kumar, Virendra
Can somebody tell us how many partitions are a good number without impacting 
performance? We are hearing around a thousand; is that a limit? Is there a 
plan to increase the number of partitions supported for a table? We would 
appreciate it if somebody could help us with this.

Regards,
Virendra






RE: How Many Partitions are Good Performing

2018-01-09 Thread Kumar, Virendra
Thank you Rakesh and Andrew!

We will not be doing time-based partitioning, but we have a list of values 
based on which we will be partitioning the table, and the list is around 7500 
entries now. For the short term we are thinking of creating around a thousand 
partitions, and when PG11 is released we will go to one partition per value.

Regards,
Virendra

From: Andrew Staller [mailto:and...@timescale.com]
Sent: Tuesday, January 09, 2018 12:15 PM
To: Rakesh Kumar
Cc: Kumar, Virendra; pgsql-gene...@postgresql.org
Subject: Re: How Many Partitions are Good Performing

This is the blog post that Rakesh referenced:
https://blog.timescale.com/time-series-data-postgresql-10-vs-timescaledb-816ee808bac5

Please note, this analysis is done in the context of working with time-series 
data, where 1000s of chunks is not uncommon because of the append-mostly nature 
of the workload.

On Mon, Jan 8, 2018 at 6:54 PM, Rakesh Kumar <rakeshkumar...@mail.com> wrote:

You should have read carefully what I wrote. 1000 is not an upper limit; 1000
partitions is the number after which performance starts dropping.

There is a blog post on www.timescale.com which also highlights the same.

Sent: Monday, January 08, 2018 at 6:20 PM
From: "Kumar, Virendra" 
mailto:virendra.ku...@guycarp.com>>
To: "pgsql-gene...@postgresql.org<mailto:pgsql-gene...@postgresql.org>" 
mailto:pgsql-gene...@postgresql.org>>
Subject: How Many Partitions are Good Performing

Can somebody tell us how many partitions are a good number without impacting 
performance? We are hearing around a thousand; is that a limit? Is there a 
plan to increase the number of partitions supported for a table? We would 
appreciate it if somebody could help us with this.

Regards,
Virendra






--
TimescaleDB | Growth & Developer Evangelism
c: 908.581.9509

335 Madison Ave.
New York, NY 10017
http://www.timescale.com/
https://github.com/timescale/timescaledb






RE: How Many Partitions are Good Performing

2018-01-10 Thread Kumar, Virendra
As a test I created two tables with 7800 partitions each, and joining them 
hits a planning bottleneck: planning alone takes about 5 seconds. Please see 
the attached plan. (The test setup is sketched below.)
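
Roughly how the test tables were built (a sketch for illustration, not the
exact script; names are illustrative):

CREATE TABLE t1 (id int, val text) PARTITION BY LIST (id);

DO $$
BEGIN
    FOR i IN 1..7800 LOOP
        EXECUTE format(
            'CREATE TABLE t1_%s PARTITION OF t1 FOR VALUES IN (%s)', i, i);
    END LOOP;
END
$$;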


Regards,
Virendra

-Original Message-
From: pinker [mailto:pin...@onet.eu]
Sent: Wednesday, January 10, 2018 12:07 PM
To: pgsql-gene...@postgresql.org
Subject: Re: How Many Partitions are Good Performing

I once ran a test on my laptop because I was curious as well. From my results 
(laptop: 16GB RAM, 4 cores) the upper limit was 12k. Above that, planning time 
was unbearably high - much higher than execution time. This was tested on 9.5.



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html





 Aggregate  (cost=337270.14..337270.15 rows=1 width=232) (actual time=102.123..102.124 rows=1 loops=1)
   Buffers: shared hit=22747
   CTE t1
     ->  HashAggregate  (cost=2773.81..2795.37 rows=2156 width=24) (actual time=12.051..13.626 rows=2792 loops=1)
           Group Key: se_1.site_id, se_1.peril_id, se_1.account_id, se_1.portfolio_id
           Buffers: shared hit=385
           ->  Nested Loop  (cost=10.05..2752.25 rows=2156 width=24) (actual time=0.359..9.940 rows=2792 loops=1)
                 Buffers: shared hit=385
                 ->  Seq Scan on catevent_earthquake_poly_merged  (cost=0.00..17.75 rows=3 width=32) (actual time=0.027..0.030 rows=1 loops=1)
                       Filter: (event_id = 'us20009yvq'::text)
                       Rows Removed by Filter: 3
                       Buffers: shared hit=2
                 ->  Append  (cost=10.05..910.78 rows=72 width=56) (actual time=0.328..7.250 rows=2792 loops=1)
                       Buffers: shared hit=383
                       ->  Bitmap Heap Scan on site_exposure_13397 se_1  (cost=10.05..910.78 rows=72 width=56) (actual time=0.325..4.726 rows=2792 loops=1)
                             Recheck Cond: (shape && catevent_earthquake_poly_merged.shape)
                             Filter: ((portfolio_id = 13397) AND _st_intersects(shape, catevent_earthquake_poly_merged.shape))
                             Heap Blocks: exact=360
                             Buffers: shared hit=383
                             ->  Bitmap Index Scan on site_exposure_13397_shape_idx  (cost=0.00..10.03 rows=216 width=0) (actual time=0.263..0.263 rows=2792 loops=1)
                                   Index Cond: (shape && catevent_earthquake_poly_merged.shape)
                                   Buffers: shared hit=23
   ->  Hash Join  (cost=271937.09..334470.48 rows=59 width=200) (actual time=88.676..98.896 rows=2792 loops=1)
         Hash Cond: ((se.peril_id = p.pe_peril_id) AND (se.account_id = p.pe_account_id))
         Buffers: shared hit=22747
         ->  GroupAggregate  (cost=271794.39..330553.67 rows=215630 width=152) (actual time=30.641..37.303 rows=2792 loops=1)
               Group Key: se.peril_id, se.portfolio_id, se.account_id
               Buffers: shared hit=11178
               ->  Sort  (cost=271794.39..274489.77 rows=1078152 width=148) (actual time=30.629..32.018 rows=2792 loops=1)
                     Sort Key: se.peril_id, se.account_id
                     Sort Method: quicksort  Memory: 838kB
                     Buffers: shared hit=11178
                     ->  Nested Loop  (cost=48.94..1614.87 rows=1078152 width=148) (actual time=3.305..27.195 rows=2792 loops=1)
                           Buffers: shared hit=11178
                           ->  HashAggregate  (cost=48.51..50.51 rows=200 width=8) (actual time=3.274..4.957 rows=2792 loops=1)
                                 Group Key: t1.site_id
                                 ->  CTE Scan on t1  (cost=0.00..43.12 rows=2156 width=8) (actual time=0.002..1.378 rows=2792 loops=1)
                           ->  Append  (cost=0.43..7.81 rows=1 width=156) (actual time=0.004..0.006 rows=1 loops=2792)
                                 Buffers: shared hit=11178
                                 ->  Index Scan using site_exposure_13397_pkey on site_exposure_13397 se  (cost=0.43..7.81 rows=1 width=156) (actual time=0.003..0.004 rows=1 loops=2792)
                                       Index Cond: (site_id = t1.site_id)
                                       Filter: (portfolio_id = 13397)
                                       Buffers: shared hit=11178
         ->  Hash  (cost=142.53..142.53 rows=11 width=80) (actual time=58.016..58.016 rows=2792 loops=1)
               Buckets: 4096 (originally 1024)  Batches: 1 (originally 1)  Memory Usage: 

pgpool Connections Distributions Among Nodes

2018-01-29 Thread Kumar, Virendra
Hi Gurus,

We have a 4-node cluster (1 master and 3 hot standbys). We are using pgpool as 
a load balancer. We have observed that when the application requests 3 
connections, pgpool connects to all 4 servers and I see 3 connections on each 
of them. I was expecting a total of 3 connections spread across the 4 servers, 
but I can easily see 12 connections in all.

Can somebody shed some light on it.

Please let me know if you need more information.

Regards,
Virendra






RE: pgpool Connections Distributions Among Nodes

2018-01-29 Thread Kumar, Virendra
Attached is the config file.

[pgpool@usdf23v0550 ~]$ pgpool -v
pgpool-II version 3.7.1 (amefuriboshi)

Reads should go to all nodes.
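
As far as I understand, with load balancing enabled each pgpool session opens
a connection to every backend, which would explain 3 x 4 = 12. The settings
that govern this look like the following (illustrative values, not copied from
the attached file):

load_balance_mode = on     # distribute read queries across backends
num_init_children = 32     # pgpool child processes (max concurrent sessions)
max_pool = 4               # cached backend connection pairs per child
backend_weight0 = 1        # master receives reads too
backend_weight1 = 1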

Regards,
Virendra
From: Glauco Torres [mailto:torres.gla...@gmail.com]
Sent: Monday, January 29, 2018 11:53 AM
To: Kumar, Virendra
Cc: pgsql-gene...@postgresql.org
Subject: Re: pgpool Connections Distributions Among Nodes



We have a 4-node cluster (1 master and 3 hot standbys). We are using pgpool as 
a load balancer. We have observed that when the application requests 3 
connections, pgpool connects to all 4 servers and I see 3 connections on each 
of them. I was expecting a total of 3 connections spread across the 4 servers, 
but I can easily see 12 connections in all.

Can somebody shed some light on it.

Please let me know if you need more information.


Hello,

Which pgpool version are you using?
How did you configure read balancing in pgpool? Should all reads go to the 
replicas, or do reads go to all nodes?

Regards,
Glauco Torres





pgpool.conf
Description: pgpool.conf