Re: Table partitioning with sequence field in postgresql12

2020-06-18 Thread amul sul
On Thu, Jun 18, 2020 at 12:22 PM Srinivasa T N  wrote:
>
> Hi,
>I have a parent table with one of the field as ""gid" int4 DEFAULT 
> nextval('"ami_smart_new".aoi_boundary_gid_seq'::regclass)".
>
>I create child tables which inherit parent and use hash partition.  When I 
> directly insert into child tables, will there be any race condition causing 
> two child tables getting the same sequence value for gid?
>
if "gid" is the partitioning key and assuming you are using declarative
partitioning[1], then it won't be possible to have the same "gid" value in two
child partitions.

1] https://www.postgresql.org/docs/current/ddl-partitioning.html
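
For illustration, a hedged sketch of that case, with the hash partitioning declared
on gid itself (table, partition and column names here are made up):

CREATE SEQUENCE aoi_boundary_gid_seq;

CREATE TABLE aoi_boundary (
    gid  int4 DEFAULT nextval('aoi_boundary_gid_seq'::regclass),
    geom text
) PARTITION BY HASH (gid);

CREATE TABLE aoi_boundary_p0 PARTITION OF aoi_boundary
    FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE aoi_boundary_p1 PARTITION OF aoi_boundary
    FOR VALUES WITH (MODULUS 2, REMAINDER 1);

-- every gid value hashes to exactly one partition, so the same gid can never
-- be stored in two different partitions.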

regards,
Amul




Re: Table partitioning with sequence field in postgresql12

2020-06-18 Thread Srinivasa T N
I am using declarative partitioning on some other field of the same table
in which gid also exists.

Regards,
Seenu.


On Thu, Jun 18, 2020 at 12:31 PM amul sul  wrote:

> On Thu, Jun 18, 2020 at 12:22 PM Srinivasa T N  wrote:
> >
> > Hi,
> >I have a parent table with one of the field as ""gid" int4 DEFAULT
> nextval('"ami_smart_new".aoi_boundary_gid_seq'::regclass)".
> >
> >I create child tables which inherit parent and use hash partition.
> When I directly insert into child tables, will there be any race condition
> causing two child tables getting the same sequence value for gid?
> >
> if "gid" is the partitioning key and assuming you are using declarative
> partitioning[1], then it won't be possible to have the same "gid" value in
> two
> child partitions.
>
> 1] https://www.postgresql.org/docs/current/ddl-partitioning.html
>
> regards,
> Amul
>


Re: Table partitioning with sequence field in postgresql12

2020-06-18 Thread Tim Cross


Srinivasa T N  writes:

> Hi,
>I have a parent table with one of the field as ""gid" int4 DEFAULT
> nextval('"ami_smart_new".aoi_boundary_gid_seq'::regclass)".
>
>I create child tables which inherit parent and use hash partition.  When
> I directly insert into child tables, will there be any race condition
> causing two child tables getting the same sequence value for gid?
>
Assuming all inserts use the default, i.e. nextval from the same
sequence, you won't get duplicates. You could get a duplicate if an
insert sets an explicit value for gid, of course, or if rows in any table
were inserted with a gid which was not obtained from the same sequence
using nextval, i.e. parent and children must use the same sequence. The
sequence is just a counter with the property that no two calls to nextval
on that sequence will return the same value. You cannot make any additional
assumptions, e.g. you cannot assume gid values will be inserted in order or
that there won't be 'gaps' etc.
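
As a hedged illustration of that point (inheritance-style children as described
upthread; the names are illustrative):

CREATE SEQUENCE aoi_boundary_gid_seq;
CREATE TABLE parent (gid int4 DEFAULT nextval('aoi_boundary_gid_seq'), payload text);
CREATE TABLE child_a () INHERITS (parent);
CREATE TABLE child_b () INHERITS (parent);

-- both children inherit the DEFAULT, so direct inserts draw from the same
-- sequence and cannot collide:
INSERT INTO child_a (payload) VALUES ('a');   -- gid 1
INSERT INTO child_b (payload) VALUES ('b');   -- gid 2

-- an explicit gid bypasses the sequence and *can* duplicate a value that
-- nextval has already handed out (or will hand out later):
INSERT INTO child_b (gid, payload) VALUES (1, 'explicit');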
-- 
Tim Cross




Re: Conflict with recovery on PG version 11.6

2020-06-18 Thread Toomas Kristin
Hi!

What is difference between following error messages?

ERROR:  canceling statement due to conflict with recovery
FATAL:  terminating connection due to conflict with recovery

I tried to correlate process termination on standby and vacuuming on master.

Here is a sample timeline
1) 2020-06-17 22:45:42 - Last vacuuming before session
2) 2020-06-18 01:01:12 - Session was started at standby
3) 2020-06-18 01:27:53 - Session was terminated at standby
4) 2020-06-18 02:59:32 - Next vacuuming

Based on that I would say there is no correlation between vacuuming and process 
termination.

How can I identify the conflict that stops replication and forces a query on the 
standby host to be terminated?

BR,
Toomas

> On 17. Jun 2020, at 16:38, Toomas Kristin  wrote:
> 
> Hi Laurenz,
> 
> Thank you!
> 
> What are the reasons for conflicts? Based on the documentation it seems that the 
> only reason can be that vacuum removed tuples that are still in use on the standby 
> host, and because of that the standby host cannot apply modifications until the 
> blocking query either finishes or is terminated. Isn't it? Or can there be some 
> other reasons?
> 
> I am just wondering what the impact would be if I increase the value of 
> autovacuum_vacuum_scale_factor in order to force the vacuuming process to 
> postpone the cleanup.
> 
> BR,
> Toomas
> 
>> On 17. Jun 2020, at 12:42, Laurenz Albe  wrote:
>> 
>> On Wed, 2020-06-17 at 08:42 +0300, Toomas Kristin wrote:
>>> Basically after upgrade to version 11.5 from 10.6 I experience error 
>>> messages on streaming
>>> replica host “FATAL:  terminating connection due to conflict with recovery” 
>>> and
>>> “ERROR: canceling statement due to conflict with recovery”. There is no 
>>> changes for
>>> vacuuming on master nor max_standby_streaming_delay for replica. I tried to 
>>> correlate
>>> errors with vacuuming process on master but according to logs there is no 
>>> link between
>>> them. Somehow I have feeling that when query runs longer than value for 
>>> parameter
>>> max_standby_streaming_delay the query will be terminated regardless 
>>> vacuuming process on master.
>>> 
>>> Is there any changes on version 11.5 what may cause it?
>>> 
>>> Is there any good solution without setting max_standby_streaming_delay=-1 
>>> or enabling hot_standby_feedback?
>> 
>> The basic behavior shouldn't have changed since v10.
>> 
>> Check "pg_stat_database_conflicts" to see what kinds of conflicts that are.
>> 
>> The only solutions to avoid queries being canceled due to replication 
>> conflicts are:
>> 
>> 1. avoid that such conflicts happen:
>>  - set "hot_standby_feedback = on" on the standby and/or
>>"vacuum_defer_cleanup_age" on the primary to avoid VACUUM conflicts
>>  - Don't lock tables in access exclusive mode
>> 
>> 2. set "max_standby_streaming_delay" to -1
>> 
>> Note that it can be quite hard to completely avoid replication conflicts.
>> Trying to have both no delay in applying changes and no cancelled queries
>> is often not possible without seriously crippling autovacuum.
>> 
>> Yours,
>> Laurenz Albe
>> -- 
>> Cybertec | https://www.cybertec-postgresql.com
>> 
> 





Re: ESQL/C no indicator variables ./. error -213

2020-06-18 Thread Matthias Apitz
On Wednesday, June 17, 2020 at 01:39:53 -0400, Tom Lane wrote:

> Matthias Apitz  writes:
> > We encountered that if our ESQL/C written servers see on SELECT or FETCH
> > in a row a NULL value, it will raise correctly the error -213 as written
> > and explained in 
> > https://www.postgresql.org/docs/11/ecpg-variables.html#ECPG-INDICATORS
> > We catch this error -213 and deal with it.
> 
> > What we did not know and discovered today is something very fatal: in
> > such a situation, on a FETCH of a row of some 55 columns, the transfer of
> > the column elements into their hostvariables stops at the first NULL value,
> > as can be seen here in the log:
> 
> Could you provide a self-contained test case for this?  It's hard to
> guess at what the problem might be.

Hello,

attached is a simple ESQL/C code; if it is not passing the mailing-list,
the code is also here: http://www.unixarea.de/embedded.pgc and this is
its output together with the ESQL/C error log file and SQL examples:

psql -Usisis -dsisis 
psql (11.4)
Geben Sie »help« für Hilfe ein.

sisis=# \d dbctest
Tabelle »public.dbctest«
  Spalte   |  Typ  | Sortierfolge | NULL erlaubt? | Vorgabewert
---+---+--+---+-
 tstchar25 | character(25) |  |   |
 tstint| integer   |  |   |
Indexe:
"i_tstint" UNIQUE, btree (tstint)

sisis=# select * from dbctest where tstint = 1;
 tstchar25 | tstint
---+
   |  1
(1 Zeile)



./embedded
hostvariable 'tstint' before SELECT tstchar25, tstint INTO :tstchar25, :tstint 
FROM dbctest: 99
hostvariable 'tstint' after  SELECT tstchar25, tstint INTO :tstchar25, :tstint 
FROM dbctest with -213: 99


cat esqlc.6485
[6485] [18.06.2020 08:26:38:433]: ECPGdebug: set to 1
[6485] [18.06.2020 08:26:38:433]: ECPGdebug: proc argv0 is embedded
[6485] [18.06.2020 08:26:38:433]: ECPGconnect: opening database sisis on 
localhost port 5432 with options application_name=SunRise DBCALL V7.1 
(pid=6485) for user sisis
[6485] [18.06.2020 08:26:38:436]: ecpg_execute on line 36: query: select 
tstchar25 , tstint from dbctest where tstint = 1; with 0 parameter(s) on 
connection sisis
[6485] [18.06.2020 08:26:38:437]: ecpg_execute on line 36: using PQexec
[6485] [18.06.2020 08:26:38:437]: ecpg_process_output on line 36: correctly got 
1 tuples with 2 fields
[6485] [18.06.2020 08:26:38:437]: ecpg_get_data on line 36: RESULT:  offset: 
80; array: no
[6485] [18.06.2020 08:26:38:437]: raising sqlcode -213 on line 36: null value 
without indicator on line 36
[6485] [18.06.2020 08:26:38:438]: ecpg_finish: connection sisis closed


Thanks

matthias

-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
May, 9: Спаси́бо освободители! Thank you very much, Russian liberators!

#include <stdio.h>
#include <string.h>
#include <unistd.h>

EXEC SQL INCLUDE sqlca;
EXEC SQL WHENEVER SQLERROR sqlprint;

void posSqlError() { return; }

int main(void)
{
FILE *fp;
char file[80];
sprintf(file, "esqlc.%d", getpid());
fp = fopen(file, "w");
ECPGdebug(1, fp);

EXEC SQL BEGIN DECLARE SECTION;
char connection[1024];
char user[80];
char pass[80];
char tstchar25[80];
int  tstint = 99;
EXEC SQL END DECLARE SECTION;

sprintf(connection,
        "tcp:postgresql://localhost:5432/sisis?application_name=SunRise DBCALL V7.1 (pid=%d)",
        getpid());
strcpy(user, "sisis");
strcpy(pass, "sisis123");

EXEC SQL WHENEVER SQLERROR call posSqlError();

EXEC SQL CONNECT TO :connection USER :user USING :pass ;

printf("hostvariable 'tstint' before SELECT tstchar25, tstint INTO 
:tstchar25, :tstint FROM dbctest: %d\n", tstint);

EXEC SQL SELECT tstchar25, tstint INTO :tstchar25, :tstint FROM dbctest 
WHERE tstint = 1;

printf("hostvariable 'tstint' after  SELECT tstchar25, tstint INTO 
:tstchar25, :tstint FROM dbctest with -213: %d\n", tstint);


EXEC SQL DISCONNECT;

return 0;

}


Re: Importing a Large .ndjson file

2020-06-18 Thread Sankar P
> It looks like plain old COPY would do this just fine, along the lines
> of (in psql)
>
> \copy myTable(content) from 'myfile.ndjson'
>

Indeed. Thanks.

-- 
Sankar P
http://psankar.blogspot.com




Using postgresql and JSONB on very large datasets

2020-06-18 Thread Sankar P
Hi

I have a table t, with just two columns, an `id SERIAL` and a `record
JSONB`. I have about 3 million records where I save some JSON data in
each record.

Each `record` column jsonb has about 40 json key-values, of which
about 10 are nested fields.

Now, if I run a query, like:

```
select distinct record -> 'destinationServiceName' from t;
```

it takes about 1 minute or so to return the list of unique values for
this column, to search across the 3 million records. The number of
records in my setup may go up to a few tens of millions.

I want these queries to be faster. I also intend to do a few more
aggregation queries, like, "Give the maximum responseTime for each
distinct destinationServiceName" etc. in future.
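
For instance, such an aggregation might look roughly like this (assuming
responseTime is stored as a number inside the JSON; the cast is illustrative):

```
select record ->> 'destinationServiceName' as service,
       max((record ->> 'responseTime')::numeric) as max_response_time
from t
group by 1;
```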

I have used Elasticsearch and kibana in the past and I am planning to
migrate to postgres jsonb field (not as individual columns), as
vertical scaling is preferred for my case and the schema could not be
fixed.

I assumed that if I add more indexes, these types of queries would
become faster. So I tried to create an index using the command:

```
CREATE INDEX idx_records ON t USING GIN (record);
```

But this index creation now fails because my postbird client times out.

So my questions are:
1) Is it a good idea to use postgres as a JSON database, if I have
tens of millions of records ? If you run such a setup, can you share
any words of advice on Dos and Don'ts ?
2) Is there a better way to do the index creation, reliably, for a
table with a few million records already ?
3) There are various types of indexes. I am using GIN based on some
short-term googling. Are there any good blog posts, links, tutorials,
courses etc. that you have found useful in understanding the postgres
jsonb performance tuning ?

Thanks.

-- 
Sankar P
http://psankar.blogspot.com




[HELP] General query - performance metrics

2020-06-18 Thread Praveen Kumar K S
Hello,

I have set up pgpool and there are 2 postgres servers behind it. I had run
pgbench with and without pgpool and got the results. I would like to
showcase the results for a presentation. Just thought of asking you people
if there are any tools to generate nice results with charts and graphs ? TIA


-- 


*Regards,*


*K S Praveen Kumar*


Netapp SnapCenter

2020-06-18 Thread Paul Förster
Hi,

first of all, it wasn't my glorious idea and I know that volume snapshots are 
not backups. :-)

Leaving that aside: Our company has decided to move away from Netapp 
SnapCreator to its SnapCenter. I was told there is some community plugin for 
PostgreSQL to use with SnapCenter but can't find any information that makes 
sense.

The only thing I found is:

https://automationstore.netapp.com/snappack-listNC.shtml

But it's from Oct. 3rd 2018, for PostgreSQL 9.x and 10.x and also requires JDK 
1.8 or later.

If this works at all, I consider these version limits (i.e. development status) 
as ugly, because the plugin seems more or less in limbo.

Does anyone know of a solution for SnapCenter that works, is actively supported 
and can be used with current versions of PostgreSQL (i.e. 11.x, 12.x and soon 
to come 13.x)?

Any idea would be greatly appreciated. Thanks in advance.

Cheers,
Paul



Re: i am getting issue as FATAL: password authentication failed for user ' postgres' tried

2020-06-18 Thread o1bigtenor
On Wed, Jun 17, 2020 at 9:14 PM prachi surangalikar
 wrote:
>
> hello team,
> I have tried everything but still I could not find the solution to this 
> problem.
> I made changes in the pg_hba.conf file also; please help me to solve this 
> problem.

As  a noob to programming and under the hood stuff I have found myself
with this
issue a few times. My method to 'fix' this problem is likely not
sanctioned but as I'm
working on my system on my stuff - - - oh well it works even if its NOT elegant!

When my password for postgres doesn't work.
1. I try again a couple times
2. still not working I go back into the pg_hba.conf file and comment out
the 'new' part where postgresql has moved from no password to password
3. next step is to stop and restart postgresql
4. now you log into postgresql and give postgres a password
(I would suggest writing it down - - - - you need to remember it!!
 One technique I use is to have a password that I can refer to in some way
 where I don't write out the password. Say you might use something based
 on a specific bit of text. You write down the directions to that
text but not
 the actual password - - - the directions should be enough of a tip so that
 you can remember the actual password!)
5. now you again modify your pg_hba.conf file to require a password
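
A hedged sketch of the SQL half of that dance (the password is obviously a
placeholder; the pg_hba.conf editing itself happens outside psql):

-- after temporarily switching the local pg_hba.conf rule to "trust" and restarting:
ALTER USER postgres WITH PASSWORD 'pick-something-you-can-remember';
-- then restore the original password-requiring pg_hba.conf line and restart again.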

I will admit to having occasionally needing to do this a few times in
fairly quick
succession. Using this method (I don't remove the lines from the pg_hba.conf
file I just comment them out) has reduced my stress levels at least a little.
This may not be the most secure way of doing things but if your
cracker has access
to your machine to review conf files - - - - I'd say you have some large issues
happening already.

Regards




rejecting "interactive" connections

2020-06-18 Thread Luca Ferrari
Hi all,
I'm just wondering if there is a way to discriminate a connection
opened from an application server from one opened by an interactive
client (e.g., psql) using the same credentials: the aim is to reject
connections established from an interactive client rather than an
application server.
I know there is application_name as a property, but I don't think it's
usable for this aim, nor that a filter at the hba level exists.
Anyone have an idea?

Thanks,
Luca




RE: Netapp SnapCenter

2020-06-18 Thread Wolff, Ken L
We got the following from one of our NetApp contacts regarding a similar 
question.  I'd be interested to hear if anyone feels the steps mentioned are 
not accurate:



There is presently no NetApp supported plugin for PostgreSQL for SnapCenter.  
There is a community support plugin for PostgreSQL, which I have no experience 
with, and it hasn't been updated since 2018, so not sure if it's worth pursuing 
if it's not actively being maintained.



Documentation for backup and recovery of PostgreSQL is generally missing.  It's 
not a SnapCenter issue, it's PostgreSQL itself.  It also doesn't need a lot of 
documentation because backups are easy.  PostgreSQL doesn't need to be in a 
special mode for backups to work.



  1.  Place data files on volume 1
  2.  Place Logs on volume 2
  3.  Schedule ONTAP snapshots on volume 1 at 00:05
  4.  Schedule ONTAP snapshots on volume 2 at 00:10



There you go.  All you really need is a data file snapshot before the log 
snapshot.  During recovery, PostgreSQL will figure out what logs are required 
to make the database consistent.





Ken



-----Original Message-----
From: Paul Förster 
Sent: Thursday, June 18, 2020 4:56 AM
To: pgsql-general 
Subject: EXTERNAL: Netapp SnapCenter

Hi,

first of all, it wasn't my glorious idea and I know that volume snapshots are
not backups. :-)

Leaving that aside: Our company has decided to move away from Netapp
SnapCreator to its SnapCenter. I was told there is some community plugin for
PostgreSQL to use with SnapCenter but can't find any information that makes
sense.

The only thing I found is:

https://automationstore.netapp.com/snappack-listNC.shtml

But it's from Oct. 3rd 2018, for PostgreSQL 9.x and 10.x and also requires JDK
1.8 or later.

If this works at all, I consider these version limits (i.e. development status)
as ugly, because the plugin seems more or less in limbo.

Does anyone know of a solution for SnapCenter that works, is actively supported
and can be used with current versions of PostgreSQL (i.e. 11.x, 12.x and soon
to come 13.x)?

Any idea would be greatly appreciated. Thanks in advance.

Cheers,

Paul




Re: Netapp SnapCenter

2020-06-18 Thread Paul Förster
Hi Ken,

> On 18. Jun, 2020, at 15:56, Wolff, Ken L  wrote:
> PostgreSQL doesn’t need to be in a special mode for backups to work.

this is curious. Doesn't the PostgreSQL cluster need to be set to backup mode 
to use SnapCenter?

The problem is, one can't test that and get a reliable answer, because you do 
100 backups (snaps) for testing and restore those 100 backups, and all may be 
ok.

But what about the 1000th snap? Suppose that just in the millisecond of the snap, 
some important information is written to the volume on which the PostgreSQL cluster 
resides, and for some reason that snapshot needs to be restored later; this information 
is then lost or may lead to corruption. This is why backup mode exists, after all. 
Really, no backup mode via a pre/post script when snapping?

This is very surprising to me.

Cheers,
Paul





Re: Netapp SnapCenter

2020-06-18 Thread Magnus Hagander
On Thu, Jun 18, 2020 at 4:07 PM Paul Förster 
wrote:

> Hi Ken,
>
> > On 18. Jun, 2020, at 15:56, Wolff, Ken L  wrote:
> > PostgreSQL doesn’t need to be in a special mode for backups to work.
>
> this is curious. Doesn't the PostgreSQL cluster need to be set to backup
> mode to use SnapCenter?
>

I don't know specifically about SnapCenter, but for snapshots in general,
it does require backup mode *unless* all your data is on the same disk and
you have an atomic snapshot across that disk (in theory it can be on
different disk as well, as long as the snapshots in that case are atomic
across *all* those disks, not just individually, but that is unusual).

So the upthread suggestion of putting data and wal on different disk and
snapshoting them at different times is *NOT* safe. Unless the reference to
the directory for the logs means a directory where log files are copied out
with archive_command, and it's actually the log archive (in which case it
will work, but the recommendation is that the log archive should not be on
the same machine).


The problem is, one can't test that and get a reliable answer, because you
> do 100 backups (snaps) for testing and restore those 100 backups, and all
> may be ok.
>

> But what about the 1000th snap? Just in that millisecond of the snap, some
> important information is written to the volume on which the PostgreSQL
> cluster resides and for some reason, it needs to be restored later and this
> information is lost or may lead to corruption. This is why backup mode
> exists, after all. Really, no backup mode by a pre/post script when
> snapping?
>

The normal case is that snapshots are guaranteed to be atomic, and thus
this millisecond window cannot appear. But that usually only applies across
individual volumes. It's also worth noticing that taking the backups
without using the backup mode and archive_command means you cannot use them
for PITR, only for restore onto that specific snapshot.

While our documentation on backups in general definitely needs improvement,
this particular requirement is documented at
https://www.postgresql.org/docs/current/backup-file.html.
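
For reference, the archive_command setup referred to above looks roughly like
this (the copy target is only a placeholder; ALTER SYSTEM is just one way to set it):

ALTER SYSTEM SET archive_mode = on;   -- needs a server restart to take effect
ALTER SYSTEM SET archive_command = 'cp %p /path/to/archive_volume/%f';
SELECT pg_reload_conf();              -- archive_command itself is reloadable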

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: ESQL/C no indicator variables ./. error -213

2020-06-18 Thread Laurenz Albe
On Wed, 2020-06-17 at 19:02 +0200, Matthias Apitz wrote:
> We encountered that if our ESQL/C written servers see on SELECT or FETCH
> in a row a NULL value, it will raise correctly the error -213 as written
> and explained in 
> https://www.postgresql.org/docs/11/ecpg-variables.html#ECPG-INDICATORS
> We catch this error -213 and deal with it.
> 
> What we did not know and discovered today is something very fatal: in
> such a situation, on a FETCH of a row of some 55 columns, the transfer of
> the column elements into their hostvariables stops at the first NULL value,
> as can be seen here in the log:
> 
> [29217] [17.06.2020 15:49:16:499]: ecpg_execute on line 69: query: select * 
> from acq_ffleit where bnr = $1 ; with 1 parameter(s) on connection sisis
> [29217] [17.06.2020 15:49:16:500]: ecpg_execute on line 69: using PQexecParams
> [29217] [17.06.2020 15:49:16:500]: ecpg_free_params on line 69: parameter 1 = 
> 742
> [29217] [17.06.2020 15:49:16:500]: ecpg_process_output on line 69: correctly 
> got 1 tuples with 55 fields
> [29217] [17.06.2020 15:49:16:500]: ecpg_get_data on line 69: RESULT: 742 
> offset: 752; array: no
> ...
> [29217] [17.06.2020 15:49:16:500]: ecpg_get_data on line 69: RESULT: 49 
> offset: 752; array: no
> [29217] [17.06.2020 15:49:16:500]: ecpg_get_data on line 69: RESULT: offset: 
> 752; array: no
> [29217] [17.06.2020 15:49:16:500]: raising sqlcode -213 on line 69: null 
> value without indicator on line 69
> 
> In the above examples the transfer stopped after 47 RESULTs; the
> hostvariables after this have been untouched in our program, i.e. only
> part of the row has been read.
> 
> One could say, "so what, you have error -213 and you deserve it". But at
> least this behaviour should be documented clearly in the above mentioned page.
> 
> I would expect, that NULL values would not be transfered in such case
> but all other hostvariables yes, i.e. for me this is perhaps a bug in the
> ESQL/C implementation.

I think that is as expected.

The documentation says:

  To be able to pass null values to the database or retrieve null values from 
the database,
  you need to append a second host variable specification to each host variable 
that contains data.
  This second host variable is called the *indicator* [...]

Failure to do that causes an error (which you catch).
If a statement causes an error, you cannot rely on the state of any host variable
that gets set by that statement.
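
For the test case upthread, a hedged sketch of what that looks like (the *_ind
names are hypothetical indicator host variables, declared as "short" in the
ESQL/C declare section):

EXEC SQL SELECT tstchar25, tstint
         INTO :tstchar25 :tstchar25_ind, :tstint :tstint_ind
         FROM dbctest WHERE tstint = 1;

After the statement, tstchar25_ind is set to -1 when the fetched tstchar25 is NULL,
instead of sqlcode -213 being raised.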

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Parallel safety of contrib extensions

2020-06-18 Thread Winfield, Steven
> This would not be a complicated change as it requires creating a new
> version script for those modules.  You can look at commit 20eb273 as
> an example, and the work is even simpler now that we just need update
> scripts when bumping a module's version (those named foo--1.0--1.1.sql
> and not foo--1.1.sql).  Here is the related documentation:
> https://www.postgresql.org/docs/devel/extend-extensions.html#id-1.8.3.20.15

Many thanks for the pointers - I've submitted a patch.

Steven.



Re: Conflict with recovery on PG version 11.6

2020-06-18 Thread Laurenz Albe
Toomas Kristin wrote:
> > > Basically after upgrade to version 11.5 from 10.6 I experience error 
> > > messages on streaming
> > > replica host “FATAL:  terminating connection due to conflict with 
> > > recovery” and
> > > “ERROR: canceling statement due to conflict with recovery”. There is no 
> > > changes for
> > > vacuuming on master nor max_standby_streaming_delay for replica. I tried 
> > > to correlate
> > > errors with vacuuming process on master but according to logs there is no 
> > > link between
> > > them. Somehow I have feeling that when query runs longer than value for 
> > > parameter
> > > max_standby_streaming_delay the query will be terminated regardless 
> > > vacuuming process on master.
> > > 
> > > Is there any changes on version 11.5 what may cause it?
> > > 
> > > Is there any good solution without setting max_standby_streaming_delay=-1 
> > > or enabling hot_standby_feedback?
> > 
> > The basic behavior shouldn't have changed since v10.
> > 
> > Check "pg_stat_database_conflicts" to see what kinds of conflicts that are.
> > 
> > The only solutions to avoid queries being canceled due to replication 
> > conflicts are:
> > 
> > 1. avoid that such conflicts happen:
> >- set "hot_standby_feedback = on" on the standby and/or
> >  "vacuum_defer_cleanup_age" on the primary to avoid VACUUM conflicts
> >- Don't lock tables in access exclusive mode
> > 
> > 2. set "max_standby_streaming_delay" to -1
> > 
> > Note that it can be quite hard to completely avoid replication conflicts.
> > Trying to have both no delay in applying changes and no cancelled queries
> > is often not possible without seriously crippling autovacuum.
>
> What are reasons for conflicts? Based on documentation seems that the only 
> reason can be that vacuum removed
> unused tuples that are in use at standby host and due to that standby host 
> cannot apply
> modifications while blocking query either finishes or will be terminated. 
> isnt it? Or there can be some other reasons?

There can be other reasons:

- replicated ACCESS EXCLUSIVE locks that conflict with queries
- replicated ACCESS EXCLUSIVE locks that cause deadlocks
- buffer pins that are needed for replication but held by a query
- dropped tablespaces that hold temporary files on the standby

> I just wondering what would be impact when I increase value for 
> autovacuum_vacuum_scale_factor
> in order force vacuuming process postpone the clean up process.

That won't help, it will just get your primary bloated.

I told you the remedies above, why don't you like them?
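
If it helps, a hedged sketch of the remedies quoted above, using ALTER SYSTEM
(the values are only examples):

ALTER SYSTEM SET hot_standby_feedback = on;           -- on the standby
ALTER SYSTEM SET vacuum_defer_cleanup_age = 100000;   -- on the primary
ALTER SYSTEM SET max_standby_streaming_delay = -1;    -- on the standby, if apply delay is acceptable
SELECT pg_reload_conf();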

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Conflict with recovery on PG version 11.6

2020-06-18 Thread Laurenz Albe
On Thu, 2020-06-18 at 10:09 +0300, Toomas Kristin wrote:
> What is difference between following error messages?
> 
> ERROR:  canceling statement due to conflict with recovery
> FATAL:  terminating connection due to conflict with recovery

One kills your database session, the other doesn't.

> I tried to correlate process termination on standby and vacuuming on master.
> 
> Here is a sample timeline
> 1) 2020-06-17 22:45:42 - Last vacuuming before session
> 2) 2020-06-18 01:01:12 - Session was started at standby
> 3) 2020-06-18 01:27:53 - Session was terminated at standby
> 4) 2020-06-18 02:59:32 - Next vacuuming
> 
> Based on that I would say there is no correlation between vacuuming and 
> process termination.
> 
> How can I identify where is the conflict that stops replication and forces to 
> terminate a query on standby host?

It need not be caused by VACUUM; look which counter in
"pg_stat_database_conflicts" has increased.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Hiding a GUC from SQL

2020-06-18 Thread Laurenz Albe
On Wed, 2020-06-17 at 13:23 -0700, Michel Pelletier wrote:
> In my extension pgsodium I'm defining a custom variable at startup to store a 
> key:
> 
> https://github.com/michelp/pgsodium/blob/master/src/pgsodium.c#L1107
> 
> I'm using the flags GUC_NO_SHOW_ALL | GUC_NO_RESET_ALL | GUC_NOT_IN_SAMPLE | 
> GUC_DISALLOW_IN_FILE,
> and a custom "no show" show hook that obscures the value.  This idea was 
> inspired from the
> pgcryptokey module from Bruce Momjian.
> 
> The value cannot be shown either with SHOW or current_setting() and it does 
> not appear in pg_settings.
>  From what I can tell, the value is inaccessible from SQL, but I think it's 
> worth asking
> the experts if there is some other demonstrable way, from SQL, that this 
> value could be
> leaked even to a superuser.  no sql level user should be able to see this 
> value, only a C function,
> like the pgsodium_derive() from which to derive other keys, should be able to 
> see it.
> I realize that someone with external process access can get the key, my  goal 
> is to prevent
> accessing it from SQL.
> 
> Any thoughts on weaknesses to this approach would be welcome.  Thanks!
> 
> -Michel

A superuser can access files and start programs on the server machine.

A dedicated superuser may for example attach to PostgreSQL with a debugger
and read the value of the variable.

And if that doesn't work, there may be other things to try.

It is mostly useless to try to keep a superuser from doing anything that
the "postgres" operating system user can do.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: autovacuum failing on pg_largeobject and disk usage of the pg_largeobject growing unchecked

2020-06-18 Thread Laurenz Albe
On Wed, 2020-06-17 at 14:26 -0400, Jim Hurne wrote:
> On one of the instances that is exhibiting the "disk leak" behavior, the 
> VACUUM ANALYZE VERBOSE command doesn't generate any output or complete 
> before I lose the connection to the database (presumably because I hit a 
> connection read timeout). Is it possible to configure the read timeout for 
> psql?

I have never heard about a connection read timeout for "psql".

I'd look into the server log; perhaps there is an error that indicates
data corruption that crashes the server?

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: ESQL/C no indicator variables ./. error -213

2020-06-18 Thread Tom Lane
Matthias Apitz  writes:
> On Wednesday, June 17, 2020 at 01:39:53 -0400, Tom Lane wrote:
>> Could you provide a self-contained test case for this?  It's hard to
>> guess at what the problem might be.

> attached is a simple ESQL/C code;
> ...
> [6485] [18.06.2020 08:26:38:436]: ecpg_execute on line 36: query: select 
> tstchar25 , tstint from dbctest where tstint = 1; with 0 parameter(s) on 
> connection sisis
> [6485] [18.06.2020 08:26:38:437]: ecpg_execute on line 36: using PQexec
> [6485] [18.06.2020 08:26:38:437]: ecpg_process_output on line 36: correctly 
> got 1 tuples with 2 fields
> [6485] [18.06.2020 08:26:38:437]: ecpg_get_data on line 36: RESULT:  offset: 
> 80; array: no
> [6485] [18.06.2020 08:26:38:437]: raising sqlcode -213 on line 36: null value 
> without indicator on line 36
> [6485] [18.06.2020 08:26:38:438]: ecpg_finish: connection sisis closed

Hmm.  Trying this here, I get

$ ./embedded
hostvariable 'tstint' before SELECT tstchar25, tstint INTO :tstchar25, :tstint 
FROM dbctest: 99
hostvariable 'tstint' after  SELECT tstchar25, tstint INTO :tstchar25, :tstint 
FROM dbctest with -213: 1
...
[1549069]: ecpg_execute on line 38: query: select tstchar25 , tstint from dbctest where tstint = 1; with 0 parameter(s) on connection sisis
[1549069]: ecpg_execute on line 38: using PQexec
[1549069]: ecpg_process_output on line 38: correctly got 1 tuples with 2 fields
[1549069]: ecpg_get_data on line 38: RESULT:   offset: 80; array: no
[1549069]: ecpg_get_data on line 38: RESULT: 1 offset: 4; array: no
[1549069]: ecpg_finish: connection sisis closed


which looks like the right thing.  I don't pay much attention to ECPG
development, but it seems like you must be hitting a bug that's been
fixed.  Are your ecpg and libecpg up to date?

regards, tom lane




Re: create batch script to import into postgres tables

2020-06-18 Thread Pepe TD Vo
I have a PostgreSQL client installed and connected.  How can I create a batch 
script and run it from the client window?
Bach-Nga

No one in this world is pure and perfect.  If you avoid people for their 
mistakes you will be alone. So judge less, love, and forgive more.To call him a 
dog hardly seems to do him justice though in as much as he had four legs, a 
tail, and barked, I admit he was, to all outward appearances. But to those who 
knew him well, he was a perfect gentleman (Hermione Gingold)
**Live simply **Love generously **Care deeply **Speak kindly.*** Genuinely rich 
*** Faithful talent *** Sharing success

 

On Tuesday, June 16, 2020, 11:42:40 AM EDT, Pepe TD Vo  
wrote:  
 
 Yes, I do have putty installed but can't connect to the aws postgres instance. 
 Only work for oracle instance.  Only connect postgres instance using pgadmin.
follow the url and the login prompt for username and hung there.
thank you.
Bach-Nga


On Tuesday, June 16, 2020, 11:17:21 AM EDT, Christopher Browne 
 wrote:  
 
 
On Tue, 16 Jun 2020 at 10:59, Pepe TD Vo  wrote:

I can run \copy in Linux with an individual csv file into the table fine, and run an 
import using pgadmin into the AWS instance.  I am trying to run \copy for all csv files, 
importing each into its own table, in Linux and in the AWS instance. If all the csv files 
went into one table that would be fine, but each csv goes to its own table.  Should I 
create one batch job for each imported table?  If each batch file imports a csv into its 
table, would it be fine via \copy table_name(col1, col2, ... coln) from '/path/tablename.csv' 
delimiter ',' csv header;  right?

There is no single straightforward answer to that.
Supposing I want a batch to either all be processed, or to all not process, 
then I might write a sql file like:
begin;
\copy table_1 (c1, c2, c3) from '/path/tabledata1.csv' csv header;
\copy table_2 (c1, c2, c3) from '/path/tabledata2.csv' csv header;
\copy table_3 (c1, c2, c3) from '/path/tabledata3.csv' csv header;
commit;
But you may be fine with having a separate SQL script for each table.
There will be conditions where one or the other is more appropriate, and that 
will be based on the requirements of the process.


Also, the problem is I can't execute psql from the Windows client to reach the 
psql on the AWS instance, and I don't know how to create the batch script for this 
run.  I tried a simple \copy pull from c:\tes.csv and psql is unknown.


You cannot run psql without having it installed; there is a Windows installer 
for PostgreSQL, so you could use that to get it installed.
Hopefully there is an installer that will just install PostgreSQL client 
software (like psql, pg_dump, and notably *not* the database server software); 
I don't use Windows, so I am not too familiar with that.
 -- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"


Re: ESQL/C no indicator variables ./. error -213

2020-06-18 Thread Tom Lane
I wrote:
> Hmm.  Trying this here, I get
> ...

Oh, scratch that, I'd failed to duplicate the test case.  (I had an
empty string, not NULL, as the value of the tstchar25 column.)

I concur with Laurenz's diagnosis that you need to use an indicator
variable if you want to retrieve a NULL.

regards, tom lane




Re: create batch script to import into postgres tables

2020-06-18 Thread Adrian Klaver

On 6/18/20 8:20 AM, Pepe TD Vo wrote:

Please don't top post. The preferred style on this list is inline or 
bottom posting (https://en.wikipedia.org/wiki/Posting_style).


I have a Postgresql client installed and connected.  how can i create a 
batch script running from the client window?


Create a file with commands in it like the example from Christopher 
Browne that was posted earlier:


"There is no single straightforward answer to that.


Supposing I want a batch to either all be processed, or to all not process,
then I might write a sql file like:


begin;
\copy table_1 (c1, c2, c3) from '/path/tabledata1.csv' csv header;
\copy table_2 (c1, c2, c3) from '/path/tabledata2.csv' csv header;
\copy table_3 (c1, c2, c3) from '/path/tabledata3.csv' csv header;
commit;


But you may be fine with having a separate SQL script for each table.


There will be conditions where one or the other is more appropriate, and
that will be based on the requirements of the process."

Then point psql at it:

psql -d some_db -h some_host -U some_user -f the_file

Be aware that \copy is all or nothing. If there is a single failure in 
the copying, the whole copy will roll back. Given that, the one file per 
table approach might be preferable.




**
*Bach-Nga





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Hiding a GUC from SQL

2020-06-18 Thread Michel Pelletier
On Wed, Jun 17, 2020 at 3:55 PM Tom Lane  wrote:

> Michel Pelletier  writes:
> > In my extension pgsodium I'm defining a custom variable at startup to
> store
> > a key:
>
> > https://github.com/michelp/pgsodium/blob/master/src/pgsodium.c#L1107
>
> > I'm using the flags GUC_NO_SHOW_ALL | GUC_NO_RESET_ALL |
> GUC_NOT_IN_SAMPLE
> > | GUC_DISALLOW_IN_FILE, and a custom "no show" show hook that obscures
> the
> > value.  This idea was inspired from the pgcryptokey module from Bruce
> > Momjian.
>
> I guess I'm wondering why you're making it a GUC at all, if you don't
> want any of the GUC facilities to apply.
>

An excellent point; as it's loaded pre-fork, I guess I don't need any of that
stuff.


> It seems like if you want to be this paranoid, you'd be better off
> not exposing the variable to the GUC machinery in the first place.
> You could use a custom set-function (like setseed) to replace the one
> bit of functionality you do want.
>

Thanks!  I've implemented your suggestion similar to how setseed stores its
data.


>
> regards, tom lane
>


Re: create batch script to import into postgres tables

2020-06-18 Thread Pepe TD Vo
I get the part about a separate SQL script to import each table (import.sql):

begin;
\copy table_1 (c1, c2, c3) from '/path/tabledata1.csv' csv header;
commit;

but when I open the psql SQL shell it prompts line by line for localhost, 
port, db, user, and password.  If I set up a script and let it run, it won't 
connect to the postgresql instance.  I want to know how to execute a batch 
script that connects to the database/instance.  In Oracle I created a shell script 
with all of oracle_sid, oracle_home, etc. and read the function/procedure... For psql, 
especially from the Windows client, I did set psql_home and tried to connect to the 
instance, and it failed:

c:\Progra~1\PostgreSQL\11\bin\psql -d PSCIDR -h localhost -p 5432 -U postgres 
-i import.sql
even if I do a simple count:
c:\Progra~1\PostgreSQL\11\bin\psql -d PSCIDR -h localhost -p 5432 -U postgres 
-c "select count(*) from tableA";

None of them works. I am trying to learn how to execute the script.
Bach-Nga


On Thursday, June 18, 2020, 12:08:44 PM EDT, Adrian Klaver 
 wrote:  
 
 On 6/18/20 8:20 AM, Pepe TD Vo wrote:

Please don't top post. The preferred style on this list is inline or 
bottom posting(https://en.wikipedia.org/wiki/Posting_style).

> I have a Postgresql client installed and connected.  how can i create a 
> batch script running from the client window?

Create a file with commands in it like the example from Christopher 
Browne that was posted earlier:

"There is no single straightforward answer to that.


Supposing I want a batch to either all be processed, or to all not process,
then I might write a sql file like:


begin;
\copy table_1 (c1, c2, c3) from '/path/tabledata1.csv' csv header;
\copy table_2 (c1, c2, c3) from '/path/tabledata2.csv' csv header;
\copy table_3 (c1, c2, c3) from '/path/tabledata3.csv' csv header;
commit;


But you may be fine with having a separate SQL script for each table.


There will be conditions where one or the other is more appropriate, and
that will be based on the requirements of the process."

Then point psql at it:

psql -d some_db -h some_host -U some_user -f the_file

Be aware that \copy is all or nothing. If there is a single failure in 
the copying the whole copy will rollback. Given that the one file per 
table might be preferable.

> 
> **
> *Bach-Nga
> 



-- 
Adrian Klaver
adrian.kla...@aklaver.com


  

Re: Netapp SnapCenter

2020-06-18 Thread Paul Förster
Hi Magnus,

> On 18. Jun, 2020, at 16:19, Magnus Hagander  wrote:
> I don't know specifically about SnapCenter, but for snapshots in general, it 
> does require backup mode *unless* all your data is on the same disk and you 
> have an atomic snapshot across that disk (in theory it can be on different 
> disk as well, as long as the snapshots in that case are atomic across *all* 
> those disks, not just individually, but that is unusual).

according to what I know from our storage guys, Netapp does atomic snapshots 
for each volume. We have the database and its corresponding WAL files (pg_wal 
directory) on the same volume and the archived WALs (archive_command) on 
another. And the snapshots on those two volumes are not taken at the same time. 
Currently, the database is set to backup mode (using cron) and the storage guys 
have a window during which they can take the snapshots.

> So the upthread suggestion of putting data and wal on different disk and 
> snapshoting them at different times is *NOT* safe. Unless the reference to 
> the directory for the logs means a directory where log files are copied out 
> with archive_command, and it's actually the log archive (in which case it 
> will work, but the recommendation is that the log archive should not be on 
> the same machine).

as I said above, pg_wal is a directory in PGDATA at the default location and 
WALs are archived using the archive_command to a different volume. So I guess, 
we should be safe then.

> The normal case is that snapshots are guaranteed to be atomic, and thus this 
> millisecond window cannot appear. But that usually only applies across 
> individual volumes. It's also worth noticing that taking the backups without 
> using the backup mode and archive_command means you cannot use them for PITR, 
> only for restore onto that specific snapshot.
> 
> While our documentation on backups in general definitely needs improvement, 
> this particular requirement is documented at 
> https://www.postgresql.org/docs/current/backup-file.html.

since we use backup mode, we should be good with PITR too. I didn't test that 
myself, but a workmate did and he said it worked nicely.

So bottom line, SnapCenter means for PostgreSQL just a plain volume snapshot 
like we currently do with SnapCreator, using backup mode scripts and 
"archive_command"-ing WALs to a diffent volume, i.e. no change in strategy. 
It's just that SnapCenter can't control backup mode as it can with Oracle.

Thanks for the invaluable input. Also, thanks Ken.

Cheers,
Paul





Re: Hiding a GUC from SQL

2020-06-18 Thread Michel Pelletier
On Thu, Jun 18, 2020 at 7:47 AM Laurenz Albe 
wrote:

> On Wed, 2020-06-17 at 13:23 -0700, Michel Pelletier wrote:
> >
> > Any thoughts on weaknesses to this approach would be welcome.  Thanks!
>

> A superuser can access files and start programs on the server machine.
>

> A dedicated superuser may for example attach to PostgreSQL with a debugger
> and read the value of the variable.
>

Preventing access from regular users is pretty solid I believe, but you're
right superusers is going to be a real challenge.  It is discouraging that
just about every postgres deployment I've inherited over the years has some
web user interacting process logging in as a superuser.  There are
seemingly infinite web frameworks that do this out of the box like it's a
feature.  If the database is coupled to the entire web via a superuser web
client then I consider that a compromised system already and doing
something like crypto, or banking, or PII is insane.

So, that being said I'm assuming some level of reasonable when I want to
avoid access for superusers.  This is mainly to avoid mistakes that a
superuser could make, like accidentally leaking a key.  For that
definition of "reasonable" I guess there are at least two risks here, one
is accessing process state by invoking inspection tools like debuggers, and
another is running the getkey script like `COPY foo FROM PROGRAM
'/usr/share/postgresql/13/extension/pgsodium_getkey';`

For the first situation I can't think of any mitigation other than
documenting and recommending that things like debuggers not be installed on
systems that do crypto (or banking, or PII, etc).

For the second situation there are a couple mitigations at the expense of
some annoyance like getkey programs that prompt for the key on boot from an
interactive console. For non-interactive getkeys  I'm considering an
optional mode where the getkey program is deleted by the extension
initialization after one use.  The key fetcher program must be placed in
the right dir on every server start by some external process.


> It is mostly useless to try to keep a superuser from doing anything that
> the "postgres" operating system user can do.
>

Agreed, thanks for your suggestions!

-Michel


>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>


Re: Importing a Large .ndjson file

2020-06-18 Thread Sankar P
> > It looks like plain old COPY would do this just fine, along the lines
> > of (in psql)
> >
> > \copy myTable(content) from 'myfile.ndjson'

I spoke too soon. While this worked fine when there were no indexes
and finished within 10 minutes, with GIN index on the jsonb column, it
is taking hours and still not completing.

-- 
Sankar P
http://psankar.blogspot.com




Re: Importing a Large .ndjson file

2020-06-18 Thread Michael Lewis
>
> I spoke too soon. While this worked fine when there were no indexes
> and finished within 10 minutes, with GIN index on the jsonb column, it
> is taking hours and still not completing.
>

It is always recommended to create indexes AFTER loading data. Sometimes it
can be faster to drop all indexes on the table, load huge data, and
re-create the indexes but there's no hard & fast rule. If you are adding
100k records to an empty or near empty table, I would remove all indexes
and create them after. Be sure you have sufficient maintenance_work_mem
also.
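
A rough sketch of that pattern for the table described upthread (table, column
and file names are taken from the earlier messages; the memory setting is only
an example):

DROP INDEX IF EXISTS idx_records;
SET maintenance_work_mem = '1GB';
\copy t(record) from 'myfile.ndjson'
CREATE INDEX idx_records ON t USING GIN (record);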


Re: Importing a Large .ndjson file

2020-06-18 Thread Tom Lane
Sankar P  writes:
> I spoke too soon. While this worked fine when there were no indexes
> and finished within 10 minutes, with GIN index on the jsonb column, it
> is taking hours and still not completing.

There's the generic advice that building an index after-the-fact
is often cheaper than updating it incrementally.  For GIN indexes
specifically, see also

https://www.postgresql.org/docs/current/gin-tips.html
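
In that vein, a hedged sketch following the tips page: raise maintenance_work_mem
for the build, and optionally turn fastupdate off if the pending-list behaviour is
not wanted afterwards:

SET maintenance_work_mem = '1GB';
CREATE INDEX idx_records ON t USING GIN (record) WITH (fastupdate = off);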

regards, tom lane




Re: create batch script to import into postgres tables

2020-06-18 Thread Adrian Klaver

On 6/18/20 9:40 AM, Pepe TD Vo wrote:

I get this part that separates SQL script for import each table,

(import.sql)
begin;
\copy table_1 (c1, c2, c3) from '/path/tabledata1.csv' csv header;
commit;

but when open the psql sql shell script it prompts line by line for 
localhost, port, db, user, and password.  If I set up a script and let 
it run it won't connect to the postgresql instance.  I want to know how 
to execute a batch script connect to the database/instance.
In oracle I created a shell script with all oracle_sid, oracle_home, and 
read the function/procedure... for psql, especially from window client, 
I did put psql_home and connect to the instance, it failed


c:\Progra~1\PostgreSQL\11\bin\psql -d PSCIDR -h localhost -p 5432 -U 
postgres -i import.sql


The above should be -f import.sql. AFAIK there is no -i for psql, so 
that should be failing.




even I do a simple count

c:\Progra~1\PostgreSQL\11\bin\psql -d PSCIDR -h localhost -p 5432 -U 
postgres -c "select count(*) from tableA";


psql -d production  -U postgres -c 'select count(*) from cell_per';
Null display is "NULL".
 count
---
68
(1 row)

psql -d production  -U postgres -c 'select count(*) from cell_per'
Null display is "NULL".
 count
---
68

psql -d production  -U postgres -c 'select count(*) from cell_per;'
Null display is "NULL".
 count
---
68




none of them is work. Try to learn how to execute its script.


What error messages do you get?



Bach-Nga



On Thursday, June 18, 2020, 12:08:44 PM EDT, Adrian Klaver 
 wrote:



On 6/18/20 8:20 AM, Pepe TD Vo wrote:

Please don't top post. The preferred style on this list is inline or
bottom posting(https://en.wikipedia.org/wiki/Posting_style).

 > I have a Postgresql client installed and connected.  how can i create a
 > batch script running from the client window?

Create a file with commands in it like the example from Christopher
Browne that was posted earlier:

"There is no single straightforward answer to that.


Supposing I want a batch to either all be processed, or to all not process,
then I might write a sql file like:


begin;
\copy table_1 (c1, c2, c3) from '/path/tabledata1.csv' csv header;
\copy table_2 (c1, c2, c3) from '/path/tabledata2.csv' csv header;
\copy table_3 (c1, c2, c3) from '/path/tabledata3.csv' csv header;
commit;


But you may be fine with having a separate SQL script for each table.


There will be conditions where one or the other is more appropriate, and
that will be based on the requirements of the process."

Then point psql at it:

psql -d some_db -h some_host -U some_user -f the_file

Be aware that \copy is all or nothing. If there is a single failure in
the copying the whole copy will rollback. Given that the one file per
table might be preferable.


 >
 > **
 > *Bach-Nga

 >



--
Adrian Klaver
adrian.kla...@aklaver.com 






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Netapp SnapCenter

2020-06-18 Thread Stephen Frost
Greetings,

* Paul Förster (paul.foers...@gmail.com) wrote:
> > On 18. Jun, 2020, at 16:19, Magnus Hagander  wrote:
> > I don't know specifically about SnapCenter, but for snapshots in general, 
> > it does require backup mode *unless* all your data is on the same disk and 
> > you have an atomic snapshot across that disk (in theory it can be on 
> > different disk as well, as long as the snapshots in that case are atomic 
> > across *all* those disks, not just individually, but that is unusual).
> 
> according to what I know from our storage guys, Netapp does atomic snapshots 
> for each volume. We have the database and its corresponding WAL files (pg_wal 
> directory) on the same volume and the archived WALs (archive_command) on 
> another. And the snapshots on those two volumes are not taken at the same 
> time. Currently, the database is set to backup mode (using cron) and the 
> storage guys have a window during which they can take the snapshots.

If the entire database, all tablespaces, and pg_wal, are on the same
volume and the snapshot of the volume is atomic, then you don't actually
need to go through the start/stop backup: a snapshot being restored will
look just like a system crash and PG will just go back to the last
checkpoint and replay the WAL that's in pg_wal and it should reach
consistency and come up.

> > So the upthread suggestion of putting data and wal on different disk and 
> > snapshoting them at different times is *NOT* safe. Unless the reference to 
> > the directory for the logs means a directory where log files are copied out 
> > with archive_command, and it's actually the log archive (in which case it 
> > will work, but the recommendation is that the log archive should not be on 
> > the same machine).
> 
> as I said above, pg_wal is a directory in PGDATA at the default location and 
> WALs are archived using the archive_command to a different volume. So I 
> guess, we should be safe then.

Yes, that should be alright.

> > The normal case is that snapshots are guaranteed to be atomic, and thus 
> > this millisecond window cannot appear. But that usually only applies across 
> > individual volumes. It's also worth noticing that taking the backups 
> > without using the backup mode and archive_command means you cannot use them 
> > for PITR, only for restore onto that specific snapshot.
> > 
> > While our documentation on backups in general definitely needs improvement, 
> > this particular requirement is documented at 
> > https://www.postgresql.org/docs/current/backup-file.html.
> 
> since we use backup mode, we should be good with PITR too. I didn't test that 
> myself, but a workmate did and he said it worked nicely.
> 
> So bottom line, SnapCenter means for PostgreSQL just a plain volume snapshot 
> like we currently do with SnapCreator, using backup mode scripts and 
> "archive_command"-ing WALs to a diffent volume, i.e. no change in strategy. 
> It's just that SnapCenter can't control backup mode as it can with Oracle.

The one issue here is that if you're using the deprecated exclusive
backup API, then PG will create a backup_label file in the data
directory.  If the system reboots while that file exists, there's a good
chance that PG won't start up cleanly since, due to the file existing,
it thinks that it's restoring from a backup when it isn't.

Of course, if you're actually restoring from a backup, then that file is
absolutely critical to have in place, otherwise PG won't realize it's
being restored from a backup and you'll end up with a corrupted database
on restore.

Better is to use the newer non-exclusive API and arrange to collect the
necessary contents of the backup_label file from PG and store that with
the snapshot that you've taken.
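
(As an editorial aside, a minimal sketch of that non-exclusive flow on the
versions current in this thread; the label is arbitrary and this assumes a
role that is allowed to call the backup functions:)

-- the session issuing these calls must stay connected the whole time
SELECT pg_start_backup('snapshot-backup', false, false);
-- ... trigger the volume snapshot here ...
-- labelfile is the backup_label content: store it next to the snapshot and
-- write it back into PGDATA as backup_label only when restoring
SELECT labelfile, spcmapfile FROM pg_stop_backup(false);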

Thanks,

Stephen




Re: Netapp SnapCenter

2020-06-18 Thread Ron

On 6/18/20 11:49 AM, Paul Förster wrote:

Hi Magnus,


On 18. Jun, 2020, at 16:19, Magnus Hagander  wrote:
I don't know specifically about SnapCenter, but for snapshots in general, it 
does require backup mode *unless* all your data is on the same disk and you 
have an atomic snapshot across that disk (in theory it can be on different disk 
as well, as long as the snapshots in that case are atomic across *all* those 
disks, not just individually, but that is unusual).

according to what I know from our storage guys, Netapp does atomic snapshots 
for each volume. We have the database and its corresponding WAL files (pg_wal 
directory) on the same volume and the archived WALs (archive_command) on 
another. And the snapshots on those two volumes are not taken at the same time. 
Currently, the database is set to backup mode (using cron) and the storage guys 
have a window during which they can take the snapshots.


So the upthread suggestion of putting data and wal on different disk and 
snapshoting them at different times is *NOT* safe. Unless the reference to the 
directory for the logs means a directory where log files are copied out with 
archive_command, and it's actually the log archive (in which case it will work, 
but the recommendation is that the log archive should not be on the same 
machine).

as I said above, pg_wal is a directory in PGDATA at the default location and 
WALs are archived using the archive_command to a different volume. So I guess, 
we should be safe then.


But it's trivial to make pg_xlog a separate mount point.

--
Angular momentum makes the world go 'round.




Re: create batch script to import into postgres tables

2020-06-18 Thread Pepe TD Vo
>>psql -d production  -U postgres -c 'select count(*) from cell_per';
Null display is "NULL".
  count
---
    68
(1 row)

you can do this once you are in psql. But if you are running it from a shell 
script, it will be an error.

>>What error messages do you get?

my shell script is:

@echo off
C:\Program Files\PostgreSQL\11\bin\psql -U PSmasteruser -d PSCIDR -h 
hostname.amazonaws.com -p 5432 -c "select count(*) from tableA;"
pause

the error I have is: 'C:\Program' is not recognized as an internal or external 
command, operable program or batch file.

I even surrounded the path in quotes because of the space:

@echo off
cmd /c ""C:\Program Files\PostgreSQL\11\bin\psql" -U PSmasteruser -d PSCIDR -h 
hostname.amazonaws.com -p 5432 -c "select count(*) from tableA;"
pause

the error is: 'C:\Program Files\PostgreSQL\11\bin\psql: illegal option -- u

when I put -P Password123 (or the fully qualified password=Password123) it 
gives me another error: "pset: unknown option: Password123" and "psql: could 
not set printing parameter "Password123"".

I can connect from the psql shell fine when it prompts for hostname and 
username (but connecting as postgres, not as PSmasteruser; the PSmasteruser 
account was set up for AWS maintenance of the postgres database by someone 
else, not me, and the password is the same on both).  Once I'm in psql I can 
switch to the PSCIDR instance fine and run select count(*) from tableA;


Bach-Nga

No one in this world is pure and perfect.  If you avoid people for their 
mistakes you will be alone. So judge less, love, and forgive more.To call him a 
dog hardly seems to do him justice though in as much as he had four legs, a 
tail, and barked, I admit he was, to all outward appearances. But to those who 
knew him well, he was a perfect gentleman (Hermione Gingold)
**Live simply **Love generously **Care deeply **Speak kindly.*** Genuinely rich 
*** Faithful talent *** Sharing success

 

On Thursday, June 18, 2020, 03:06:39 PM EDT, Adrian Klaver 
 wrote:  
 
 On 6/18/20 9:40 AM, Pepe TD Vo wrote:
> I get this part that separates SQL script for import each table,
> 
> (import.sql)
> begin;
> \copy table_1 (c1, c2, c3) from '/path/tabledata1.csv' csv header;
> commit;
> 
> but when open the psql sql shell script it prompts line by line for 
> localhost, port, db, user, and password.  If I set up a script and let 
> it run it won't connect to the postgresql instance.  I want to know how 
> to execute a batch script connect to the database/instance.
> In oracle I created a shell script with all oracle_sid, oracle_home, and 
> read the function/procedure... for psql, especially from window client, 
> I did put psql_home and connect to the instance, it failed
> 
> c:\Progra~1\PostgreSQL\11\bin\psql -d PSCIDR -h localhost -p 5432 -U 
> postgres -i import.sql

The above should be -f import.sql. AFAIK there is no -i for psql, so 
that should be failing.

> 
> even I do a simple count
> 
> c:\Progra~1\PostgreSQL\11\bin\psql -d PSCIDR -h localhost -p 5432 -U 
> postgres -c "select count(*) from tableA";

psql -d production  -U postgres -c 'select count(*) from cell_per';
Null display is "NULL".
  count
---
    68
(1 row)

psql -d production  -U postgres -c 'select count(*) from cell_per'
Null display is "NULL".
  count
---
    68

psql -d production  -U postgres -c 'select count(*) from cell_per;'
Null display is "NULL".
  count
---
    68


> 
> none of them is work. Try to learn how to execute its script.

What error messages do you get?

> 
> **
> *Bach-Nga
> 
> *No one in this world is pure and perfect.  If you avoid people for 
> their mistakes you will be alone. So judge less, love, and forgive 
> more.EmojiEmojiEmoji
> To call him a dog hardly seems to do him justice though in as much as he 
> had four legs, a tail, and barked, I admit he was, to all outward 
> appearances. But to those who knew him well, he was a perfect gentleman 
> (Hermione Gingold)
> 
> **Live simply **Love generously **Care deeply **Speak kindly.
> *** Genuinely rich *** Faithful talent *** Sharing success
> 
> 
> 
> 
> On Thursday, June 18, 2020, 12:08:44 PM EDT, Adrian Klaver 
>  wrote:
> 
> 
> On 6/18/20 8:20 AM, Pepe TD Vo wrote:
> 
> Please don't top post. The preferred style on this list is inline or
> bottom posting(https://en.wikipedia.org/wiki/Posting_style).
> 
>  > I have a Postgresql client installed and connected.  how can i create a
>  > batch script running from the client window?
> 
> Create a file with commands in it like the example from Christopher
> Browne that was posted earlier:
> 
> "There is no single straightforward answer to that.
> 
> 
> Supposing I want a batch to either all be processed, or to all not process,
> then I might write a sql file like:
> 
> 
> begin;
> \copy table_1 (c1, c2, c3) from '/path/tabledata1.csv' csv header;
> \copy table_2 (c1, c2, c3) from '/path/tabledata2.csv' csv header;
> \copy table_3 (c1, c2, c3) from '/path/tabledata3.csv' csv header;
> commit;
> 
> 
> But you may be fine with having a separate SQL script for each

Re: Conflict with recovery on PG version 11.6

2020-06-18 Thread Toomas Kristin
Hi,

> There can be other reasons:
> 
> - replicated ACCESS EXCLUSIVE locks that conflict with queries
> - replicated ACCESS EXCLUSIVE locks that cause deadlocks
> - buffer pins that are needed for replication but held by a query
> - dropped tablespaces that hold temporary files on the standby

Thank you for ideas what to verify.

> I told you the remedies above, why don't you like them?

Basically I want to achieve a situation where replication is not suspended (the 
lag is not more than 3 minutes) and statements on the standby are not 
terminated. Based on the collected information I don't see any connection 
between vacuuming on the master and the termination of statements on the 
standby. I can temporarily disable vacuuming in order to be 100% sure this is 
the case. And when I set max_standby_streaming_delay to either -1 or a very big 
number, it helps avoid query termination but doesn't help with the suspended 
replication. All worked with the same configuration on Postgres version 10.6; 
the issue started after the version upgrade.

This is the reason why I am very keen to find out the real cause of the conflict.
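
(As an editorial aside, a hedged sketch of two standby-side settings that
usually govern this tradeoff, not a recommendation from this thread:
hot_standby_feedback avoids snapshot conflicts at the cost of some bloat on
the primary, and max_standby_streaming_delay bounds how long replay waits for
a conflicting query:)

-- run on the standby; both settings only need a reload, not a restart
ALTER SYSTEM SET hot_standby_feedback = on;
ALTER SYSTEM SET max_standby_streaming_delay = '3min';
SELECT pg_reload_conf();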

BR,
Toomas





Re: create batch script to import into postgres tables

2020-06-18 Thread Adrian Klaver

On 6/18/20 12:54 PM, Pepe TD Vo wrote:

psql -d production  -U postgres -c 'select count(*) from cell_per';

Null display is "NULL".
   count
---
     68
(1 row)

you can do this once you are in psql. But if you are running from shell 
script, it will be an error


 >>What error messages do you get?

my shell script is:
@echo off
C:\Program Files\PostgreSQL\11\bin\psql -U PSmasteruser  -d PSCIDR -h 
hostname.amazonaws.com -p 5432 -c "select count(*) from tableA;"

pause

the error I have is 'C:\Program' is not recognized as an internal or 
external command, operable program or batch file.


I even surround the path in quotes because of space.
@echo off
cmd /c ""C:\Program Files\PostgreSQL\11\bin\psql" -U PSmasteruser -d 
PSCIDR -h hostname.amazonaws.com -p 5432 -c "select count(*) from tableA;"

pause


Not sure how quoting works in Windows shell scripts, but pretty sure the 
above has unbalanced quotes.





error is: 'C:\Program Files\PostgreSQL\11\bin\psql: illegal option -- u

when I put -P Password123 (or fully qualified password=Password123)


I would recommend spending some time here:

https://www.postgresql.org/docs/12/app-psql.html

to see what the options are. Hint: -P is not the option for password.
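
(As an editorial aside, a hedged sketch of a batch file that sidesteps both
problems seen in this thread - the unquoted path with a space, and passing the
password via a psql switch. Host, database and password are just the
placeholders already used above; a pgpass.conf file would be the less leaky
option:)

@echo off
rem psql reads the password from the PGPASSWORD environment variable
set PGPASSWORD=Password123
"C:\Program Files\PostgreSQL\11\bin\psql" -U PSmasteruser -d PSCIDR -h hostname.amazonaws.com -p 5432 -c "select count(*) from tableA;"
pause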



it gives me another error "pset: unknown option: Password123
psql: could not set printing parameter "Password123"


It is the option for pset as the error message says.



I can connect from psql shell fine when it prompt hostname, username 
(but connect to postgres not to PSmasteruser, PSmasteruser username set 
up for aws maintenance postgres database, someone did, not me and the 
password is same on both).  Once I'm in psql and I can change to PSCIDR 
instance fine and run select count(*) from tableA;




**
*Bach-Nga




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: create batch script to import into postgres tables

2020-06-18 Thread Pepe TD Vo
thank you for the link.  I did try it and it's still an error:

echo 'SELECT count(*) FROM tableA;' | C:\Program Files\PostgreSQL\11\bin\psql 
-U PSmasteruser -d PSCIDR -h hostname.amazonaws.com -p 5432

I can run 'psql -u postgres -d PSCIDR -p 5432' fine from the psql prompt (on a 
Linux server; still learning how to run it from the psql shell on Windows).  
Just still wondering how to connect directly to the PSCIDR instance from 
scripting on both AWS and Linux.  Otherwise I run it manually using pgAdmin.
v/r,
Bach-Nga

No one in this world is pure and perfect.  If you avoid people for their 
mistakes you will be alone. So judge less, love, and forgive more.To call him a 
dog hardly seems to do him justice though in as much as he had four legs, a 
tail, and barked, I admit he was, to all outward appearances. But to those who 
knew him well, he was a perfect gentleman (Hermione Gingold)
**Live simply **Love generously **Care deeply **Speak kindly.*** Genuinely rich 
*** Faithful talent *** Sharing success

 

On Thursday, June 18, 2020, 07:00:37 PM EDT, Adrian Klaver 
 wrote:  
 
 On 6/18/20 12:54 PM, Pepe TD Vo wrote:
>>>psql -d production  -U postgres -c 'select count(*) from cell_per';
> Null display is "NULL".
>    count
> ---
>      68
> (1 row)
> 
> you can do this once you are in psql. But if you are running from shell 
> script, it will be an error
> 
>  >>What error messages do you get?
> 
> my shell script is:
> @echo off
> C:\Program Files\PostgreSQL\11\bin\psql -U PSmasteruser  -d PSCIDR -h 
> hostname.amazonaws.com -p 5432 -c "select count(*) from tableA;"
> pause
> 
> the error I have is 'C:\Program' is not recognized as an internal or 
> external command, operable program or batch file.
> 
> I even surround the path in quotes because of space.
> @echo off
> cmd /c ""C:\Program Files\PostgreSQL\11\bin\psql" -U PSmasteruser -d 
> PSCIDR -h hostname.amazonaws.com -p 5432 -c "select count(*) from tableA;"
> pause

Not sure how quoting works in Windows shell scripts, but pretty sure the 
above has unbalanced quotes.


> 
> error is: 'C:\Program Files\PostgreSQL\11\bin\psql: illegal option -- u
> 
> when I put -P Password123 (or fully qualified password=Password123)

I would recommend spending some time here:

https://www.postgresql.org/docs/12/app-psql.html

to see what the options are. Hint: -P is not the option for password.

> 
> it gives me another error "pset: unknown option: Password123
> psql: could not set printing parameter "Password123"

It is the option for pset as the error message says.

> 
> I can connect from psql shell fine when it prompt hostname, username 
> (but connect to postgres not to PSmasteruser, PSmasteruser username set 
> up for aws maintenance postgres database, someone did, not me and the 
> password is same on both).  Once I'm in psql and I can change to PSCIDR 
> instance fine and run select count(*) from tableA;
> 
> 
> 
> **
> *Bach-Nga
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com
  

Re: create batch script to import into postgres tables

2020-06-18 Thread Adrian Klaver

On 6/18/20 4:37 PM, Pepe TD Vo wrote:

thank you for the link.  I did try it and it's still error

echo 'SELECT count(*) FROM tableA;' |
C:\Program Files\PostgreSQL\11\bin\psql -U PSmasteruser -d PSCIDR -h 
hostname.amazonaws.com -p 5432



This is getting old. The error is?


I can run 'psql -u postgres -d PSCIDR -p 5432' (on Linux server, still 
learning how to run it from psql shell in Window) fine from psql 
prompt.  Just still wonder how to connect directly to the instance 
PSCIDR from scripting in both aws and linux.  Otherwise manually run 
using pgAdmin.


v/r,

**
*Bach-Nga




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: create batch script to import into postgres tables

2020-06-18 Thread cgerard999
Remove the quotes around the echoed query, and quote the path to psql:
echo select count(*) from web_20200619; | "C:\Program 
Files\postgresql\11\bin\psql" -d *** -h *** -U ***
or, store your query into a text file and use 
psql -f query.sql


Sent from my mobile phone

> Le 19 juin 2020 à 02:00, Adrian Klaver  a écrit :
> 
> On 6/18/20 4:37 PM, Pepe TD Vo wrote:
>> thank you for the link.  I did try it and it's still error
>> echo 'SELECT count(*) FROM tableA;' |
>> C:\Program Files\PostgreSQL\11\bin\psql -U PSmasteruser -d PSCIDR -h 
>> hostname.amazonaws.com -p 5432
> 
> 
> This is getting old. The error is?
>> I can run 'psql -u postgres -d PSCIDR -p 5432' (on Linux server, still 
>> learning how to run it from psql shell in Window) fine from psql prompt.  
>> Just still wonder how to connect directly to the instance PSCIDR from 
>> scripting in both aws and linux.  Otherwise manually run using pgAdmin.
>> v/r,
>> **
>> *Bach-Nga
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com
> 
> 


Re: Parallel safety of contrib extensions

2020-06-18 Thread Michael Paquier
On Thu, Jun 18, 2020 at 02:26:04PM +, Winfield, Steven wrote:
> Many thanks for the pointers - I've submitted a patch.

Thanks Steve, I have noticed the patch.  For the sake of the archives,
it is here:
https://www.postgresql.org/message-id/AM5PR0901MB1587E47B1ACF23C6089DFCA3FD9B0%40AM5PR0901MB1587.eurprd09.prod.outlook.com
--
Michael




Re: Importing a Large .ndjson file

2020-06-18 Thread Sankar P
> Sankar P  writes:
> > I spoke too soon. While this worked fine when there were no indexes
> > and finished within 10 minutes, with GIN index on the jsonb column, it
> > is taking hours and still not completing.
>
> There's the generic advice that building an index after-the-fact
> is often cheaper than updating it incrementally.  For GIN indexes
> specifically, see also
>
> https://www.postgresql.org/docs/current/gin-tips.html
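
(As an editorial aside, a hedged illustration of that advice - the table and
column names are made up, since the thread doesn't give them: load first, then
build the GIN index in one pass with a roomier maintenance_work_mem for that
session:)

SET maintenance_work_mem = '1GB';  -- assumption: the machine has RAM to spare
-- \copy or COPY the .ndjson rows into docs(data) first, then:
CREATE INDEX docs_data_gin ON docs USING gin (data);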

Thanks guys.

-- 
Sankar P
http://psankar.blogspot.com




Re: Netapp SnapCenter

2020-06-18 Thread Paul Förster
Hi Stephen,

> On 18. Jun, 2020, at 21:26, Stephen Frost  wrote:
> If the entire database, all tablespaces, and pg_wal, are on the same
> volume and the snapshot of the volume is atomic, then you don't actually
> need to go through the start/stop backup- a snapshot being restored will
> look just like a system crash and PG will just go back to the last
> checkpoint and replay the WAL that's in pg_wal and it should reach
> consistency and come up.

we DID use tablespaces as an old habit (coming from Oracle :-( when everything 
PostgreSQL was new). When we realized what a bad idea that was, I reorganized 
them all away. So we don't use tablespaces anymore. Still, they used to be on 
the same volume, just a different directory. So even that would have been no 
problem in this particular case. :-)

Only Flyway doesn't like a DBA to reorg a tablespace if developers add a 
tablespace clause to their create table/index statements. But we made the 
developers remove those clauses.

> The one issue here is that if you're using the deprecated exxclusive
> backup API, then PG will create a backup_label file in the data
> directory.  If the system reboots while that file exists, there's a good
> chance that PG won't start up cleanly since, due to the file existing,
> it thinks that it's restoring from a backup when it isn't.

we don't. We use the non-exclusive backup mode according to:

25.3.3.1. Making A Non-Exclusive Low-Level Backup
https://www.postgresql.org/docs/current/continuous-archiving.html

But I must say that I'm really not a fan of being forced to keep the session 
open until the pg_stop_backup() call occurs. This really gave me a huge 
headache when writing the backup script. It's solved now, but I consider this 
very ugly.

When I wrote our backup mode script I read the deprecation note about the 
exclusive mode backup. This is why I decided to go with non-exclusive to be 
ready when exclusive backup mode is finally removed some day. Yet, I don't see 
the reason. Everything has to be consistent. So a non-exclusive backup mode 
makes absolutely no sense to me. Either the whole database cluster is in backup 
mode or it is not. There's nothing in between.

This is unlike Oracle where you can set single tablespaces to backup mode and 
then just backup them separately. Still, I never saw a use for this with Oracle 
too.

Cheers,
Paul



Re: Netapp SnapCenter

2020-06-18 Thread Paul Förster
Hi Ron,

> On 18. Jun, 2020, at 21:30, Ron  wrote:
> But it's trivial to make pg_xlog a separate mount point.

technically yes, but that would mean it would go to a different volume and 
hence destroy atomicity of the Netapp snapshots.

Also, you'd have a huge administrative barrier against such things in our 
company.

Cheers,
Paul



HASH partitioning not working properly

2020-06-18 Thread Srinivasa T N
Hi,
   After seeing the below, I feel partitioning is not working properly, or
it may be the case that my understanding is wrong.  Can somebody explain to me
what is happening?

postgres=# create table busbar_version (objectid int, ver_id int) partition
by hash(ver_id);
CREATE TABLE
postgres=# CREATE TABLE busbar_version0 PARTITION OF busbar_version FOR
VALUES WITH (MODULUS 10, REMAINDER 0);
CREATE TABLE
postgres=# CREATE TABLE busbar_version5 PARTITION OF busbar_version FOR
VALUES WITH (MODULUS 10, REMAINDER 5);
CREATE TABLE
postgres=# CREATE TABLE busbar_version6 PARTITION OF busbar_version FOR
VALUES WITH (MODULUS 10, REMAINDER 6);
CREATE TABLE
postgres=# CREATE TABLE busbar_version7 PARTITION OF busbar_version FOR
VALUES WITH (MODULUS 10, REMAINDER 7);
CREATE TABLE

I did insert using following:

postgres=# insert into busbar_version(objectid,ver_id) values (5,5);
INSERT 0 1
postgres=# insert into busbar_version(objectid,ver_id) values (6,6);
INSERT 0 1

I was of the opinion that the above rows were inserted into busbar_version5
and busbar_version6, but I'm wrong.

postgres=# select * from busbar_version;
 objectid | ver_id
--+
5 |  5
6 |  6
(2 rows)

postgres=# select * from busbar_version5;
 objectid | ver_id
--+
5 |  5
(1 row)

postgres=# select * from busbar_version6;
 objectid | ver_id
--+
(0 rows)

postgres=# select * from busbar_version7;
 objectid | ver_id
--+
6 |  6
(1 row)

   Why has the second insert gone to table busbar_version7 instead of
busbar_version6?

If it helps, I am trying on the "psql (12.3 (Debian 12.3-1.pgdg100+1))"
container.

Regards,
Seenu.


Re: Conflict with recovery on PG version 11.6

2020-06-18 Thread Kyotaro Horiguchi
At Thu, 18 Jun 2020 23:29:49 +0300, Toomas Kristin  
wrote in 
> Hi,
> 
> > There can be other reasons:
> > 
> > - replicated ACCESS EXCLUSIVE locks that conflict with queries
> > - replicated ACCESS EXCLUSIVE locks that cause deadlocks
> > - buffer pins that are needed for replication but held by a query
> > - dropped tablespaces that hold temporary files on the standby
> 
> Thank you for ideas what to verify.
> 
> > I told you the remedies above, why don't you like them?
> 
> Basically I want to achieve situation where replication is not suspended  
> (lag is not more than 3 minutes) and statements on standby are not 
> terminated. Based on collected information I don’t see any connection between 
> vacuuming on master and termination of statements on standby. I can 
> temporarily disable vacuuming in order to be 100% sure this is the case. And 
> when I set max_standby_streaming_delay either -1 or as a very big number then 
> it helps avoid query termination but doesn’t help me about suspended 
> replication. All worked with same configuration on Postgres version 10.6, the 
> issue started after version upgrade.
> 
> This is the reason why I am very keen to find out real cause for the conflict.

FWIW, in case you haven't tried yet: if you can find a DETAIL: line
following the "ERROR: canceling..." message in the server log, it would
narrow down the possibility.
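
(As an editorial aside: on the standby, the pg_stat_database_conflicts view
also breaks the cancellations down by cause, which can help narrow this down
even without DETAIL lines in the log:)

SELECT datname, confl_tablespace, confl_lock, confl_snapshot,
       confl_bufferpin, confl_deadlock
FROM pg_stat_database_conflicts;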

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center


Re: HASH partitioning not working properly

2020-06-18 Thread David Rowley
On Fri, 19 Jun 2020 at 17:42, Srinivasa T N  wrote:
>After seeing the below, I feel partitioning is not working properly or it 
> maybe case that my understanding is wrong.  Can somebody explain me what is 
> happening?

> postgres=# select * from busbar_version6;
>  objectid | ver_id
> --+
> (0 rows)
>
> postgres=# select * from busbar_version7;
>  objectid | ver_id
> --+
> 6 |  6
> (1 row)
>
>Why second insert has gone to table busbar_version7 instead of 
> busbar_version6?

It's your understanding that's not correct.  The value is passed
through a hash function, and the partition is selected based on the
partition matching the remainder after dividing the return value of the
hash function by the largest modulus of any partition.

That might surprise you, but how would you select which partition a
varchar value should go into if you didn't use a hash function?
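
For example, a hedged way to see which remainder a given ver_id maps to is
satisfies_hash_partition(), the function PostgreSQL itself uses in the
implicit hash-partition constraints (available in v11 and later; the table
name is taken from the thread above):

SELECT r AS remainder
FROM generate_series(0, 9) AS r
WHERE satisfies_hash_partition('busbar_version'::regclass, 10, r, 6);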

David




Re: HASH partitioning not working properly

2020-06-18 Thread Srinivasa T N
On Fri, Jun 19, 2020 at 11:44 AM David Rowley  wrote:

> On Fri, 19 Jun 2020 at 17:42, Srinivasa T N  wrote:
> >After seeing the below, I feel partitioning is not working properly
> or it maybe case that my understanding is wrong.  Can somebody explain me
> what is happening?
>
> > postgres=# select * from busbar_version6;
> >  objectid | ver_id
> > --+
> > (0 rows)
> >
> > postgres=# select * from busbar_version7;
> >  objectid | ver_id
> > --+
> > 6 |  6
> > (1 row)
> >
> >Why second insert has gone to table busbar_version7 instead of
> busbar_version6?
>
> It's your understanding that's not correct.  The value is passed
> through a hash function, and the partition is selected based on the
> partition matching the remainder after dividing the return value of the
> hash function by the largest modulus of any partition.
>
> That might surprise you, but how would you select which partition a
> varchar value should go into if you didn't use a hash function?
>
> David
>

How can I see the output of the hash function that is used internally?

Regards,
Seenu.