Re: Safe switchover

2020-07-10 Thread James Sewell
> - open connection to database
> - smart shutdown master
> - terminate all other connections
> - wait for shutdown (archiving will finish)
>

OK - despite what it looked like from the code, upon testing it seems that
even a fast shutdown will wait for logs to be archived *as long as progress
is being made* (the same as smart).
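
In command form, the shutdown step I tested is just this (a minimal sketch;
pg_ctl stop modes per the docs):

    pg_ctl stop -D "$PGDATA" -m fast
    # waits for pending WAL to be archived, as long as the archiver
    # keeps making progress (same behaviour as -m smart)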



Postgresql Backup Encryption

2020-07-10 Thread Brajendra Pratap Singh
Hi,

Please let me know the way to encrypt/decrypt a PostgreSQL backup using
pg_dump and pgbackrest.

Also let me know of any other good possibilities.

Thanks,
Singh


Re: Safe switchover

2020-07-10 Thread Paul Förster
Hi James,

I wouldn't work out the procedure myself, especially since there is already a
free, working solution. Doing it yourself is dangerous if you make a mistake.

In our company, we rely on Patroni (https://github.com/zalando/patroni). Yes, 
it uses pg_rewind in the background but it does switchover and failover fast 
and reliably (also for high load databases), handles timelines automatically 
and so on.

For the DCS, we use etcd, but you can choose whichever one you want.
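
A planned switchover is then a one-liner, something like this (sketch from
memory; check patronictl --help for your version's exact options, and the
node names here are made up):

    patronictl -c /etc/patroni.yml switchover --master node1 --candidate node2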

Cheers,
Paul


> On 10. Jul, 2020, at 09:18, James Sewell  wrote:
> 
> 
> - open connection to database
> - smart shutdown master
> - terminate all other connections
> - wait for shutdown (archiving will finish)
> 
> OK despite what it looked like from the code - upon testing it seems like 
> even a fast shutdown will wait for logs to be archived *as long as progress 
> is being made* (the same as smart).
> 





Re: Postgresql Backup Encryption

2020-07-10 Thread Paul Förster
Hi Singh,

> On 10. Jul, 2020, at 07:29, Brajendra Pratap Singh  
> wrote:
> 
> Hi,
> 
> Please let me know the way to encrypt/decrypt a PostgreSQL backup using
> pg_dump and pgbackrest.
> 
> Also let me know of any other good possibilities.
> 
> Thanks,
> Singh

don't use the -f option with pg_dump and simply pipe the output through your
favorite encryption tool. Or use the -f option and, after pg_dump finishes,
encrypt the resulting dump file. Those are the two options you have; there is
no direct command line option to encrypt pg_dump's output.

https://www.postgresql.org/docs/current/app-pgdump.html
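
A minimal sketch of the pipe variant, using GnuPG symmetric encryption (the
database name and cipher are placeholders; adjust key handling to your own
policy):

    # dump in custom format and encrypt on the fly
    pg_dump -Fc mydb | gpg --symmetric --cipher-algo AES256 -o mydb.dump.gpg

    # decrypt and restore
    gpg --decrypt mydb.dump.gpg | pg_restore -d mydb

For pgbackrest, encryption is built into the repository, if I remember
correctly: see the repo1-cipher-type and repo1-cipher-pass settings in
pgbackrest.conf.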

Cheers,
Paul



Re: Postgresql-12 taking more time to execute the query

2020-07-10 Thread Thomas Kellerer
Vishwa Kalyankar wrote on 10.07.2020 at 08:50:
> Previously we were running postgresql-10 with postgis 2.5.3 and now we
> have updated to postgresql-12 and postgis-3.0.1; in postgresql-10 one
> query takes 20 sec and the same query now takes up to 80 sec.

Most of the slowdowns I have seen when upgrading to Postgres 12 were caused by
JIT compilation, which seems to kick in more often than not on queries that
don't profit from it.

Try disabling JIT (set jit = off) and re-run your query.
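
Something along these lines; it only affects the current session, so no
restart is needed:

    -- compare timings with JIT disabled
    SET jit = off;
    EXPLAIN (ANALYZE) SELECT ... your query ... ;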








Wal_keep_segment value too high

2020-07-10 Thread Brajendra Pratap Singh
Hi,

What will happen if the wal_keep_segments value is set too high? Does this
affect database performance in any way (checkpoints and so on), or can it
cause any delay in replication sync or in the transfer of WAL records from
the primary to the replica?

Thanks,
Brajendra


Re: Wal_keep_segment value too high

2020-07-10 Thread Andreas Kretschmer
On 10 July 2020 10:26:25 CEST, Brajendra Pratap Singh 
 wrote:
>Hi,
>
>What will happen if the wal_keep_segments value is too high ,is this

Wasted disk space.
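
Back-of-the-envelope, with the default 16 MB segment size:

    # wal_keep_segments = 1000 forces the server to retain at least
    # 1000 * 16 MB = ~16 GB of WAL in pg_wal, whether or not any
    # standby still needs it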

What do you want to achieve?


Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company




Re: invalid non-zero objectSubId for object class

2020-07-10 Thread Alvaro Herrera
On 2020-Jul-09, Michel Pelletier wrote:

> I restored a snapshot and I can drop the tables there, so we'll likely
> proceed to swap the replicas over tomorrow.  I have this corrupted
> instance I can continue to debug on if necessary.  There seem to be some
> other issues now that we're investigating, like a max(timestamp) query on
> the old instance doing a seqscan on all the partitions instead of using
> an index that's there; the new instance from the snapshot correctly uses
> the index, so I suspect further corruption.

This being RDS makes it very difficult to debug.  For example I would
suggest a debugger to find the problem, but you don't have access to
that.

-- 
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Wal_keep_segment value too high

2020-07-10 Thread Brajendra Pratap Singh
Hi Andreas,

Actually there will be a sync gap of almost 3-4 hours during the ETL jobs
between the primary and the streaming replica, so I just want to make sure
replication is not affected by a high value of wal_keep_segments.

Second, we have implemented pgbackrest in archive_command, so the archives
are generated in zipped format along with some binary data. What would the
restore_command be if we lose any WALs from the WAL directory of the
primary, and how would they be applied to the replica via restore_command?
Does it require passwordless ssh between primary and replica?
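
For reference, the pgbackrest user guide shows a restore_command along
these lines (the stanza name is just an example):

    restore_command = 'pgbackrest --stanza=demo archive-get %f "%p"'

but I am not sure whether archive-get needs ssh access to the repository
host in our setup.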

Third, does a high value of wal_keep_segments in any way affect checkpoint
operations, DML operations, SELECT queries, or the writing of data from
memory to disk?

Thanks,
Brajendra


On Fri, 10 Jul, 2020, 5:44 PM Andreas Kretschmer, 
wrote:

> On 10 July 2020 10:26:25 CEST, Brajendra Pratap Singh <
> singh.bpratap...@gmail.com> wrote:
> >Hi,
> >
> >What will happen if the wal_keep_segments value is too high ,is this
>
> Wasted disk space.
>
> What do you want to achieve?
>
>
> Regards, Andreas
>
>
> --
> 2ndQuadrant - The PostgreSQL Support Company
>


Re: Postgresql-12 taking more time to execute the query

2020-07-10 Thread Adrian Klaver

On 7/9/20 11:50 PM, Vishwa Kalyankar wrote:

Hi Team,

    I need help or any suggestions on the issue mentioned below.

Previously we were running postgresql-10 with postgis 2.5.3 and now we have
updated to postgresql-12 and postgis-3.0.1; in postgresql-10 one query
takes 20 sec and the same query now takes up to 80 sec. Thanks in
advance.


The actual query and EXPLAIN ANALYZE for both runs of the query would be 
useful.




Regards,

Vishwa S Kalyankar




--
Adrian Klaver
adrian.kla...@aklaver.com




Issue with timestamp without time zone datatype with default value as now()/localtimestamp

2020-07-10 Thread Deepika S Gowda
Hi,

On a postgres 11.7 master/slave node, there is a column named "createddate"
with datatype "timestamp without time zone" and default value "now()":

Column Name | Data Type                   | Default Value
createddate | timestamp without time zone | now()


Issue: From the Java application, data is getting loaded into this table
where we expect the column value to be today's date with a timestamp
("2020-07-10 10:56:43.21"). But out of 3K records, 100 records are loaded
as "2019-07-10 10:56:43.21" (the year changes).

What could be the issue? We tried changing the default value to
"localtimestamp".

Kindly help with this request.

Regards,
Deepika


Re: Issue with timestamp without time zone datatype with default value as now()/localtimestamp

2020-07-10 Thread Adrian Klaver

On 7/10/20 7:03 AM, Deepika S Gowda wrote:

Hi,

On a postgres 11.7 master/slave node, there is a column named "createddate"
with datatype "timestamp without time zone" and default value "now()":


Column Name | Data Type                   | Default Value
createddate | timestamp without time zone | now()


Issue: From the Java application, data is getting loaded into this
table where we expect the column value to be today's date with a
timestamp ("2020-07-10 10:56:43.21"). But out of 3K records, 100
records are loaded as "2019-07-10 10:56:43.21" (the year changes).


What could be the issue? We tried changing the default value to
"localtimestamp".


I would say the choices are:

1) A machine has its clock set wrong.

2) The data is being loaded with a value for createddate that overrides
the DEFAULT.
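
A quick way to narrow it down (table name is hypothetical):

    -- if the stragglers cluster around exactly one year ago, an
    -- application-supplied value is the more likely culprit
    SELECT date_trunc('day', createddate) AS day, count(*)
    FROM mytable
    GROUP BY 1
    ORDER BY 1;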




Kindly help with this request

Regards,
Deepika



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Postgresql-12 taking more time to execute the query

2020-07-10 Thread Vishwa Kalyankar
Hi,

  Below is the EXPLAIN ANALYZE output for both servers.

   Note: port 5434 is postgresql-12 and port 5433 is postgresql-10

-bash-4.2$ psql -p 5434
psql (12.3)
Type "help" for help.

postgres=# \c IPDS_KSEB
You are now connected to database "IPDS_KSEB" as user "postgres".
IPDS_KSEB=# explain analyze select object_name, objectid, ST_AsText(shape)
as geom, gisid from kseb_geometry_trace_with_barrier_v1(453, 'htline', 2,
null, null, null, false, true);
                                  QUERY PLAN
--------------------------------------------------------------------------
 Function Scan on kseb_geometry_trace_with_barrier_v1  (cost=0.25..2510.25 rows=1000 width=100) (actual time=44246.596..44247.349 rows=252 loops=1)
 Planning Time: 0.254 ms
 Execution Time: 44308.083 ms
(3 rows)

IPDS_KSEB=# \q
-bash-4.2$ psql -p 5433
psql (12.3, server 10.11)
Type "help" for help.

postgres=# \c IPDS_KSEB
psql (12.3, server 10.11)
You are now connected to database "IPDS_KSEB" as user "postgres".
IPDS_KSEB=# explain analyze select object_name, objectid, ST_AsText(shape)
as geom, gisid from kseb_geometry_trace_with_barrier_v1(453, 'htline', 2,
null, null, null, false, true);
                                  QUERY PLAN
--------------------------------------------------------------------------
 Function Scan on kseb_geometry_trace_with_barrier_v1  (cost=0.25..1885.25 rows=1000 width=100) (actual time=19901.708..19902.453 rows=252 loops=1)
 Planning time: 0.154 ms
 Execution time: 19951.016 ms
(3 rows)

IPDS_KSEB=#





On Fri, Jul 10, 2020 at 7:23 PM Adrian Klaver 
wrote:

> On 7/9/20 11:50 PM, Vishwa Kalyankar wrote:
> > Hi Team,
> >
> > I need help or any suggestions on the issue mentioned below.
> >
> > Previously we were running postgresql-10 with postgis 2.5.3 and now we
> > have updated to postgresql-12 and postgis-3.0.1; in postgresql-10 one
> > query takes 20 sec and the same query now takes up to 80 sec. Thanks
> > in advance
>
> The actual query and EXPLAIN ANALYZE for both runs of the query would be
> useful.
>
> >
> > Regards,
> >
> > Vishwa S Kalyankar
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: invalid non-zero objectSubId for object class

2020-07-10 Thread Michel Pelletier
On Fri, Jul 10, 2020 at 5:37 AM Alvaro Herrera 
wrote:

> On 2020-Jul-09, Michel Pelletier wrote:
>
> > I restored a snapshot and I can drop the tables there, so we'll likely
> > proceed  to swap the replicas over tomorrow.  I have this corrupted
> > instance i can continue to debug on if necessary.  There seem to be some
> > other issues now that we're investigating, like a max(timestamp) query on
> > the old instance is doing a seqscan on all the partitions instead of
> using
> > an index that's there, the new instance from the snapshot correctly uses
> > the index, so I suspect further corruption.
>
> This being RDS makes it very difficult to debug.  For example I would
> suggest a debugger to find the problem, but you don't have access to
> that.
>

Yes agreed, we'll probably terminate the instance today if there are no
further lines of investigation.  I know this isn't their forum, but I don't
get why AWS pins all their extension versions to the postgres version: if
you use 12.2 on RDS, you get pglogical 2.3.0; if you want 2.3.1, you have
to upgrade to pg 12.3.  I don't get that logic; shouldn't 2.3.1 be the
current version on both?  What's the point of them supporting 12.2 if they
ship an extension version that 2ndQ says you shouldn't use?

Anyway, just musing out loud, don't expect any kind of answer to that here.
:)

Thanks again everyone,

-Michel



> --
> Álvaro Herrera    https://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: invalid non-zero objectSubId for object class

2020-07-10 Thread Adrian Klaver

On 7/10/20 8:06 AM, Michel Pelletier wrote:
On Fri, Jul 10, 2020 at 5:37 AM Alvaro Herrera wrote:

> On 2020-Jul-09, Michel Pelletier wrote:
>
> > I restored a snapshot and I can drop the tables there, so we'll likely
> > proceed to swap the replicas over tomorrow.  I have this corrupted
> > instance I can continue to debug on if necessary.  There seem to be some
> > other issues now that we're investigating, like a max(timestamp) query on
> > the old instance doing a seqscan on all the partitions instead of using
> > an index that's there; the new instance from the snapshot correctly uses
> > the index, so I suspect further corruption.
>
> This being RDS makes it very difficult to debug.  For example I would
> suggest a debugger to find the problem, but you don't have access to
> that.


Yes agreed, we'll probably terminate the instance today if there are no
further lines of investigation.  I know this isn't their forum, but I
don't get why AWS pins all their extension versions to the postgres
version: if you use 12.2 on RDS, you get pglogical 2.3.0; if you want
2.3.1, you have to upgrade to pg 12.3.  I don't get that logic;
shouldn't 2.3.1 be the current version on both?  What's the point of
them supporting 12.2 if they ship an extension version that 2ndQ says
you shouldn't use?

Anyway, just musing out loud, don't expect any kind of answer to that
here. :)


Already asked and not really answered:

https://www.postgresql.org/message-id/A51655C0-BDD0-4F52-8DC5-2672AADA32D0%40gmail.com



Thanks again everyone,

-Michel



-- 
Álvaro Herrera https://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Safe switchover

2020-07-10 Thread Stephen Frost
Greetings,

* Paul Förster (paul.foers...@gmail.com) wrote:
> I wouldn't work out the procedure myself, especially since there is a free 
> working solution already. It's dangerous if you do it yourself and make a 
> mistake.
> 
> In our company, we rely on Patroni (https://github.com/zalando/patroni). Yes, 
> it uses pg_rewind in the background but it does switchover and failover fast 
> and reliably (also for high load databases), handles timelines automatically 
> and so on.

Patroni also has the option to use pgbackrest instead of pg_rewind.

Thanks,

Stephen




Re: Safe switchover

2020-07-10 Thread Paul Förster
Hi Stephen,

> On 10. Jul, 2020, at 17:29, Stephen Frost  wrote:
> 
> Patroni also has the option to use pgbackrest instead of pg_rewind.

right. Sorry, I forgot about that. We use pg_rewind which works great.

Cheers,
Paul




Re: Safe switchover

2020-07-10 Thread Stephen Frost
Greetings,

* Paul Förster (paul.foers...@gmail.com) wrote:
> > On 10. Jul, 2020, at 17:29, Stephen Frost  wrote:
> > Patroni also has the option to use pgbackrest instead of pg_rewind.
> 
> right. Sorry, I forgot about that. We use pg_rewind which works great.

Sure, if you know exactly why the former primary failed and have
confidence that nothing actually bad happened then pg_rewind can work
(though it's still not what I'd generally recommend).

If you don't actually know what happened to the former primary to cause
it to fail then I definitely wouldn't use pg_rewind on it since it
doesn't have any checks to make sure that the data is actually
generally consistent.  These days you could get a bit of a better
feeling by running pg_checksums against the data dir, but that's not
going to be as good as doing a pgbackrest delta restore when it comes to
making sure that everything is valid.
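
For example (v12 syntax; on v11 the tool was called pg_verify_checksums,
and either way the cluster must be cleanly shut down first):

    pg_checksums --check -D /path/to/datadir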

Thanks,

Stephen




PG Admin 4

2020-07-10 Thread rwest
I'm relatively new to PostgreSQL and am trying to navigate my way around the
tools like PG Admin 4 to do database admin work.

I'm trying to run an entire set of DDL with lots of tables, indexes, etc.
through PG Admin 4 for a database.  The only thing I saw after I ran the
script was a message about the final index being created and how long the
entire script ran in milliseconds.  When running a large batch of objects, I
would expect to see messages about the result of each individual object
being created (like what happens in SSMS for SQL Server, or in any of the
tools for Oracle).  Is there some setting or switch somewhere in PG Admin 4
that I can turn on to see messages for each object that it processes while
it is executing my script?  It doesn't give me a warm and fuzzy feeling to
not see anything while the script is running.

Thanks - RW



--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html




Re: PG Admin 4

2020-07-10 Thread Rob Sargent




On 7/10/20 9:53 AM, rwest wrote:

I'm relatively new to PostgreSql and am trying to navigate my way around the
tools like PG Admin 4 to do Database Admin work.

I'm trying to run an entire set of DDL with lots of tables, indexes, etc.
through PG Admin 4 for a database.  The only thing I saw after I ran the
script was a message about the final index being created and how long the
entire script ran in milliseconds.  When running a large batch of objects, I
would expect to see messages about the result of each individual object
being created (like what happens in SSMS for SQL Server, or any of the tools
for Oracle).  Is there some setting or switch somewhere in PG Admin 4 that I
can turn on to see messages for each object that it processes while it is
executing my script?  It doesn't give me a warm and fuzzy feeling to not see
anything while the script is running.

Thanks - RW



--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html


What environment are you in, OS-wise?  A shell of some sort to run those 
scripts is likely your best bet.






Re: PG Admin 4

2020-07-10 Thread Adrian Klaver

On 7/10/20 8:53 AM, rwest wrote:

I'm relatively new to PostgreSql and am trying to navigate my way around the
tools like PG Admin 4 to do Database Admin work.

I'm trying to run an entire set of DDL with lots of tables, indexes, etc.
through PG Admin 4 for a database.  The only thing I saw after I ran the
script was a message about the final index being created and how long the
entire script ran in milliseconds.  When running a large batch of objects, I
would expect to see messages about the result of each individual object
being created (like what happens in SSMS for SQL Server, or any of the tools
for Oracle).  Is there some setting or switch somewhere in PG Admin 4 that I
can turn on to see messages for each object that it processes while it is
executing my script?  It doesn't give me a warm and fuzzy feeling to not see
anything while the script is running.


Have you looked in the Query History tab?:

https://www.pgadmin.org/docs/pgadmin4/4.23/query_tool.html#query-history-panel



Thanks - RW



--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: PG Admin 4

2020-07-10 Thread rwest
Ah, sorry that I didn't specify.

I'm running on a Windows platform.

We're trying to avoid running in command-line mode if possible, and
leverage the options PG Admin 4 has available to us.  I just find it odd
that it doesn't report each CREATE or ALTER statement as it executes the
script.



--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html




RE: PG Admin 4

2020-07-10 Thread Robert West
Yes... all that is in there is the same result for the entire batch... no
intermediate results for each piece of DDL.

I'm submitting a large series of DDL changes in one batch... I'm still
looking at how to see the results of the execution of each DDL statement.

RW

-Original Message-
From: Adrian Klaver  
Sent: Friday, July 10, 2020 1:47 PM
To: Robert West ; pgsql-gene...@postgresql.org
Subject: Re: PG Admin 4



On 7/10/20 8:53 AM, rwest wrote:
> I'm relatively new to PostgreSql and am trying to navigate my way 
> around the tools like PG Admin 4 to do Database Admin work.
>
> I'm trying to run an entire set of DDL with lots of tables, indexes, etc.
> through PG Admin 4 for a database.  The only thing I saw after I ran 
> the script was a message about the final index being created and how 
> long the entire script ran in milliseconds.  When running a large 
> batch of objects, I would expect to see messages about the result of 
> each individual object being created (like what happens in SSMS for 
> SQL Server, or any of the tools for Oracle).  Is there some setting or 
> switch somewhere in PG Admin 4 that I can turn on to see messages for 
> each object that it processes while it is executing my script?  It 
> doesn't give me a warm and fuzzy feeling to not see anything while the script
> is running.

Have you looked in the Query History tab?:

https://www.pgadmin.org/docs/pgadmin4/4.23/query_tool.html#query-history-panel

>
> Thanks - RW
>
>
>
> --
> Sent from: 
> https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
>
>


--
Adrian Klaver
adrian.kla...@aklaver.com


Re: PG Admin 4

2020-07-10 Thread rwest
Oh sorry, I should have specified that.

We're running on a Windows platform.

We're trying to avoid running anything command-line when doing DDL releases
and leverage whatever PG Admin 4 can provide us.  I'm just wondering why we
don't see the results of each CREATE or ALTER statement as the script runs
in the tool.  That seems very strange to me.  Is there some sort of setting
or parameter that can force the tool to do this for us?





--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html




How to do phrase search?

2020-07-10 Thread Anto Aravinth
Hello,

I have the following table, so2, with these columns:

id, title, posts, body (tsvector)

And I created the following indexes:

"so2_pkey" PRIMARY KEY, btree (id)
"body" gin (body)

I want to query my tsvector with the string `Is it possible to toggle the
visibility of an element`. I tried the following query:

SELECT id, ts_rank(body, keywords) AS rank
FROM so2,
     to_tsquery('Is & it & possible & to & toggle & the & visibility & of & an & element') keywords
WHERE keywords @@ body
ORDER BY rank DESC;

This gives results in which the count of the word "element" is largest;
however, there is a record that exactly matches the search string, and it
comes third in my results. So my query effectively ranked documents by how
often the individual words occur in the body column, which I don't want.

Can I somehow tell PostgreSQL that I want a phrase search rather than
ranking by the split-up words in the document? Is that possible to do?

Edit: I tried with `ts_rank_cd` as well. Same result.
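
Edit 2: one thing I found in the docs but have not yet verified against my
data is phraseto_tsquery, which is supposed to match the lexemes in
sequence rather than independently. A sketch of what I mean:

    SELECT id, ts_rank(body, keywords) AS rank
    FROM so2,
         phraseto_tsquery('english',
             'Is it possible to toggle the visibility of an element') keywords
    WHERE keywords @@ body
    ORDER BY rank DESC;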


Re: PG Admin 4

2020-07-10 Thread Tim Clarke
Why would you shun the ease of command line batch control?

Tim Clarke MBCS
IT Director
Direct: +44 (0)1376 504510 | Mobile: +44 (0)7887 563420

On 10/07/2020 17:36, rwest wrote:

Oh sorry, should have specified that.

We're running on a Windows platform.

We're trying to avoid running anything command-line when doing DDL releases
and leverage whatever PG Admin 4 can provide us.  I'm just wondering why we
don't see the results of each CREATE or ALTER statement as the script runs
in the tool.  That seems very strange to me.  Is there some sort of setting
or parameter that can force the tool to do this for us?





--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html








Re: PG Admin 4

2020-07-10 Thread Adrian Klaver

On 7/10/20 12:54 PM, Robert West wrote:

Yes... all that is in there is the same result for the entire batch... no
intermediate results for each piece of DDL.


Hmm, that is something you might have to ask here:

https://www.pgadmin.org/support/list/

> I'm submitting a large series of DDL changes in one batch... I'm still
> looking at how to see the results of the execution of each DDL statement.

On Unixen I would say tail the log file, not sure how that works in 
Windows.




RW





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: PG Admin 4

2020-07-10 Thread Susan Hurst
 

I gave up on PGAdmin4 and went back to PGAdmin3, although 3 is
deprecated now. The History tab will show you what you want after
executing a SQL statement. 

I don't use Windows any more than I have to, but when I did try PGAdmin4
on Windows, the feedback was sometimes there and sometimes not. Linux
works better.

That said, I agree that executing a script from a command line is the
best approach for implementing DDL statements. You can capture the
script and the output for further confirmation of success or failure. 
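
For example, something like this gives per-statement feedback plus a
transcript (file and database names are placeholders):

    # echo each statement as it runs, stop on the first error,
    # and keep a log of the whole run
    psql -a -v ON_ERROR_STOP=1 -d mydb -f ddl_release.sql 2>&1 | tee ddl_release.log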

Sue 

---

Susan E Hurst
Principal Consultant
Brookhurst Data LLC
Email: susan.hu...@brookhurstdata.com
Mobile: 314-486-3261

On 2020-07-10 15:20, Tim Clarke wrote: 

> Why would you shun the ease of command line batch control? 
> Tim Clarke MBCS
> IT Director
> Direct: +44 (0)1376 504510 | Mobile: +44 (0)7887 563420
> 
> On 10/07/2020 17:36, rwest wrote: 
> 
>> Oh sorry, should have specified that.
>> 
>> We're running on a Windows platform.
>> 
>> We're trying to avoid running anything command-line when doing DDL releases
>> and leverage whatever PG Admin 4 can provide us. I'm just wondering why we
>> don't see the results of each CREATE or ALTER statement as the script runs
>> in the tool. That seems very strange to me. Is there some sort of setting
>> or parameter that can force the tool to do this for us?
>> 
>> --
>> Sent from: 
>> https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
 



Re: PG Admin 4

2020-07-10 Thread Miles Elam
I did the same for at least a year, but I must admit that v4 has improved
greatly since its initial release. It also turns out to be handy for running
in a docker-compose environment: no matter who is starting up on your
team, they always have a database and a database admin tool at the ready
along with the app.

On Fri, Jul 10, 2020 at 1:31 PM Susan Hurst 
wrote:

> I gave up on PGAdmin4 and went back to PGAdmin3, although 3 is deprecated
> now.  The History tab will show you what you want after executing a SQL
> statement.
>
> I don't use Windows any more than I have to but when I did try PGAdmin4 on
> windows, the feedback was sometimes there and sometimes not.  Linux works
> better.
>
> That said, I agree that executing a script from a command line is the best
> approach for implementing DDL statements.  You can capture the script and
> the output for further confirmation of success or failure.
>
> Sue
>
>
>
>
>
> ---
> 
> Susan E Hurst
> Principal Consultant
> Brookhurst Data LLC
> Email: susan.hu...@brookhurstdata.com
> Mobile: 314-486-3261
>
> On 2020-07-10 15:20, Tim Clarke wrote:
>
> Why would you shun the ease of command line batch control?
> Tim Clarke MBCS
> IT Director
> Direct: +44 (0)1376 504510 | Mobile: +44 (0)7887 563420
>
> On 10/07/2020 17:36, rwest wrote:
>
> Oh sorry, should have specified that.
>
> We're running on a Windows platform.
>
> We're trying to avoid running anything command-line when doing DDL releases
> and leverage whatever PG Admin 4 can provide us.  I'm just wondering why we
> don't see the results of each CREATE or ALTER statement as the script runs
> in the tool.  That seems very strange to me.  Is there some sort of setting
> or parameter that can force the tool to do this for us?
>
>
>
>
>
> --
> Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
>
>
>
>
>


Re: Postgresql-12 taking more time to execute the query

2020-07-10 Thread Adrian Klaver

On 7/10/20 7:48 AM, Vishwa Kalyankar wrote:

Hi,

   Below is the EXPLAIN ANALYZE output for both servers.

    Note: port 5434 is postgresql-12 and port 5433 is postgresql-10


Well that is not enlightening. What's happening inside the function is
not being shown.  You might have to ask this on the PostGIS list:


https://lists.osgeo.org/mailman/listinfo/postgis-users

Someone there may have a better idea of what goes on in the function
and whether that changed from PostGIS 2.5.3 to 3.0.1.





-bash-4.2$ psql -p 5434
psql (12.3)
Type "help" for help.

postgres=# \c IPDS_KSEB
You are now connected to database "IPDS_KSEB" as user "postgres".
IPDS_KSEB=# explain analyze select object_name, objectid, 
ST_AsText(shape) as geom, gisid from 
kseb_geometry_trace_with_barrier_v1(453, 'htline', 2, null, null, null, 
false, true);
 
                                  QUERY PLAN
--------------------------------------------------------------------------
 Function Scan on kseb_geometry_trace_with_barrier_v1  (cost=0.25..2510.25 rows=1000 width=100) (actual time=44246.596..44247.349 rows=252 loops=1)
 Planning Time: 0.254 ms
 Execution Time: 44308.083 ms
(3 rows)

IPDS_KSEB=# \q
-bash-4.2$ psql -p 5433
psql (12.3, server 10.11)
Type "help" for help.

postgres=# \c IPDS_KSEB
psql (12.3, server 10.11)
You are now connected to database "IPDS_KSEB" as user "postgres".
IPDS_KSEB=# explain analyze select object_name, objectid, 
ST_AsText(shape) as geom, gisid from 
kseb_geometry_trace_with_barrier_v1(453, 'htline', 2, null, null, null, 
false, true);
 
                                  QUERY PLAN
--------------------------------------------------------------------------
 Function Scan on kseb_geometry_trace_with_barrier_v1  (cost=0.25..1885.25 rows=1000 width=100) (actual time=19901.708..19902.453 rows=252 loops=1)
 Planning time: 0.154 ms
 Execution time: 19951.016 ms
(3 rows)

IPDS_KSEB=#





On Fri, Jul 10, 2020 at 7:23 PM Adrian Klaver wrote:


On 7/9/20 11:50 PM, Vishwa Kalyankar wrote:
 > Hi Team,
 >
 >     I need help or any suggestions on the issue mentioned below.
 >
 > Previously we were running postgresql-10 with postgis 2.5.3 and now we
 > have updated to postgresql-12 and postgis-3.0.1; in postgresql-10 one
 > query takes 20 sec and the same query now takes up to 80 sec. Thanks
 > in advance

The actual query and EXPLAIN ANALYZE for both runs of the query would be
useful.

 >
 > Regards,
 >
 > Vishwa S Kalyankar
 >


-- 
Adrian Klaver

adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Postgresql-12 taking more time to execute the query

2020-07-10 Thread David Rowley
On Sat, 11 Jul 2020 at 02:48, Vishwa Kalyankar
 wrote:
>   Below is the output of the query explain and analyze result.

You may find better help here if you follow the advice given in
https://wiki.postgresql.org/wiki/Slow_Query_Questions

David




Re: Safe switchover

2020-07-10 Thread Paul Förster
Hi Stephen,

> On 10. Jul, 2020, at 17:45, Stephen Frost  wrote:
> 
> Sure, if you know exactly why the former primary failed and have
> confidence that nothing actually bad happened then pg_rewind can work
> (though it's still not what I'd generally recommend).
> 
> If you don't actually know what happened to the former primary to cause
> it to fail then I definitely wouldn't use pg_rewind on it since it
> doesn't have any checks to make sure that the data is actually
> generally consistent.  These days you could get a bit of a better
> feeling by running pg_checksums against the data dir, but that's not
> going to be as good as doing a pgbackrest delta restore when it comes to
> making sure that everything is valid.

we use Netapp plus continuous archiving. To protect against block corruption,
all our database clusters have been created with initdb -k. So they should
report block corruption in the log.

The usual reason a database cluster goes down is that the server is shut down,
which initiates a switchover and is not problematic. If the server goes down
due to a power outage, system crash or similar, then an automatic failover is
initiated, which, in our experience, is also not problematic. Patroni seems to
handle both situations well.

The worst case is that both servers crash, which is pretty unlikely. In that
case we would have to perform a volume restore with Netapp and replay the WAL
files since the last snapshot. Should the replica database cluster be damaged
too, we may need to reinit it with Patroni. This is acceptable even for large
database clusters because replication runs fast. But the possibility is very,
very small.

Why the -k option of initdb isn't the default is beyond me. Yes, I know the
argument about pg_upgrade messages, which people can't seem to cope with for
some reason, but I can't see the reasoning. If I wanted to do a pg_upgrade from
an older non-checksummed database cluster to a new major version with
checksums, then I'd do initdb explicitly without checksums and perform the
upgrade. Then I would enable checksums and that's it from then on. It's a
one-time-only simple command for each affected database cluster.
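
In commands, the one-time migration I mean is roughly this (pg_checksums
--enable exists as of v12; the paths are made up):

    initdb -D /data/new_cluster                  # no -k, to match the old cluster
    # ... run pg_upgrade ...
    pg_checksums --enable -D /data/new_cluster   # with the cluster stopped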

So, in my opinion, -k should be the default, and if one wanted to create a
non-checksummed database cluster, that would have to be stated on the command
line explicitly. This IMHO is a reasonable way to make people migrate to
checksums over time as database clusters are migrated.

But then, that's only my opinion. There is no absolute truth.

Cheers,
Paul



Re: Issue with timestamp without time zone datatype with default value as now()/localtimestamp

2020-07-10 Thread Tim Cross


Deepika S Gowda  writes:

> Hi,
>
> On postgres 11.7 Master/Slave node, there is column named "createddate"
> with datatype "timestamp without time zone" with default value as "now()";
>
> Column Name | Date Type | Default value
> createddate |timestamp without time zone|Now()
>
>
> Issue: From the java application , data is getting loaded into this table
> where we expect column value should be today's date with timestamp(
> "2020-07-10 10:56:43.21"). But, out of 3K records, 100 records are loaded
> as  "2019-07-10 10:56:43.21" (change in Year).
>
> What could be the issue? we tried changing the default value to
> "localtimestamp".
>

My bet would be you have some SQL statements which include a value for
'createddate', so the default is not being used.
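
For example (hypothetical table):

    -- an explicit value silently wins over the column DEFAULT
    INSERT INTO mytable (id, createddate) VALUES (1, '2019-07-10 10:56:43.21');

    -- whereas omitting the column uses now()
    INSERT INTO mytable (id) VALUES (2);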


-- 
Tim Cross