Re: Advice request : simultaneous function/data updates on many databases

2020-03-05 Thread Alban Hertroys


> On 4 Mar 2020, at 23:42, Guyren Howe  wrote:
> 
> On Mar 4, 2020, at 14:33 , Rory Campbell-Lange  
> wrote:
>> 
>> Essentially we wish to reduce the window where the frontend and backend
>> aren't synchronised.
>> 
>> If we have (for example) 200 databases which each take 2 seconds to
>> update, a client could be on the wrong frontend code for over 6 minutes.
>> Send each of the servers a PL/PGSQL method that executes all the things in a 
>> transaction and then waits until the same clock time to commit. Then all the 
>> servers are committing at the same moment. They will still be out of synch 
>> somewhat, but this would reduce the degree.
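
That suggestion can be sketched in a DO block (the target commit time below is a placeholder, and the actual updates are elided):

```sql
-- Sketch only: run on each database. The transaction's work happens first,
-- then the session sleeps until an agreed wall-clock moment, so the COMMIT
-- (implicit at the end of the DO block's transaction) lands at roughly the
-- same instant everywhere.
DO $$
DECLARE
    commit_at timestamptz := '2020-03-06 02:00:00+00';  -- agreed moment (assumed)
BEGIN
    -- ... perform the function/data updates here ...

    -- pg_sleep takes seconds; don't sleep at all if we're already past it
    PERFORM pg_sleep(
        greatest(0, extract(epoch FROM (commit_at - clock_timestamp()))));
END
$$;
```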


I’m wondering whether this could be done with a more generic event-based 
approach, where each server sends a ‘done’ event to a central machine once it’s 
ready to commit, and the central machine returns an ‘acknowledged’ once the 
last server has sent its ‘done’ event.
The challenge there is that the ‘ack’ needs to be caught and processed within 
the same waiting transaction… Not sure how to do that right now - maybe through 
web services, MQTT or similar.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.





Re: Determining the type of an object in plperl

2020-03-05 Thread stan
On Wed, Mar 04, 2020 at 05:09:19PM -0700, David G. Johnston wrote:
> On Wed, Mar 4, 2020 at 4:21 PM stan  wrote:
> 
> > Probably a bit off topic, but I suspect someone on this list knows how to
> > do
> > this.
> >
> > I am in the process of writing a plperl function. In this function I need
> > to compare the data in the NEW versus OLD structures. I am writing this as
> > a
> > generic subroutine, so I am looping through and comparing the two to see what
> > is different. The problem is that I need to know whether to use a != or a ne
> > comparison.
> >
> > how can I determine what the data type of the value element is?
> >
> 
> Not up to speed on Perl but you basically want everything to be done using
> string equality - can't you just use "ne" everywhere and not worry about
> comparing numbers using string comparison logic?  Might want to disable
> warnings...
> 
> That would have to be faster than executing a type_of function on every
> single column.
> 
> Then measure performance and decide whether a generic routine is performant
> enough.  If not you might try creating custom function dynamically using
> the catalogs as input.

Since I am just looking for differences, this may work. Obviously comparing
numeric values as strings has issues.

Presently I am getting some warnings, so I think I need to deal with the
types. I already dealt with the columns that return NULL; these are
undefined in the Perl hash, so I have to test for their existence before
attempting the compare.
-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: Determining the type of an object in plperl

2020-03-05 Thread stan
On Thu, Mar 05, 2020 at 12:27:12AM +, Ravi Krishna wrote:
> 
> > 
> > how can I determine what the data type of the value element is?
> > 
> perl has a ref function which can tell what type of object.
> 
> https://perldoc.perl.org/functions/ref.html
> > 
> 
> 

That was my go-to answer, but see this in the docs:

If the operand is not a reference, then the empty string will be returned. 

So after I assign this to a variable, this function does not help. I may
need to try using it against the actual _TD... hash

Thanks.

-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: Determining the type of an object in plperl

2020-03-05 Thread stan
On Thu, Mar 05, 2020 at 12:27:12AM +, Ravi Krishna wrote:
> 
> > 
> > how can I determine what the data type of the value element is?
> > 
> perl has a ref function which can tell what type of object.
> 
> https://perldoc.perl.org/functions/ref.html
> > 
> 
> 
> --
> This email has been checked for viruses by Avast antivirus software.
> https://www.avast.com/antivirus
> 
Even checking against the _TD structure does not return a value:

foreach my $key (sort keys %{$_TD->{old}}) {
    my $ref1 = ref $_TD->{old}->{$key};
    my $ref2 = ref $_TD->{new}->{$key};
    elog(NOTICE, "ref1 = $ref1 ref2 = $ref2");
    ...
}


Results in the following output:

NOTICE:  ref1 =  ref2 = 



-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: Determining the type of an object in plperl

2020-03-05 Thread Rob Sargent



> On Mar 5, 2020, at 4:22 AM, stan  wrote:
> 
> On Thu, Mar 05, 2020 at 12:27:12AM +, Ravi Krishna wrote:
>> 
>>> 
>>> how can I determine what the data type of the value element is?
>>> 
>> perl has a ref function which can tell what type of object.
>> 
>> https://perldoc.perl.org/functions/ref.html
>>> 
>> 
>> 
>> --
>> This email has been checked for viruses by Avast antivirus software.
>> https://www.avast.com/antivirus
>> 
> Even checking against the _TD structure does not return a value:
> 
> foreach my $key (sort keys %{$_TD->{old}}) {
>     my $ref1 = ref $_TD->{old}->{$key};
>     my $ref2 = ref $_TD->{new}->{$key};
>     elog(NOTICE, "ref1 = $ref1 ref2 = $ref2");
>     ...

I’m not a Perl guy but I don’t think you need to sort the keys. You don’t care 
what order the comparisons are done in, do you?
If it’s just “is different” you're looking for, why can’t you rely on Perl’s 
type coercion (toString()) on the operands of ne? You won’t have different 
types under the same key.





Re: Real application clustering in postgres.

2020-03-05 Thread Laurenz Albe
On Thu, 2020-03-05 at 07:45 +, Daulat Ram wrote:
> Is there any possibility/options to setup a real application clustering in 
> Postgres as in Oracle we have a  RAC feature.

No, and as far as I know nobody feels interested in providing it.

RAC is a complicated architecture that doesn't do much good, so most
people feel that it would be a waste of time and effort.

RAC is not really a scaling solution: because of the shared storage,
you can only scale for more CPUs; I/O remains the bottleneck.

RAC is not really a high availability solution: because of the shared
storage, it has a single point of failure.

Today, people use shared-nothing architectures for high availability,
like Patroni.

> What about multi-master replication in Postgres. would you please suggest how 
> it is useful and how can setup it.

There is no support for that in core PostgreSQL.

There is a closed-source implementation that you can buy:
https://www.2ndquadrant.com/en/resources/postgres-bdr-2ndquadrant/

But multi-master replication is complicated to get right, and
an application that uses it has to be specifically designed for that.
Very often a single-master failover solution is a better solution.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Poor plan choice with partial unique indexes on jsonb column and simple RLS policy (with test script)

2020-03-05 Thread Alastair McKinley
Hi Tom,

Thanks once again for your time looking at this.  I have a resolution but 
didn't exactly get to the bottom of what was going on.

Forcing the function used in the index to be leakproof did not work.  I 
guessed, but am not certain, that this is because either to_jsonb() or the 
jsonb_eq operator is not leakproof as well?

During my testing of a solution (which basically was not to use jsonb for this) 
I saw this message while using RLS in an unrelated query.

DEBUG:  not using statistics because function "enum_eq" is not leak-proof

I did not see a message like this using my jsonb indexes, even though it seems 
like a related issue.

Is there another effect potentially going on here or incomplete debugging 
messages?

Best regards,

Alastair

From: Tom Lane 
Sent: 04 March 2020 04:22
To: Alastair McKinley 
Cc: pgsql-general@lists.postgresql.org 
Subject: Re: Poor plan choice with partial unique indexes on jsonb column and 
simple RLS policy (with test script)

Alastair McKinley  writes:
> Thank you for having a look at this.  In the interim I discovered that I 
> could trigger the issue by creating a security barrier view, whereas a 
> regular view worked fine, so I think that also points to your conclusion 
> about leakyness?
> I attempted to workaround the issue with a leakproof function, so far with no 
> success.
> ...
> Is this an approach that could fundamentally work?

Forcing the expression to be considered leakproof should work.
I'm not sure that your partial index is OK for the purpose of
collecting stats, though -- does it help if you make a non-partial
index on that function expression?  Otherwise, it's possible that
I guessed wrong about which part of the WHERE clause is problematic.
You could try doing EXPLAINs with different portions of the WHERE
to see how the rowcount estimate changes.

BTW, just marking something "leakproof" when it isn't really so
is possibly a security problem.  You should think twice about
what threat model you're hoping RLS will protect against.

regards, tom lane


Re: pg_dump and public schema

2020-03-05 Thread Олег Самойлов
Thanks. I expected that the database restored from its dump must be exactly the 
same as it was before. But something in PostgreSQL changes, and not always for 
the good.

> On 4 March 2020, at 19:19, Adrian Klaver  
> wrote:
> 
> I believe this is the latest information on public schema handling:
> 
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5955d934194c3888f30318209ade71b53d29777f
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com





A question about the number of times a trigger will fire

2020-03-05 Thread stan
I am in the process of trying to set up a function/trigger pair to create,
functionally, an updateable view.

I would think that the trigger would fire the number of times that the
calling statement's WHERE clause succeeded. Is this incorrect?

I have a view called purchase_view; one of the tables in its join is a
table called bom_item.

I have defined this trigger:

CREATE TRIGGER test_v_trig
INSTEAD OF INSERT OR UPDATE ON purchase_view
FOR EACH ROW EXECUTE PROCEDURE v_trig_test();

Prior to firing this trigger this query

select count(*)
FROM purchase_view
WHERE
proj_no = 3124
AND
m_name = 'Mencom' ;

returns 11

The resultant statement generated by the function called by this trigger
is:

UPDATE BOM_ITEM SET  cost_per_unit = 23.45 , qty = 12345.00 

Which in retrospect does not do what I had in mind :-)

So, it appears that I need to create a WHERE clause for the resultant
statement. But I do not see how the function has enough data to use to
create this where clause.

What am I missing, here?



-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




RE: trouble making PG use my Perl

2020-03-05 Thread Kevin Brannen
From: Alan Hodgson 

> On Mon, 2020-03-02 at 18:23 -0500, Tom Lane wrote:
> > Kevin Brannen  wrote:
> > > Centos 8 ships with 5.14 (IIRC).
> >
> > I don't have an actual Centos 8 machine handy to disprove that,
> > but the info I have says that RHEL8/Centos 8 branched off from
> > Fedora 28, and F28 most definitely shipped with Perl 5.26.
> > Looking at their git repo, the last few Fedora releases
> > shipped with
>
> I can confirm that CentOS 8 has perl 5.26.3.

{fires up the C8 VM…}

Yes, you're correct. My memory failed me there. ☹
I must have been thinking of Centos 7, which is 5.16.3 and feels old too -- 
though to be fair C7 came out quite some time ago.

Kevin



format return of "age" to hh:mm

2020-03-05 Thread David Gauthier
Hi:

How does one reformat the output of the "age" function to always be in
terms of hours:mins.

E.g.

dvdb=> select age('2020-03-05 01:40:32-05','2020-03-01 21:56:05-05');
       age
-----------------
 3 days 03:44:27
(1 row)

I want...

"75:44"

I'm not married to "age".  If there's a better way to do this, that's fine
too.

Thanks in advance !


Re: A question about the number of times a trigger will fire

2020-03-05 Thread David G. Johnston
On Thu, Mar 5, 2020 at 7:58 AM stan  wrote:

> UPDATE BOM_ITEM SET  cost_per_unit = 23.45 , qty = 12345.00
>
> So, it appears that I need to create a WHERE clause for the resultant
> statement. But I do not see how the function has enough data to use to
> create this where clause.
>
> What am I missing, here?
>
>
OLD and/or NEW?

https://www.postgresql.org/docs/12/plpgsql-trigger.html#PLPGSQL-DML-TRIGGER

David J.


Re: pg_dump and public schema

2020-03-05 Thread Adrian Klaver

On 3/5/20 6:57 AM, Олег Самойлов wrote:

Thanks. I expected that the database restored from its dump must be exactly the 
same as it was before. But something in PostgreSQL changes, and not always for 
the good.


From what I see:

1)  pg_dump -C -U postgres -d sch_test

CREATE DATABASE sch_test WITH TEMPLATE = template0 ENCODING = 'UTF8' 
LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';



2) select oid, datname, datallowconn from pg_database;

13297 | template0  | f

update pg_database set datallowconn = 't' where oid = 13297;

3) test=# \c template0
You are now connected to database "template0" as user "postgres".

template0=# \d

Did not find any relations.
template0=# \df
                        List of functions
 Schema | Name | Result data type | Argument data types | Type
--------+------+------------------+---------------------+------
(0 rows)

template0=# \dn
  List of schemas
  Name  |  Owner
--------+----------
 public | postgres
(1 row)

So the script for restoring the database starts with using template0 as 
the template. This is done to start with an 'empty' database that the 
rest of the script can populate. The exception is the presence of the 
public schema. Obviously, at this point, there is nothing that tracks 
the presence of the public schema in the database being dumped and then 
drops it from the newly created version if it was not present in the 
original.







On 4 March 2020, at 19:19, Adrian Klaver  wrote:

I believe this is the latest information on public schema handling:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5955d934194c3888f30318209ade71b53d29777f

--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: format return of "age" to hh:mm

2020-03-05 Thread Andrei Zhidenkov
However, you cannot use to_char() to display the count of days for a given 
interval. In this case, if your interval is larger than 24 hours, you might use 
extract(epoch from ...) and perform the conversion manually.

> On 5. Mar 2020, at 17:07, Ray O'Donnell  wrote:
> 
> On 05/03/2020 15:50, David Gauthier wrote:
>> Hi:
>> 
>> How does one reformat the output of the "age" function to always be in
>> terms of hours:mins.
> 
> Hi there,
> 
> age() returns an interval, so without having tried it I'm guessing you
> could use to_char() to format it whatever way you want.
> 
> Ray.
> 
> -- 
> Raymond O'Donnell // Galway // Ireland
> r...@rodonnell.ie
> 
> 





Re: format return of "age" to hh:mm

2020-03-05 Thread David G. Johnston
On Thu, Mar 5, 2020 at 8:50 AM David Gauthier 
wrote:

> Hi:
>
> How does one reformat the output of the "age" function to always be in
> terms of hours:mins.
>
>
>
Custom function.

Use justify_hours(interval) to normalize the input in terms of days.
Use extract(field from interval) to get the components, including days.
Multiply the days result by 24 and add it to the hours result.
Deal with fractional hours.
Combine and return.

There is no justify_minutes function unfortunately which, if implemented to
the behavior of justify_hours, would do what you are looking for.  You
basically want to write one, though I suspect in SQL instead of C.
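
A sketch of such a function in plain SQL, going via extract(epoch ...) rather than the justify_hours route described above (the function name is invented, and edge cases such as negative intervals are ignored):

```sql
CREATE OR REPLACE FUNCTION interval_to_hhmm(i interval)
RETURNS text LANGUAGE sql IMMUTABLE AS $$
    -- total whole hours, then the leftover minutes, zero-padded
    SELECT floor(extract(epoch FROM i)::numeric / 3600)::text
           || ':' ||
           to_char(mod(floor(extract(epoch FROM i)::numeric / 60), 60), 'FM00');
$$;

-- e.g. interval_to_hhmm(interval '3 days 03:44:27') gives '75:44'
```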

David J.


Re: format return of "age" to hh:mm

2020-03-05 Thread Ray O'Donnell
On 05/03/2020 15:50, David Gauthier wrote:
> Hi:
> 
> How does one reformat the output of the "age" function to always be in
> terms of hours:mins.

Hi there,

age() returns an interval, so without having tried it I'm guessing you
could use to_char() to format it whatever way you want.

Ray.

-- 
Raymond O'Donnell // Galway // Ireland
r...@rodonnell.ie




Re: format return of "age" to hh:mm

2020-03-05 Thread Adrian Klaver

On 3/5/20 7:50 AM, David Gauthier wrote:

Hi:

How does one reformat the output of the "age" function to always be in 
terms of hours:mins.


E.g.

dvdb=> select age('2020-03-05 01:40:32-05','2020-03-01 21:56:05-05');
        age
-----------------
  3 days 03:44:27
(1 row)

I want...

"75:44"

I'm not married to "age"  If there's a better way to do this that's fine 
too.


Not sure it's better, but it will give you idea of what needs to be done:


SELECT
    floor(
        extract(
            epoch FROM ('2020-03-05 01:40:32-05'::timestamptz -
                        '2020-03-01 21:56:05-05'::timestamptz))
        / 3600)::varchar || ':' ||
    ((mod(
        extract(
            epoch FROM ('2020-03-05 01:40:32-05'::timestamptz -
                        '2020-03-01 21:56:05-05'::timestamptz))::numeric,
        3600::numeric) / 60)::int)::varchar;

?column?
--
 75:44
(1 row)



Thanks in advance !



--
Adrian Klaver
adrian.kla...@aklaver.com




RE: Real application clustering in postgres.

2020-03-05 Thread Daulat Ram
Thanks for your inputs Laurenz Albe.

Would you please explain the single-master failover solution?

Suppose we have promoted a standby (replica) as master after a h/w issue at 
the master.
If after a few hours we have recovered the h/w, how can we switch back to the 
old primary?

As in Oracle we have the switchover method for Data Guard. How can we do this 
in Postgres?

Thanks,

-Original Message-
From: Laurenz Albe  
Sent: Thursday, March 5, 2020 5:37 PM
To: Daulat Ram ; pgsql-general@lists.postgresql.org
Subject: Re: Real application clustering in postgres.

On Thu, 2020-03-05 at 07:45 +, Daulat Ram wrote:
> Is there any possibility/options to setup a real application clustering in 
> Postgres as in Oracle we have a  RAC feature.

No, and as far as I know nobody feels interested in providing it.

RAC is a complicated architecture that doesn't do much good, so most people 
feel that it would be a waste of time and effort.

RAC is not really a scaling solution: because of the shared storage, you can 
only scale for more CPUs; I/O remains the bottleneck.

RAC is not really a high availability solution: because of the shared storage, 
it has a single point of failure.

Today, people use shared-nothing architectures for high availability, like 
Patroni.

> What about multi-master replication in Postgres. would you please suggest how 
> it is useful and how can setup it.

There is no support for that in core PostgreSQL.

There is a closed-source implementation that you can buy:
https://www.2ndquadrant.com/en/resources/postgres-bdr-2ndquadrant/

But multi-master replication is complicated to get right, and an application 
that uses it has to be specifically designed for that.
Very often a single-master failover solution is a better solution.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com



Re: Real application clustering in postgres.

2020-03-05 Thread Virendra Kumar
Failover is easy but failback is a little bit tricky. I have implemented 
failback by doing the following steps:

1. Start the original primary, which will be doing crash recovery. It should 
   be designed in such a way that once it is up, the application should not 
   start connecting to it, otherwise there will be split brain and a data 
   mismatch between the two instances. I implemented this by using a virtual 
   IP mounted on the server which is the actual primary, using keepalived.
2. Shut down the original primary and do a pg_rewind to make it a slave for 
   the new primary.
3. Once the slave (original primary) has caught up with the primary, do the 
   failback.
4. Repeat steps #1-#3 to make the failed-over instance a slave again.

Regards,
Virendra

On Thursday, March 5, 2020, 8:48:54 AM PST, Daulat Ram 
 wrote:  
 
 Thanks for your inputs Laurenz Albe.

Would you please explain the single-master failover solution?

Suppose we have promoted a standby (replica) as master after a h/w issue at 
the master.
If after a few hours we have recovered the h/w, how can we switch back to the 
old primary?

As in Oracle we have the switchover method for Data Guard. How can we do this 
in Postgres?

Thanks,

-Original Message-
From: Laurenz Albe  
Sent: Thursday, March 5, 2020 5:37 PM
To: Daulat Ram ; pgsql-general@lists.postgresql.org
Subject: Re: Real application clustering in postgres.

On Thu, 2020-03-05 at 07:45 +, Daulat Ram wrote:
> Is there any possibility/options to setup a real application clustering in 
> Postgres as in Oracle we have a  RAC feature.

No, and as far as I know nobody feels interested in providing it.

RAC is a complicated architecture that doesn't do much good, so most people 
feel that it would be a waste of time and effort.

RAC is not really a scaling solution: because of the shared storage, you can 
only scale for more CPUs; I/O remains the bottleneck.

RAC is not really a high availability solution: because of the shared storage, 
it has a single point of failure.

Today, people use shared-nothing architectures for high availability, like 
Patroni.

> What about multi-master replication in Postgres. would you please suggest how 
> it is useful and how can setup it.

There is no support for that in core PostgreSQL.

There is a closed-source implementation that you can buy:
https://www.2ndquadrant.com/en/resources/postgres-bdr-2ndquadrant/

But multi-master replication is complicated to get right, and an application 
that uses it has to be specifically designed for that.
Very often a single-master failover solution is a better solution.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com

  

Getting a error on creating a partition table index 12.2.

2020-03-05 Thread nikhil raj
Hi all,

While creating an index on a partitioned table I am getting an error
saying:

ERROR: cannot specify default tablespace for partitioned relations
SQL state: 0A000

The query is:

CREATE INDEX t_e20so1_doi_c_doid_idx
    ON public.t_e20so1_doi USING btree
    (i_doid ASC NULLS LAST)
    TABLESPACE pg_default;

Note: but the same query executed in version 12.1 is working fine.


Re: A question about the number of times a trigger will fire

2020-03-05 Thread stan
On Thu, Mar 05, 2020 at 08:58:32AM -0700, David G. Johnston wrote:
> On Thu, Mar 5, 2020 at 7:58 AM stan  wrote:
> 
> > UPDATE BOM_ITEM SET  cost_per_unit = 23.45 , qty = 12345.00
> >
> > So, it appears that I need to create a WHERE clause for the resultant
> > statement. But I do not see how the function has enough data to use to
> > create this where clause.
> >
> > What am I missing, here?
> >
> >
> OLD and/or NEW?
> 
> https://www.postgresql.org/docs/12/plpgsql-trigger.html#PLPGSQL-DML-TRIGGER

Yes, I was thinking that might be what I had to do. My thinking right now
is to create a WHERE clause for the new statement, using (perhaps a subset
of) the columns returned by this.
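
For what it's worth, the usual shape of such a trigger function keys the UPDATE on the OLD values; something like the following sketch, where the key column name is invented and the real view may expose different columns:

```sql
CREATE OR REPLACE FUNCTION v_trig_test() RETURNS trigger AS $$
BEGIN
    UPDATE bom_item
       SET cost_per_unit = NEW.cost_per_unit,
           qty           = NEW.qty
     WHERE bom_item.id = OLD.id;   -- hypothetical key column exposed by the view
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```

Since the trigger is FOR EACH ROW, it fires once per view row the statement matches, and each invocation sees that row's values in OLD and NEW.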


-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: Getting a error on creating a partition table index 12.2.

2020-03-05 Thread Adrian Klaver

On 3/5/20 10:04 AM, nikhil raj wrote:

HI ALL,

While creating an index on a partitioned table I am getting an error
saying:

ERROR: cannot specify default tablespace for partitioned relations
SQL state: 0A000

The query is:

CREATE INDEX t_e20so1_doi_c_doid_idx
    ON public.t_e20so1_doi USING btree
    (i_doid ASC NULLS LAST)
    TABLESPACE pg_default;

Note: but the same query executed in version 12.1 is working fine.


Well this ERROR appeared here:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;f=src/backend/commands/indexcmds.c;h=87259588d0ab0b8e742e30596afa7ae25caadb18

Thu, 25 Apr 2019 06:20:23 -0800 (10:20 -0400)

That would encompass 12.1 also.

Are you doing anything else to public.t_e20so1_doi prior to the above?

--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Getting a error on creating a partition table index 12.2.

2020-03-05 Thread nikhil raj
Hi Adrian,

On that table nothing was happening; I had just created the table and later
was creating the index when I got this error.

Please can you tell me whether this is a drawback in the 12.2 version.

On Fri, 6 Mar 2020, 12:04 am Adrian Klaver, 
wrote:

> On 3/5/20 10:04 AM, nikhil raj wrote:
> > HI ALL,
> >
> > While creating an index on a partitioned table I am getting an error
> > saying:
> >
> > ERROR: cannot specify default tablespace for partitioned relations
> > SQL state: 0A000
> >
> > The query is:
> >
> > CREATE INDEX t_e20so1_doi_c_doid_idx
> >     ON public.t_e20so1_doi USING btree
> >     (i_doid ASC NULLS LAST)
> >     TABLESPACE pg_default;
> >
> > Note: but the same query executed in version 12.1 is working fine.
>
> Well this ERROR appeared here:
>
>
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;f=src/backend/commands/indexcmds.c;h=87259588d0ab0b8e742e30596afa7ae25caadb18
>
> Thu, 25 Apr 2019 06:20:23 -0800 (10:20 -0400)
>
> That would encompass 12.1 also.
>
> Are you doing anything else to public.t_e20so1_doi prior to the above?
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Getting a error on creating a partition table index 12.2.

2020-03-05 Thread Adrian Klaver

On 3/5/20 10:48 AM, nikhil raj wrote:

Hi Adrian,

On that table nothing was happening; I had just created the table and later 
was creating the index when I got this error.

Please can you tell me whether this is a drawback in the 12.2 version.



Unfortunately I don't have answer for you on that. Someone with more 
knowledge of the internals will have to comment.


For now the solution would seem to be not to specify the TABLESPACE if 
you want to use the default.
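
Concretely, the statement from earlier in the thread works once the TABLESPACE clause is dropped:

```sql
CREATE INDEX t_e20so1_doi_c_doid_idx
    ON public.t_e20so1_doi USING btree
    (i_doid ASC NULLS LAST);   -- no TABLESPACE clause: the default is used
```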




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Getting a error on creating a partition table index 12.2.

2020-03-05 Thread nikhil raj
Ok, thanks for the clarification.

On Fri, Mar 6, 2020 at 12:31 AM Adrian Klaver 
wrote:

> On 3/5/20 10:48 AM, nikhil raj wrote:
> > Hi Adrian,
> >
> > On that table nothing was happening; I had just created the table and
> > later was creating the index when I got this error.
> >
> > Please can you tell me whether this is a drawback in the 12.2 version.
> >
>
> Unfortunately I don't have answer for you on that. Someone with more
> knowledge of the internals will have to comment.
>
> For now the solution would seem to be not to specify the TABLESPACE if
> you want to use the default.
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Getting a error on creating a partition table index 12.2.

2020-03-05 Thread Alvaro Herrera
On 2020-Mar-05, Adrian Klaver wrote:

> On 3/5/20 10:04 AM, nikhil raj wrote:

> > CREATE INDEX t_e20so1_doi_c_doid_idx
> >     ON public.t_e20so1_doi USING btree
> >     (i_doid ASC NULLS LAST)
> >     TABLESPACE pg_default;
 
> > ERROR: cannot specify default tablespace for partitioned relations
> > SQL state: 0A000

> > Note:- but the same query is executed in the 12.1 version.its working fine.
> 
> Well this ERROR appeared here:
> 
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;f=src/backend/commands/indexcmds.c;h=87259588d0ab0b8e742e30596afa7ae25caadb18

The reason for the error is the expectation that creating an index on a
partitioned table with a tablespace specification will cause the
children indexes (ie. the indexes on the partitions) to use the same
tablespace.

This does not work properly for the default tablespace, so I made that
particular condition an error.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Getting a error on creating a partition table index 12.2.

2020-03-05 Thread Adrian Klaver

On 3/5/20 11:31 AM, Alvaro Herrera wrote:

On 2020-Mar-05, Adrian Klaver wrote:


On 3/5/20 10:04 AM, nikhil raj wrote:



CREATE INDEX t_e20so1_doi_c_doid_idx
    ON public.t_e20so1_doi USING btree
    (i_doid ASC NULLS LAST)
    TABLESPACE pg_default;

ERROR: cannot specify default tablespace for partitioned relations
SQL state: 0A000

Note: but the same query executed in version 12.1 is working fine.


Well this ERROR appeared here:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;f=src/backend/commands/indexcmds.c;h=87259588d0ab0b8e742e30596afa7ae25caadb18


The reason for the error is the expectation that creating an index on a
partitioned table with a tablespace specification will cause the
children indexes (ie. the indexes on the partitions) to use the same
tablespace.

This does not work properly for the default tablespace, so I made that
particular condition an error.



The OP was wondering why it worked in 12.1 and not 12.2?

I could not see any obvious reason for that, so:

1) There is a not obvious reason

2) It did not work in 12.1 either.

--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Getting a error on creating a partition table index 12.2.

2020-03-05 Thread Alvaro Herrera
On 2020-Mar-05, Adrian Klaver wrote:

> The OP was wondering why it worked in 12.1 and not 12.2?
> 
> I could not see any obvious reason for that, so:
> 
> 1) There is a not obvious reason
> 
> 2) It did not work in 12.1 either.

(2) is correct.

55469 12.1 9913=# show server_version;
 server_version
----------------
 12.1
(1 fila)

55469 12.1 9913=# create index on p (a) tablespace pg_default;
ERROR:  cannot specify default tablespace for partitioned relations

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Advice request : simultaneous function/data updates on many databases

2020-03-05 Thread Rory Campbell-Lange
On 04/03/20, Rory Campbell-Lange (r...@campbell-lange.net) wrote:
> We have many databases of the same type separated for data governance
> reasons. They, however, share the same web front-end code.
> 
> Presently, replacing functions and performing data updates on the
> databases in series often executes across all databases in less than a
> minute. (The updates are currently done with simple sql files connecting
> to each database and then loading a stub file pointing to each function
> to drop and reload, and running the data update queries.)
> 
> However, for larger updates, the time when the front end code is
> out-of-step with the database can cause end-user problems.

For information, following the very helpful advice here, we intend to
proceed as follows, using a rolling upgrade methodology:

for each database:
* upgrade the functions and sql
* on success, callout haproxy to switch the client from web code
  version old to new
* else investigate the upgrade failure

We're planning to use postgres for recording state.




Table with many NULLS for indexed column yields strange query plan

2020-03-05 Thread greigwise
I have a query like this:

SELECT  "table1".* FROM "table1"
INNER JOIN "table2" ON "table2"."table1_id" = "table1"."id"
INNER JOIN "table3" ON "table3"."id" = "table2"."table3_id" WHERE
"table3"."number" = ''
AND ("table2"."type") IN ('Standard') ;

table2 has a large number of NULLS in the column table3_id.  There is an
index on this column.  Here is the result of explain analyze:

Merge Join  (cost=1001.20..4076.67 rows=17278 width=167) (actual
time=284.918..300.167 rows=2244 loops=1)
   Merge Cond: (table2.table3_id = table3.id)
   ->  Gather Merge  (cost=1000.93..787825.78 rows=621995 width=175) (actual
time=5.786..283.269 rows=64397 loops=1)
 Workers Planned: 4
 Workers Launched: 4
 ->  Nested Loop  (cost=0.87..712740.12 rows=155499 width=175)
(actual time=0.091..102.708 rows=13107 loops=5)
   ->  Parallel Index Scan using index_table2_on_table3_id on
table2  (cost=0.43..489653.08 rows=155499 width=16) (actual
time=0.027..22.327 rows=13107 loops=5)
 Filter: ((type)::text = 'Standard'::text)
   ->  Index Scan using table1_pk on table1  (cost=0.44..1.43
rows=1 width=167) (actual time=0.005..0.005 rows=1 loops=65535)
 Index Cond: (id = table2.table1_id)
   ->  Index Scan using table3_pkey on table3  (cost=0.27..53.40 rows=1
width=8) (actual time=0.041..0.048 rows=1 loops=1)
 Filter: ((number)::text = ''::text)
 Rows Removed by Filter: 35
 Planning time: 0.450 ms
 Execution time: 310.230 ms

You can see the row estimate there is way off on the Parallel Index Scan.
I suspect this is because the rows with NULL are being included in the
selectivity estimate, even though table3_id can't possibly be NULL here
due to the inner join.

If I modify the query like this:

SELECT  "table1".* FROM "table1"
INNER JOIN "table2" ON "table2"."table1_id" = "table1"."id"
INNER JOIN "table3" ON "table3"."id" = "table2"."table3_id" WHERE
"table3"."number" = ''
AND ("table2"."type") IN ('Standard') and table3_id is not null;

Just adding in table3_id is not null at the end there, I get a much better
plan.

Nested Loop  (cost=1.14..290.04 rows=66 width=167) (actual
time=0.058..11.258 rows=2244 loops=1)
   ->  Nested Loop  (cost=0.70..64.46 rows=66 width=8) (actual
time=0.049..2.873 rows=2244 loops=1)
 ->  Index Scan using table3_pkey on table3  (cost=0.27..53.40
rows=1 width=8) (actual time=0.030..0.035 rows=1 loops=1)
   Filter: ((number)::text = ''::text)
   Rows Removed by Filter: 35
 ->  Index Scan using index_table2_on_table3_id on table2 
(cost=0.43..11.05 rows=1 width=16) (actual time=0.017..2.102 rows=2244
loops=1)
   Index Cond: ((table3_id = table3.id) AND (table3_id IS NOT
NULL))
   Filter: ((type)::text = 'Standard'::text)
   ->  Index Scan using table1_pk on table1  (cost=0.44..3.42 rows=1
width=167) (actual time=0.003..0.003 rows=1 loops=2244)
 Index Cond: (id = table2.table1_id)
 Planning time: 0.403 ms
 Execution time: 11.672 ms

Can I do anything statistics-wise so that I get a better plan here, or do I
have to modify the query? It seems kind of hacky that I would have to
specify IS NOT NULL on that column since, like I said, it can't possibly be
NULL.
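One statistics-side knob that could be tried (no guarantee it fixes this, since the problem looks like the NULL fraction being folded into the join selectivity) is raising the column's statistics target:

```sql
-- Raise the sample detail for the join column and re-analyze
-- (1000 is an illustrative value, not a recommendation):
ALTER TABLE table2 ALTER COLUMN table3_id SET STATISTICS 1000;
ANALYZE table2;
```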

Thanks,

Greig Wise







What do null column values for pg_stat_progress_vacuum mean?

2020-03-05 Thread Mark Haylock
Hi,

We have an autovacuum process that has been running for almost 27 hours:

SELECT * FROM pg_stat_activity WHERE pid = 11731;
-[ RECORD 1 ]+---
datid| 16385
datname  | database_name
pid  | 11731
usesysid |
usename  |
application_name |
client_addr  |
client_hostname  |
client_port  |
backend_start| 2020-03-04 23:40:14.828138+00
xact_start   | 2020-03-04 23:40:14.849367+00
query_start  | 2020-03-04 23:40:14.849367+00
state_change | 2020-03-04 23:40:14.849368+00
wait_event_type  |
wait_event   |
state| active
backend_xid  |
backend_xmin | 3801997676
query| autovacuum: VACUUM public.responses
backend_type | autovacuum worker

A row shows up in pg_stat_progress_vacuum, but it contains null values
for every column.

SELECT * FROM pg_stat_progress_vacuum WHERE pid = 11731;
-[ RECORD 1 ]--+---
pid| 11731
datid  | 16385
datname| d2j496215lfs41
relid  |
phase  |
heap_blks_total|
heap_blks_scanned  |
heap_blks_vacuumed |
index_vacuum_count |
max_dead_tuples|
num_dead_tuples|

I see nothing in the documentation to suggest that this is an expected
state - what does it mean?
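For reference, this is the combined diagnostic I'm running to watch it (a sketch; the LEFT JOIN means a worker still shows up even when its progress row is empty or missing):

```sql
-- Activity and vacuum progress side by side for autovacuum workers:
SELECT a.pid, a.query, a.wait_event_type, a.wait_event,
       p.phase, p.relid::regclass AS relation,
       p.heap_blks_scanned, p.heap_blks_total
FROM pg_stat_activity a
LEFT JOIN pg_stat_progress_vacuum p USING (pid)
WHERE a.backend_type = 'autovacuum worker';
```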

Thanks,
Mark.




Re: What do null column values for pg_stat_progress_vacuum mean?

2020-03-05 Thread Mark Haylock
Sorry, I failed to mention which Postgres version this is:
PostgreSQL 10.11.

On Fri, Mar 6, 2020 at 3:39 PM Mark Haylock  wrote:
>
> Hi,
>
> We have an autovacuum process that has been running for almost 27 hours:
>
> SELECT * FROM pg_stat_activity WHERE pid = 11731;
> -[ RECORD 1 ]+---
> datid| 16385
> datname  | database_name
> pid  | 11731
> usesysid |
> usename  |
> application_name |
> client_addr  |
> client_hostname  |
> client_port  |
> backend_start| 2020-03-04 23:40:14.828138+00
> xact_start   | 2020-03-04 23:40:14.849367+00
> query_start  | 2020-03-04 23:40:14.849367+00
> state_change | 2020-03-04 23:40:14.849368+00
> wait_event_type  |
> wait_event   |
> state| active
> backend_xid  |
> backend_xmin | 3801997676
> query| autovacuum: VACUUM public.responses
> backend_type | autovacuum worker
>
> A row shows up in pg_stat_progress_vacuum, but it contains null values
> for every column.
>
> SELECT * FROM pg_stat_progress_vacuum WHERE pid = 11731;
> -[ RECORD 1 ]--+---
> pid| 11731
> datid  | 16385
> datname| d2j496215lfs41
> relid  |
> phase  |
> heap_blks_total|
> heap_blks_scanned  |
> heap_blks_vacuumed |
> index_vacuum_count |
> max_dead_tuples|
> num_dead_tuples|
>
> I see nothing in the documentation to suggest that this is an expected
> state - what does it mean?
>
> Thanks,
> Mark.



-- 
Mark Haylock - Developer
Sydney | Christchurch | Auckland | Boulder
e: m...@trineo.com | w: trineo.com | ph: +64 3 377 4001