Re: BEFORE ... Statement-level trigger

2019-02-19 Thread Jitendra Loyal
Thanks Adrian

I am trying to understand how a BEFORE statement-level trigger can be
used. Since it is a trigger, one needs to know which rows are being
affected.

Regards,
Jiten

On Mon 18 Feb, 2019, 9:42 PM Adrian Klaver, 
wrote:

> On 2/18/19 4:06 AM, Jitendra Loyal wrote:
> > The behaviour is not clear for BEFORE Statement-level Trigger. This is
> > because transition tables cannot be used. So how does one get access to
> > the rows being affected? It is not documented either.
>
> If you need the row values then use a FOR ROW trigger.
>
> >
> > Thanks
> > Jiten
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
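For illustration, a minimal sketch of the row-level alternative Adrian points
at, reusing the column names from the related thread (the function body here
is a hypothetical reconstruction, not code from the original posts):

CREATE FUNCTION trg_storage_locations_set_path()
RETURNS TRIGGER AS $$
DECLARE
    v_parent_path TEXT;
BEGIN
    -- NEW is available because this is a FOR EACH ROW trigger.
    SELECT storage_location_path INTO v_parent_path
      FROM storage_locations
     WHERE storage_location_id = NEW.parent_storage_location_id;
    NEW.storage_location_path :=
        COALESCE(v_parent_path || '/', '') || NEW.storage_location_nm;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER storage_locations_set_path_b_iu_r
BEFORE INSERT OR UPDATE ON storage_locations
FOR EACH ROW EXECUTE FUNCTION trg_storage_locations_set_path();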


Re: Statement-level trigger results in recursion

2019-02-19 Thread Jitendra Loyal
Unfortunately no! Where can I see those? Will I find my answer there? I
have referred to the documentation and tried various things.

Thanks and regards,
Jiten

On Mon 18 Feb, 2019, 9:59 PM Adrian Klaver, 
wrote:

> On 2/18/19 8:23 AM, Jitendra Loyal wrote:
> > My bad!
> >
> > It is a transition table. Consider the following revised definition of
> > trigger:
> >
> >
> > CREATE TRIGGER storage_locations_b_u_AS_DML
> > AFTER UPDATE
> > ON storage_locations
> > REFERENCING NEW TABLE AS new_table
> > OLD TABLE AS old_table
> > FOR EACH STATEMENT EXECUTE FUNCTION TRG_storage_locations_b_u_AS_DML ();
>
> Alright I understand now.
>
> Did you see the rest of my previous post about AFTER STATEMENT running
> regardless of number of rows affected?
>
> >
> > Thanks and regards,
> > Jiten
> >
> > On Mon 18 Feb, 2019, 9:32 PM Adrian Klaver, wrote:
> >
> > On 2/18/19 4:11 AM, Jitendra Loyal wrote:
> >  >
> >  > The AFTER Statement-level Trigger runs into infinite execution
> when
> >  > another set of rows are affected for the same table through this
> >  > trigger. Consider this use case where a table storage_locations
> that
> >  > manages a hierarchy of storage_locations in stores, and thus
> having
> >  > following columns (for simplicity):
> >  >
> >  > storage_location_id SERIAL NOT NULL PRIMARY KEY,
> >  > store_id INTEGER NOT NULL, -- REFERENCES stores
> >  > storage_location_nm VARCHAR (25) NOT NULL,
> >  > parent_storage_location_id INTEGER NULL REFERENCES
> > storage_locations,
> >  > -- NULL for root storage locations
> >  > storage_location_path TEXT NOT NULL
> >  >
> >  > I have a BEFORE ROW trigger, which updates the
> > storage_location_path with
> >  > parent's storage_location_path, if any, concatenated with its
> >  > storage_location_name. This works fine - no issues.
> >  >
> >  > I have another AFTER UPDATE STATEMENT-level Trigger and function
> > definitions
> >  > as below (which updates the storage_path of the children):
> >  >
> >  > CREATE FUNCTION TRG_storage_locations_b_u_AS_DML ()
> >  > RETURNS TRIGGER
> >  > AS $$
> >  > DECLARE
> >  > v_separator VARCHAR (1) = '/';
> >  > v_cnt INT;
> >  > BEGIN
> >  > -- [ -- Required to prevent infinite recursion
> >  > SELECT COUNT (*) INTO v_cnt
> >  > FROM new_table;
> >
> > Where is new_table coming from?
> >
> >  >
> >  > IF (v_cnt > 0) THEN
> >  > -- ] -- Required to prevent infinite recursion
> >  > UPDATE storage_locations
> >  > SET storage_location_path = COALESCE (i.storage_location_path ||
> >  > v_separator, '') || storage_locations.storage_location_nm
> >  > FROM inserted i
> >  > JOIN deleted d
> >  > ON ( i.storage_location_id = d.storage_location_id
> >  > AND i.storage_location_path != d.storage_location_path
> >  > )
> >  > WHERE storage_locations.parent_storage_location_id =
> > i.storage_location_id;
> >  > END IF;
> >  > RETURN NULL;
> >  > END
> >  > $$ LANGUAGE plpgsql;
> >  >
> >  > CREATE TRIGGER storage_locations_b_u_AS_DML
> >  > AFTER UPDATE
> >  > ON storage_locations
> >  > REFERENCING NEW TABLE AS inserted
> >  > OLD TABLE AS deleted
> >  > FOR EACH STATEMENT EXECUTE FUNCTION
> > TRG_storage_locations_b_u_AS_DML ();
> >  >
> >  > Notice that the Trigger is getting called endlessly (if the
> > number of
> >  > rows in the NEW TABLE are NOT checked). I reckon if there are not
> > any
> >  > rows, what is the need to call the trigger. Or, may be, I am
> missing
> >  > something, which I need to learn.
> >
> > Yes:
> >
> > https://www.postgresql.org/docs/10/sql-createtrigger.html
> >
> > "... In contrast, a trigger that is marked FOR EACH STATEMENT only
> > executes once for any given operation, regardless of how many rows it
> > modifies (in particular, an operation that modifies zero rows will
> > still
> > result in the execution of any applicable FOR EACH STATEMENT
> triggers)."
> >
> >  >
> >  > Thanks,
> >  >
> >  > Jiten
> >  >
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com 
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Statement-level trigger results in recursion

2019-02-19 Thread Jitendra Loyal
I do understand that the statement-level trigger will be executed once
before the operation. My point is: if one does not know the rows, what
kind of use can it be put to? What is the use case? In AFTER triggers,
one gets the rows in transition tables; how does one do that with a
BEFORE trigger?

Thanks and regards,
Jiten


Re: Statement-level trigger results in recursion

2019-02-19 Thread Jitendra Loyal
My bad!

It is a transition table. Consider the following revised definition of
trigger:


CREATE TRIGGER storage_locations_b_u_AS_DML
AFTER UPDATE
ON storage_locations
REFERENCING NEW TABLE AS new_table
OLD TABLE AS old_table
FOR EACH STATEMENT EXECUTE FUNCTION TRG_storage_locations_b_u_AS_DML ();

Thanks and regards,
Jiten

On Mon 18 Feb, 2019, 9:32 PM Adrian Klaver, 
wrote:

> On 2/18/19 4:11 AM, Jitendra Loyal wrote:
> >
> > The AFTER Statement-level Trigger runs into infinite execution when
> > another set of rows are affected for the same table through this
> > trigger. Consider this use case where a table storage_locations that
> > manages a hierarchy of storage_locations in stores, and thus having
> > following columns (for simplicity):
> >
> > storage_location_id SERIAL NOT NULL PRIMARY KEY,
> > store_id INTEGER NOT NULL, -- REFERENCES stores
> > storage_location_nm VARCHAR (25) NOT NULL,
> > parent_storage_location_id INTEGER NULL REFERENCES storage_locations,
> > -- NULL for root storage locations
> > storage_location_path TEXT NOT NULL
> >
> > I have a BEFORE ROW trigger, which updates the storage_location_path with
> > parent's storage_location_path, if any, concatenated with its
> > storage_location_name. This works fine - no issues.
> >
> > I have another AFTER UPDATE STATEMENT-level Trigger and function
> definitions
> > as below (which updates the storage_path of the children):
> >
> > CREATE FUNCTION TRG_storage_locations_b_u_AS_DML ()
> > RETURNS TRIGGER
> > AS $$
> > DECLARE
> > v_separator VARCHAR (1) = '/';
> > v_cnt INT;
> > BEGIN
> > -- [ -- Required to prevent infinite recursion
> > SELECT COUNT (*) INTO v_cnt
> > FROM new_table;
>
> Where is new_table coming from?
>
> >
> > IF (v_cnt > 0) THEN
> > -- ] -- Required to prevent infinite recursion
> > UPDATE storage_locations
> > SET storage_location_path = COALESCE (i.storage_location_path ||
> > v_separator, '') || storage_locations.storage_location_nm
> > FROM inserted i
> > JOIN deleted d
> > ON ( i.storage_location_id = d.storage_location_id
> > AND i.storage_location_path != d.storage_location_path
> > )
> > WHERE storage_locations.parent_storage_location_id =
> i.storage_location_id;
> > END IF;
> > RETURN NULL;
> > END
> > $$ LANGUAGE plpgsql;
> >
> > CREATE TRIGGER storage_locations_b_u_AS_DML
> > AFTER UPDATE
> > ON storage_locations
> > REFERENCING NEW TABLE AS inserted
> > OLD TABLE AS deleted
> > FOR EACH STATEMENT EXECUTE FUNCTION TRG_storage_locations_b_u_AS_DML ();
> >
> > Notice that the Trigger is getting called endlessly (if the number of
> > rows in the NEW TABLE are NOT checked). I reckon if there are not any
> > rows, what is the need to call the trigger. Or, may be, I am missing
> > something, which I need to learn.
>
> Yes:
>
> https://www.postgresql.org/docs/10/sql-createtrigger.html
>
> "... In contrast, a trigger that is marked FOR EACH STATEMENT only
> executes once for any given operation, regardless of how many rows it
> modifies (in particular, an operation that modifies zero rows will still
> result in the execution of any applicable FOR EACH STATEMENT triggers)."
>
> >
> > Thanks,
> >
> > Jiten
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
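Because a FOR EACH STATEMENT trigger fires even when zero rows are affected
(per the documentation quoted above), the function needs a guard like the
row-count check in this thread. A slightly cheaper sketch of the same guard,
using EXISTS instead of COUNT(*) against the transition table alias from the
revised trigger, might be:

    IF NOT EXISTS (SELECT 1 FROM new_table) THEN
        RETURN NULL;  -- zero rows affected: nothing to cascade, stop recursing
    END IF;
    -- ... then run the cascading UPDATE as in the function above ...

EXISTS stops scanning the transition table at the first row, whereas COUNT(*)
reads it all.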


Re: BEFORE ... Statement-level trigger

2019-02-19 Thread Jitendra Loyal
I have gone through the documentation a number of times to establish my
understanding. However, I have been wondering about the recursion in the
case I put forth.

Is there a better way to handle this requirement? The point is that the
trigger is being called when no rows are affected.

Thanks and regards,
Jiten


Re: Statement-level trigger results in recursion

2019-02-19 Thread Jitendra Loyal
Thanks for all your efforts. I appreciate it.

Let us wait and see if someone can enlighten us, or you locate the
conversation.

Thanks once again

Regards,
Jiten

On Tue 19 Feb, 2019, 3:19 AM Adrian Klaver, 
wrote:

> On 2/18/19 9:07 AM, Jitendra Loyal wrote:
> > I do understand that the statement level trigger will be executed once
> > before the operation. My point is.. if one does not know the rows, what
> > kind of use it can be put to. What is the use case? Like in after
> > triggers, one gets the rows in transition tables, how does one do with
> > before trigger.
>
> Use FOR EACH ROW.
>
> Why you cannot use a FOR EACH STATEMENT trigger is something I thought I
> remember being discussed on the list before. Unfortunately I cannot find
> that conversation at the moment. Someone else will need to weigh in on
> this.
>
> >
> > Thanks and regards,
> > Jiten
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: BEFORE ... Statement-level trigger

2019-02-19 Thread Jitendra Loyal
I would prefer not to use a row trigger in this case, as it would be
relatively inefficient.

So can we conclude that a sanity check is essential when using a
statement-level trigger?

Thanks and regards,
Jiten


Re: PG Upgrade with hardlinks, when to start/stop master and replicas

2019-02-19 Thread Stephen Frost
Greetings,

* Martín Fernández (fmarti...@gmail.com) wrote:
> After reading the pg_upgrade documentation multiple times, it seems that 
> after running pg_upgrade on the primary instance, we can't start it until we 
> run rsync from the primary to the standby. I'm understanding this from the 
> following section in the pg_upgrade manual page.
> 
> ```
> You will not be running pg_upgrade on the standby servers, but rather rsync 
> on the
>            primary. Do not start any servers yet.
> ```
> 
> I'm understanding the `any` as primary and standbys.

Yes, that's correct, you shouldn't start up anything yet.

> On the other hand, we've been doing tests that start the primary instance as 
> soon as pg_upgrade is done. These tests have worked perfectly fine so far. We 
> make the rsync call with the primary instance running and the standby can 
> start later on after rsync is done and we copy the new configuration files.

This is like taking an online backup of the primary without actually
doing pg_start_backup / pg_stop_backup and following the protocol for
that, meaning that the replica will start up without a backup_label and
will think it's at whatever point in the WAL stream that the pg_control
file says it's at as of whenever the rsync copies that file.

That is NOT SAFE and it's a sure way to end up with corruption.

The rsync while everything is down should be pretty fast, unless you
have unlogged tables that are big (in which case, you should truncate
them before shutting down the primary) or temporary tables left around
(which you should clean up) or just generally other things that a
replica doesn't normally have.

If you can't have any downtime during this process then, imv, the answer
is to build out a new replica that will essentially be a 'throw-away',
move all the read load over to it and then go through the documented
pg_upgrade process with the primary and the other replicas, then flip
the traffic back to the primary + original replicas and then you can
either throw away the replica that was kept online or rebuild it using
the traditional methods of pg_basebackup (or for a larger system, you
could use pgbackrest which can run in parallel and is much, much faster
than pg_basebackup).

> If what we are doing is wrong and we need to run `rsync` before starting the 
> primary instance, that would mean that the primary and the standby are not 
> usable if pg10 doesn't start correctly on the primary, right? 

This is another reason why it's good to have an independent replica, as
it can be a fail-safe if things go completely south (you can just
promote it and have it be the primary and then rebuild replicas using
the regular backup+restore method and figure out what went wrong with
the pg10 migration).

Thanks!

Stephen
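To spot large unlogged tables worth truncating before the shutdown described
above, a catalog query along these lines can help (an untested sketch using
only standard system catalogs):

SELECT c.relname,
       pg_size_pretty(pg_table_size(c.oid)) AS size
  FROM pg_class c
 WHERE c.relpersistence = 'u'   -- 'u' marks unlogged relations
   AND c.relkind = 'r'          -- ordinary tables only
 ORDER BY pg_table_size(c.oid) DESC;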




Plpythonu extension

2019-02-19 Thread Vincent Predoehl
hi

I have PostgreSQL 11.1 installed on MacOS X Mojave and I used brew to do the 
install.  It appears that the extension plpython was not included in the 
default installation and I would like to know how to install it.

Thanks…
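A generic way to check what the installed server offers (a sketch; the Python
language packs may ship as a separate Homebrew formula or package, so names
vary) is to query pg_available_extensions and then create the extension if it
is listed:

SELECT name, default_version, installed_version
  FROM pg_available_extensions
 WHERE name LIKE 'plpython%';

-- If a row appears, the language pack is present and this should succeed:
CREATE EXTENSION plpython3u;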


Re: HAVING query structured wrong

2019-02-19 Thread Chuck Martin
On Mon, Feb 18, 2019 at 12:37 PM Andrew Gierth 
wrote:

> > "Chuck" == Chuck Martin  writes:
>
>  Chuck> I am trying to create a query that returns all transactions for
>  Chuck> each person who has a balance over a given amount. I thought
>  Chuck> HAVING was the answer, but if so, I'm mis-using it. This query
>  Chuck> returns only transactions exceeding the given amount rather than
>  Chuck> transactions for people whose balance is over the amount:
>  [snip]
>  Chuck> Since that returned the wrong set of records, I created another
>  Chuck> that returns the correct set of people with balances over the
>  Chuck> given amount. But I can't figure out how to use this to get all
>  Chuck> the transactions for people returned by this query:
>
>  Chuck> SELECT case_pkey
>  Chuck> FROM trans,ombcase,status
>  Chuck> WHERE case_fkey = case_pkey
>  Chuck> AND status_fkey = status_pkey
>  Chuck> AND statusopen = 1
>  Chuck> AND transistrust <> 1
>  Chuck> AND transcleared <> 1
>  Chuck> GROUP BY case_pkey
>  Chuck> HAVING sum(transamount) >= 50
>
>  Chuck> ORDER BY case_pkey
>
>  Chuck> So how do I get all transactions for each case_pkey?
>
> You can join the result of any subquery as if it were a table, either
> with or without using a CTE:
>
> SELECT ...
>   FROM (select case_pkey from ... having ...) AS cases,
>trans
>  WHERE trans.case_fkey = cases.case_pkey;


>
> (incidentally, please qualify all the column references in your query
> with a table name or alias, otherwise people reading your code have no
> idea which column is supposed to be in which table)


Sorry. That was sloppy.

But working with this idea, I got the query working, so I appreciate the
pointer. For anyone wanting to see how I did so, here is the working query
(replacing the variable for the minimum balance to include with "50"):

-- Find transactions for client invoices using subquery to find client total
SELECT
contactnameaddress.fullname,contactnameaddress.streetaddress,contactnameaddress.towncityname,
contactnameaddress.stateprovabbrev,contactnameaddress.postalcode,
trans.transdate,linkedname.linkednameid,trans.transreference,
trans.transamount,ombcase.caseid,trans.transcheckno,contactnameaddress.lastorcompanyname,contactnameaddress.firstname
FROM trans,ombcase,client,contactnameaddress,linkedname,status ,
 (SELECT case_pkey FROM  ombcase,trans , status
WHERE trans.case_fkey = ombcase.case_pkey
AND ombcase.status_fkey = status_pkey
AND status.statusopen = 1
AND trans.transistrust <> 1
AND trans.transcleared <> 1
GROUP BY ombcase.case_pkey
HAVING sum(trans.transamount) >= 50) AS cases
WHERE trans.case_fkey = cases.case_pkey
AND trans.transistrust <> 1
AND ombcase.client_fkey = client.client_pkey
AND client.clientname_fkey = contactnameaddress.contactname_pkey
AND trans.linkedname_fkey = linkedname.contactname_pkey
AND ombcase.status_fkey = status.status_pkey
AND status.statusopen = 1
AND trans.transcleared <> 1
AND trans.Case_fkey = ombcase.case_pkey
GROUP BY ombcase.case_pkey,contactnameaddress.streetaddress,
contactnameaddress.towncityname, contactnameaddress.stateprovabbrev,
contactnameaddress.postalcode,
trans.transdate,trans.transreference,trans.transamount,
contactnameaddress.fullname,linkedname.linkednameid,
contactnameaddress.lastorcompanyname,
contactnameaddress.firstname,ombcase.caseid,
trans.transcheckno

I'll try to learn the other methods using your examples. Again, I
appreciate the help.

> or with a CTE,
>
> WITH cases AS (select ... from ... having ...)
> SELECT ...
>   FROM cases, trans
>  WHERE trans.case_fkey = cases.case_pkey;
>
> There's also a third method with window functions instead of GROUP BY,
> which is to do something like
>
> SELECT ...
>   FROM (select ...,
>sum(transamount) over (partition by case_pkey) as total_amt
>   from ...) s
>  WHERE total_amt > 50;
>
> --
> Andrew (irc:RhodiumToad)
>
>
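The "filter the groups, then join back for all their rows" pattern in
miniature, as a self-contained sketch with toy data (hypothetical table, not
the schema from this thread):

CREATE TEMP TABLE trans_demo (case_pkey int, transamount numeric);
INSERT INTO trans_demo VALUES (1, 30), (1, 40), (2, 10);

-- Keep only groups whose total is >= 50, then fetch every row of those groups.
SELECT t.*
  FROM trans_demo t
  JOIN (SELECT case_pkey
          FROM trans_demo
         GROUP BY case_pkey
        HAVING sum(transamount) >= 50) keep USING (case_pkey);
-- returns both rows for case_pkey = 1 and neither row for case_pkey = 2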


Re: PG Upgrade with hardlinks, when to start/stop master and replicas

2019-02-19 Thread Hellmuth Vargas
Hi

But could you do the following procedure?:
pg_upgrade of the master
rsync with a hot standby
start the master
start the hot standby
stop the hot standby and rsync the other hot standby with the migrated hot
standby?

El mar., 19 de feb. de 2019 a la(s) 06:12, Stephen Frost (sfr...@snowman.net)
escribió:

> Greetings,
>
> * Martín Fernández (fmarti...@gmail.com) wrote:
> > After reading the pg_upgrade documentation multiple times, it seems that
> after running pg_upgrade on the primary instance, we can't start it until
> we run rsync from the primary to the standby. I'm understanding this from
> the following section in the pg_upgrade manual page.
> >
> > ```
> > You will not be running pg_upgrade on the standby servers, but rather
> rsync on the
> >primary. Do not start any servers yet.
> > ```
> >
> > I'm understanding the `any` as primary and standbys.
>
> Yes, that's correct, you shouldn't start up anything yet.
>
> > On the other hand, we've been doing tests that start
> the primary instance as soon as pg_upgrade is done. These tests have worked
> perfectly fine so far. We make the rsync call with the primary instance
> running and the standby can start later on after rsync is done and we copy
> the new configuration files.
>
> This is like taking an online backup of the primary without actually
> doing pg_start_backup / pg_stop_backup and following the protocol for
> that, meaning that the replica will start up without a backup_label and
> will think it's at whatever point in the WAL stream that the pg_control
> file says it's at as of whenever the rsync copies that file.
>
> That is NOT SAFE and it's a sure way to end up with corruption.
>
> The rsync while everything is down should be pretty fast, unless you
> have unlogged tables that are big (in which case, you should truncate
> them before shutting down the primary) or temporary tables left around
> (which you should clean up) or just generally other things that a
> replica doesn't normally have.
>
> If you can't have any downtime during this process then, imv, the answer
> is to build out a new replica that will essentially be a 'throw-away',
> move all the read load over to it and then go through the documented
> pg_upgrade process with the primary and the other replicas, then flip
> the traffic back to the primary + original replicas and then you can
> either throw away the replica that was kept online or rebuild it using
> the traditional methods of pg_basebackup (or for a larger system, you
> could use pgbackrest which can run in parallel and is much, much faster
> than pg_basebackup).
>
> > If what we are doing is wrong and we need to run `rsync` before starting
> the primary instance, that would mean that the primary and the standby are
> not usable if pg10 doesn't start correctly on the primary, right?
>
> This is another reason why it's good to have an independent replica, as
> it can be a fail-safe if things go completely south (you can just
> promote it and have it be the primary and then rebuild replicas using
> the regular backup+restore method and figure out what went wrong with
> the pg10 migration).
>
> Thanks!
>
> Stephen
>


-- 
Cordialmente,

Ing. Hellmuth I. Vargas S.
Esp. Telemática y Negocios por Internet
Oracle Database 10g Administrator Certified Associate
EnterpriseDB Certified PostgreSQL 9.3 Associate


Re: PG Upgrade with hardlinks, when to start/stop master and replicas

2019-02-19 Thread Stephen Frost
Greetings,

* Hellmuth Vargas (hiv...@gmail.com) wrote:
> But could you do the following procedure?:

> pg_upgrade of the master
> rsync with a hot standby

The above should be alright provided both the primary and the standby
are down and the instructions in the pg_upgrade docs are followed.

> start the master
> start the hot standby

So, start both the primary and the replica?  That part should be fine by
itself.

> stop hot standby and rsync the other hot standby with the migrated hot
> standby?

At some later point, shut down the replica completely, then do an rsync
from that replica to the other replica and build its hard-link tree that
way, and update anything that's changed while the 'migrated' replica was
online?  I don't see any obvious issue with that as the result should
mean that the two replicas are identical from PG's perspective from that
point moving forward.

Ultimately, it really depends on your specific environment though, of
course.  It also might not be a bad idea to do a regular backup of the
upgraded primary and then restore that to the second replica, just to
make sure you have that whole process working and to test out your
restore process.

Thanks!

Stephen




Re: PG Upgrade with hardlinks, when to start/stop master and replicas

2019-02-19 Thread Hellmuth Vargas
Thank you Stephen 👍

El mar., 19 de feb. de 2019 a la(s) 10:05, Stephen Frost (sfr...@snowman.net)
escribió:

> Greetings,
>
> * Hellmuth Vargas (hiv...@gmail.com) wrote:
> > But could you do the following procedure?:
>
> > pg_upgrade of the master
> > rsync with a hot standby
>
> The above should be alright provided both the primary and the standby
> are down and the instructions in the pg_upgrade docs are followed.
>
> > start the master
> > start the hot standby
>
> So, start both the primary and the replica?  That part should be fine by
> itself.
>
> > stop hot standby and rsync the other hot standby with the migrated hot
> > standby?
>
> At some later point, shut down the replica completely, then do an rsync
> from that replica to the other replica and build its hard-link tree that
> way, and update anything that's changed while the 'migrated' replica was
> online?  I don't see any obvious issue with that as the result should
> mean that the two replicas are identical from PG's perspective from that
> point moving forward.
>
> Ultimately, it really depends on your specific environment though, of
> course.  It also might not be a bad idea to do a regular backup of the
> upgraded primary and then restore that to the second replica, just to
> make sure you have that whole process working and to test out your
> restore process.
>
> Thanks!
>
> Stephen
>


-- 
Cordialmente,

Ing. Hellmuth I. Vargas S.
Esp. Telemática y Negocios por Internet
Oracle Database 10g Administrator Certified Associate
EnterpriseDB Certified PostgreSQL 9.3 Associate


Re: PG Upgrade with hardlinks, when to start/stop master and replicas

2019-02-19 Thread Martín Fernández
Stephen,

Thanks for the information! I've refactored our migration scripts to follow 
the suggestions. 

One extra question that popped up. As long as we don't start the standby (after 
running rsync), we can always `rm -f $PGDATA_10` and promote the standby if 
necessary for failover, right? We also need to `mv` pg_control.old to 
pg_control in the old data directory.

Thanks!

Martín

On Tue, Feb 19th, 2019 at 12:39 PM, Hellmuth Vargas  wrote:

> 
> Thank you Stephen 👍
> 
> El mar., 19 de feb. de 2019 a la(s) 10:05, Stephen Frost ( sfr...@snowman.net
> ) escribió:
> 
> 
>> Greetings,
>> 
>> * Hellmuth Vargas ( hiv...@gmail.com ) wrote:
>> > But could you do the following procedure?:
>> 
>> > pg_upgrade of the master
>> > rsync with a hot standby
>> 
>> The above should be alright provided both the primary and the standby
>> are down and the instructions in the pg_upgrade docs are followed.
>> 
>> > start the master
>> > start the hot standby
>> 
>> So, start both the primary and the replica?  That part should be fine by
>> itself.
>> 
>> > stop hot standby and rsync the other hot standby with the migrated hot
>> > standby?
>> 
>> At some later point, shut down the replica completely, then do an rsync
>> from that replica to the other replica and build its hard-link tree that
>> way, and update anything that's changed while the 'migrated' replica was
>> online?  I don't see any obvious issue with that as the result should
>> mean that the two replicas are identical from PG's perspective from that
>> point moving forward.
>> 
>> Ultimately, it really depends on your specific environment though, of
>> course.  It also might not be a bad idea to do a regular backup of the
>> upgraded primary and then restore that to the second replica, just to
>> make sure you have that whole process working and to test out your
>> restore process.
>> 
>> Thanks!
>> 
>> Stephen
>> 
> 
> 
> 
> 
> 
> --
> Cordialmente,
> 
> Ing. Hellmuth I. Vargas S.
> Esp. Telemática y Negocios por Internet 
> Oracle Database 10g Administrator Certified Associate
> EnterpriseDB Certified PostgreSQL 9.3 Associate
> 
> 
>

Re: PG Upgrade with hardlinks, when to start/stop master and replicas

2019-02-19 Thread Stephen Frost
Greetings,

* Martín Fernández (fmarti...@gmail.com) wrote:
> Thanks for the information! I've refactored our migration scripts to follow the 
> suggestions. 

Please don't top-post on these mailing lists.

> One extra question that popped up. As long as we don't start the standby 
> (after running rsync), we can always `rm -f $PGDATA_10` and promote the 
> standby if necessary for failover, right? We also need to `mv` pg_control.old 
> to pg_control in the old data directory.

Not sure which standby we're talking about here, but in general, yes, as
long as you haven't actually started the system after the
pg_upgrade/rsync, you should be able to blow away the new cluster that
pg_upgrade/rsync created and start the old cluster back up and promote
it (if necessary) and use it.

Note that you should *not* need to do anything with pg_control, I have
no idea what you're referring to there, but the old cluster should have
the pg_control file and all the catalog tables in place from before the
pg_upgrade/rsync (those aren't touched during the pg_upgrade/rsync
process) and you would just need to start up the old binaries pointing
at the old PG data directory and everything should just work.

Thanks!

Stephen




Re: PG Upgrade with hardlinks, when to start/stop master and replicas

2019-02-19 Thread Martín Fernández
Stephen,

@bilby91


On Tue, Feb 19, 2019 at 1:37 PM Stephen Frost  wrote:

> Greetings,
>
> * Martín Fernández (fmarti...@gmail.com) wrote:
> > Thanks for the information! I've refactored our migration scripts to follow
> the suggestions.
>
> Please don't top-post on these mailing lists.
>
> > One extra question that popped up. As long as we don't start the standby
> (after running rsync), we can always `rm -f $PGDATA_10` and promote the
> standby if necessary for failover right ? We also need to `mv`
> pg_control.old to pg_control in the old data directory.
>
> Not sure which standby we're talking about here, but in general, yes, as
> long as you haven't actually started the system after the
> pg_upgrade/rsync, you should be able to blow away the new cluster that
> pg_upgrade/rsync created and start the old cluster back up and promote
> it (if necessary) and use it.
>
> Note that you should *not* need to do anything with pg_control, I have
> no idea what you're referring to there, but the old cluster should have
> the pg_control file and all the catalog tables in place from before the
> pg_upgrade/rsync (those aren't touched during the pg_upgrade/rsync
> process) and you would just need to start up the old binaries pointing
> at the old PG data directory and everything should just work.
>
>
I did some successful tests yesterday around this scenario. The standby in
this context is the one that received the rsync from the master but was
never started. The old data directory stays intact except for the fact that
global/pg_control was renamed with a .old suffix.

I have found the documentation on pg_upgrade that states this:

` If you ran pg_upgrade without --link or did not start the new server, the
old cluster was not modified except that, if linking started, a .old suffix
was appended to $PGDATA/global/pg_control. To reuse the old cluster,
possibly remove the .old suffix from $PGDATA/global/pg_control; you can
then restart the old cluster.`


> Thanks!
>
> Stephen
>


Re: Promoted slave tries to archive previously archived WAL file

2019-02-19 Thread Andre Piwoni
I call pg_ctl -D /var/lib/pgsql/10/data promote to promote the slave to
master when failover happens.
I repoint the slave to the new master by stopping it, updating recovery.conf
and restarting it. Let me know if I'm doing it wrong.

Thank you for clarifying that the contents of WAL files from the stream and
fetch methods can differ. The fact remains that the WAL file created during
the backup by the stream method does not have the .done marker, and the new
master repeatedly fails to archive it because the previous master already
archived it. This does not happen in fetch mode, where the file created
during the backup has the .done marker.


On Sun, Feb 17, 2019 at 5:27 PM Michael Paquier  wrote:

> On Fri, Feb 15, 2019 at 12:03:39PM -0800, Andre Piwoni wrote:
> > My slave nodes were created using pg_basebackup with --wal-method=stream.
> > If I understand right this option streams WAL files generated during
> backup
> > and this WAL file was 00010002 but its contents were
> > different from what was on the primary and in WAL archive. When I changed
> > --wal-method=fetch which collects WAL files generated during the backup
> at
> > the end then diff did not detect any changes. When I failover I don't
> seem
> > to have the issue with new primary to archive.
> >
> > It seems streaming WAL segments created during backup produced corrupt
> file
> > based on diff.
>
> Nope, --wal-method=stream and fetch behave as they are designed for.
> "fetch" would include all the contents of pg_wal/ as part of a
> BASE_BACKUP command using the replication protocol while "stream"
> would make the base backup avoid copying all WAL files, copying them
> in parallel with a secondary process which uses the same facility as
> pg_receivewal, in which case the set of files in
> pg_wal/archive_status/ has little meaning.
>
> I am also curious about the way you use to do failovers.  Let me
> guess: you stop the standby, delete its recovery.conf and then restart
> the former standby?  This would prevent a timeline jump at promotion
> which would explain the conflicts you are seeing when archiving two
> times the same segment.
> --
> Michael
>


-- 

*Andre Piwoni*

Sr. Software Developer, BI/Database

*Web*MD Health Services

Mobile: 801.541.4722

www.webmdhealthservices.com


Re: Postgresql RDS DB Latency Chossing Hash join Plan

2019-02-19 Thread Michael Lewis
1) You can increase it as much as you want, but (auto)analyze will take
longer to examine default_statistics_target * 300 rows and compute the
most common values and the frequencies of those values. How much
variation does your data actually have? If your data only has 50 distinct
values with fairly even distribution, then there is no need to increase it
from 100. Also, deciding on the best query plan will take a little more
time for the optimizer, since it will be examining a bigger collection of
stats on the tables for which you have increased the target.

2) I am not aware.

3) I am not aware of anything about your application so I can't recommend
any number outright, but 4MB for work_mem definitely seems low to me
assuming you have 16GB or more memory available unless you have very high
concurrency. It will depend on how many sorts per statement, how many
users, etc. If you spill over to disk on routine operations, then things
are definitely going to be much slower than if you are able to keep things
in memory. You could try running explain analyze and just verify that you
are keeping things in memory. You could also turn on automatic gathering of
explain analyze plans on live if you have the room for logging and can
tolerate just a little latency.

https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-WORK-MEM

I'm glad your performance is improved in AWS/dev environment. It can be a
big hassle to test things in an environment that performs significantly
different.
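To experiment with these knobs, statements along these lines can be used
(table and column names here are placeholders, not from this thread):

-- Raise the per-column sample size on one partition, then re-gather stats.
ALTER TABLE some_partition ALTER COLUMN some_column SET STATISTICS 1000;
ANALYZE some_partition;

-- Try a larger work_mem for the current session only, before changing it
-- globally, and check whether sorts/hashes now stay in memory.
SET work_mem = '64MB';
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM some_partition ORDER BY some_column;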


*Michael Lewis*


On Sun, Feb 17, 2019 at 10:01 AM github kran  wrote:

>
>
> On Thu, Feb 14, 2019 at 4:58 PM Michael Lewis  wrote:
>
>> This is beyond my expertise except to say that if your storage is SSDs in
>> AWS, then you definitely want random_page_cost close to the same as
>> seq_page_cost (1 by default) assuming your data is likely to be in cache as
>> discussed in the documentation
>> .
>> As it says- "Reducing this value relative to seq_page_cost will cause the
>> system to prefer index scans" as you saw. Changing the value on production
>> would again depend on the storage type used, and how good the cache hit
>> rate is.
>>
>> As far as I know, dropping old partitions should not be significantly
>> impactful to the system other than no longer needing to store that data
>> (cost, time for full backups, etc).
>>
>> Again, as I understand things, there is not a big impact from having old
>> unused tables in terms of maintenance. They should be ignored by normal
>> processes.
>>
>> Glad you got your issue resolved.
>>
>>
>> *Michael Lewis*
>>
>
>  Thanks for the feedback. You have been giving your
> thoughts/suggestions since the beginning of the case, and they were helpful. I
> think I realized later, based on your suggestion, that I needed to increase the
> default statistics target from 100. It was not set correctly initially,
> as I had set it at the session level without setting it on the
> partition tables. As next steps I have set the stats target to 1000 on all of
> the partition tables we use from Nov 2018 to Feb 2019 and then ran ANALYZE.
> I am currently running a load test to see
> how the DB performance is behaving right now, and so far it is running
> better than before. (I have reset all the previous changes done except the
> statistics change.) I will keep you posted after the test finishes
>
> Questions.
> 1) Can I further increase the setting to 3000 and see how the system
> behaves? How do I know the best value to use for my database in
> terms of the sampling limit with the default statistics setting?
> 2) Apart from analyzing the tables, do I need to make any other changes
> related to the statistics setting?
> 3) Also, the current work_mem is set to 4 MB and we didn't play with
> this value so far. For future needs, can I increase the work_mem setting?
>
> Appreciate your reply.
>
> Thanks
>
>
>> On Thu, Feb 14, 2019 at 3:11 PM github kran  wrote:
>>
>>>
>>>
>>> On Thu, Feb 14, 2019 at 12:43 PM Michael Lewis 
>>> wrote:
>>>
 How many total rows in these tables? I am assuming these are partitions
 and those 100 million rows got distributed. If the data difference is
 significant, then you aren't guaranteed similar performance. You may want
 to follow more of the suggested steps on.

 https://wiki.postgresql.org/wiki/SlowQueryQuestions


 *Michael Lewis  |  Software Engineer*
 *Entrata*
 *c: **619.370.8697 <619-370-8697>*

 Michael - Yes, correct: the data of 100 million rows is distributed to
>>> all the partitions.
>>> Finally I feel we have come to a conclusion after we changed
>>> random_page_cost from 4 to 2 in the non-prod instance, and we see improvements
>>> in the query, which used to take 1 minute and now takes 1-2 seconds.
>>> That's correct, we have around 490 million rows in a few of our partition
>>> tables. The par

Re: PG Upgrade with hardlinks, when to start/stop master and replicas

2019-02-19 Thread Stephen Frost
Greetings,

* Martín Fernández (fmarti...@gmail.com) wrote:
> On Tue, Feb 19, 2019 at 1:37 PM Stephen Frost  wrote:
> > * Martín Fernández (fmarti...@gmail.com) wrote:
> > > Thanks for the information! I've refactored our migration scripts to follow
> > the suggestions.
> >
> > Please don't top-post on these mailing lists.
> >
> > > One extra question that popped up. As long as we don't start the standby
> > (after running rsync), we can always `rm -f $PGDATA_10` and promote the
> > standby if necessary for failover right ? We also need to `mv`
> > pg_control.old to pg_control in the old data directory.
> >
> > Not sure which standby we're talking about here, but in general, yes, as
> > long as you haven't actually started the system after the
> > pg_upgrade/rsync, you should be able to blow away the new cluster that
> > pg_upgrade/rsync created and start the old cluster back up and promote
> > it (if necessary) and use it.
> >
> > Note that you should *not* need to do anything with pg_control, I have
> > no idea what you're referring to there, but the old cluster should have
> > the pg_control file and all the catalog tables in place from before the
> > pg_upgrade/rsync (those aren't touched during the pg_upgrade/rsync
> > process) and you would just need to start up the old binaries pointing
> > at the old PG data directory and everything should just work.
> >
> I did some successful tests yesterday around this scenario. The standby in
> this context is the one that received the rsync from the master but was
> never started. The old data directory stays intact except for the fact that
> global/pg_control was renamed with a .old
> 
> I have found the documentation on pg_upgrade that states this:
> 
> ` If you ran pg_upgrade without --link or did not start the new server, the
> old cluster was not modified except that, if linking started, a .old suffix
> was appended to
>$PGDATA/global/pg_control. To reuse the old cluster,
> possibly remove the .old suffix from $PGDATA/global/pg_control; you can
> then restart the old cluster.`

Ah, right, I forgot that it did that, fair enough.

I've never been thrilled with that particular approach due to the
inherent risks of people messing directly with files like pg_control,
but that's how it is for now.

Thanks!

Stephen




procedures and transactions

2019-02-19 Thread Rob Nikander
Hi,

I’m trying to understand how procedures work with transactions. I tried the 
code below - it’s a simple procedure to print some notices and commit a 
transaction. If I call it from psql after a `begin`, then it gives an error. 
What does that error mean? Are procedures not allowed to commit/rollback if 
they are called within in an outer transaction?

Also, I tried putting a `start transaction` command in the procedure. I got 
another error: `unsupported transaction command in PL/pgSQL`. Are procedures 
not allowed to start transactions? Or is there another command?

thanks,
Rob

create or replace procedure t_test(n integer)
as $$
begin
raise notice 'current isolation level: %', (select 
current_setting('transaction_isolation'));
raise notice 'current txid: %', (select txid_current());
raise notice '---';
commit;
raise notice 'current isolation level: %', (select 
current_setting('transaction_isolation'));
raise notice 'current txid: %', (select txid_current());
end;
$$ language plpgsql;

psql> begin;
psql> call t_test(1);

NOTICE:  current isolation level: read committed
NOTICE:  current txid: 111490
NOTICE:  ---
ERROR:  invalid transaction termination






Re: procedures and transactions

2019-02-19 Thread David G. Johnston
On Tuesday, February 19, 2019, Rob Nikander  wrote:

>  Are procedures not allowed to commit/rollback if they are called within
> in an outer transaction?
>

 https://www.postgresql.org/docs/11/sql-call.html

Also, I tried putting a `start transaction` command in the procedure. I got
> another error: `unsupported transaction command in PL/pgSQL`. Are
> procedures not allowed to start transactions? Or is there another command?
>

 https://www.postgresql.org/docs/11/plpgsql-transactions.html

David J.
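A minimal sketch in the style of the linked transaction-management page (the
table test1 is a hypothetical example; the point is that the CALL must not be
inside an outer transaction block for COMMIT to work):

-- assumes: CREATE TABLE test1 (a int);
CREATE PROCEDURE transaction_test()
LANGUAGE plpgsql
AS $$
BEGIN
    FOR i IN 1..3 LOOP
        INSERT INTO test1 (a) VALUES (i);
        COMMIT;  -- allowed because the CALL below is its own transaction
    END LOOP;
END;
$$;

CALL transaction_test();  -- works; BEGIN; CALL transaction_test(); would error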


Re: procedures and transactions

2019-02-19 Thread Adrian Klaver

On 2/19/19 12:31 PM, Rob Nikander wrote:

Hi,

I’m trying to understand how procedures work with transactions. I tried the 
code below - it’s a simple procedure to print some notices and commit a 
transaction. If I call it from psql after a `begin`, then it gives an error. 
What does that error mean? Are procedures not allowed to commit/rollback if 
they are called within in an outer transaction?

Also, I tried putting a `start transaction` command in the procedure. I got 
another error: `unsupported transaction command in PL/pgSQL`. Are procedures 
not allowed to start transactions? Or is there another command?

thanks,
Rob

create or replace procedure t_test(n integer)
as $$
begin
 raise notice 'current isolation level: %', (select 
current_setting('transaction_isolation'));
 raise notice 'current txid: %', (select txid_current());
 raise notice '---';
 commit;
 raise notice 'current isolation level: %', (select 
current_setting('transaction_isolation'));
 raise notice 'current txid: %', (select txid_current());
end;
$$ language plpgsql;

psql> begin;
psql> call t_test(1);


Don't use the begin;

call t_test(1);
NOTICE:  current isolation level: read committed
NOTICE:  current txid: 592
NOTICE:  ---
NOTICE:  current isolation level: read committed
NOTICE:  current txid: 593
CALL


A function already starts in a transaction.



NOTICE:  current isolation level: read committed
NOTICE:  current txid: 111490
NOTICE:  ---
ERROR:  invalid transaction termination








--
Adrian Klaver
adrian.kla...@aklaver.com



Partial index on JSON column

2019-02-19 Thread Samuel Williams
Hello

I have a table with ~3 billion events.

Of this, there are a small subset of events which match the following query:

CREATE INDEX index_user_event_for_suggestion_notification ON
public.user_event USING btree ((((parameters ->>
'suggestion_id'::text))::integer), what) WHERE ((parameters ->>
'suggestion_id'::text) IS NOT NULL)

When I do this query:

EXPLAIN SELECT COUNT(*) FROM "user_event" WHERE ((parameters ->>
'suggestion_id'::text)::integer = 26) AND what = 'suggestion_notification';

Aggregate  (cost=7115959.48..7115959.49 rows=1 width=8)
  ->  Bitmap Heap Scan on user_event  (cost=37360.24..7115907.56 rows=20771
width=0)
Recheck Cond: ((what)::text = 'suggestion_notification'::text)
Filter: (((parameters ->> 'suggestion_id'::text))::integer = 26)
->  Bitmap Index Scan on index_user_event_for_clustering
 (cost=0.00..37355.05 rows=4154273 width=0)
  Index Cond: ((what)::text = 'suggestion_notification'::text)

It's slow. I need to explicitly add the NULL constraint:

EXPLAIN SELECT COUNT(*) FROM "user_event" WHERE ((parameters ->>
'suggestion_id'::text)::integer = 26) AND ((parameters ->> 'suggestion_id'::
text) IS NOT NULL) AND what = 'suggestion_notification';

Aggregate  (cost=38871.48..38871.49 rows=1 width=8)
  ->  Index Scan using index_user_event_for_suggestion_notification on
user_event  (cost=0.42..38819.81 rows=20668 width=0)
Index Cond: ((((parameters ->> 'suggestion_id'::text))::integer =
26) AND ((what)::text = 'suggestion_notification'::text))

I feel like the null constraint should be implicit.

That being said:

- Is my partial index wrong? Should I write it differently so the optimiser
knows this?
- Is my query wrong? How can I make the most use of this index without
being explicit?
- Any other suggestions for how I can do this?

Thanks
Samuel


PG 9.6 managed fail over of master, how do i manage a pg_logical subscriber

2019-02-19 Thread Reid Thompson
Hi, 
I have the following configuration.

pg9.6_master --> streaming replica pg9.6_standy
 --> pg_logical replication of several schemas to pg9.6_logical

We plan to perform a managed fail over of pg9.6_master to pg9.6_standy.
As part of the fail over, the virtual IP address and hostname of
pg9.6_master will be migrated to pg9.6_standy.
I'm looking for guidance on how to keep the pg_logical replication going
to pg9.6_logical from the newly promoted 'master'.

If I shutdown pg9.6_logical prior to the fail over, and start it back up
after the failover, it should come up 'believing' that pg9.6_standby is
pg9.6_master, will it 'just work'? or is there a process that I need to
follow have pg9.6_logical succesfully replicate from the newly promoted
host?

thanks,
reid



Re: Partial index on JSON column

2019-02-19 Thread Tom Lane
Samuel Williams  writes:
> When I do this query:

> EXPLAIN SELECT COUNT(*) FROM "user_event" WHERE ((parameters ->>
> 'suggestion_id'::text)::integer = 26) AND what = 'suggestion_notification';

> It's slow. I need to explicitly add the NULL constraint:

Try it like

EXPLAIN SELECT COUNT(*) FROM "user_event" WHERE ((parameters ->>
'suggestion_id'::text) = '26') AND what = 'suggestion_notification';

I don't think we assume that CoerceViaIO is strict, and without that
the deduction that the value couldn't be null doesn't hold.  In any
case you're better off without the runtime type conversion: that
isn't doing much for you except raising the odds of getting an error.

regards, tom lane



Re: Partial index on JSON column

2019-02-19 Thread Samuel Williams
Thanks for the quick reply Tom,

I will try your advice.

The reason why I used ::integer for the INDEX is because I assumed it would
be more efficient both in space and performance.

In the JSONB field, it is actually an integer, i.e. {"location_age": 1,
"suggestion_id": 26}

So, now that I think about it, maybe the way I'm using ::text is wrong.

Any further advice is most appreciated.

Kind regards,
Samuel

On Wed, 20 Feb 2019 at 10:14, Tom Lane  wrote:

> Samuel Williams  writes:
> > When I do this query:
>
> > EXPLAIN SELECT COUNT(*) FROM "user_event" WHERE ((parameters ->>
> > 'suggestion_id'::text)::integer = 26) AND what =
> 'suggestion_notification';
>
> > It's slow. I need to explicitly add the NULL constraint:
>
> Try it like
>
> EXPLAIN SELECT COUNT(*) FROM "user_event" WHERE ((parameters ->>
> 'suggestion_id'::text) = '26') AND what = 'suggestion_notification';
>
> I don't think we assume that CoerceViaIO is strict, and without that
> the deduction that the value couldn't be null doesn't hold.  In any
> case you're better off without the runtime type conversion: that
> isn't doing much for you except raising the odds of getting an error.
>
> regards, tom lane
>


Re: Partial index on JSON column

2019-02-19 Thread Michael Lewis
On Tue, Feb 19, 2019 at 1:41 PM Samuel Williams <
space.ship.travel...@gmail.com> wrote:

> Hello
>
> I have a table with ~3 billion events.
>
> Of this, there are a small subset of events which match the following
> query:
>
> CREATE INDEX index_user_event_for_suggestion_notification ON
> public.user_event USING btree ((((parameters ->>
> 'suggestion_id'::text))::integer), what) WHERE ((parameters ->>
> 'suggestion_id'::text) IS NOT NULL)
>

You may want to use this-
CREATE INDEX index_user_event_for_suggestion_notification ON
public.user_event USING btree ((parameters ->> 'suggestion_id'), what) WHERE
parameters ? 'suggestion_id';


-Michael
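For the planner to use an index on the text value like the one above, the
query should compare text to text and repeat the partial-index predicate,
e.g. (a sketch, not from the thread):

EXPLAIN SELECT count(*)
  FROM user_event
 WHERE (parameters ->> 'suggestion_id') = '26'
   AND parameters ? 'suggestion_id'   -- matches the index's WHERE clause
   AND what = 'suggestion_notification';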


Re: Partial index on JSON column

2019-02-19 Thread Michael Lewis
>
> On Wed, 20 Feb 2019 at 10:14, Tom Lane  wrote:
>
>> Samuel Williams  writes:
>> > When I do this query:
>>
>> > EXPLAIN SELECT COUNT(*) FROM "user_event" WHERE ((parameters ->>
>> > 'suggestion_id'::text)::integer = 26) AND what =
>> 'suggestion_notification';
>>
>> > It's slow. I need to explicitly add the NULL constraint:
>>
>> Try it like
>>
>> EXPLAIN SELECT COUNT(*) FROM "user_event" WHERE ((parameters ->>
>> 'suggestion_id'::text) = '26') AND what = 'suggestion_notification';
>>
>> I don't think we assume that CoerceViaIO is strict, and without that
>> the deduction that the value couldn't be null doesn't hold.  In any
>> case you're better off without the runtime type conversion: that
>> isn't doing much for you except raising the odds of getting an error.
>>
>
For run-time safe comparisons of something stored in jsonb that I expect to
only have booleans (and I don't trust input to always be valid), I have a
utility function like this-

CREATE OR REPLACE FUNCTION public.util_to_bool( pInput TEXT )
RETURNS BOOL AS
$BODY$
BEGIN
RETURN pInput::BOOL;
EXCEPTION WHEN OTHERS THEN
RETURN FALSE;
END;
$BODY$
LANGUAGE 'plpgsql' IMMUTABLE;


This is probably not best practice however.
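Hypothetical usage, with 'some_flag' standing in for any jsonb key expected
to hold a boolean:

SELECT util_to_bool(parameters ->> 'some_flag') FROM user_event;
-- junk values come back as FALSE instead of raising an error; NULL stays NULL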



>
>> regards, tom lane
>>
>


Re: Partial index on JSON column

2019-02-19 Thread Samuel Williams
So, I checked my original code, and I didn't add `::text`, it was actually
added by Postgres:

EXPLAIN for: SELECT "user_event".* FROM "user_event" WHERE
"user_event"."what" = $1 AND (((parameters ->> 'suggestion_id')::integer) =
119 AND ((parameters ->> 'suggestion_id') IS NOT NULL)) [["what",
"suggestion_notification"]]
   QUERY PLAN
--------------------------------------------------------------------------
 Index Scan using index_user_event_for_suggestion_notification on
user_event  (cost=0.42..77193.95 rows=20669 width=138)
   Index Cond: ((((parameters ->> 'suggestion_id'::text))::integer = 119)
AND ((what)::text = 'suggestion_notification'::text))
(2 rows)

Is there some way to directly use the integer value in the index with
minimal type coercions?

Thanks
Samuel

On Wed, 20 Feb 2019 at 10:24, Samuel Williams <
space.ship.travel...@gmail.com> wrote:

> Thanks for the quick reply Tom,
>
> I will try your advice.
>
> The reason why I used ::integer for the INDEX is because I assumed it
> would be more efficient both in space and performance.
>
> In the JSONB field, it is actually an integer, i.e. {"location_age": 1,
> "suggestion_id": 26}
>
> So, now that I think about it, maybe the way I'm using ::text is wrong.
>
> Any further advice is most appreciated.
>
> Kind regards,
> Samuel
>
> On Wed, 20 Feb 2019 at 10:14, Tom Lane  wrote:
>
>> Samuel Williams  writes:
>> > When I do this query:
>>
>> > EXPLAIN SELECT COUNT(*) FROM "user_event" WHERE ((parameters ->>
>> > 'suggestion_id'::text)::integer = 26) AND what =
>> 'suggestion_notification';
>>
>> > It's slow. I need to explicitly add the NULL constraint:
>>
>> Try it like
>>
>> EXPLAIN SELECT COUNT(*) FROM "user_event" WHERE ((parameters ->>
>> 'suggestion_id'::text) = '26') AND what = 'suggestion_notification';
>>
>> I don't think we assume that CoerceViaIO is strict, and without that
>> the deduction that the value couldn't be null doesn't hold.  In any
>> case you're better off without the runtime type conversion: that
>> isn't doing much for you except raising the odds of getting an error.
>>
>> regards, tom lane
>>
>


Re: procedures and transactions

2019-02-19 Thread Rob Nikander
I thought if I had some application logic that needed a certain kind of 
transaction (maybe a non-default isolation level), I could hide that fact in a 
procedure. App code (Java/Python/whatever) could remain unaware of transactions 
(except maybe needing to retry after a failure) and simply send `call foo(?, 
?)` to the DB. But maybe that kind of design is not supported, and application 
code needs to start transactions and set isolation levels. Is that accurate? I 
suppose a procedure could throw an exception if it doesn't like the value in 
`current_setting('transaction_isolation')`.

Rob
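A sketch of that guard idea (names are hypothetical; the procedure simply
refuses to run under the wrong isolation level rather than setting it):

CREATE PROCEDURE do_app_logic()
LANGUAGE plpgsql
AS $$
BEGIN
    IF current_setting('transaction_isolation') <> 'serializable' THEN
        RAISE EXCEPTION 'do_app_logic() requires serializable isolation, got %',
            current_setting('transaction_isolation');
    END IF;
    -- ... application logic here ...
END;
$$;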

> On Feb 19, 2019, at 2:38 PM, David G. Johnston  
> wrote:
> 
> On Tuesday, February 19, 2019, Rob Nikander  > wrote:
>  Are procedures not allowed to commit/rollback if they are called within in 
> an outer transaction?
> 
>  https://www.postgresql.org/docs/11/sql-call.html 
> 
> 
> Also, I tried putting a `start transaction` command in the procedure. I got 
> another error: `unsupported transaction command in PL/pgSQL`. Are procedures 
> not allowed to start transactions? Or is there another command?
> 
>  https://www.postgresql.org/docs/11/plpgsql-transactions.html 
> 
> 
> David J.
> 



Re: procedures and transactions

2019-02-19 Thread Adrian Klaver

On 2/19/19 1:48 PM, Rob Nikander wrote:
I thought if I had some application logic that needed a certain kind of 
transaction (maybe a non-default isolation level), I could hide that 
fact in a procedure. App code (Java/Python/whatever) could remain 
unaware of transactions (except maybe needing to retry after a failure) 
and simply send `call foo(?, ?)` to the DB. But maybe that kind of 
design is not supported, and application code needs to start 
transactions and set isolation levels. Is that accurate?  I supposed a 
procedure could throw an exception if it doesn’t like the value in 
`current_setting('transaction_isolation’)`.


Per the docs in the link David posted:

https://www.postgresql.org/docs/11/sql-call.html

"If CALL is executed in a transaction block, then the called procedure 
cannot execute transaction control statements. Transaction control 
statements are only allowed if CALL is executed in its own transaction."


So:

psql> begin;
psql> call t_test(1);


will not work.



Rob

On Feb 19, 2019, at 2:38 PM, David G. Johnston 
mailto:david.g.johns...@gmail.com>> wrote:


On Tuesday, February 19, 2019, Rob Nikander > wrote:


 Are procedures not allowed to commit/rollback if they are called
within in an outer transaction?


https://www.postgresql.org/docs/11/sql-call.html

Also, I tried putting a `start transaction` command in the
procedure. I got another error: `unsupported transaction command
in PL/pgSQL`. Are procedures not allowed to start transactions? Or
is there another command?


https://www.postgresql.org/docs/11/plpgsql-transactions.html

David J.






--
Adrian Klaver
adrian.kla...@aklaver.com



adding more space to the existing 9.6 cluster

2019-02-19 Thread Julie Nishimura
Hello, we are almost out of space on our main data directory, and about to 
introduce a new directory to our cluster. We cannot use multiple physical disks 
as a single volume, so we are thinking about creating a new tablespace.
Our current data_directory shows as follows:
/data/postgresql/9.6/main
postgres=# SELECT spcname FROM pg_tablespace;
  spcname
------------
 pg_default
 pg_global
(2 rows)

We also have 36 existing databases on this cluster.
If we add a new directory, will it be enough to execute the following commands 
in order to force new data there:

CREATE TABLESPACE tablespace01 LOCATION '/data/postgresql/9.6/main01';

ALTER DATABASE db_name SET TABLESPACE tablespace01;

Do I need to repeat it for all our existing databases? Should I change our 
"template*" dbs as well?
Do I need to do something else?

Thank you for your advice.

-Julie
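A sketch of the pieces involved, under the assumption that the goal is to
steer data to the new location (ALTER DATABASE ... SET TABLESPACE physically
moves a database's existing objects and requires that no one is connected to
that database; setting default_tablespace instead only affects newly created
objects):

CREATE TABLESPACE tablespace01 LOCATION '/data/postgresql/9.6/main01';

-- Move one database's existing objects (repeat per database; it must have
-- no active connections while this runs):
ALTER DATABASE db_name SET TABLESPACE tablespace01;

-- Or leave existing data in place and direct only new objects there:
ALTER DATABASE db_name SET default_tablespace = 'tablespace01';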






Re: Partial index on JSON column

2019-02-19 Thread Tom Lane
I wrote:
> Try it like

> EXPLAIN SELECT COUNT(*) FROM "user_event" WHERE ((parameters ->>
> 'suggestion_id'::text) = '26') AND what = 'suggestion_notification';

> I don't think we assume that CoerceViaIO is strict, and without that
> the deduction that the value couldn't be null doesn't hold.

Hmm ... on closer inspection, we do assume that CoerceViaIO is strict,
but not everyplace knows that, so there's something that could be
improved here.  In the meantime, try it as stated above.

regards, tom lane



Re: Partial index on JSON column

2019-02-19 Thread Samuel Williams
Thanks Tom, I did solve the problem by adding the null constraint for now,
it's a quick solution, and I look forward to the future where this case is
handled appropriately.

On Wed, 20 Feb 2019 at 12:17, Tom Lane  wrote:

> I wrote:
> > Try it like
>
> > EXPLAIN SELECT COUNT(*) FROM "user_event" WHERE ((parameters ->>
> > 'suggestion_id'::text) = '26') AND what = 'suggestion_notification';
>
> > I don't think we assume that CoerceViaIO is strict, and without that
> > the deduction that the value couldn't be null doesn't hold.
>
> Hmm ... on closer inspection, we do assume that CoerceViaIO is strict,
> but not everyplace knows that, so there's something that could be
> improved here.  In the meantime, try it as stated above.
>
> regards, tom lane
>