Function `set_config` doesn't work in with query?

2019-01-04 Thread Zexuan Luo
For instance:
```
with t as (
select set_config('blah', '1', false)
)
select current_setting('blah');
select current_setting('blah');
```

Executing the queries above produces these error messages:
psql:test-query-dump.sql:4: ERROR:  unrecognized configuration parameter "blah"
psql:test-query-dump.sql:5: ERROR:  unrecognized configuration parameter "blah"

Thank you for any responses.



Re: Function `set_config` doesn't work in with query?

2019-01-04 Thread Rene Romero Benavides
On Fri, Jan 4, 2019 at 3:37 AM Zexuan Luo  wrote:

> For instance:
> ```
> with t as (
> select set_config('blah', '1', false)
> )
> select current_setting('blah');
> select current_setting('blah');
> ```
>
> Execute queries above gets these error messages:
> psql:test-query-dump.sql:4: ERROR:  unrecognized configuration parameter
> "blah"
> psql:test-query-dump.sql:5: ERROR:  unrecognized configuration parameter
> "blah"
>
> Thank you for any responses.
>
>
The only parameters you can set that way are the ones listed in:
SELECT name FROM pg_settings;

For user-defined parameters, check this:
https://dba.stackexchange.com/questions/29961/how-do-i-set-and-get-custom-database-variables
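
For example, a minimal sketch: a custom parameter must have a two-part,
namespaced name (the "myapp" prefix here is arbitrary):

```
select set_config('myapp.blah', '1', false);
select current_setting('myapp.blah');  -- returns '1'
```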

-- 
El genio es 1% inspiración y 99% transpiración.
Thomas Alva Edison
http://pglearn.blogspot.mx/


Re: Function `set_config` doesn't work in with query?

2019-01-04 Thread Andrew Gierth
> "Zexuan" == Zexuan Luo  writes:

 Zexuan> For instance:
 Zexuan> ```
 Zexuan> with t as (
 Zexuan> select set_config('blah', '1', false)
 Zexuan> )
 Zexuan> select current_setting('blah');

A CTE containing a SELECT query which is not referenced anywhere will
not be executed, even if it contains volatile functions. (CTEs
containing INSERT/UPDATE/DELETE that are not referenced _will_ still be
executed.)
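
For illustration, a minimal sketch of the difference (using a namespaced
parameter, since a bare "blah" would be rejected anyway):

```
with t as (select set_config('myapp.x', '1', false) as res)
select 1;           -- t is never referenced, so set_config() never runs

with t as (select set_config('myapp.x', '1', false) as res)
select res from t;  -- referencing t forces the CTE to execute
```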

-- 
Andrew (irc:RhodiumToad)



Potentially undocumented behaviour change in Postgres 11 concerning OLD record in an after insert trigger

2019-01-04 Thread Kristjan Tammekivi
Hi,

I've noticed a change in the behaviour in triggers / hstores in Postgres
11.1 when compared to Postgres 10.5.
The following won't work on Postgres 10.5 but in Postgres 11.1 it works
just fine:

CREATE EXTENSION hstore;

CREATE TABLE _tmp_test1 (id serial PRIMARY KEY, val INTEGER);
CREATE TABLE _tmp_test1_changes (id INTEGER, changes HSTORE);

CREATE FUNCTION test1_trigger ()
RETURNS TRIGGER
LANGUAGE plpgsql
AS
$BODY$
BEGIN
INSERT INTO _tmp_test1_changes (id, changes) VALUES (NEW.id, hstore(OLD) -
hstore(NEW));
RETURN NEW;
END
$BODY$;

CREATE TRIGGER table_update AFTER INSERT OR UPDATE ON _tmp_test1
FOR EACH ROW EXECUTE PROCEDURE test1_trigger();

INSERT INTO _tmp_test1 (val) VALUES (5);
ERROR:  record "old" is not assigned yet
DETAIL:  The tuple structure of a not-yet-assigned record is indeterminate.
CONTEXT:  SQL statement "INSERT INTO _tmp_test1_changes (id, changes)
VALUES (NEW.id, hstore(OLD) - hstore(NEW))"
PL/pgSQL function test1_trigger() line 3 at SQL statement

I couldn't find anything about this in the release notes (
https://www.postgresql.org/docs/11/release-11.html), but maybe I just
didn't know what to look for.


Re: Function `set_config` doesn't work in with query?

2019-01-04 Thread Zexuan Luo
Thank you!

Something like
```
with t as (
select set_config('ns.blah', '1', false) as res
)
select res from t;
select current_setting('ns.blah');
```
works for me.

Andrew Gierth  于2019年1月4日周五 下午6:27写道:
>
> > "Zexuan" == Zexuan Luo  writes:
>
>  Zexuan> For instance:
>  Zexuan> ```
>  Zexuan> with t as (
>  Zexuan> select set_config('blah', '1', false)
>  Zexuan> )
>  Zexuan> select current_setting('blah');
>
> A CTE containing a SELECT query which is not referenced anywhere will
> not be executed, even if it contains volatile functions. (CTEs
> containing INSERT/UPDATE/DELETE that are not referenced _will_ still be
> executed.)
>
> --
> Andrew (irc:RhodiumToad)



RE: Potentially undocumented behaviour change in Postgres 11 concerning OLD record in an after insert trigger

2019-01-04 Thread Charles Clavadetscher
Hello

 

From: Kristjan Tammekivi  
Sent: Freitag, 4. Januar 2019 11:46
To: pgsql-gene...@postgresql.org
Subject: Potentially undocumented behaviour change in Postgres 11 concerning 
OLD record in an after insert trigger

 

Hi,

 

I've noticed a change in the behaviour in triggers / hstores in Postgres 11.1 
when compared to Postgres 10.5.

The following won't work on Postgres 10.5 but in Postgres 11.1 it works just 
fine:

 

CREATE EXTENSION hstore;

CREATE TABLE _tmp_test1 (id serial PRIMARY KEY, val INTEGER);
CREATE TABLE _tmp_test1_changes (id INTEGER, changes HSTORE);

CREATE FUNCTION test1_trigger ()
RETURNS TRIGGER
LANGUAGE plpgsql
AS
$BODY$
BEGIN
INSERT INTO _tmp_test1_changes (id, changes) VALUES (NEW.id, hstore(OLD) - 
hstore(NEW));
RETURN NEW;
END
$BODY$;

CREATE TRIGGER table_update AFTER INSERT OR UPDATE ON _tmp_test1
FOR EACH ROW EXECUTE PROCEDURE test1_trigger();

 

INSERT INTO _tmp_test1 (val) VALUES (5);

ERROR:  record "old" is not assigned yet

DETAIL:  The tuple structure of a not-yet-assigned record is indeterminate.

CONTEXT:  SQL statement "INSERT INTO _tmp_test1_changes (id, changes) VALUES 
(NEW.id, hstore(OLD) - hstore(NEW))"

PL/pgSQL function test1_trigger() line 3 at SQL statement

 

I couldn't find anything about this in the release notes 
(https://www.postgresql.org/docs/11/release-11.html), but maybe I just didn't 
know what to look for.

 

I doubt that this works on any PG version for INSERT.

 

According to the documentation:

 

https://www.postgresql.org/docs/10/plpgsql-trigger.html and 
https://www.postgresql.org/docs/11/plpgsql-trigger.html

 

OLD: Data type RECORD; variable holding the old database row for UPDATE/DELETE 
operations in row-level triggers. This variable is unassigned in 
statement-level triggers and for INSERT operations.

 

Regards

Charles



Use bytearray for blobs or not?

2019-01-04 Thread Thomas Güttler

Some months ago I wrote a little application with Python+Django which stores
blob data in bytearrays.

It works.

In the future there will be a lot more traffic, and I am unsure
if this is really a good solution. I know this is more FUD than
concrete PG issue.

What do you think?

Which alternatives could be useful?

Regards,
  Thomas




--
Thomas Guettler http://www.thomas-guettler.de/
I am looking for feedback: https://github.com/guettli/programming-guidelines



Refresh using barman

2019-01-04 Thread Rijo Roy
Hi all,

I need some help with barman. Recently I refreshed one of our non-prod
environments with the production database, and I ran into a problem, or rather
an unforeseen point: I cannot change the ownership of the system catalogs to
the respective non-prod owner account, and I really don't want to tamper with
the system catalogs in any manner. Is there a solution for this?
PostgreSQL 10 and barman 2.4 on RHEL 7 is my environment.
Thanks, Rijo Roy
Sent from Yahoo Mail on Android

Re: Use bytearray for blobs or not?

2019-01-04 Thread Achilleas Mantzios

On 4/1/19 1:41 μ.μ., Thomas Güttler wrote:

Some months ago I wrote a little application with Python+Django which stores
blob data in bytearrays.

It works.

In the future there will be a lot more traffic, and I am unsure
if this is really a good solution. I know this is more FUD than
concrete PG issue.

What do you think?


Performance (at least for JDBC) is known to be better with blobs.
However, with bytea life is just easier for many reasons (backups, logical 
replication, other types of replication, sys management, etc).



Which alternatives could be useful?

Regards,
  Thomas







--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Potentially undocumented behaviour change in Postgres 11 concerning OLD record in an after insert trigger

2019-01-04 Thread Kristjan Tammekivi
Hi,
I've read the documentation; that's why I said this might be undocumented.
Try the SQL in Postgres 11 and see for yourself that it works.
I have an analogous trigger in production as of yesterday, and I've tested it
in a local environment as well.

On Fri, Jan 4, 2019 at 12:56 PM Charles Clavadetscher <
clavadetsc...@swisspug.org> wrote:

> Hello
>
>
>
> *From:* Kristjan Tammekivi 
> *Sent:* Freitag, 4. Januar 2019 11:46
> *To:* pgsql-gene...@postgresql.org
> *Subject:* Potentially undocumented behaviour change in Postgres 11
> concerning OLD record in an after insert trigger
>
>
>
> Hi,
>
>
>
> I've noticed a change in the behaviour in triggers / hstores in Postgres
> 11.1 when compared to Postgres 10.5.
>
> The following won't work on Postgres 10.5 but in Postgres 11.1 it works
> just fine:
>
>
>
> CREATE EXTENSION hstore;
>
> CREATE TABLE _tmp_test1 (id serial PRIMARY KEY, val INTEGER);
> CREATE TABLE _tmp_test1_changes (id INTEGER, changes HSTORE);
>
> CREATE FUNCTION test1_trigger ()
> RETURNS TRIGGER
> LANGUAGE plpgsql
> AS
> $BODY$
> BEGIN
> INSERT INTO _tmp_test1_changes (id, changes) VALUES (NEW.id, hstore(OLD) -
> hstore(NEW));
> RETURN NEW;
> END
> $BODY$;
>
> CREATE TRIGGER table_update AFTER INSERT OR UPDATE ON _tmp_test1
> FOR EACH ROW EXECUTE PROCEDURE test1_trigger();
>
>
>
> INSERT INTO _tmp_test1 (val) VALUES (5);
>
> ERROR:  record "old" is not assigned yet
>
> DETAIL:  The tuple structure of a not-yet-assigned record is indeterminate.
>
> CONTEXT:  SQL statement "INSERT INTO _tmp_test1_changes (id, changes)
> VALUES (NEW.id, hstore(OLD) - hstore(NEW))"
>
> PL/pgSQL function test1_trigger() line 3 at SQL statement
>
>
>
> I couldn't find anything about this in the release notes (
> https://www.postgresql.org/docs/11/release-11.html), but maybe I just
> didn't know what to look for.
>
>
>
> I doubt that this works on any PG version for INSERT.
>
> According to the documentation:
>
> https://www.postgresql.org/docs/10/plpgsql-trigger.html and
> https://www.postgresql.org/docs/11/plpgsql-trigger.html
>
> OLD: Data type RECORD; variable holding the old database row for
> UPDATE/DELETE operations in row-level triggers. This variable is
> unassigned in statement-level triggers and for INSERT operations.
>
> Regards
>
> Charles
>


Re: Use bytearray for blobs or not?

2019-01-04 Thread Thomas Güttler




Am 04.01.19 um 12:48 schrieb Achilleas Mantzios:

On 4/1/19 1:41 μ.μ., Thomas Güttler wrote:

Some months ago I wrote a little application with Python+Django which stores
blob data in bytearrays.

It works.

In the future there will be a lot more traffic, and I am unsure
if this is really a good solution. I know this is more FUD than
concrete PG issue.

What do you think?


Performance (at least for JDBC) is known to be better with blobs.
However, with bytea life is just easier for many reasons (backups, logical replication, other types of replication, sys 
management, etc).



I could switch to an S3-like storage server, too. Up to now this would only be
a few lines of code. I could store the S3 IDs in Postgres.




--
Thomas Guettler http://www.thomas-guettler.de/
I am looking for feedback: https://github.com/guettli/programming-guidelines



Multiple postgresql clusters with same version and separate binaries

2019-01-04 Thread Erika Knihti-Van Driessche
Hi,

I have 10+ postgresql clusters (v 9.6) running on one machine - each having
their own data directory and port. They all share the same binaries, though, and
this is now giving me some headaches. I cannot shut down all the clusters at the
same time, so upgrading them is quite impossible.

I know that running several versions of postgres is possible on one machine
- I suppose this would also work out with the same version running in different
locations, like /db1/PostgreSQL/10, /db2/PostgreSQL/10, etc.? Has anyone
any experience with this kind of problem?

All help and ideas appreciated! :-)

Thanks,
Erika


Re: Multiple postgresql clusters with same version and separate binaries

2019-01-04 Thread MichaelDBA
To put it simply: you cannot run different major versions of PostgreSQL 
with the same binaries. Three things need to be separate.  You named two of 
them: data directory and port.  The third one is a separate binary location 
for each PG cluster instance running on the same host.


What I do is create a separate .profile_pgsql file for each PG cluster 
instance.  This file contains all the stuff necessary to distinguish 
between multiple PG clusters.  Here is an example for a specific Linux 
distro...


export VERSION=9.6
export PGROOT=/usr/pgsql-$VERSION
export PGBASE=/var/lib/pgsql/$VERSION
export PATH=$PGROOT/bin:$PGROOT/share:$PATH
export PGLIB=$PGROOT/lib
export MANPATH=$PGROOT/share/man:$MANPATH
export LD_LIBRARY_PATH=$PGLIB:$LD_LIBRARY_PATH
export PGDATA=$PGBASE/data
export PGPASSFILE=~/.pgpass
export PGUSER=postgres
export PGDATABASE=postgres
export PGPORT=5432




Regards,
Michael Vitale


Erika Knihti-Van Driessche 
Friday, January 4, 2019 9:21 AM
Hi,

I have 10+ postgresql clusters (v 9.6) running on one machine - each 
having their own data directory and port. They all share same binaries 
though, and this is now giving me some headache.. I cannot shutdown 
all clusters at the same time, so upgrading them is quite impossible.


I know that running several versions of postgres is possible on one 
machine - I suppose this would also work out with same version running 
on different locations, like /db1/PostgreSQL/10, /db2/PostgreSQL/10.. 
etc..? Has anyone any experience with this kind of problem?


All help and ideas appreciated! :-)

Thanks,
Erika




Re: Potentially undocumented behaviour change in Postgres 11 concerning OLD record in an after insert trigger

2019-01-04 Thread Adrian Klaver

On 1/4/19 4:20 AM, Kristjan Tammekivi wrote:

Hi,
I've read the documentation, that's why I said this might be 
undocumented. Try the SQL in Postgres 11 and see that it works for yourself.
I have an analogous trigger in production from yesterday and I've tested 
it in local environment as well.


I can confirm:

select version();
                                       version
--------------------------------------------------------------------------------------
 PostgreSQL 11.1 on x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux) 4.8.5, 64-bit



INSERT INTO _tmp_test1 (val) VALUES (5);
INSERT 0 1

select * from _tmp_test1_changes ;
 id |         changes
----+-------------------------
  1 | "id"=>NULL, "val"=>NULL
(1 row)

I would file a bug report:

https://www.postgresql.org/account/submitbug/



On Fri, Jan 4, 2019 at 12:56 PM Charles Clavadetscher 
mailto:clavadetsc...@swisspug.org>> wrote:


Hello


*From:*Kristjan Tammekivi mailto:kristjantammek...@gmail.com>>
*Sent:* Freitag, 4. Januar 2019 11:46
*To:* pgsql-gene...@postgresql.org 
*Subject:* Potentially undocumented behaviour change in Postgres 11
concerning OLD record in an after insert trigger


Hi,


I've noticed a change in the behaviour in triggers / hstores in
Postgres 11.1 when compared to Postgres 10.5.

The following won't work on Postgres 10.5 but in Postgres 11.1 it
works just fine:


CREATE EXTENSION hstore;

CREATE TABLE _tmp_test1 (id serial PRIMARY KEY, val INTEGER);
CREATE TABLE _tmp_test1_changes (id INTEGER, changes HSTORE);

CREATE FUNCTION test1_trigger ()
RETURNS TRIGGER
LANGUAGE plpgsql
AS
$BODY$
BEGIN
INSERT INTO _tmp_test1_changes (id, changes) VALUES (NEW.id,
hstore(OLD) - hstore(NEW));
RETURN NEW;
END
$BODY$;

CREATE TRIGGER table_update AFTER INSERT OR UPDATE ON _tmp_test1
FOR EACH ROW EXECUTE PROCEDURE test1_trigger();


INSERT INTO _tmp_test1 (val) VALUES (5);

ERROR:  record "old" is not assigned yet

DETAIL:  The tuple structure of a not-yet-assigned record is
indeterminate.

CONTEXT:  SQL statement "INSERT INTO _tmp_test1_changes (id,
changes) VALUES (NEW.id, hstore(OLD) - hstore(NEW))"

PL/pgSQL function test1_trigger() line 3 at SQL statement


I couldn't find anything about this in the release notes
(https://www.postgresql.org/docs/11/release-11.html), but maybe I
just didn't know what to look for.


I doubt that this works on any PG version for INSERT.

According to the documentation:

https://www.postgresql.org/docs/10/plpgsql-trigger.html and
https://www.postgresql.org/docs/11/plpgsql-trigger.html

OLD: Data type RECORD; variable holding the old database row
for UPDATE/DELETE operations in row-level triggers. This
variable is unassigned in statement-level triggers and for
INSERT operations.

Regards

Charles




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Multiple postgresql clusters with same version and separate binaries

2019-01-04 Thread Adrian Klaver

On 1/4/19 6:21 AM, Erika Knihti-Van Driessche wrote:

Hi,

I have 10+ postgresql clusters (v 9.6) running on one machine - each 
having their own data directory and port. They all share same binaries 
though, and this is now giving me some headache.. I cannot shutdown all 
clusters at the same time, so upgrading them is quite impossible.


What OS (version)?

How did you install Postgres?



I know that running several versions of postgres is possible on one 
machine - I suppose this would also work out with same version running 
on different locations, like /db1/PostgreSQL/10, /db2/PostgreSQL/10.. 
etc..? Has anyone any experience with this kind of problem?


All help and ideas appreciated! :-)

Thanks,
Erika



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Multiple postgresql clusters with same version and separate binaries

2019-01-04 Thread Ray O'Donnell

On 04/01/2019 14:21, Erika Knihti-Van Driessche wrote:

Hi,

I have 10+ postgresql clusters (v 9.6) running on one machine - each 
having their own data directory and port. They all share same binaries 
though, and this is now giving me some headache.. I cannot shutdown all 
clusters at the same time, so upgrading them is quite impossible.


I know that running several versions of postgres is possible on one 
machine - I suppose this would also work out with same version running 
on different locations, like /db1/PostgreSQL/10, /db2/PostgreSQL/10.. 
etc..? Has anyone any experience with this kind of problem?


What OS are you running? The Debian packages from apt.postgresql.org 
allow multiple versions to co-exist on the same machine (though 
naturally on different ports), and multiple clusters within each version 
(again on different ports).


Ray.

--
Raymond O'Donnell // Galway // Ireland
r...@rodonnell.ie



Re: Multiple postgresql clusters with same version and separate binaries

2019-01-04 Thread Shreeyansh Dba
Hi Erika,

You can create a separate service for each version and use these services to
start/stop PostgreSQL, e.g.:
service postgresql-10 start
service postgresql-9.6 start

Thanks & Regards,
*Shreeyansh DBA Team*
www.shreeyansh.com


On Fri, Jan 4, 2019 at 7:51 PM Erika Knihti-Van Driessche <
erika.kni...@gmail.com> wrote:

> Hi,
>
> I have 10+ postgresql clusters (v 9.6) running on one machine - each
> having their own data directory and port. They all share same binaries
> though, and this is now giving me some headache.. I cannot shutdown all
> clusters at the same time, so upgrading them is quite impossible.
>
> I know that running several versions of postgres is possible on one
> machine - I suppose this would also work out with same version running on
> different locations, like /db1/PostgreSQL/10, /db2/PostgreSQL/10.. etc..?
> Has anyone any experience with this kind of problem?
>
> All help and ideas appreciated! :-)
>
> Thanks,
> Erika
>


RE: Relocatable Binaries (RPMs) : custom installation path for PostgreSQL

2019-01-04 Thread Kevin Brannen
From: chiru r 

> I have tried to intall the RPMs with -relocate option,however it is not 
> working as expected and throwing below error.
>
> [root@Server1dev:/root/PG11]#
> #-> rpm -ivh --relocate /usr/pgsql-11/=/u01/postgres/pg11_relocate/ 
> postgresql11-server-11.1-1PGDG.rhel7.x86_64.rpm
> warning: postgresql11-server-11.1-1PGDG.rhel7.x86_64.rpm: Header V4 DSA/SHA1 
> Signature, key ID 442df0f8: NOKEY
> Preparing...  # [100%]
>path /usr/pgsql-11 in package 
> postgresql11-server-11.1-1PGDG.rhel7.x86_64 is not relocatable
> [root@server1dev:/root/PG11]#

Then you'll probably have to get the source from the Pg download area, compile 
it yourself, and use "--prefix=/u01/postgres/pg11_relocate/" as an arg to the 
"configure" command. It's not hard; the process is well documented. We do this, 
and one reason is to have it install into /opt, although we also compile it 
ourselves so we know what's going into the install.

It's either that or let it install where it wants to and then put a symlink in 
the place you want it, as someone else suggested.

HTH,
Kevin


Re: Multiple postgresql clusters with same version and separate binaries

2019-01-04 Thread Erika Knihti-Van Driessche
Hi,

Thanks all for your replies and help! I already thought that it would be possible 
to just install the binaries in different locations. My current installation was all 
done using one binary location and initdb.. not a very good solution in hindsight. 
Oh, and I'm on RHEL 7. I used the installation packages from EDB for Postgres. 
Previously I did do a minor upgrade - shut down all clusters, upgrade, then start 
them all up - but now they are getting difficult about downtime for all applications 
at the same time, so with the next upgrade I'll separate all the binaries.

Cheers,
Erika

> On 4 Jan 2019, at 15:50, Shreeyansh Dba  wrote:
> 
> Hi Erika,
> 
> You can create the different version services and use these services for 
> PostgreSQL start/stop.
> ex.
> service postgresql-10 start
> service postgresql-9.6 start
> 
> Thanks & Regards,
> Shreeyansh DBA Team
> www.shreeyansh.com 
> 
> On Fri, Jan 4, 2019 at 7:51 PM Erika Knihti-Van Driessche 
> mailto:erika.kni...@gmail.com>> wrote:
> Hi,
> 
> I have 10+ postgresql clusters (v 9.6) running on one machine - each having 
> their own data directory and port. They all share same binaries though, and 
> this is now giving me some headache.. I cannot shutdown all clusters at the 
> same time, so upgrading them is quite impossible.
> 
> I know that running several versions of postgres is possible on one machine - 
> I suppose this would also work out with same version running on different 
> locations, like /db1/PostgreSQL/10, /db2/PostgreSQL/10.. etc..? Has anyone 
> any experience with this kind of problem? 
> 
> All help and ideas appreciated! :-)
> 
> Thanks,
> Erika



Re: Multiple postgresql clusters with same version and separate binaries

2019-01-04 Thread Adrian Klaver

On 1/4/19 7:30 AM, Erika Knihti-Van Driessche wrote:

Hi,

Thanks all for your replies and help! I already thought that it’ll be 
possible to just install binaries in different locations. My current 
installation I have all done using one binary location and initdb.. not 
a very good solution on hindsight.. Oh, and I’m on RHEL 7. I used the 
installation packages from EDB for Postgres. Previously I did do a minor 
upgrade - shutdown all clusters and upgrade, then startup all.. but now 
they start doing difficult with downtime for all applications at the 
same time, so with next upgrade, I’ll separate all binaries.


https://www.enterprisedb.com/docs/en/11.0/EPAS_11_Inst_Linux/EDB_Postgres_Advanced_Server_Installation_Guide_Linux.1.16.html#pID0E0SH0HA




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Multiple postgresql clusters with same version and separate binaries

2019-01-04 Thread Rui DeSousa


> On Jan 4, 2019, at 9:32 AM, MichaelDBA  wrote:
> 
>  The 3rd one is separate binary locations for each PG cluster instance 
> running on the same host.

You don't need separate binaries for each cluster, only separate binaries for 
each version that is needed; i.e. 9.6.1, 9.6.2, etc.



Re: Multiple postgresql clusters with same version and separate binaries

2019-01-04 Thread MichaelDBA

I did say you need to run with different binaries for different versions:

To put it simply: you cannot run different major versions of PostgreSQL 
with the same binaries.



So when I subsequently said the following it was in that context.

The 3rd one is separate binary locations for each PG cluster instance 
running on the same host.



Rui DeSousa 
Friday, January 4, 2019 10:53 AM



Don’t need separate binaries for each cluster; only separate binaries 
for each version needed; i.e. 9.6.1, 9.6.2, etc.


MichaelDBA 
Friday, January 4, 2019 9:32 AM
To put it simply: you cannot run different major versions of 
PostgreSQL with the same binaries. 3 things need to be separate.  You 
named 2 of them: data directory and port.  The 3rd one is separate 
binary locations for each PG cluster instance running on the same host.


What I do is create a separate .profile_pgsql file for each PG cluster 
instance.  This file contains all the stuff necessary to distinguish 
between multiple PG clusters.  Here is an example for a specific Linux 
distro...


export VERSION=9.6
export PGROOT=/usr/pgsql-$VERSION
export PGBASE=/var/lib/pgsql/$VERSION
export PATH=$PGROOT/bin:$PGROOT/share:$PATH
export PGLIB=$PGROOT/lib
export MANPATH=$PGROOT/share/man:$MANPATH
export LD_LIBRARY_PATH=$PGLIB:$LD_LIBRARY_PATH
export PGDATA=$PGBASE/data
export PGPASSFILE=~/.pgpass
export PGUSER=postgres
export PGDATABASE=postgres
export PGPORT=5432



Regards,
Michael Vitale


Erika Knihti-Van Driessche 
Friday, January 4, 2019 9:21 AM
Hi,

I have 10+ postgresql clusters (v 9.6) running on one machine - each 
having their own data directory and port. They all share same binaries 
though, and this is now giving me some headache.. I cannot shutdown 
all clusters at the same time, so upgrading them is quite impossible.


I know that running several versions of postgres is possible on one 
machine - I suppose this would also work out with same version running 
on different locations, like /db1/PostgreSQL/10, /db2/PostgreSQL/10.. 
etc..? Has anyone any experience with this kind of problem?


All help and ideas appreciated! :-)

Thanks,
Erika




Re: Use bytearray for blobs or not?

2019-01-04 Thread Rob Sargent



On 1/4/19 4:48 AM, Achilleas Mantzios wrote:

On 4/1/19 1:41 μ.μ., Thomas Güttler wrote:
Some months ago I wrote a little application with Python+Django which 
stores

blob data in bytearrays.

It works.

In the future there will be a lot more traffic, and I am unsure
if this is really a good solution. I know this is more FUD than
concrete PG issue.

What do you think?


Performance (at least for JDBC) is known to be better with blobs.


Do you have any details on which part is slower with bytea? Original 
insert, read back to client, general (p)sql access.  I'm moving towards 
bytea but still have time to change my mind if I cannot afford the 
performance hit.


However, with bytea life is just easier for many reasons (backups, 
logical replication, other types of replication, sys management, etc).

Yes, and in my case I even get cheap compression of the original file.




Which alternatives could be useful?

Regards,
  Thomas











(Again) Column Store on PostGreSQL

2019-01-04 Thread Simon AUBERT
Hello,

We can find this very informative blog post :
https://blog.2ndquadrant.com/column-store-plans/

And this wiki page :
https://wiki.postgresql.org/wiki/ColumnOrientedSTorage
I must say the approach with the "orientation" option is a genius idea.

I won't discuss the advantages of COS much: I had three years using Vertica
(I loved it), some tests of MonetDB, and recently Column Store Indexes on
MS SQL Server, etc. And I'm pretty sure everybody is convinced this works
great for Instant Analytics with products such as Tableau or Spotfire, and
much better than, say, Hive. MonetDB is not so enterprise-ready (not even
paid support is available), ClickHouse is young, while PGSQL has proven its
high value in transactional DBs; the only thing missing for an even bigger
deployment is this feature.

From what I understand, there is a team at 2ndquadrant.com that works on it
(but I'm not sure it's still in the dev pipe).

My questions :
-do you develop from scratch, or do you plan to use some code from
MonetDB/ClickHouse/c_store_fdw for that? I don't know if a collaboration
between teams is even feasible (I may be very naive, but that would kick ass ^^)

-is it still in the pipe? Do you have an idea of the workload or even an ETA?

-I'm not a developer myself, but I would be interested in testing,
benchmarking, etc. How do I get involved?

Best regards,

-- 
Simon AUBERT
aubert.si...@gmail.com
+33 (0)6 66 28 52 04


Re: Potentially undocumented behaviour change in Postgres 11 concerning OLD record in an after insert trigger

2019-01-04 Thread Tom Lane
Kristjan Tammekivi  writes:
> I've noticed a change in the behaviour in triggers / hstores in Postgres
> 11.1 when compared to Postgres 10.5.
> [ reference to OLD in an insert trigger doesn't throw error anymore ]

Hmm.  This seems to be a side effect of the changes we (I) made in v11
to rationalize the handling of NULL vs ROW(NULL,NULL,...) composite
values in plpgsql.  The "unassigned" trigger row variables are now
acting as though they are plain NULL values.  I'm not sure now whether
I realized that this would happen --- it looks like there are not and
were not any regression test cases that would throw an error for the
disallowed-reference case, so it's quite possible that it just escaped
notice.

The old behavior was pretty darn squirrely; in particular it would let
you parse but not execute a reference to "OLD.column", a behavior that
could not occur for any other composite variable.  Now that'll just
return NULL.  In a green field I don't think there'd be complaints
about this behavior --- I know lots of people have spent considerable
effort programming around the other behavior.
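
(For context, a typical workaround of that sort looks roughly like this
sketch, where "changes" is a hypothetical hstore variable:

    IF TG_OP = 'UPDATE' THEN
        changes := hstore(OLD) - hstore(NEW);
    ELSE
        -- INSERT: referencing OLD here raised the pre-v11 error
        changes := NULL;
    END IF;
)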

While I haven't looked closely, I think duplicating the old behavior
would require adding a special-purpose flag to plpgsql DTYPE_REC
variables, which'd cost a little performance (extra error checks
in very hot code paths) and possibly break compatibility with
pldebugger, if there's been a v11 release of that.

So I'm a bit inclined to accept this behavior change and adjust
the documentation to say that OLD/NEW are "null" rather than
"unassigned" when not relevant.

Thoughts?

regards, tom lane



Re: Potentially undocumented behaviour change in Postgres 11 concerning OLD record in an after insert trigger

2019-01-04 Thread Pavel Stehule
pá 4. 1. 2019 v 17:44 odesílatel Tom Lane  napsal:

> Kristjan Tammekivi  writes:
> > I've noticed a change in the behaviour in triggers / hstores in Postgres
> > 11.1 when compared to Postgres 10.5.
> > [ reference to OLD in an insert trigger doesn't throw error anymore ]
>
> Hmm.  This seems to be a side effect of the changes we (I) made in v11
> to rationalize the handling of NULL vs ROW(NULL,NULL,...) composite
> values in plpgsql.  The "unassigned" trigger row variables are now
> acting as though they are plain NULL values.  I'm not sure now whether
> I realized that this would happen --- it looks like there are not and
> were not any regression test cases that would throw an error for the
> disallowed-reference case, so it's quite possible that it just escaped
> notice.
>
> The old behavior was pretty darn squirrely; in particular it would let
> you parse but not execute a reference to "OLD.column", a behavior that
> could not occur for any other composite variable.  Now that'll just
> return NULL.  In a green field I don't think there'd be complaints
> about this behavior --- I know lots of people have spent considerable
> effort programming around the other behavior.
>
> While I haven't looked closely, I think duplicating the old behavior
> would require adding a special-purpose flag to plpgsql DTYPE_REC
> variables, which'd cost a little performance (extra error checks
> in very hot code paths) and possibly break compatibility with
> pldebugger, if there's been a v11 release of that.
>
> So I'm a bit inclined to accept this behavior change and adjust
> the documentation to say that OLD/NEW are "null" rather than
> "unassigned" when not relevant.
>

It is maybe an unwanted effect, but it is not bad from my view. The new
behaviour is consistent - the initial value of the variables is just NULL.

+1

Pavel


> Thoughts?
>
> regards, tom lane
>
>


Comparing dates in DDL

2019-01-04 Thread Rich Shepard

  I have a projects table that includes these two columns:

start_date date DEFAULT CURRENT_DATE,
end_date date
CONSTRAINT valid_start_date
CHECK (start_date <= end_date),

  1. Do I need a DEFAULT value for the end_date?
  2. If so, please suggest a value for it.

TIA,

Rich



Showing table comments with psql

2019-01-04 Thread Mark Jeffcoat
I'm creating table and view comments with "COMMENT ON", and I can find
the comment in pg_description, but I can't find a way to show the
table comments using psql.

$ psql --version
psql (PostgreSQL) 11.1 (Debian 11.1-1+b2)

I'd expect to see table comments by using \d+, and found an old post
on this list where \dd worked; neither work for me today. Am I looking
in the right place? Is this a regression?

Thank you.


postgres=# create table test_table (col1 integer);
CREATE TABLE

postgres=# comment on table test_table is 'this is a table comment';
COMMENT

postgres=# \d+ test_table
Table "public.test_table"
 Column |  Type   | Collation | Nullable | Default | Storage | Stats
target | Description
+-+---+--+-+-+--+-
 col1   | integer |   |  | | plain   |  |

postgres=# \dd test_table
 Object descriptions
 Schema | Name | Object | Description
--------+------+--------+-------------
(0 rows)

postgres=# select * from pg_description where objoid = (select oid
from pg_class where relname = 'test_table');
 objoid | classoid | objsubid |       description
--------+----------+----------+-------------------------
 714760 |     1259 |        0 | this is a table comment
(1 row)

-- 
Mark Jeffcoat
Austin, TX



Re: Showing table comments with psql

2019-01-04 Thread Pavel Stehule
Hi


pá 4. 1. 2019 v 17:57 odesílatel Mark Jeffcoat 
napsal:

> I'm creating table and view comments with "COMMENT ON", and I can find
> the comment in pg_description, but I can't find a way to show the
> table comments using psql.
>
> $ psql --version
> psql (PostgreSQL) 11.1 (Debian 11.1-1+b2)
>
> I'd expect to see table comments by using \d+, and found an old post
> on this list where \dd worked; neither work for me today. Am I looking
> in the right place? Is this a regression?
>

postgres=#  create table test_table (col1 integer);
CREATE TABLE
postgres=# comment on table test_table is 'this is a table comment';
COMMENT
postgres=# \dt+
                             List of relations
┌────────┬────────────┬───────┬───────┬─────────┬─────────────────────────┐
│ Schema │    Name    │ Type  │ Owner │  Size   │       Description       │
╞════════╪════════════╪═══════╪═══════╪═════════╪═════════════════════════╡
│ public │ test_table │ table │ pavel │ 0 bytes │ this is a table comment │
└────────┴────────────┴───────┴───────┴─────────┴─────────────────────────┘
(1 row)

postgres=#
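
For completeness, the comment can also be fetched in plain SQL via
obj_description():

postgres=# select obj_description('test_table'::regclass, 'pg_class');
     obj_description
-------------------------
 this is a table comment
(1 row)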


Re: Comparing dates in DDL

2019-01-04 Thread Igor Korot
Hi, Rich,

On Fri, Jan 4, 2019 at 10:53 AM Rich Shepard  wrote:
>
>I have a projects table that includes these two columns:
>
> start_date date DEFAULT CURRENT_DATE,
> end_date date
> CONSTRAINT valid_start_date
> CHECK (start_date <= end_date),
>
>1. Do I need a DEFAULT value for the end_date?
>2. If so, please suggest a value for it.

start_date.day() + 1?

Thank you.

>
> TIA,
>
> Rich
>



Re: Comparing dates in DDL

2019-01-04 Thread Andreas Kretschmer




Am 04.01.19 um 17:53 schrieb Rich Shepard:

I have a projects table that includes these two columns:

start_date date DEFAULT CURRENT_DATE,
end_date date
CONSTRAINT valid_start_date
CHECK (start_date <= end_date),

  1. Do I need a DEFAULT value for the end_date?


no, you can use NULL, for instance. You don't need an explicit value.
But maybe you want to set the start_date to NOT NULL.


2. If so, please suggest a value for it.


other solution for such 2 fields: you can use DATERANGE, only one field.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Comparing dates in DDL

2019-01-04 Thread Rob Sargent



On 1/4/19 10:12 AM, Igor Korot wrote:

Hi, Rich,

On Fri, Jan 4, 2019 at 10:53 AM Rich Shepard  wrote:

I have a projects table that includes these two columns:

start_date date DEFAULT CURRENT_DATE,
end_date date
 CONSTRAINT valid_start_date
 CHECK (start_date <= end_date),

1. Do I need a DEFAULT value for the end_date?
2. If so, please suggest a value for it.

start_date.day() + 1?

Thank you.


TIA,

Rich



Is the end_date always knowable at record insert?

CHECK(end_date is null or start_date <= end_date)




Re: Comparing dates in DDL

2019-01-04 Thread Rich Shepard

On Fri, 4 Jan 2019, Igor Korot wrote:


   1. Do I need a DEFAULT value for the end_date?
   2. If so, please suggest a value for it.


start_date.day() + 1?


  Thanks, Igor. I did not pick up this syntax when I looked at data types
and their DDL usage.

Regards,

Rich



Re: Comparing dates in DDL

2019-01-04 Thread David G. Johnston
On Friday, January 4, 2019, Rob Sargent  wrote:
>
> CHECK(end_date is null or start_date <= end_date)
>

The is null expression is redundant since check constraints pass when the
result is unknown.
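
A minimal sketch of that behavior (hypothetical table, for illustration):

create table demo (
    start_date date not null default current_date,
    end_date   date,
    check (start_date <= end_date)
);
-- accepted: with end_date NULL the comparison yields UNKNOWN,
-- and a CHECK constraint passes on UNKNOWN
insert into demo (end_date) values (null);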

David J.


Re: (Again) Column Store on PostGreSQL

2019-01-04 Thread legrand legrand
Hi,

VOPS is one of the prototypes that has been worked on;
see:

https://www.postgresql.org/message-id/4fb855c3-22b9-444f-21bf-114fa23cc...@postgrespro.ru

https://github.com/postgrespro/vops

Regards
PAscal



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: Comparing dates in DDL

2019-01-04 Thread Rich Shepard

On Fri, 4 Jan 2019, Rob Sargent wrote:


Is the end_date always knowable at record insert?


Rob,

  Not always. Sometimes projects have known end dates, other times the end
is indeterminate until it happens.


CHECK(end_date is null or start_date <= end_date)


  So a default of NULL should be applied, or just allowed to happen?

Thanks,

Rich



Re: Comparing dates in DDL

2019-01-04 Thread Rich Shepard

On Fri, 4 Jan 2019, David G. Johnston wrote:


The is null expression is redundant since check constraints pass when the
result is unknown.


David,

  I wondered about this since NULL can be missing, unknown, or otherwise
defined. Are there benefits to allowing an empty value in that column when
checking that it's later than the start date rather than explicitly setting
a default date after the start date?

Regards,

Rich



Re: Comparing dates in DDL

2019-01-04 Thread Rich Shepard

On Fri, 4 Jan 2019, Andreas Kretschmer wrote:


no, you can use NULL, for instance. You don't need an explicit value.
But maybe you want to set the start_date to NOT NULL.


Andreas,

  Yes, I added NOT NULL to the start_date column.


2. If so, please suggest a value for it.


other solution for such 2 fields: you can use DATERANGE, only one field.


  Only if all projects have a known end_date; some don't.

Thanks,

Rich



Re: Comparing dates in DDL

2019-01-04 Thread David G. Johnston
On Friday, January 4, 2019, Rich Shepard  wrote:

> On Fri, 4 Jan 2019, David G. Johnston wrote:
>
> The is null expression is redundant since check constraints pass when the
>> result is unknown.
>>
>
> David,
>
>   I wondered about this since NULL can be missing, unknown, or otherwise
> defined. Are there benefits to allowing an empty value in that column when
> checking that it's later than the start date rather than explicitly setting
> a default date after the start date?
>
>
I don’t understand the question...

David J.


Re: Comparing dates in DDL

2019-01-04 Thread Rich Shepard

On Fri, 4 Jan 2019, David G. Johnston wrote:


  I wondered about this since NULL can be missing, unknown, or otherwise
defined. Are there benefits to allowing an empty value in that column when
checking that it's later than the start date rather than explicitly setting
a default date after the start date?



I don’t understand the question...


David,

  Understandable. :-)

  Perhaps this is more clear: is there a difference between not specifying a
default end_date value and specifying NULL as the default end_date value?

Regards,

Rich



Re: (Again) Column Store on PostGreSQL

2019-01-04 Thread Pavel Stehule
Hi

pá 4. 1. 2019 v 17:12 odesílatel Simon AUBERT 
napsal:

> Hello,
>
> We can find this very informative blog post :
> https://blog.2ndquadrant.com/column-store-plans/
>
> And this wiki page :
> https://wiki.postgresql.org/wiki/ColumnOrientedSTorage
> I must say the approach with the "orientation" option is a genius idea.
>
> I won't discuss much the advantages of COS, I had three years using
> Vertica -I loved it-, some tests of Monetdb, and recently Column Store
> Index on MSQL Server etc. And I'm pretty sure everybody is convinced this
> works great for Instant Analytics with products such as Tableau or
> Spotfire.. and much better than, saying, Hive. MonetDb is not so much
> enterprise ready (not even paid support available), CH is young while PGSQL
> has proven its high value in Transactionnal DB, the only thing missing for
> even a bigger deployment is this feature.
>
> From what I understand, there is a team at 2ndquadrant.com that works on
> it (but I'm not sure it's still in the dev pipe).
>
> My questions :
> -do you develop from scratch or do you plan to use some code of
> MonetDb/Clickhouse/C_Store_fdw for that?  I don't know if even a
> collaboration between teams is feasible (I may be very naive but that would
> kick ass ^^)
>

 With high probability the code would be written from scratch - every
database has a lot of unique features: code base, memory management,
optimizer, etc. It is hard, almost impossible, to reuse code from another
database.

Regards

Pavel

> -is it still in the pipe? do you have an idea of the workload or even ETA?
>
> -I'm not a developer myself but I would be interested in testing,
> benchmarking, etc.. how to get involved?
>
> Best regards,
>
> --
> Simon AUBERT
> aubert.si...@gmail.com
> +33 (0)6 66 28 52 04
>


Re: Showing table comments with psql

2019-01-04 Thread Mark Jeffcoat
On Fri, Jan 4, 2019 at 11:00 AM Pavel Stehule  wrote:

> postgres=#  create table test_table (col1 integer);
> CREATE TABLE
> postgres=# comment on table test_table is 'this is a table comment';
> COMMENT
>                              List of relations
> ┌────────┬────────────┬───────┬───────┬─────────┬─────────────────────────┐
> │ Schema │    Name    │ Type  │ Owner │  Size   │       Description       │
> ╞════════╪════════════╪═══════╪═══════╪═════════╪═════════════════════════╡
> │ public │ test_table │ table │ pavel │ 0 bytes │ this is a table comment │
> └────────┴────────────┴───────┴───────┴─────────┴─────────────────────────┘
> └┴┴───┴───┴─┴─┘
> (1 row)
>

I was so close! That works; thank you very much for your help, Pavel.

-- 
Mark Jeffcoat
Austin, TX


Re: Comparing dates in DDL

2019-01-04 Thread David G. Johnston
On Friday, January 4, 2019, Rich Shepard  wrote:

> On Fri, 4 Jan 2019, David G. Johnston wrote:
>
>   I wondered about this since NULL can be missing, unknown, or otherwise
>>> defined. Are there benefits to allowing an empty value in that column
>>> when
>>> checking that it's later than the start date rather than explicitly
>>> setting
>>> a default date after the start date?
>>>
>>>
>>> I don’t understand the question...
>>
>
> David,
>
>   Understandable. :-)
>
>   Perhaps this is more clear: is there a difference between not specifying
> a
> default end_date value and specifying NULL as the default end_date value?
>
>
No.  If no default is available and a value for the field is not provided
the stored value will be null; a default of null is thus also redundant
specification.

David J.


Re: Comparing dates in DDL

2019-01-04 Thread Rob Sargent



On 1/4/19 10:26 AM, Rich Shepard wrote:

On Fri, 4 Jan 2019, Rob Sargent wrote:


Is the end_date always knowable at record insert?


Rob,

  Not always. Sometimes projects have known end dates, other times the end
is indeterminate until it happens.


CHECK(end_date is null or start_date <= end_date)


  So a default of NULL should be applied, or just allowed to happen?

Thanks,

Rich

This is exactly what null is for.  Much preferable to some arbitrary 
value which will confuse analyses.




Re: Comparing dates in DDL

2019-01-04 Thread Rich Shepard

On Fri, 4 Jan 2019, David G. Johnston wrote:


No. If no default is available and a value for the field is not provided
the stored value will be null; a default of null is thus also redundant
specification.


David,

  Thanks for clarifying.

Regards,

Rich



Re: Comparing dates in DDL

2019-01-04 Thread Andreas Kretschmer




Am 04.01.19 um 18:32 schrieb Rich Shepard:

other solution for such 2 fields: you can use DATERANGE, only one field.


  Only if all projects have a known end_date; some don't.


that's not a problem:

test=*# create table projects(duration daterange default 
daterange(current_date,null) check(lower(duration) is not null));

CREATE TABLE
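
A hypothetical usage sketch (the range operators then do the date logic):

insert into projects (duration) values (daterange(current_date, null));
select * from projects where duration @> current_date;  -- open-ended: still active
select * from projects where duration && daterange('2019-01-01', '2019-02-01');  -- overlaps January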


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




32-bit supporting binaries for version 11

2019-01-04 Thread Ken Benson
I have a need for 32-bit binaries (pg_dump.exe, psql.exe, etc) that will work 
with version 11.1 of the PostgreSQL server.

I'm not able to find these anywhere. Help, please?

Ken Benson


Re: Comparing dates in DDL

2019-01-04 Thread Rich Shepard

On Fri, 4 Jan 2019, Andreas Kretschmer wrote:


  Only if all projects have a known end_date; some don't.


that's not a problem:

test=*# create table projects(duration daterange default 
daterange(current_date,null) check(lower(duration) is not null));


Andreas,

  Thank you. That's a data type I've not before used.

Regards,

Rich



Re: Comparing dates in DDL

2019-01-04 Thread Rich Shepard

On Fri, 4 Jan 2019, Rich Shepard wrote:


 Thank you. That's a data type I've not before used.


Andreas,

  Thinking more about duration, perhaps I'm seeing a problem that really does
not exist: it's a single column for both dates in the table, while the UI
needs separate date data-entry widgets. Unless I use middleware code when a
project row is first entered, I am not seeing how two discrete dates are
combined when sqlalchemy inserts them into the table.

Regards,

Rich




Re: Comparing dates in DDL

2019-01-04 Thread David G. Johnston
On Fri, Jan 4, 2019 at 2:21 PM Rich Shepard  wrote:
>Thinking more about duration perhaps I'm seeing a problem that really does
> not exist: it's a single column for both dates in the table while the UI
> needs separate date data entry widgets. Unless I use middleware code when a
> project row is first entered I am not seeing how two discrete dates are
> combined with sqlalchemy inserts them into the table.

That would be the decision to make - does your toolkit support (or can
be made to support) the type and are you willing to choose a
sub-optimal database model because one or more applications happen to
do things differently?

IMO the daterange datatype is the best type you can choose for the
model; now you have to figure out and decide where any tradeoffs are
and if they are worth it given your specific circumstances.

David J.



Re: Comparing dates in DDL

2019-01-04 Thread Rich Shepard

On Fri, 4 Jan 2019, David G. Johnston wrote:


That would be the decision to make - does your toolkit support (or can be
made to support) the type and are you willing to choose a sub-optimal
database model because one or more applications happen to do things
differently?

IMO the daterange datatype is the best type you can choose for the model;
now you have to figure out and decide where any tradeoffs are and if they
are worth it given your specific circumstances.


David,

  Thanks for the insights.

Regards,

Rich



Immutable way to cast timestamp TEXT to DATE? (for index)

2019-01-04 Thread Ken Tanzer
Hi.  I've got a text field in a table that holds this style of timestamp:

2014-10-23T00:00:00

I'd like to be able to create an index on the date portion of this field
(as a date), because I have lots of queries that are searching for
particular dates or ranges.

I've tried various ways of getting to a date (::date, LEFT(x,10)::date,
etc.), but all of them throw the error "functions in index expression must
be marked IMMUTABLE."

Is there any way, hacky or otherwise, people know of to do this?  Thanks in
advance.

Ken


-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://demo.agency-software.org/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: Immutable way to cast timestamp TEXT to DATE? (for index)

2019-01-04 Thread Adrian Klaver

On 1/4/19 2:21 PM, Ken Tanzer wrote:

Hi.  I've got a text field in a table that holds this style of timestamp:

2014-10-23T00:00:00

I'd like to be able to create an index on the date portion of this field 
(as a date), because I have lots of queries that are searching for 
particular dates or ranges.


I've tried various ways of getting to a date (::date, LEFT(x,10)::date, 
etc.), but all of them throw the error "functions in index expression 
must be marked IMMUTABLE."


?:
'2014-10-23T00:00:00'::timestamp

Can you also show the actual index expression?



Is there any way, hacky or otherwise, people know of to do this?  Thanks 
in advance.


Ken


--
AGENCY Software
A Free Software data system
By and for non-profits
/http://agency-software.org//
/https://demo.agency-software.org/client/
ken.tan...@agency-software.org 
(253) 245-3801

Subscribe to the mailing list 
 to

learn more about AGENCY or
follow the discussion.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Immutable way to cast timestamp TEXT to DATE? (for index)

2019-01-04 Thread Ken Tanzer
On Fri, Jan 4, 2019 at 2:27 PM Adrian Klaver 
wrote:

> On 1/4/19 2:21 PM, Ken Tanzer wrote:
> >
> > I've tried various ways of getting to a date (::date, LEFT(x,10)::date,
> > etc.), but all of them throw the error "functions in index expression
> > must be marked IMMUTABLE."
>
> ?:
> '2014-10-23T00:00:00'::timestamp
>
> Can you also show the actual index expression?
>
>
Here's a few of the ones I tried:

ag_reach=> CREATE INDEX ON export_hch_encounter_history
(CAST("Service_Date" AS date));
ERROR:  functions in index expression must be marked IMMUTABLE
ag_reach=> CREATE INDEX ON export_hch_encounter_history
(CAST(LEFT("Service_Date",10) AS date));
ERROR:  functions in index expression must be marked IMMUTABLE
ag_reach=> CREATE INDEX ON export_hch_encounter_history
(CAST("Service_Date"::timestamp AS date));
ERROR:  functions in index expression must be marked IMMUTABLE
ag_reach=> CREATE INDEX ON export_hch_encounter_history
(("Service_Date"::date));
ERROR:  functions in index expression must be marked IMMUTABLE
ag_reach=> CREATE INDEX ON export_hch_encounter_history
((LEFT("Service_Date",10)::date));
ERROR:  functions in index expression must be marked IMMUTABLE



-- 

AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://demo.agency-software.org/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: Immutable way to cast timestamp TEXT to DATE? (for index)

2019-01-04 Thread Andrew Gierth
> "Ken" == Ken Tanzer  writes:

 Ken> Hi. I've got a text field in a table that holds this style of
 Ken> timestamp:

 Ken> 2014-10-23T00:00:00

You can't make this a field of type "timestamp" rather than text?

Casts from text to either date or timestamp are mutable because they
depend on the current DateStyle value (_you_ might know that your values
are always ISO format, but the code doesn't). You can't even avoid this
with to_date or to_timestamp and a fixed format, because those functions
are also mutable since some of the format options are locale-dependent
(again, you might know that you're not using those, but the code
doesn't).

If the column type were timestamp, though, then you could do comparisons
directly, or cast the value to "date" (which is an immutable cast).
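
For example, a sketch assuming the column actually were timestamp; the
timestamp-to-date cast is immutable, so this expression index would be
accepted:

CREATE INDEX ON export_hch_encounter_history ((("Service_Date")::date));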

If you absolutely can't change the column type, then one option would be
to do your own fixed-format date parsing function (and label it
immutable), e.g.

create function iso_timestamp(text)
  returns timestamp without time zone
  as $$ select case when $1 ~ '^\d\d\d\d-?\d\d-?\d\dT\d\d:?\d\d:?\d\d$'
then $1::timestamp
else null end $$
  set DateStyle = 'ISO,YMD'
  language sql immutable strict;

or

create function iso_date(text)
  returns date
  as $$ select case when $1 ~ '^\d\d\d\d-?\d\d-?\d\d(?![^T])'
then substring($1 from '^\d\d\d\d-?\d\d-?\d\d')::date
else null end $$
  set DateStyle = 'ISO,YMD'
  language sql immutable strict;

-- 
Andrew (irc:RhodiumToad)



Re: Immutable way to cast timestamp TEXT to DATE? (for index)

2019-01-04 Thread Ken Tanzer
On Fri, Jan 4, 2019 at 2:54 PM Andrew Gierth 
wrote:

> > "Ken" == Ken Tanzer  writes:
>
>  Ken> Hi. I've got a text field in a table that holds this style of
>  Ken> timestamp:
>
>  Ken> 2014-10-23T00:00:00
>
> You can't make this a field of type "timestamp" rather than text?
>
>
 I actually can't, or rather don't want to.  The underlying data this is
drawn from is actually a date field, but this particular table keeps a
history of what we actually transmitted to another organization, and I want
to keep it as an exact replication of what we sent.



> If you absolutely can't change the column type, then one option would be
> to do your own fixed-format date parsing function (and label it
> immutable), e.g.
>
> create function iso_timestamp(text)
>   returns timestamp without time zone
>   as $$ select case when $1 ~ '^\d\d\d\d-?\d\d-?\d\dT\d\d:?\d\d:?\d\d$'
> then $1::timestamp
> else null end $$
>   set DateStyle = 'ISO,YMD'
>   language sql immutable strict;
>
> or
>
> create function iso_date(text)
>   returns date
>   as $$ select case when $1 ~ '^\d\d\d\d-?\d\d-?\d\d(?![^T])'
> then substring($1 from '^\d\d\d\d-?\d\d-?\d\d')::date
> else null end $$
>   set DateStyle = 'ISO,YMD'
>   language sql immutable strict;
>
>
Yeah, I thought I might have to do that, but when I create that index,
queries still don't seem to use it.  I also found the (immutable) make_date
function, with the same problem.  I can see why the planner wouldn't know
how to use them:

CREATE INDEX ON export_hch_encounter_history ( iso_date("Service_Date"));
CREATE INDEX ON export_hch_encounter_history (
make_date(LEFT("Service_Date",4)::int,SUBSTRING("Service_Date",6,2)::int,SUBSTRING("Service_Date",9,2)::int));

EXPLAIN ANALYZE SELECT * FROM export_hch_encounter_history WHERE
"Service_Date"::date BETWEEN '2018-01-01'::date AND
'2018-12-31'::date;

                                                     QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Seq Scan on export_hch_encounter_history  (cost=0.00..19458.53
rows=885 width=656) (actual time=117.246..253.583 rows=26548 loops=1)
   Filter: ((("Service_Date")::date >= '2018-01-01'::date) AND
(("Service_Date")::date <= '2018-12-31'::date))
   Rows Removed by Filter: 150393
 Planning time: 0.401 ms
 Execution time: 284.036 ms
(5 rows)

As opposed to casting the WHERE clause parameters:


EXPLAIN ANALYZE SELECT * FROM export_hch_encounter_history WHERE
"Service_Date" BETWEEN '2018-01-01'::text AND '2018-12-31'::text;

                                                     QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on export_hch_encounter_history
(cost=799.91..16639.67 rows=26487 width=656) (actual
time=15.611..51.309 rows=26548 loops=1)
   Recheck Cond: (("Service_Date" >= '2018-01-01'::text) AND
("Service_Date" <= '2018-12-31'::text))
   Heap Blocks: exact=2432
   ->  Bitmap Index Scan on "export_hch_encounter_history_Service_Date_idx"
       (cost=0.00..793.29 rows=26487 width=0) (actual time=15.250..15.252 rows=26548 loops=1)
 Index Cond: (("Service_Date" >= '2018-01-01'::text) AND
("Service_Date" <= '2018-12-31'::text))
 Planning time: 0.739 ms
 Execution time: 80.523 ms
(7 rows)
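
For the record, the planner matches an expression index only when the query
uses the identical indexed expression, so presumably spelling the WHERE
clause with iso_date() would let it use that index - a sketch, untested:

EXPLAIN ANALYZE SELECT * FROM export_hch_encounter_history WHERE
iso_date("Service_Date") BETWEEN '2018-01-01'::date AND '2018-12-31'::date;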


Thanks for your help and response!

Ken

-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://demo.agency-software.org/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


RE: Immutable way to cast timestamp TEXT to DATE? (for index)

2019-01-04 Thread Kevin Brannen
From: Ken Tanzer 

On Fri, Jan 4, 2019 at 2:54 PM Andrew Gierth 
mailto:and...@tao11.riddles.org.uk>> wrote:
> "Ken" == Ken Tanzer mailto:ken.tan...@gmail.com>> 
> writes:

 Ken> Hi. I've got a text field in a table that holds this style of
 Ken> timestamp:

 Ken> 2014-10-23T00:00:00

You can't make this a field of type "timestamp" rather than text?

> I actually can't, or rather don't want to.  The underlying data this is drawn
> from is a date field, but this particular table keeps a history of what we
> transmitted to another organization, and I want to keep it as an exact
> replica of what we sent.


If it’s not too painful, add another column to your table of type DATE, and on 
INSERT shove your “timestamp” into that, converting/casting as needed, then 
index that. So at the cost of an extra 4 bytes per row, you can have both your 
“transmission” value and an indexable value.
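
A minimal sketch of that, using the table and column names from upthread
(the added column, function, and trigger names here are invented for
illustration):

```
ALTER TABLE export_hch_encounter_history ADD COLUMN service_date_d date;

CREATE FUNCTION fill_service_date_d() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- the first 10 characters of '2014-10-23T00:00:00' are the ISO date
    NEW.service_date_d := left(NEW."Service_Date", 10)::date;
    RETURN NEW;
END $$;

CREATE TRIGGER trg_fill_service_date_d
    BEFORE INSERT OR UPDATE ON export_hch_encounter_history
    FOR EACH ROW EXECUTE PROCEDURE fill_service_date_d();

CREATE INDEX ON export_hch_encounter_history (service_date_d);
```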

Kevin


Re: Comparing dates in DDL

2019-01-04 Thread Jeremy Finzel
On Fri, Jan 4, 2019 at 4:19 PM Rich Shepard 
wrote:

> On Fri, 4 Jan 2019, David G. Johnston wrote:
>
> > That would be the decision to make - does your toolkit support (or can be
> > made to support) the type and are you willing to choose a sub-optimal
> > database model because one or more applications happen to do things
> > differently?
> >
> > IMO the daterange datatype is the best type you can choose for the model;
> > now you have to figure out and decide where any tradeoffs are and if they
> > are worth it given your specific circumstances.
>
> David,
>
>Thanks for the insights.
>
> Regards,
>
> Rich



Another suggestion which hasn’t been mentioned is using ‘infinity’ as the
end date. I like this because IMO it indicates more clearly than NULL that
the record is the current valid one.
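
For example (a minimal sketch; "projects" is a made-up table):

```
-- 'infinity'::date compares greater than any real date
SELECT 'infinity'::date > '2999-12-31'::date;   -- true

-- so the current record can be matched without any NULL handling
SELECT * FROM projects WHERE end_date = 'infinity';
```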

But I’m not sure exactly what you are trying to do either. If you are
creating a new record with changes and closing (ending) the range of the
original record, then a GIST exclusion index would ensure you have no
overlapping date ranges for all historical records put together.
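
Roughly like this (a sketch: the table and column names are made up, and
btree_gist is needed to mix scalar equality with range overlap in one
constraint):

```
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE project_history (
    project_id int       NOT NULL,
    valid      daterange NOT NULL,
    -- rejects any two rows for the same project whose ranges overlap
    EXCLUDE USING gist (project_id WITH =, valid WITH &&)
);
```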

Thanks,
Jeremy



Re: Comparing dates in DDL

2019-01-04 Thread Rich Shepard

On Fri, 4 Jan 2019, Jeremy Finzel wrote:


Another suggestion which hasn’t been mentioned is using ‘infinity’ as the
end date. I like this because IMO it indicates more clearly than NULL that
the record is the current valid one.


Jeremy,

  I believe that infinity was mentioned in this thread.


But I’m not sure exactly what you are trying to do either. If you are
creating a new record with changes and closing (ending) the range of the
original record, then a GIST exclusion index would ensure you have no
overlapping date ranges for all historical records put together.


  Having overlapping date ranges in a projects table is a Good Thing(TM)
because it means I have multiple, concurrent revenue streams. :-)

Carpe weekend,

Rich

--
I dream of an America where a chicken can cross the road without having its
motives questioned.



ALTER TABLE with multiple SET NOT NULL

2019-01-04 Thread Allison Kaptur
Hi folks,

I encountered a surprising error when writing a migration that both added a
primary key to a table and added a new NOT NULL column. It threw the error "
column "col_d" contains null values", even though I supplied a default. The
migration looks like this:
CREATE TABLE new_table AS SELECT col_a, col_b, col_c from existing_table;
ALTER TABLE new_table
ADD COLUMN col_d UUID UNIQUE NOT NULL DEFAULT uuid_generate_v4(),
ADD PRIMARY KEY (col_a, col_b, col_c);

Because of the `DEFAULT uuid_generate_v4()`, I wouldn't expect it to be
possible for the new column to have null values, so I was surprised to get
an integrity error with the message "column "col_d" contains null values".

I found two workarounds that don't produce the error. First, if I instead
set the NOT NULL last, I get no error:
ALTER TABLE new_table
ADD COLUMN col_d UUID UNIQUE DEFAULT uuid_generate_v4(),
ADD PRIMARY KEY (col_a, col_b, col_c),
ALTER COLUMN col_d SET NOT NULL;

Second, if I do the two steps in two ALTER TABLE statements, I also get no
error.
ALTER TABLE new_table
ADD COLUMN col_d UUID UNIQUE NOT NULL DEFAULT uuid_generate_v4();
ALTER TABLE new_table
ADD PRIMARY KEY (col_a, col_b, col_c);

I'm running postgres 9.6.2.

I know that adding a column with a default requires the table & its indexes
to be rewritten, and I know that adding a primary key on a column that
doesn't have an existing NOT NULL constraint does ALTER COLUMN SET NOT NULL
on each column in the primary key. So I'm wondering if Postgres is
reordering the SET NOT NULL operations in a way that causes it to attempt
setting col_d to NOT NULL before the default values are supplied.

My understanding from the docs is that I should be able to combine any
ALTER TABLE statements into one if they don't involve RENAME or SET SCHEMA
(or a few other things in v10, which I'm not using).

So my questions are:
- Is there a way I can see what Postgres is doing under the hood? I wanted
to use EXPLAIN ANALYZE but it doesn't appear to work on alter table
statements.
- Am I missing something about my original migration, or is there a reason
I shouldn't expect it to work?

Thanks,
Allison Kaptur


Re: ALTER TABLE with multiple SET NOT NULL

2019-01-04 Thread Tom Lane
Allison Kaptur  writes:
> I encountered a surprising error when writing a migration that both added a
> primary key to a table and added a new NOT NULL column. It threw the error "
> column "col_d" contains null values", even though I supplied a default. The
> migration looks like this:
> CREATE TABLE new_table AS SELECT col_a, col_b, col_c from existing_table;
> ALTER TABLE new_table
> ADD COLUMN col_d UUID UNIQUE NOT NULL DEFAULT uuid_generate_v4(),
> ADD PRIMARY KEY (col_a, col_b, col_c);

Hm, this can be made a good deal more self-contained:

regression=# create table t1 (a int);
CREATE TABLE
regression=# insert into t1 values(1);
INSERT 0 1
regression=# alter table t1 add column b float8 not null default random(),
add primary key(a);
ERROR:  column "b" contains null values

It fails like that as far back as I tried (8.4).  I'm guessing that we're
doing the ALTER steps in the wrong order, but haven't looked closer than
that.

Interestingly, in v11 and HEAD it works if you use a constant default,
suggesting that the fast-default feature is at least adjacent to the
problem.

regards, tom lane



Re: Immutable way to cast timestamp TEXT to DATE? (for index)

2019-01-04 Thread Andrew Gierth
> "Ken" == Ken Tanzer  writes:

 >> If you absolutely can't change the column type, then one option
 >> would be to do your own fixed-format date parsing function (and
 >> label it immutable), e.g.
 >> 
 >> create function iso_timestamp(text)
 >> returns timestamp without time zone
 >> as $$ select case when $1 ~ '^\d\d\d\d-?\d\d-?\d\dT\d\d:?\d\d:?\d\d$'
 >> then $1::timestamp
 >> else null end $$
 >> set DateStyle = 'ISO,YMD'
 >> language sql immutable strict;
 >> 
 >> or
 >> 
 >> create function iso_date(text)
 >> returns date
 >> as $$ select case when $1 ~ '^\d\d\d\d-?\d\d-?\d\d(?![^T])'
 >> then substring($1 from '^\d\d\d\d-?\d\d-?\d\d')::date
 >> else null end $$
 >> set DateStyle = 'ISO,YMD'
 >> language sql immutable strict;

 Ken> Yeah, I thought I might have to do that, but when I create that
 Ken> index it still doesn't seem to use the index for queries.

It won't use the index unless you use the same function in the query
too.

i.e.

CREATE INDEX ON ... (iso_date("Service_Date"));

SELECT * FROM ... WHERE iso_date("Service_Date") BETWEEN ... AND ...;
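
For example, with the table from upthread (an untested sketch; the plan
should then show an index or bitmap scan rather than a seq scan):

```
CREATE INDEX ON export_hch_encounter_history (iso_date("Service_Date"));

EXPLAIN ANALYZE
SELECT * FROM export_hch_encounter_history
WHERE iso_date("Service_Date") BETWEEN '2018-01-01'::date AND '2018-12-31'::date;
```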

-- 
Andrew (irc:RhodiumToad)



Get LSN at which a cluster was promoted on previous timeline

2019-01-04 Thread Jeremy Finzel
I am having a hard time finding out how I can easily determine at which LSN
a streamer was promoted *after* promotion.  A related question is that if I
pause wal replay, I am able to see the last replayed LSN but I can't simply
immediately promote.
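
(For reference, the pause/inspect functions I mean, using the v10+ names;
9.6 and earlier call them pg_xlog_replay_pause() and
pg_last_xlog_replay_location():)

```
SELECT pg_wal_replay_pause();       -- on the standby
SELECT pg_last_wal_replay_lsn();    -- last LSN replayed so far
SELECT pg_wal_replay_resume();
```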

I want to know the state of the streamer post-promotion with respect to the
master by getting the LSN at which it was promoted.

Thanks,
Jeremy


Re: Get LSN at which a cluster was promoted on previous timeline

2019-01-04 Thread Jerry Sievers
Jeremy Finzel  writes:

> I am having a hard time finding out how I can easily determine at
> which LSN a streamer was promoted after promotion.  A related
> question is that if I pause wal replay, I am able to see the last
> replayed LSN but I can't simply immediately promote.

Take a look at the *.history file in pg_xlog or pg_wal.
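
For example (contents made up, but this is the shape of the file; the
second column is the switchpoint, i.e. the LSN at which the previous
timeline ended and the promoted one began):

```
$ cat $PGDATA/pg_wal/00000002.history
1	0/59000098	no recovery target specified
```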

> I want to know the state of the streamer post-promotion with respect
> to the master by getting the LSN at which it was promoted.
>
> Thanks,
> Jeremy
>
>

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net



Re: Get LSN at which a cluster was promoted on previous timeline

2019-01-04 Thread Michael Paquier
On Fri, Jan 04, 2019 at 08:41:35PM -0600, Jerry Sievers wrote:
> Take a look at the *.history file in pg_xlog or pg_wal.

These files are also archived.  If you want to be able to see such
contents at SQL level, you would need some parsing logic like this one
for example which is a toy of mine (this comes mainly from
rewind_parseTimeLineHistory() if I recall correctly):
https://github.com/michaelpq/pg_plugins/tree/master/wal_utils
--
Michael

