[GENERAL] Using relations in the SELECT part

2017-03-29 Thread Giuseppe Sacco
Hello,
I am writing to this list since I wrote a query that I cannot really
understand. So, thanks to anyone who will light my darkness :-)

I have a table with two columns: the first one is a key, the second one
is a list of car plates. What I need to extract is a result set that
contains two columns: the first one should always be the key, the
second one a single plate. If a record contains many plates, the
result set should contain one row for each plate.

Example:

postgres=# create temporary table t
 (key varchar primary key, plates varchar);
postgres=# insert into t values ('00','AA888BB CC777DD GG333JJ'),
 ('11','ZZ888KK');
INSERT 0 2
postgres=# select * from t;
 key |         plates          
-----+-------------------------
 00  | AA888BB CC777DD GG333JJ
 11  | ZZ888KK


This is what I would like to extract from t:
 key |  plate  
-----+---------
 00  | AA888BB
 00  | CC777DD
 00  | GG333JJ
 11  | ZZ888KK

The solution I found is:

postgres=# select key,
 unnest(regexp_split_to_array(plates, E'\\s+')) AS plate from t;
 key |  plate  
-----+---------
 00  | AA888BB
 00  | CC777DD
 00  | GG333JJ
 11  | ZZ888KK


What did I write? The first operation converts the original space-separated
list into an array; the second turns that array into a relation containing
one record per array element.


Early questions:

1. Why may I put a relation in the SELECT part (instead of the FROM)?
When I studied SQL, I was told to put all relations in FROM, and to put in
the SELECT part only the columns, or expressions over columns, for
formatting the output.

2. Why does PostgreSQL create a cartesian product between a first element (a
single column, "key") and a second element (a relation, "plate")?

3. How does PostgreSQL define the second element? It is not "static", since
it depends on the first element: it depends on the current record.
For every "key" there is a different "plate" result set.


Furthermore, assuming Postgres does a cartesian product: if I add a
new relation as a third element, does it create a 4x3 product? Let's see:

postgres=# select key,
 unnest(regexp_split_to_array(plates, E'\\s+')) AS plate1,
 unnest(regexp_split_to_array(plates, E'\\s+')) AS plate2 from t;
 key | plate1  | plate2  
-----+---------+---------
 00  | AA888BB | AA888BB
 00  | CC777DD | CC777DD
 00  | GG333JJ | GG333JJ
 11  | ZZ888KK | ZZ888KK

4. What happened? May this be somehow related to IMMUTABLE functions?
Is unnest an immutable function? And, in any case, why is this not a
cartesian product?

Let's try in a different way, with a different array:

postgres=# select key,
 unnest(regexp_split_to_array(plates, E'\\s+')) AS plate1, 
 unnest('{1,2}'::int[]) AS array2 from t;
 key | plate1  | array2 
-----+---------+--------
 00  | AA888BB |      1
 00  | CC777DD |      2
 00  | GG333JJ |      1
 00  | AA888BB |      2
 00  | CC777DD |      1
 00  | GG333JJ |      2
 11  | ZZ888KK |      1
 11  | ZZ888KK |      2

This time it is a cartesian product. Why does PostgreSQL act differently?

Thank you,
Giuseppe




Re: [GENERAL] Postgres Permissions Article

2017-03-29 Thread Karsten Hilbert
On Tue, Mar 28, 2017 at 09:47:40AM -0700, Paul Jungwirth wrote:

> I wrote a blog post about the Postgres permissions system, and I thought I'd
> share:
> 
> http://illuminatedcomputing.com/posts/2017/03/postgres-permissions/

> I also shared a few opinions amidst the facts (like that `USAGE` for schemas
> doesn't add much), so I am very pleased to have those challenged. You can
> consider them my own outstanding questions. I'd be especially grateful for
> any feedback there.

Not that I am an expert in any way but here's a thought on
why a permission on foreign key creation might be useful:

Being able to create foreign keys may allow one to indirectly
discover whether certain values exist in a table which I
don't otherwise have access to (by means of failure or
success to create a judiciously crafted FK).
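
To make that concrete, a minimal sketch of such a probe; the table and
column names are hypothetical, and it assumes I hold the FK-creation
(REFERENCES) privilege on the target table but no SELECT privilege:

-- Hypothetical names; secret_accounts(id) is the table I cannot read.
CREATE TABLE probe (candidate integer);
INSERT INTO probe VALUES (42);

-- Succeeds only if the value 42 exists in secret_accounts.id;
-- fails with a foreign-key violation otherwise.
ALTER TABLE probe
    ADD CONSTRAINT probe_fk FOREIGN KEY (candidate)
    REFERENCES secret_accounts (id);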

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




[GENERAL] How to get correct local time

2017-03-29 Thread Andrus


   select current_time at time zone 'GMT-2'

returns

"11:54:40.22045+02"

but correct local time in Windows is one hour different:

12:54

How can I get the correct local time?


Using

"PostgreSQL 9.6.0, compiled by Visual C++ build 1800, 32-bit"

with the standard postgresql.conf file on Windows 10.
The same issue also occurs in earlier Postgres versions and on earlier Windows versions.

Server time in Windows is correct.

Daylight saving time changed the clock by one hour a week ago.
Maybe Postgres didn't recognize it.

Posted also in

http://stackoverflow.com/questions/43090328/how-to-return-correct-local-time-in-postgres

Andrus. 






Re: [GENERAL] How to get correct local time

2017-03-29 Thread Vitaly Burovoy
On 3/29/17, Andrus  wrote:
>
> select current_time at time zone 'GMT-2'
>
> returns
>
> "11:54:40.22045+02"
>
> but correct local time in Windows is one hour different:
>
> 12:54
>
> How to get correct local time ?
>
>
> Using
>
> "PostgreSQL 9.6.0, compiled by Visual C++ build 1800, 32-bit"
>
> with  standard postgresql.conf file in Windows 10
> Same issue occurs also in ealier Postgres and in earlier windows.
>
> Server time in Windows is correct.
>
> Daylight saving time was changed by one hour a week ago.
> Maybe postgres didnt recognized it.
>
> Posted also in
>
> http://stackoverflow.com/questions/43090328/how-to-return-correct-local-time-in-postgres
>
> Andrus.
>

Try

SELECT now();

or

SELECT current_timestamp;

-- 
Best regards,
Vitaly Burovoy




Re: [GENERAL] Request to add feature to the Position function

2017-03-29 Thread Ron Ben
I never expected it to be implemented now. I understand that there are policies and priorities.
I tried to find a feature request channel but there is none...
I don't know how the PostgreSQL team decides what is on the "to do list" and what is not.
This is a feature which I think people will find useful and it doesn't seem like a lot of work to implement it.

On March 28, 2017 at 19:42, David G. Johnston wrote:
> On Tue, Mar 28, 2017 at 7:09 AM, Adrian Klaver  wrote:
>> On 03/28/2017 12:29 AM, Ron Ben wrote:
>>> Here is a reference to the stackoverflow question:
>>>
>>> http://stackoverflow.com/questions/42974822/find-sub-string-position-from-the-end-of-string-in-postgresql
>>>
>>> It carries another example and the solution; however, I still think that
>>> this functionality should be available in a built-in postgresql function.
>>
>> This would be a new feature and Postgres only introduces new features in
>> new major version releases. The next major release is 10 and it is in the
>> latter stages of development. Not sure if there is a feature freeze on yet
>> or not. In any case it will not be released for production until fall
>> sometime (best guess, I have nothing to do with the release schedule).
>
> Not technically frozen, but if the implementing patch isn't already in the
> commit-fest system it isn't getting into 10. All feature freeze means at
> this point is that anything currently in the commit-fest system that
> doesn't get committed by that time is going to be punted to 11.
>
>> If it did not make it into that release you are looking at another year
>> out. So if this is something you need soon, then I would say the solution
>> would be to create your own function using the suggestions provided. If
>> you really want this as a built-in, regardless of when it is done, I would
>> suggest either filing a bug:
>>
>> https://www.postgresql.org/account/login/?next=/account/submitbug/
>
> This is a poor recommendation given that its absence is not a bug.
>
>> or
>>
>> post to the --hackers list:
>>
>> https://www.postgresql.org/list/pgsql-hackers/
>
> Maybe not ideal, but we don't have any kind of "feature requests" list, so
> it's either -hackers or -general. I'd even go so far as to say -general is
> better because you get more eyes on the request for people to give it
> support and, apparently in this case, finding people to do the leg work of
> writing a patch.
>
> David J.



Re: [GENERAL] How to get correct local time

2017-03-29 Thread Vitaly Burovoy
On 3/29/17, Vitaly Burovoy  wrote:
> On 3/29/17, Andrus  wrote:
>>
>> select current_time at time zone 'GMT-2'
>>
>> returns
>>
>> "11:54:40.22045+02"
>>
>> but correct local time in Windows is one hour different:
>>
>> 12:54
>>
>> How to get correct local time ?
>>
>>
>> Using
>>
>> "PostgreSQL 9.6.0, compiled by Visual C++ build 1800, 32-bit"
>>
>> with  standard postgresql.conf file in Windows 10
>> Same issue occurs also in ealier Postgres and in earlier windows.
>>
>> Server time in Windows is correct.
>>
>> Daylight saving time was changed by one hour a week ago.
>> Maybe postgres didnt recognized it.
>>
>> Posted also in
>>
>> http://stackoverflow.com/questions/43090328/how-to-return-correct-local-time-in-postgres
>>
>> Andrus.
>>
>
> Try
>
> SELECT now();
>
> or
>
> SELECT current_timestamp;

Oops, I missed that you need only the time part.

Why do you use "at time zone" with a constant offset from GMT? If you
want your local zone, just use

SELECT current_time;

If you want to get current time in a specified part of the world, use
a region name:

postgres=# SELECT current_time AT TIME ZONE 'Asia/Tokyo', current_time
AT TIME ZONE 'US/Samoa';
      timezone      |      timezone      
--------------------+--------------------
 20:10:57.924534+09 | 00:10:57.924534-11

Postgres uses a set of rules describing when and how the offset from GMT
has changed (or is expected to change, as far as is known at the current
moment) for each particular region.
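
A tiny sketch of the difference (the region name below is only an example,
pick your own):

-- 'GMT-2' is a fixed offset (POSIX sign convention: it means UTC+2) and
-- never follows daylight saving time; a region name such as
-- 'Europe/Tallinn' (a hypothetical choice here) applies that area's DST rules.
SELECT now() AT TIME ZONE 'GMT-2'          AS fixed_offset,
       now() AT TIME ZONE 'Europe/Tallinn' AS region_rules;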

-- 
Best regards,
Vitaly Burovoy




Re: [GENERAL] How to get correct local time

2017-03-29 Thread Adrian Klaver

On 03/29/2017 03:03 AM, Andrus wrote:


   select current_time at time zone 'GMT-2'

returns

"11:54:40.22045+02"

but correct local time in Windows is one hour different:

12:54

How to get correct local time ?


What time zone are you in?

What is the TimeZone  set to in postgresql.conf?




Using

"PostgreSQL 9.6.0, compiled by Visual C++ build 1800, 32-bit"

with  standard postgresql.conf file in Windows 10
Same issue occurs also in ealier Postgres and in earlier windows.

Server time in Windows is correct.

Daylight saving time was changed by one hour a week ago.
Maybe postgres didnt recognized it.

Posted also in

http://stackoverflow.com/questions/43090328/how-to-return-correct-local-time-in-postgres


Andrus.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Postgres Permissions Article

2017-03-29 Thread Tom Lane
Karsten Hilbert  writes:
> On Tue, Mar 28, 2017 at 09:47:40AM -0700, Paul Jungwirth wrote:
>> I wrote a blog post about the Postgres permissions system, and I thought I'd
>> share:
>> http://illuminatedcomputing.com/posts/2017/03/postgres-permissions/

> Not that I am an expert in any way but here's a thought on
> why a permission on foreign key creation might be useful:

> Being able to create foreign keys may allow to indirectly
> discover whether certain values exists in a table which I
> don't otherwise have access to (by means of failure or
> success to create a judiciously crafted FK).

Aside from that, an FK can easily be used to cause effective
denial-of-service, for example preventing rows from being deleted
within a table, or adding enormous overhead to such a deletion.
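
As a hedged illustration (all names made up): merely being allowed to
create the FK is enough, since every DELETE on the referenced table must
then check the referencing one, and matching rows block the delete outright.

-- important_table(id) belongs to someone else; this FK alone forces
-- every DELETE there to check "hostage" first.
CREATE TABLE hostage (
    ref_id integer NOT NULL REFERENCES important_table (id)
);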

regards, tom lane




Re: [GENERAL] Using relations in the SELECT part

2017-03-29 Thread Tom Lane
Giuseppe Sacco  writes:
> the solution I found is:
> postgres=# select key,
>  unnest(regexp_split_to_array(plates, E'\\s+')) AS plate from t;

> 1. why may I put in the SELECT part (instead of the FROM) a relation?
> When I studied SQL, I was told to put all relations in FROM, and put in
> the SELECT part only the colmns or expressions with columns for
> formatting the output.

If you don't like it, don't do it ;-).  A more theoretically pure approach
is

select key, plate
 from t,
  lateral unnest(regexp_split_to_array(plates, E'\\s+')) as plate;

although really you chose the wrong regexp function and should have used

select key, plate
 from t,
  lateral regexp_split_to_table(plates, E'\\s+') as plate;

Also, the keyword "lateral" is optional here per SQL standard, although
I think it's better to include it to make it clearer what's happening.

The fact that Postgres allows set-returning functions in the
SELECT targetlist is a hangover from Berkeley QUEL, which at this
point we sort of regret not having ripped out twenty years ago;
it's a real wart both semantically and implementation-wise.  But it's
hard to get rid of such things.  Putting a set-returning function
in LATERAL is cleaner and more standards-compliant, though.

> 2. why postgresql create a cartesian product using a first element (a
> single columns "key") and a second element (a relation "plate")?

It's not really very different from what happens with LATERAL, at least
for the case with just one SRF in the targetlist.

> Furthermore, let's assume postgres does a cartesian product, if I add a
> new relation as third element, does it create 4x3 product?

You've hit on the reason why it's semantically ugly: it's not very
clear what to do with multiple SRFs in one targetlist.  LATERAL,
together with the ROWS FROM construct, allows clear specification
of both of the useful behaviors (cartesian product and eval-set-
returning-functions-in-lockstep).  The multiple-SRFs-in-targetlist
behavior that we inherited from Berkeley is just a mess, as it
effectively runs the SRFs until reaching the least common multiple of
their periods.  We're changing that for v10 though.  You might find
this commit informative (at least the commit message and documentation
changes):

https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=69f4b9c85
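
For illustration, a sketch of the two behaviors spelled out with LATERAL
on the example table above (my own example, not part of that commit):

-- Lockstep: ROWS FROM pairs the two sets positionally, padding the
-- shorter one with NULLs.
select key, plate, n
  from t,
       lateral rows from (regexp_split_to_table(plates, E'\\s+'),
                          unnest('{1,2}'::int[])) as r(plate, n);

-- Cartesian product: two separate LATERAL items are cross-joined.
select key, plate, n
  from t,
       lateral regexp_split_to_table(plates, E'\\s+') as plate,
       lateral unnest('{1,2}'::int[]) as n;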

regards, tom lane




Re: [GENERAL] Request to add feature to the Position function

2017-03-29 Thread Adrian Klaver

On 03/29/2017 04:06 AM, Ron Ben wrote:

I never expected it to be implemented now.. I understand that there are
policies and priorities

I tried to find a feature request chanle but there is none...


This list and/or --hackers would be that channel.


I don't know how the postresql team decied what is on the "to do list"
and what is not.


If you scan the Todo list:

https://wiki.postgresql.org/wiki/Todo

most of the items reference messages to --general or --hackers, where 
enough people expressed interest in the feature. To go from there to 
done will require someone willing to do the coding.


I will start the ball rolling by saying I am interested in the feature. 
Unfortunately I do not have the coding skills to make it happen.
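
As a stopgap, though, a plain SQL function can stand in. A hedged sketch,
where the exact semantics requested are my assumption (1-based position of
the last occurrence of the substring, 0 when absent):

CREATE FUNCTION position_from_end(needle text, haystack text)
RETURNS integer LANGUAGE sql IMMUTABLE AS $$
    SELECT CASE
               WHEN strpos(haystack, needle) = 0 THEN 0
               ELSE length(haystack)
                    - strpos(reverse(haystack), reverse(needle))
                    - length(needle) + 2
           END;
$$;

-- Example: position_from_end('X', 'abcXdefXghi') returns 8.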




This is a feature which I think people will find usefull and it doesn't
seem like a lot of work to implement it.


On March 28, 2017 at 19:42, David G. Johnston wrote:

On Tue, Mar 28, 2017 at 7:09 AM, Adrian Klaver
<adrian.kla...@aklaver.com> wrote:

On 03/28/2017 12:29 AM, Ron Ben wrote:

Here is a refer to the stackoverflow question:


http://stackoverflow.com/questions/42974822/find-sub-string-position-from-the-end-of-string-in-postgresql



it carry another example and the solution however I
still think that
this fuctionality should be avliable in a build-in
postgresql function.


This would be a new feature and Postgres only introduces new
features in new major version releases. The next major
release is 10 and it is the latter stages of development.
Not sure if there is a feature freeze on yet or not. In any
case it will not be released for production until fall
sometime(best guess, I have nothing to do with release
schedule).


​Not technically frozen but if the implementing patch isn't
already in the commit-fest system it isn't getting into 10.  All
feature freeze means at this point is that anything currently in
the commit-fest system that doesn't get committed by that time
is going to be punted to 11.


If it did not make it into that release you are looking
another year out. So if this is something you need soon,
then I would say the solution would be to create your own
function using the suggestions provided. If you really want
this as a built in, regardless of when it is done I would
suggest either filing bug:

https://www.postgresql.org/account/login/?next=/account/submitbug/



This is a poor recommendation given that its absence is not a bug.

or

post to the --hackers list:

https://www.postgresql.org/list/pgsql-hackers/



Maybe not ideal but we don't have any kind of "feature requests"
list, so it's either -hackers or -general.

I'd even go so far as to say "-general" is better because you
get more eyes on the request for people to give it support and,
apparently in this case, finding people to do the leg work of
writing a patch.

David J.






--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Logical Replication: adjacent COMMIT messages with the wrong StartLSN

2017-03-29 Thread Stefan Smith
Hi,

I have been evaluating logical replication via the JDBC driver v42.0.0
against Postgres v9.6. One scenario I'm testing is streaming a high volume of
transactions, where each transaction contains multiple INSERTs.

Sometimes, two transaction COMMITS are side by side in the DB transaction
log, and so the transactions arrive one after the other in the logical
replication stream. This is expected behaviour.

I apply and flush the LastReceivedLSN and force an update to the server
after every COMMIT message. I'm also disconnecting and reconnecting on
every fourth INSERT received, to stress test the recovery behaviour.

In most cases I see the following expected behaviour:

The WAL might have (with made-up LSNs for illustrative purposes):

LSN=10  BEGIN  (TXN 1)
LSN=20  BEGIN  (TXN 2)
LSN=30  INSERT (TXN 1)
LSN=40  INSERT (TXN 2)
LSN=50  INSERT (TXN 1)
LSN=60  INSERT (TXN 2)
LSN=70  INSERT (TXN 1)
LSN=80  INSERT (TXN 2)
LSN=90  COMMIT (TXN 1)
LSN=100 COMMIT (TXN 2)

And so the stream receives:

BEGIN  (TXN 1) LastReceivedLSN=19
INSERT (TXN 1) LastReceivedLSN=39
INSERT (TXN 1) LastReceivedLSN=59
INSERT (TXN 1) LastReceivedLSN=79
COMMIT (TXN 1) LastReceivedLSN=99

BEGIN  (TXN 2) LastReceivedLSN=29
INSERT (TXN 2) LastReceivedLSN=49

BEGIN  (TXN 2) LastReceivedLSN=29
INSERT (TXN 2) LastReceivedLSN=49
INSERT (TXN 2) LastReceivedLSN=69
INSERT (TXN 2) LastReceivedLSN=89
COMMIT (TXN 2) LastReceivedLSN=109

The above behaviour makes sense since the replication slot's
confirmed_flush_lsn=99 upon reconnect.

My issue: occasionally after reconnecting, I observe that the INSERTs for
TXN 2 are not resent and instead the stream moves on to TXN 3.

With the same WAL as above, the stream looks like:

BEGIN  (TXN 1) LastReceivedLSN=19
INSERT (TXN 1) LastReceivedLSN=39
INSERT (TXN 1) LastReceivedLSN=59
INSERT (TXN 1) LastReceivedLSN=79
COMMIT (TXN 1) LastReceivedLSN=109 <-- This LSN is wrong!

BEGIN  (TXN 2) LastReceivedLSN=29
INSERT (TXN 2) LastReceivedLSN=49

BEGIN  (TXN 3) ...

Debugging the JDBC driver and comparing it with the WAL (via pg_xlogdump),
it seems that on these occasions COMMIT (TXN 1) arrives with the correct
payload but with StartLSN set to the StartLSN for COMMIT (TXN 2)! The JDBC
driver computes COMMIT (TXN 1) LastReceivedLSN = COMMIT (TXN 2) StartLSN +
length of COMMIT (TXN 1) payload. This causes us to unwittingly set
confirmed_flush_lsn to the end WAL position of COMMIT (TXN 2) before
disconnecting, meaning I don't decode TXN 2 after reconnection.

Is this a known issue? Is it caused by the JDBC driver or something in the
server?


Re: [GENERAL] Postgres Permissions Article

2017-03-29 Thread Paul Jungwirth

On 03/29/2017 06:36 AM, Tom Lane wrote:

Karsten Hilbert  writes:

Being able to create foreign keys may allow to indirectly
discover whether certain values exists in a table which I
don't otherwise have access to (by means of failure or
success to create a judiciously crafted FK).


Aside from that, an FK can easily be used to cause effective
denial-of-service, for example preventing rows from being deleted
within a table, or adding enormous overhead to such a deletion.


Thank you both for taking a look! I agree those are both worthwhile 
concerns. It still seems a little strange it is not just part of the 
CREATE permission (for example). I understand why not everyone can 
create a foreign key, I just have trouble imagining a use case where it 
is helpful to separate it from other DDL commands. Anyway, I didn't 
write the article to nitpick details like that, but sometimes by asking 
"why" you learn new things. I really appreciate your offering your thoughts!


Paul




[GENERAL] Handling psql lost connections

2017-03-29 Thread Steve Crawford
When firewalls/VPNs stand between my psql client and a remote PostgreSQL
server the connection will on occasion time out and drop. This results in
the following scenario:

-Leave for lunch mid project - leave psql open.

-Return from lunch, complete and submit large query.

-Notice query is taking too long. cancel it.

-Cancel doesn't return - realize that connection has dropped.

-Kill psql - history is not written out. Start query from scratch.

Is there:

1) A way to set psql to send keepalives?

2) A way to gracefully kill psql ensuring that the history is saved?

Yes, I know I and my coworkers could spend brain cycles trying to
unerringly remember to close and restart connections, write all queries in
an external editor and then submit them, etc. but I'm looking for more user
friendly options.

Cheers,
Steve


Re: [GENERAL] Handling psql lost connections

2017-03-29 Thread Adrian Klaver

On 03/29/2017 08:49 AM, Steve Crawford wrote:

When firewalls/VPNs stand between my psql client and a remote PostgreSQL
server the connection will on occasion time out and drop. This results
in the following scenario:

-Leave for lunch mid project - leave psql open.

-Return from lunch, complete and submit large query.

-Notice query is taking too long. cancel it.

-Cancel doesn't return - realize that connection has dropped.

-Kill psql - history is not written out. Start query from scratch.

Is there:

1) A way to set psql to send keepalives?


From server side:
https://www.postgresql.org/docs/9.6/static/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SETTINGS

tcp_keepalives*
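
From the client side, libpq (which psql uses) also accepts keepalive
settings in the connection string, so psql itself can send them; the host
and database below are hypothetical:

psql "host=db.example.com dbname=mydb keepalives=1 keepalives_idle=60 keepalives_interval=10 keepalives_count=3"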

I guess you could abuse \watch:

https://www.postgresql.org/docs/9.6/static/app-psql.html

\watch [ seconds ]

Repeatedly execute the current query buffer (as \g does) until 
interrupted or the query fails. Wait the specified number of seconds 
(default 2) between executions. Each query result is displayed with a 
header that includes the \pset title string (if any), the time as of 
query start, and the delay interval.


aklaver@test=> \watch 2
Watch every 2s  Wed Mar 29 08:59:55 2017

 ?column? 
----------
        1
(1 row)

Watch every 2s  Wed Mar 29 08:59:57 2017

 ?column? 
----------
        1
(1 row)

With a larger value of seconds.



2) A way to gracefully kill psql ensuring that the history is saved?

Yes, I know I and my coworkers could spend brain cycles trying to
unerringly remember to close and restart connections, write all queries
in an external editor and then submit them, etc. but I'm looking for
more user friendly options.


Use the internal editor(\e)?



Cheers,
Steve



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Handling psql lost connections

2017-03-29 Thread Steve Crawford
On Wed, Mar 29, 2017 at 9:05 AM, Adrian Klaver 
wrote:

> On 03/29/2017 08:49 AM, Steve Crawford wrote:
>
>> When firewalls/VPNs stand between my psql client and a remote PostgreSQL
>> server the connection will on occasion time out and drop. This results
>> in the following scenario:
>>
>> -Leave for lunch mid project - leave psql open.
>>
>> -Return from lunch, complete and submit large query.
>>
>> -Notice query is taking too long. cancel it.
>>
>> -Cancel doesn't return - realize that connection has dropped.
>>
>> -Kill psql - history is not written out. Start query from scratch.
>>
>> Is there:
>>
>> 1) A way to set psql to send keepalives?
>>
>
> From server side:
> https://www.postgresql.org/docs/9.6/static/runtime-config-
> connection.html#RUNTIME-CONFIG-CONNECTION-SETTINGS
>
> tcp_keepalives*
>
> I guess you could abuse \watch:
>
> https://www.postgresql.org/docs/9.6/static/app-psql.html
>
> \watch [ seconds ]
>
> Repeatedly execute the current query buffer (as \g does) until
> interrupted or the query fails. Wait the specified number of seconds
> (default 2) between executions. Each query result is displayed with a
> header that includes the \pset title string (if any), the time as of query
> start, and the delay interval.
>
> aklaver@test=> \watch 2
> Watch every 2s  Wed Mar 29 08:59:55 2017
>
>  ?column?
> --
> 1
> (1 row)
>
> Watch every 2s  Wed Mar 29 08:59:57 2017
>
>  ?column?
> --
> 1
> (1 row)
>
> With a larger value of seconds.



If I could remember to do that I would remember that I had psql running in
one or more terminals on one of my virtual screens and just close it. As it
is, I try to remember to close psql and restart if it has been sitting for
more than a few minutes.




> 2) A way to gracefully kill psql ensuring that the history is saved?
>>
>> Yes, I know I and my coworkers could spend brain cycles trying to
>> unerringly remember to close and restart connections, write all queries
>> in an external editor and then submit them, etc. but I'm looking for
>> more user friendly options.
>>
>
> Use the internal editor(\e)?


That is actually the typical *cause* of the problems. I usually do use \e
to fire up the external $EDITOR for anything more than a trivial query and
if I need to stop or I step away mid-edit then finish and write/quit, the
query is not visible on the screen where I could scroll back to it. If the
connection has dropped, I have to kill psql and the history is lost as well.

I think for now that I'll just add some tcp settings to sysctl.conf to deal
with the firewalls.
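
For the record, the sort of thing I mean (values here are placeholders I'd
still have to tune, not recommendations):

# /etc/sysctl.d/99-tcp-keepalive.conf  (hypothetical values)
net.ipv4.tcp_keepalive_time = 60     # seconds idle before probing starts
net.ipv4.tcp_keepalive_intvl = 10    # seconds between probes
net.ipv4.tcp_keepalive_probes = 6    # failed probes before the connection drops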

Cheers,
Steve


[GENERAL] Constraint exclusion-like behavior for UNION ALL views

2017-03-29 Thread Tony Cebzanov
PostgreSQL's materialized view functionality is very useful, but one 
problem is that when the view gets large, there is no way to refresh 
part of it.  I know that table partitioning is coming in Postgres 10, 
but I haven't heard anything about ideas for partitioning / sharding of 
materialized views, so I'm trying to work out a strategy for doing it 
manually until that's supported in the future.


Because there's no table inheritance for materialized views, the only 
way I can think of to be able to get this behavior is to create 
materialized views manually for each partition (e.g. each month) and 
then use a UNION ALL view to act as the "parent" table.  It looks 
something like this:


CREATE MATERIALIZED VIEW prices_2016_04 AS (
SELECT sale_date, price FROM tbl
WHERE sale_date >= '2016/04/01' AND sale_date < '2016/05/01'
);

CREATE MATERIALIZED VIEW prices_2016_05 AS (
SELECT sale_date, price FROM tbl
WHERE sale_date >= '2016/05/01' AND sale_date < '2016/06/01'
);

CREATE MATERIALIZED VIEW prices_2016_06 AS (
SELECT sale_date, price FROM tbl
WHERE sale_date >= '2016/06/01' AND sale_date < '2016/07/01'
);

CREATE VIEW prices_2016 AS (
SELECT * FROM prices_2016_04
UNION ALL
SELECT * FROM prices_2016_05
UNION ALL
SELECT * FROM prices_2016_06
);

The problem comes when these views get large.  Unlike when the 
underlying objects are tables, I see no way to create CHECK constraints 
on these materialized views, which means that queries like:


 SELECT * FROM prices_2016 WHERE sale_date = '2016/04/15'

end up searching through all of the underlying materialized views, even 
though the date in the query will only ever match one of them.


As a workaround, I have added logic to my queries to only search tables 
for months where the time filters could match, but it would be ideal if 
we could create CHECK constraints on materialized views, or if the query 
planner could analyze the underlying query that creates the materialized 
view to infer that certain rows will never match based on the conditions 
that created the view.
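
One way to push that filter into the view itself instead of into every
query (a sketch reusing the views above): repeat each branch's date range
as a WHERE clause. Constraint exclusion (constraint_exclusion = partition,
the default) also examines UNION ALL subqueries, so branches whose range
contradicts the outer predicate can be skipped; worth confirming with
EXPLAIN.

CREATE OR REPLACE VIEW prices_2016 AS
SELECT * FROM prices_2016_04
 WHERE sale_date >= '2016/04/01' AND sale_date < '2016/05/01'
UNION ALL
SELECT * FROM prices_2016_05
 WHERE sale_date >= '2016/05/01' AND sale_date < '2016/06/01'
UNION ALL
SELECT * FROM prices_2016_06
 WHERE sale_date >= '2016/06/01' AND sale_date < '2016/07/01';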


Are either of these things that could be supported in the future?  If 
not, is there a better way to get this kind of behavior so that 
materialized views are more useful when the amount of data increases and 
it's not feasible to update them in their entirety?


Thanks.
-Tony




Re: [GENERAL] Handling psql lost connections

2017-03-29 Thread Adrian Klaver

On 03/29/2017 11:48 AM, Steve Crawford wrote:

On Wed, Mar 29, 2017 at 9:05 AM, Adrian Klaver
mailto:adrian.kla...@aklaver.com>> wrote:

On 03/29/2017 08:49 AM, Steve Crawford wrote:

When firewalls/VPNs stand between my psql client and a remote
PostgreSQL
server the connection will on occasion time out and drop. This
results
in the following scenario:

-Leave for lunch mid project - leave psql open.

-Return from lunch, complete and submit large query.

-Notice query is taking too long. cancel it.

-Cancel doesn't return - realize that connection has dropped.

-Kill psql - history is not written out. Start query from scratch.

Is there:

1) A way to set psql to send keepalives?


From server side:

https://www.postgresql.org/docs/9.6/static/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SETTINGS



tcp_keepalives*

I guess you could abuse \watch:

https://www.postgresql.org/docs/9.6/static/app-psql.html


\watch [ seconds ]

Repeatedly execute the current query buffer (as \g does) until
interrupted or the query fails. Wait the specified number of seconds
(default 2) between executions. Each query result is displayed with
a header that includes the \pset title string (if any), the time as
of query start, and the delay interval.

aklaver@test=> \watch 2
Watch every 2s  Wed Mar 29 08:59:55 2017

 ?column?
--
1
(1 row)

Watch every 2s  Wed Mar 29 08:59:57 2017

 ?column?
--
1
(1 row)

With a larger value of seconds.



If I could remember to do that I would remember that I had psql running
in one or more terminals on one of my virtual screens and just close it.
As it is, I try to remember to close psql and restart if it has been
sitting for more than a few minutes.




2) A way to gracefully kill psql ensuring that the history is saved?

Yes, I know I and my coworkers could spend brain cycles trying to
unerringly remember to close and restart connections, write all
queries
in an external editor and then submit them, etc. but I'm looking for
more user friendly options.


Use the internal editor(\e)?


That is actually the typical *cause* of the problems. I usually do use
\e to fire up the external $EDITOR for anything more than a trivial
query and if I need to stop or I step away mid-edit then finish and
write/quit, the query is not visible on the screen where I could scroll
back to it. If the connection has dropped, I have to kill psql and the
history is lost as well.


Save it to a file from inside the editor before you run it and then if 
you have to kill psql, pull it back in from the file:


test=# \e
select.sql

 ?column? 
----------
        1

Where the content of select.sql is

SELECT 1;

OR

Look for the most recent /tmp/psql.edit..sql file.
That is the path on my machine, yours might be different. It will be
shown at the bottom of the buffer when you do \e.





I think for now that I'll just add some tcp settings to sysctl.conf to
deal with the firewalls.

Cheers,
Steve



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Constraint exclusion-like behavior for UNION ALL views

2017-03-29 Thread David G. Johnston
On Wed, Mar 29, 2017 at 12:19 PM, Tony Cebzanov  wrote:

> Are either of these things that could be supported in the future?  If not,
> is there a better way to get this kind of behavior so that materialized
> views are more useful when the amount of data increases and it's not
> feasible to update them in their entirety?
>

​The current implementation of materialized views has limitations​, some of
which you've listed.  Materialized Views can be (and previously were)
manually implemented using triggers - and you can still do so.  I recall
seeing some recent blog articles discussing various schemes for
materialized views and their pros and cons.

That said, I'm not sure what using materialized views instead of normal
tables buys you in the first place.  I could see possibly using a
materialized view as the current month's table but the historical tables
usually don't require refreshing.

David J.


Re: [GENERAL] Constraint exclusion-like behavior for UNION ALL views

2017-03-29 Thread Tony Cebzanov

On 3/29/17 3:39 PM, David G. Johnston wrote:


That said, I'm not sure what using materialized views instead of normal
tables buys you in the first place.  I could see possibly using a
materialized view as the current month's table but the historical tables
usually don't require refreshing.


My example was simplified for brevity.  The actual materialized views in 
question do a significant amount of work, pulling from several other 
tables, grouping/aggregating, etc.  It would be possible to have that 
same query populate a normal table instead of being stored as a 
materialized view, but there's a reason materialized views were created 
in the first place -- to avoid the overhead of manually creating 
triggers and so forth -- and I was hoping to find a way to retain those 
advantages while also being able to partition the views by date.


My thought was that since check constraints already exist for regular 
tables, and since materialized views are implemented as tables (or 
table-like substances) it seems reasonable that materialized views 
should support check constraints and the query optimization that comes 
with them.


-Tony





[GENERAL] Vacuuming tables with BRIN index and CLUSTER ON index

2017-03-29 Thread Cherio
I have an insert/select only table (no update/delete expected) and a BRIN
index on the timestamp column as follows

CREATE TABLE log_table (
  id BIGSERIAL NOT NULL,
  data TEXT,
  created_at TIMESTAMP WITHOUT TIME ZONE DEFAULT now(),
  CONSTRAINT log_table__pk PRIMARY KEY(id)
);

CREATE INDEX log_table__created_at__idx ON log_table USING BRIN
(created_at);

As records are added both "id" and "created_at" should be stored in
ascending order. My concern is VACUUMING, whether it will keep physical
record order or not. If either VACUUM or VACUUM FULL break the existing
physical order I would have to enforce it with CLUSTERing on primary key
which I am trying to avoid considering the table is expected to grow very
large.

If my concern is valid would adding

ALTER TABLE log_table CLUSTER ON log_table__pk;

alleviate the issue and prompt VACUUM to keep rows ordered?

Thanks!


Re: [GENERAL] Handling psql lost connections

2017-03-29 Thread JP Jacoupy
Not a response to OP but this also occurs with libpq connections.

The only thing I found at the moment is to lower the net.ipv4.tcp_retries2 
value to 8 (instead of 15).

This lowers the kernel's TCP timeout to around 100 seconds instead of
roughly 15 minutes.

Sent from ProtonMail mobile

-------- Original Message --------
On 29 March 2017 at 18:05, Adrian Klaver wrote:
On 03/29/2017 08:49 AM, Steve Crawford wrote:
> When firewalls/VPNs stand between my psql client and a remote PostgreSQL
> server the connection will on occasion time out and drop. This results
> in the following scenario:
>
> -Leave for lunch mid project - leave psql open.
>
> -Return from lunch, complete and submit large query.
>
> -Notice query is taking too long. cancel it.
>
> -Cancel doesn't return - realize that connection has dropped.
>
> -Kill psql - history is not written out. Start query from scratch.
>
> Is there:
>
> 1) A way to set psql to send keepalives?

From server side:
https://www.postgresql.org/docs/9.6/static/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SETTINGS

tcp_keepalives*

I guess you could abuse \watch:

https://www.postgresql.org/docs/9.6/static/app-psql.html

\watch [ seconds ]

Repeatedly execute the current query buffer (as \g does) until
interrupted or the query fails. Wait the specified number of seconds
(default 2) between executions. Each query result is displayed with a
header that includes the \pset title string (if any), the time as of
query start, and the delay interval.

aklaver@test=> \watch 2
Watch every 2s Wed Mar 29 08:59:55 2017

?column?
--
1
(1 row)

Watch every 2s Wed Mar 29 08:59:57 2017

?column?
--
1
(1 row)

With a larger value of seconds.

>
> 2) A way to gracefully kill psql ensuring that the history is saved?
>
> Yes, I know I and my coworkers could spend brain cycles trying to
> unerringly remember to close and restart connections, write all queries
> in an external editor and then submit them, etc. but I'm looking for
> more user friendly options.

Use the internal editor(\e)?

>
> Cheers,
> Steve

--
Adrian Klaver
adrian.kla...@aklaver.com


Re: [GENERAL] Vacuuming tables with BRIN index and CLUSTER ON index

2017-03-29 Thread David G. Johnston
On Wed, Mar 29, 2017 at 1:34 PM, Cherio  wrote:

> I have an insert/select only table (no update/delete expected) and a BRIN
> index on the timestamp column as follows
>
> CREATE TABLE log_table (
>   id BIGSERIAL NOT NULL,
>   data TEXT,
>   created_at TIMESTAMP WITHOUT TIME ZONE DEFAULT now()
>   CONSTRAINT log_table__pk PRIMARY KEY(id)
> );
>
> CREATE INDEX log_table__created_at__idx ON log_table USING BRIN
> (created_at);
>
> As records are added both "id" and "created_at" should be stored in
> ascending order. My concern is VACUUMING, whether it will keep physical
> record order or not. If either VACUUM or VACUUM FULL break the existing
> physical order I would have to enforce it with CLUSTERing on primary key
> which I am trying to avoid considering the table is expected to grow very
> large.
>
> If my concern is valid would adding
>
> ALTER TABLE log_table CLUSTER ON log_table__pk;
>
> alleviate the issue and prompt VACUUM to keep rows ordered?
>
>
​You should review the three documentation sections below.  The first
describes what "ALTER TABLE ... CLUSTER ON"​ does.

https://www.postgresql.org/docs/9.6/static/sql-altertable.html

This one explains CLUSTER and the fact that it is a one-time operation and
that repeated use is required in the face of inserts and deletes.

https://www.postgresql.org/docs/9.6/static/sql-cluster.html

And this one explains the difference between VACUUM and VACUUM FULL -
namely only the former is a maintenance routine.

https://www.postgresql.org/docs/9.6/static/sql-vacuum.html

The exact interplay here with BRIN I am unfamiliar with.  Given the natural
correlation that the created_at timestamp exhibits, I wouldn't imagine that a
BRIN index on it would degrade that quickly.  But I'm getting out beyond my
experience here.

David J.


Re: [GENERAL] Vacuuming tables with BRIN index and CLUSTER ON index

2017-03-29 Thread Cherio
On Wed, Mar 29, 2017 at 4:58 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, Mar 29, 2017 at 1:34 PM, Cherio  wrote:
>
>> I have an insert/select only table (no update/delete expected) and a BRIN
>> index on the timestamp column as follows
>>
>> CREATE TABLE log_table (
>>   id BIGSERIAL NOT NULL,
>>   data TEXT,
>>   created_at TIMESTAMP WITHOUT TIME ZONE DEFAULT now()
>>   CONSTRAINT log_table__pk PRIMARY KEY(id)
>> );
>>
>> CREATE INDEX log_table__created_at__idx ON log_table USING BRIN
>> (created_at);
>>
>> As records are added both "id" and "created_at" should be stored in
>> ascending order. My concern is VACUUMING, whether it will keep physical
>> record order or not. If either VACUUM or VACUUM FULL break the existing
>> physical order I would have to enforce it with CLUSTERing on primary key
>> which I am trying to avoid considering the table is expected to grow very
>> large.
>>
>> If my concern is valid would adding
>>
>> ALTER TABLE log_table CLUSTER ON log_table__pk;
>>
>> alleviate the issue and prompt VACUUM to keep rows ordered?
>>
>>
> ​You should review the three documentation sections below.  The first
> describes what "ALTER TABLE ... CLUSTER ON"​ does.
>
> https://www.postgresql.org/docs/9.6/static/sql-altertable.html
>
> This one explain CLUSTER and the fact it is a one-time operation and that
> repeated use is required in the face of inserts and deletes.
>
> https://www.postgresql.org/docs/9.6/static/sql-cluster.html
>
> And this one explains the difference between VACUUM and VACUUM FULL -
> namely only the former is a maintenance routine.
>
> https://www.postgresql.org/docs/9.6/static/sql-vacuum.html
>
> The exact interplay here with BRIN I am unfamiliar with.  Given the
> natural correlation that create_at timestamp exhibits I wouldn't imagine
> that a brin index on it would degrade that quickly.  But I'm getting out
> beyond my experience here.
>
> David J.
>
>
Thanks David. It is exactly the relationship between BRIN index and VACUUM
that I am concerned about. I would expect it to be covered here
https://www.postgresql.org/docs/9.6/static/brin.html however it has only a
single reference to VACUUM and it doesn't sufficiently elaborate on the
subject.
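
In the meantime, one hedged way to watch whether the physical order is
actually drifting (a sketch): after an ANALYZE, pg_stats reports the
physical-vs-logical correlation, and a value near 1.0 means the rows are
still laid out in ascending order.

ANALYZE log_table;
SELECT attname, correlation
  FROM pg_stats
 WHERE tablename = 'log_table'
   AND attname IN ('id', 'created_at');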


[GENERAL] Tablespace Default Behavior

2017-03-29 Thread harpagornis

I am somewhat new to PostgreSQL.  I can't seem to create a table in a
non-default tablespace.  In the postgresql.conf file, I have:
default_tablespace = ''.

When I create a table, like this:

CREATE TABLE myschema.mytable (rc_id integer NOT NULL) TABLESPACE
my_space;

and then I do:

select * from pg_tables where schemaname ='myschema';

the tablespace is blank for the new table.  When I look in PgAdmin, the
tablespace shows up as "pg_default."   What do I need to change to create
the table in the desired tablespace?  Thank you.






Re: [GENERAL] Tablespace Default Behavior

2017-03-29 Thread harpagornis
More Info Edit:   Also, the database is in the my_space tablespace.  The
location for the my_space tablespace is a different folder than the $PGDATA
folder.  





Re: [GENERAL] Tablespace Default Behavior

2017-03-29 Thread Tom Lane
harpagornis  writes:
> When I create a table, like this:
> CREATE TABLE myschema.mytable (rc_id integer NOT NULL) TABLESPACE 
> my_space;
> and then I do:
> select * from pg_tables where schemaname ='myschema';
> the tablespace is blank for the new table.  When I look in PgAdmin, the
> tablespace shows up as "pg_default."   What do I need to change to create
> the table in the desired tablespace?

As you mention in your followup, the database's default tablespace is
'my_space', so the table is actually in the correct tablespace.

For implementation reasons, we force tables that are placed in the
database's default tablespace to be recorded as "stored in database's
tablespace", which shows up as blank in pg_tables.  That means that if you
change the database's tablespace the table will move along with the
rest of the database.  You might consider that either a feature or a
bug depending on your use-case, but that's how it works ...
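
For what it's worth, one quick way to see this from SQL (a sketch):

-- A path under "pg_tblspc/..." means a non-default tablespace; a plain
-- "base/..." path means the database's own default tablespace.
SELECT pg_relation_filepath('myschema.mytable');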

regards, tom lane




Re: [GENERAL] Tablespace Default Behavior

2017-03-29 Thread harpagornis
That is what I suspected, and thank you for the explanation.  I think it is
misleading and a bug in PgAdmin for explicitly listing the tablespace as
pg_default.





[GENERAL] invertibility of to_date() when converting representations which don't represent whole timestamps

2017-03-29 Thread Shaun Cutts
When being asked to convert a day of the week, the to_date() function returns
the same day ('0001-01-01 BC') no matter which day is converted:

# select to_date('Monday', 'Day');
'0001-01-01 BC'

# select to_date('Tuesday', 'Day');
'0001-01-01 BC'

However, if it were to return a date that was that day of the week, it could be
inverted:

# select extract(dow from '0001-01-01 BC'::date); -- this date should be the
result of to_date('Sunday', 'Day')
6

# select extract(dow from '0001-01-02 BC'::date); -- this date should be the
result of to_date('Monday', 'Day')
0

….

David tells me this is not a bug, but it still seems like a reasonable
requirement on to_date() to me. Is there some reason why this isn't possible?
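
In the meantime, a sketch of doing the inverse mapping by hand (any full
week works as the probe; the one below is arbitrary):

-- Maps a day name to extract(dow ...) numbering (0 = Sunday .. 6 = Saturday).
SELECT extract(dow FROM d) AS dow
  FROM generate_series(timestamp '2017-03-26',
                       timestamp '2017-04-01',
                       interval '1 day') AS g(d)
 WHERE to_char(d, 'FMDay') = 'Monday';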


