Re: Default for date field: today vs CURRENT_DATE [RESOLVED]

2019-01-02 Thread Rich Shepard
On Wed, 2 Jan 2019, Tom Lane wrote:
> 'today' is special as a date input string, so you can use it as a literal:
>   regression=# select 'today'::date;
>       date
>   ------------
>    2019-01-02
>   (1 row)
> But it's not a SQL keyword, nor a function name, so you can't write it
> without quotes.
Tom, Now I understand.
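A short sketch of the distinction, using a hypothetical events table (the table name is an assumption):

    -- Quoted, 'today' is valid date input syntax and works as a literal:
    SELECT 'today'::date;               -- e.g. 2019-01-02

    -- Unquoted, it is parsed as an identifier, so this default fails:
    --   CREATE TABLE events (start_date date DEFAULT today);   -- error

    -- CURRENT_DATE is an SQL keyword evaluated each time a row is inserted:
    CREATE TABLE events (start_date date DEFAULT CURRENT_DATE);
    INSERT INTO events DEFAULT VALUES;  -- start_date = date of the INSERT

    -- Caution, per the PostgreSQL docs on special input strings: DEFAULT
    -- 'today' (quoted) is accepted, but the literal is typically coerced to
    -- a date constant when the table is created, freezing that day's date.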

Re: Default for date field: today vs CURRENT_DATE

2019-01-02 Thread Tom Lane
Rich Shepard writes:
> Reading the manual I saw that 'today' is a special value, but it did not
> work when I used it as a column default; e.g., start_date date DEFAULT today,
> Appending parentheses also failed. But changing today to CURRENT_DATE
> worked. I've not found an explanation and would appreciate learning why.

Default for date field: today vs CURRENT_DATE

2019-01-02 Thread Rich Shepard
Reading the manual I saw that 'today' is a special value, but it did not work when I used it as a column default; e.g., start_date date DEFAULT today, Appending parentheses also failed. But changing today to CURRENT_DATE worked. I've not found an explanation and would appreciate learning why.

Re: Implementing standard SQL's DOMAIN constraint [RESOLVED]

2019-01-02 Thread Rich Shepard
On Wed, 2 Jan 2019, Ron wrote:
> Note that a CHECK constraint with 50 items is a *Very Bad Idea*, since
> changing such a constraint is very painful. Use a FK constraint instead.
Ron, It's even longer with Canadian provinces included. I gratefully accept your advice and will use a table and foreign key instead.
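A minimal sketch of that approach; the companies table is an illustrative assumption:

    -- One row per valid code; adding Canadian provinces later is an INSERT,
    -- not a painful ALTER of a 50-item CHECK constraint.
    CREATE TABLE state_codes (
        state_code char(2) PRIMARY KEY
    );
    INSERT INTO state_codes VALUES ('AL'), ('AK'), ('AZ');  -- and so on

    CREATE TABLE companies (
        company_name text NOT NULL,
        state_code   char(2) NOT NULL REFERENCES state_codes
    );
    -- INSERT INTO companies VALUES ('Acme', 'ZZ');  -- fails the FK check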

RE: Relocatable Binaries (RPMs) : custom installation path for PostgreSQL

2019-01-02 Thread Kevin Brannen
From: chiru r
> I have installed the Community PostgreSQL RPMs and they go into
> "/usr/pgsql-11/" by default. Please let us know how to get PostgreSQL-11
> installed in the above custom paths using RPMs?
I've never tried it, but look at the "--relocate" option for the rpm command.

Re: Query planner / Analyse statistics bad estimate rows=1 with maximum statistics 10000 on PostgreSQL 10.2

2019-01-02 Thread Tom Lane
Mark writes:
> Am I correct in my understanding that any row that has been modified (i.e.
> UPDATE) is in state HEAPTUPLE_INSERT_IN_PROGRESS so it will not be included
> in the sample?
An update will mark the existing tuple as delete-in-progress and then insert a new tuple (row version) that's insert-in-progress.
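The two row versions can be observed through PostgreSQL's xmin/xmax system columns; a sketch with a hypothetical table t:

    CREATE TABLE t (id int, v text);
    INSERT INTO t VALUES (1, 'old');

    BEGIN;
    UPDATE t SET v = 'new' WHERE id = 1;
    -- This session now sees only the new version: its xmin is our
    -- transaction id and its xmax is 0.
    SELECT xmin, xmax, * FROM t;
    -- A concurrent session still sees the old version, with xmax set to our
    -- transaction id: that tuple is delete-in-progress, while the new one
    -- is insert-in-progress until COMMIT.
    COMMIT;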

Re: Implementing standard SQL's DOMAIN constraint [RESOLVED]

2019-01-02 Thread Ron
On 1/2/19 12:05 PM, Rich Shepard wrote:
> On Wed, 2 Jan 2019, David G. Johnston wrote:
>> You add the create domain command once before any objects that make use of it.
> David, This is the answer I sought: postgres supports the create domain
> command. I did not see this in your first response.

Re: Implementing standard SQL's DOMAIN constraint [RESOLVED]

2019-01-02 Thread Rich Shepard
On Wed, 2 Jan 2019, David G. Johnston wrote:
> You add the create domain command once before any objects that make use of it.
David, This is the answer I sought: postgres supports the create domain command. I did not see this in your first response. Thanks very much, Rich

Re: Query planner / Analyse statistics bad estimate rows=1 with maximum statistics 10000 on PostgreSQL 10.2

2019-01-02 Thread Mark
Hi Tom, Thanks for your reply. Am I correct in my understanding that any row that has been modified (i.e. UPDATE) is in state HEAPTUPLE_INSERT_IN_PROGRESS so it will not be included in the sample? I'm going to rework the application so there is less time between the DELETE and the COMMIT, so I will narrow the window in which an ANALYZE can produce the bad estimate.

Re: Implementing standard SQL's DOMAIN constraint

2019-01-02 Thread David G. Johnston
On Wednesday, January 2, 2019, Rich Shepard wrote:
> On Wed, 2 Jan 2019, David G. Johnston wrote:
> I'm not following you. I have two tables each with a column,
> state_code char(2) NOT NULL.
That is a char(2) column for which '??' is a valid value. The fact that it is named state_code is immaterial; the domain that you created doesn't get used.

Re: Implementing standard SQL's DOMAIN constraint

2019-01-02 Thread Rich Shepard
On Wed, 2 Jan 2019, David G. Johnston wrote:
>> I'm not following you. I have two tables each with a column,
>> state_code char(2) NOT NULL.
> That is a char(2) column for which '??' is a valid value. The fact that it
> is named state_code is immaterial; the domain that you created doesn't get
> used.

Re: Implementing standard SQL's DOMAIN constraint

2019-01-02 Thread David G. Johnston
On Wednesday, January 2, 2019, Rich Shepard wrote:
> On Wed, 2 Jan 2019, David G. Johnston wrote:
>> There is no magic name logic involved. A domain is just a type with
>> inherent constraints that are user definable. You make use of it like any
>> other type.
>>
>> Create table tbl (
>>   column_name state_code not null
>> )

Re: Implementing standard SQL's DOMAIN constraint

2019-01-02 Thread Rich Shepard
On Wed, 2 Jan 2019, David G. Johnston wrote:
> There is no magic name logic involved. A domain is just a type with
> inherent constraints that are user definable. You make use of it like any
> other type.
>
> Create table tbl ( column_name state_code not null )
>
> Values stored in column_name are now of type state_code.

Re: Implementing standard SQL's DOMAIN constraint

2019-01-02 Thread David G. Johnston
On Wednesday, January 2, 2019, Rich Shepard wrote:
> CREATE DOMAIN state_code AS char(2)
>   DEFAULT '??'
>   CONSTRAINT valid_state_code
>     CHECK (value IN ('AL', 'AK', 'AZ', ...));
>
> This applies to all tables each having a column named state_code.
There is no magic name logic involved. A domain is just a type with inherent constraints that are user definable. You make use of it like any other type.

Implementing standard SQL's DOMAIN constraint

2019-01-02 Thread Rich Shepard
Happy New Year all, My readings taught me that standard SQL has a domain constraint that checks for the same valid characters in a column common to multiple tables. Example:
  CREATE DOMAIN state_code AS char(2)
    DEFAULT '??'
    CONSTRAINT valid_state_code
      CHECK (value IN ('AL', 'AK', 'AZ', ...));
This applies to all tables each having a column named state_code.
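A self-contained sketch of the domain in use (the two table names are illustrative assumptions):

    CREATE DOMAIN state_code AS char(2)
        DEFAULT '??'
        CONSTRAINT valid_state_code
            CHECK (VALUE IN ('AL', 'AK', 'AZ'));   -- abbreviated list

    -- The domain is used as a column type; the CHECK travels with the type.
    CREATE TABLE offices  (office_id  int, state state_code NOT NULL);
    CREATE TABLE contacts (contact_id int, state state_code NOT NULL);

    INSERT INTO offices VALUES (1, 'AZ');  -- passes the domain check
    INSERT INTO offices VALUES (2, 'ZZ');  -- fails: violates valid_state_code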

Re: Query planner / Analyse statistics bad estimate rows=1 with maximum statistics 10000 on PostgreSQL 10.2

2019-01-02 Thread Tom Lane
Mark writes:
> I have a long running job which deletes all of the common_student table and
> then repopulates it. It takes a long time to load all the other data and
> commit the transaction. I didn't think the delete inside the transaction
> would have any effect until it is committed or rolled back.

Re: Query planner / Analyse statistics bad estimate rows=1 with maximum statistics 10000 on PostgreSQL 10.2

2019-01-02 Thread Mark
Hi Ron, I tried my test_table example below, swapping 'BEGIN' for:
=> BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
It made no difference to the behaviour.
On Wed, 2 Jan 2019 at 15:45, Ron wrote:
> Try using SERIALIZABLE transactions instead of the default READ COMMITTED.

Re: Query planner / Analyse statistics bad estimate rows=1 with maximum statistics 10000 on PostgreSQL 10.2

2019-01-02 Thread Ron
Try using SERIALIZABLE transactions instead of the default READ COMMITTED.
On 1/2/19 9:28 AM, Mark wrote:
> Hi Ron, Yes, my process will commit the transaction (or roll it back)
> eventually. It's the window where one transaction has deleted all the rows
> (before committing) and an ANALYZE has run.

Re: Query planner / Analyse statistics bad estimate rows=1 with maximum statistics 10000 on PostgreSQL 10.2

2019-01-02 Thread Mark
Hi Ron, Yes, my process will commit the transaction (or roll it back) eventually. It's the window where one transaction has deleted all the rows (before committing) and an ANALYZE has run. The deleted rows won't make it into the sample even though the transaction has NOT been committed. During that window the planner works from the bad rows=1 estimate.
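A two-session sketch of that window (session labels are comments; the table and predicate come from the thread):

    -- Session A: the long-running reload, still open
    BEGIN;
    DELETE FROM common_student WHERE school_id = 36;
    -- ... slow repopulation work, not yet committed ...

    -- Session B: meanwhile, autovacuum or a manual ANALYZE samples the table
    ANALYZE common_student;
    -- The delete-in-progress rows are left out of the sample, so the planner
    -- can estimate rows=1 for this predicate until the next ANALYZE:
    EXPLAIN SELECT * FROM common_student WHERE school_id = 36;

    -- Session A, later:
    COMMIT;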

Re: Query planner / Analyse statistics bad estimate rows=1 with maximum statistics 10000 on PostgreSQL 10.2

2019-01-02 Thread Ron
But Jeff said "left open indefinitely without either committing or rolling back". Your process is committing the transaction.
On 1/2/19 6:15 AM, Mark wrote:
> Hi Jeff, Thanks for your help. That is exactly what is happening. I have a
> long running job which deletes all of the common_student table and then
> repopulates it.

Re: Query planner / Analyse statistics bad estimate rows=1 with maximum statistics 10000 on PostgreSQL 10.2

2019-01-02 Thread Mark
Hi Maxim, Thanks for your help. Jeff has pointed me in the right direction here: it seems that the rows=1 is due to a long-running transaction which deletes all of common_student for school_id 36 and then repopulates it. I was unaware that the delete inside the transaction would affect the VACUUM/ANALYZE statistics before being committed.

Re: Query planner / Analyse statistics bad estimate rows=1 with maximum statistics 10000 on PostgreSQL 10.2

2019-01-02 Thread Mark
Hi Jeff, Thanks for your help. That is exactly what is happening. I have a long running job which deletes all of the common_student table and then repopulates it. It takes a long time to load all the other data and commit the transaction. I didn't think the delete inside the transaction would have any effect until it is committed or rolled back.

Re: Thoughts on row-level security for webapps?

2019-01-02 Thread Achilleas Mantzios
On 31/12/18 6:57 p.m., Siegfried Bilstein wrote:
> Hi all, I'm evaluating a tool called Postgraphile that generates a GraphQL
> server from a postgres setup. The recommended way of handling security is
> to implement RLS within postgres and simply have the webserver take a
> cookie or similar and pass the user's identity on to the database.
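A minimal RLS sketch of that pattern; the posts table and the app.user_id setting are assumptions (the web layer would set the latter per request):

    CREATE TABLE posts (
        id       serial PRIMARY KEY,
        owner_id int    NOT NULL,
        body     text
    );

    ALTER TABLE posts ENABLE ROW LEVEL SECURITY;

    -- Rows are visible only to their owner. The webserver identifies the
    -- user from the cookie/JWT and sets, per request:
    --   SET LOCAL app.user_id = '42';
    CREATE POLICY posts_owner ON posts
        USING (owner_id = current_setting('app.user_id')::int);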

Re: [GENERAL] Incremental refresh - Materialized view

2019-01-02 Thread Nguyễn Trần Quốc Vinh
Dear all, We have some results on incremental updates for MVs. We generate triggers in C to do the incremental maintenance. We posted the code to GitHub about 1 year ago, but unfortunately I posted a not-quite-right version of the ctrigger.h header. The mistake was exposed to me when a person could not get it to work.