Re: using a function in where

2018-12-03 Thread Alban Hertroys


> On 3 Dec 2018, at 8:06, Glenn Schultz  wrote:
> 
> All,
> I am using the function below to convert a continuous variable to a binned
> value. Sometimes a value other than zero is passed through the query; for
> example, a result value of -.5 is passed to the query result. The basics of
> the query are below.
> 
> select
> incentivebin(wac, rate, .25)
> from
> my_table
> where incentivebin(was, rate, .25) = 0

> CREATE or REPLACE FUNCTION embs_owner.IncentiveBin(IN "Gwac" double 
> precision, 

> AS 'select ceiling(($1 - $2)/$3) *$3';

If the difference of (wac - rate) = -0.5, and you multiply that by 4 (divide by 
.25), ceil will round that correctly to -2. If you then divide by 4 again, you 
get -0.5 back (which is twice your bin size).

The result matches your function definition with the given parameters.
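
A quick sanity check of that arithmetic in SQL, using the constants from the
example:

SELECT ceiling((-0.5) / 0.25) * 0.25;  -- returns -0.50, i.e. twice the 0.25 bin size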


Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.




Re: postgis after pg_upgrade

2018-12-03 Thread Rene Romero Benavides
Are the PostGIS functions usable? Do they currently work? Are there any error
messages in the database server logs? Have you installed the PostGIS
binaries for Postgres 9.6? If you follow the advice given by Paul Ramsey,
what happens?

On Sun, Dec 2, 2018 at 14:24, Slavcho Trnkovski <strnkov...@gmail.com> wrote:

> Hi,
>
> This will not resolve the issue I have because the extension is already at the
> latest version, but it is using Postgres 9.4 and it should use 9.6.
>
> Regards,
> Slavcho
>
> On Fri, Nov 30, 2018 at 6:01 PM Paul Ramsey 
> wrote:
>
>> If you install the new version, and then use ‘ALTER EXTENSION UPDATE’ to
>> update the SQL-side bindings, everything should improve.
>>
>> On Nov 30, 2018, at 5:11 AM, Slavcho Trnkovski 
>> wrote:
>>
>> Hi,
>>
>> I have postgresql 9.4 with postgis extension installed (latest version,
>> 2.4.5). I upgraded postgresql from 9.4 to 9.6.
>> After upgrading to 9.6 I get the following result
>> from PostGIS_full_version()
>>  select PostGIS_full_version();
>>
>>  postgis_full_version
>>
>> ---
>>  POSTGIS="2.4.5 r16765" *PGSQL="94" (procs need upgrade for use with
>> "96") *GEOS="3.6.2-CAPI-1.10.2 4d2925d6" PROJ="Rel. 4.9.3, 15 August
>> 2016" GDAL="GDAL 1.11.4, released 2016/01/25" LIBXML="2.9.1" LIBJSON="0.11"
>> RASTER
>> (1 row)
>>
>> Is there any way to resolve this besides recreating the extension?
>>
>> Regards,
>> Slavcho
>>
>>
>>

-- 
Genius is 1% inspiration and 99% perspiration.
Thomas Alva Edison
http://pglearn.blogspot.mx/


Re: postgis after pg_upgrade

2018-12-03 Thread Adrian Klaver

On 12/1/18 8:56 AM, Slavcho Trnkovski wrote:

Hi,

This will not resolve the issue I have because extension is already to 
the latest version, but it is using postgres 9.4 and it should use 9.6.


Well, according to the output below:

POSTGIS="2.4.5 r16765"

and according to this:

http://postgis.net/source/

which currently lists postgis-2.4.6.tar.gz,

I am thinking it is not at the latest version.




Regards,
Slavcho

On Fri, Nov 30, 2018 at 6:01 PM Paul Ramsey wrote:


If you install the new version, and then use ‘ALTER EXTENSION
UPDATE’ to update the SQL-side bindings, everything should improve.


On Nov 30, 2018, at 5:11 AM, Slavcho Trnkovski
<strnkov...@gmail.com> wrote:

Hi,

I have postgresql 9.4 with postgis extension installed (latest
version, 2.4.5). I upgraded postgresql from 9.4 to 9.6.
After upgrading to 9.6 I get the following result
from PostGIS_full_version()
 select PostGIS_full_version();
 
                                 postgis_full_version


---
 POSTGIS="2.4.5 r16765" *PGSQL="94" (procs need upgrade for use
with "96") *GEOS="3.6.2-CAPI-1.10.2 4d2925d6" PROJ="Rel. 4.9.3, 15
August 2016" GDAL="GDAL 1.11.4, released 2016/01/25"
LIBXML="2.9.1" LIBJSON="0.11" RASTER
(1 row)

Is there any way to resolve this besides recreating the extension?

Regards,
Slavcho





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: How to watch for schema changes

2018-12-03 Thread Adrian Klaver

On 12/2/18 5:24 AM, Igor Korot wrote:

Hi, Adrian,
Sorry for the delay to come back to this. I was busy doing other things.

On Tue, Jul 3, 2018 at 12:32 PM Adrian Klaver  wrote:


On 07/03/2018 10:21 AM, Igor Korot wrote:

Hi, ALL,
Is there any trigger or some other means I can do on the server
which will watch for CREATE/ALTER/DROP TABLE command and after successful
execution of those will issue a NOTIFY statement?


https://www.postgresql.org/docs/10/static/event-triggers.html


So if I understand correctly, I should write the trigger for the event
I am interested in.
And in this trigger I write a little SQL that will write the DDL
command in some temporary table.

I'm just looking for a way to execute this trigger and a function from
my C++ code
on the connection (either ODBC or thru the libpq).


Event triggers are fired by database events, not by external prompting, 
so you do not need to have your code execute the trigger. You do have 
the option of disabling/enabling them, though:


https://www.postgresql.org/docs/10/sql-altereventtrigger.html
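
For example (hypothetical trigger name):

ALTER EVENT TRIGGER schema_watch DISABLE;
ALTER EVENT TRIGGER schema_watch ENABLE;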



And then in my C++ code I will continuously query this temporary table.


Why a temporary table? They are session specific and if the session 
aborts prematurely you will lose the information.




Or there is a better alternative on the algorithm?

Thank you.




Thank you.




--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com



GiST index on INT8, possible bug in query planner?

2018-12-03 Thread Jan Behrens
Dear colleagues,

I have developed two indices using PostgreSQL's awesome GiST support,
one of them available here:

http://www.public-software-group.org/pgLatLon

(which is a lightweight and MIT-licensed alternative to PostGIS for
certain simple tasks involving geographic coordinates on the WGS-84
spheroid)


Recently I had the requirement of creating a multi-column index on
an integer in the first column and a custom data type in the second
column of the index. Since integers are not supported by GiST indices
by default, I used the btree_gist extension by Teodor Sigaev,
Oleg Bartunov, Janko Richter, and Paul Jungwirth, see:
https://www.postgresql.org/docs/10/btree-gist.html

However, the GiST index does not seem to work as I expect when
64-bit integers are involved. I tried to create a minimal
proof-of-concept to demonstrate this. Consider the following setup:

CREATE EXTENSION btree_gist;

CREATE TABLE test4_btree (id SERIAL4, ctx INT4);
CREATE TABLE test8_btree (id SERIAL4, ctx INT8);
CREATE TABLE test4_gist (id SERIAL4, ctx INT4);
CREATE TABLE test8_gist (id SERIAL4, ctx INT8);

I create multi-column indices on all four tables, with "ctx" as primary
and "id" as secondary column:

CREATE INDEX ON test4_btree (ctx, id);
CREATE INDEX ON test8_btree (ctx, id);
CREATE INDEX ON test4_gist USING gist (ctx, id);
CREATE INDEX ON test8_gist USING gist (ctx, id);

Now we add some data:

INSERT INTO test4_btree (ctx) SELECT floor(random()*100)+1 FROM 
generate_series(1, 1);
INSERT INTO test8_btree (ctx) SELECT floor(random()*100)+1 FROM 
generate_series(1, 1);
INSERT INTO test4_gist (ctx) SELECT floor(random()*100)+1 FROM 
generate_series(1, 1);
INSERT INTO test8_gist (ctx) SELECT floor(random()*100)+1 FROM 
generate_series(1, 1);

Only the tables directly using the B-tree index ("test4_btree" and
"test8_btree") and the table where "ctx" is 32-bit wide seem to work
properly:

EXPLAIN SELECT * FROM test4_btree WHERE ctx = 1 AND id = 2;
-- uses Index Cond: ((ctx = 1) AND (id = 2))

EXPLAIN SELECT * FROM test8_btree WHERE ctx = 1 AND id = 2;
-- uses Index Cond: ((ctx = 1) AND (id = 2))

EXPLAIN SELECT * FROM test4_gist WHERE ctx = 1 AND id = 2;
-- uses Index Cond: ((ctx = 1) AND (id = 2))

EXPLAIN SELECT * FROM test8_gist WHERE ctx = 1 AND id = 2;
-- uses Index Cond: (id = 2)

The query planning for the select on table "test8_gist" does not
include "ctx" in the "Index Cond".


To verify that the above problem isn't just an optimization because of
a low row count, I created a larger example with different values:

CREATE EXTENSION btree_gist;

CREATE TABLE test4_btree (ctx INT4, src INT4);
CREATE TABLE test8_btree (ctx INT8, src INT4);
CREATE TABLE test4_gist (ctx INT4, src INT4);
CREATE TABLE test8_gist (ctx INT8, src INT4);

CREATE INDEX ON test4_btree (ctx, src);
CREATE INDEX ON test8_btree (ctx, src);
CREATE INDEX ON test4_gist USING gist (ctx, src);
CREATE INDEX ON test8_gist USING gist (ctx, src);

INSERT INTO test4_btree SELECT floor(random()*1)+1, floor(random()*2)+1 
FROM generate_series(1, 100);
INSERT INTO test8_btree SELECT floor(random()*1)+1, floor(random()*2)+1 
FROM generate_series(1, 100);
INSERT INTO test4_gist SELECT floor(random()*1)+1, floor(random()*2)+1 FROM 
generate_series(1, 100);
INSERT INTO test8_gist SELECT floor(random()*1)+1, floor(random()*2)+1 FROM 
generate_series(1, 100);

EXPLAIN SELECT count(1) FROM test4_btree WHERE ctx = 1 AND src = 2;
-- uses Index Cond: ((ctx = 1) AND (src = 2))

EXPLAIN SELECT count(1) FROM test8_btree WHERE ctx = 1 AND src = 2;
-- uses Index Cond: ((ctx = 1) AND (src = 2))

EXPLAIN SELECT count(1) FROM test4_gist WHERE ctx = 1 AND src = 2;
-- uses Index Cond: ((ctx = 1) AND (src = 2))

EXPLAIN SELECT count(1) FROM test8_gist WHERE ctx = 1 AND src = 2;
-- uses Index Cond: (src = 2)

ANALYZE;

EXPLAIN SELECT count(1) FROM test4_btree WHERE ctx = 1 AND src = 2;
-- uses Index Cond: ((ctx = 1) AND (src = 2))

EXPLAIN SELECT count(1) FROM test8_btree WHERE ctx = 1 AND src = 2;
-- uses Index Cond: ((ctx = 1) AND (src = 2))

EXPLAIN SELECT count(1) FROM test4_gist WHERE ctx = 1 AND src = 2;
-- uses Index Cond: ((ctx = 1) AND (src = 2))

EXPLAIN SELECT count(1) FROM test8_gist WHERE ctx = 1 AND src = 2;
-- does not use Index Cond at all, but Filter: ((ctx = 1) AND (src = 2))

SELECT count(1) FROM test4_btree WHERE ctx = 1 AND src = 2;  -- fast
SELECT count(1) FROM test8_btree WHERE ctx = 1 AND src = 2;  -- fast
SELECT count(1) FROM test4_gist WHERE ctx = 1 AND src = 2;  -- fast
SELECT count(1) FROM test8_gist WHERE ctx = 1 AND src = 2;  -- slow!

The query on "test8_gist" is significantly slower than in all other
three cases.


I wonder if this is a bug in the query planner, in the GiST facilities
of PostgreSQL, a problem of the "btree_gist" extension, or something
else? Can anyone help me?


Kind regards,
Jan Behrens


-- 
Public Software Group e. V.
Johannisstr. 12, 10117 Berlin, Germany

www.public-software-group.org

Re: How to watch for schema changes

2018-12-03 Thread Igor Korot
Hi, Adrian,

On Mon, Dec 3, 2018 at 9:17 AM Adrian Klaver  wrote:
>
> On 12/2/18 5:24 AM, Igor Korot wrote:
> > Hi, Adrian,
> > Sorry for the delay to come back to this. I was busy doing other things.
> >
> > On Tue, Jul 3, 2018 at 12:32 PM Adrian Klaver  
> > wrote:
> >>
> >> On 07/03/2018 10:21 AM, Igor Korot wrote:
> >>> Hi, ALL,
> >>> Is there any trigger or some other means I can do on the server
> >>> which will watch for CREATE/ALTER/DROP TABLE command and after successful
> >>> execution of those will issue a NOTIFY statement?
> >>
> >> https://www.postgresql.org/docs/10/static/event-triggers.html
> >
> > So if I understand correctly, I should write the trigger for the event
> > I am interested in.
> > And in this trigger I write a little SQL that will write the DDL
> > command in some temporary table.
> >
> > I'm just looking for a way to execute this trigger and a function from
> > my C++ code
> > on the connection (either ODBC or thru the libpq).
>
> Event triggers are fired by database events not by external prompting,
> so you do not need to have your code execute the trigger. You do have
> the option of disabling/enabling then though:
>
> https://www.postgresql.org/docs/10/sql-altereventtrigger.html
.
Yes, but the code for the event trigger needs to be written and then executed on
connection, right?

So, this is what I'm thinking:

[code]
ret = SQLExecute( m_hstmt, L"CREATE FUNCTION schema_watch_check()
RETURNS event_trigger AS schema_watch LANGUAGE C", SQL_NTS );
if( ret == SQL_SUCCESS || ret== SQL_SUCCESS_WITH_INFO )
{
ret = SQLExecute( m_hstmt, L"CREATE EVENT TRIGGER schema_watch ON
ddl_command_end EXECUTE PROCEDURE schema_watch_check()", SQL_NTS );
}
[/code]

and something to that extent on the libpq connection.

Am I missing something here?

Now the other question is - how do I make it work?
I can write the function code, compile it, and place it in the folder
where my executable is (or should it be in some PostgreSQL dir?), and
then executing the above code
will be enough. Is this correct?

>
> >
> > And then in my C++ code I will continuously query this temporary table.
>
> Why a temporary table? They are session specific and if the session
> aborts prematurely you will lose the information.

Is there a better alternative?

Thank you.

>
> >
> > Or there is a better alternative on the algorithm?
> >
> > Thank you.
> >
> 
> >>> Thank you.
> >>>
> >>
> >>
> >> --
> >> Adrian Klaver
> >> adrian.kla...@aklaver.com
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com



Query never completes with an OR condition

2018-12-03 Thread Kenneth Marshall
Hi,

I was investigating a performance problem and found a query that
never completes in a reasonable amount of time even though my
expectation is that it should. I am running version 9.6.6. I
do not see anything in the 9.6.* release notes that mention this
problem. The individual queries run as expected, but when the
OR condition is added, it never finishes. Here is the information
about the tables and queries. Any ideas about what could be happening
or how to debug it further would be appreciated.

Regards,
Ken





\d cachedgroupmembers
                          Table "public.cachedgroupmembers"
      Column       |   Type   |                        Modifiers
-------------------+----------+--------------------------------------------------------------------------
 id                | integer  | not null default nextval(('cachedgroupmembers_id_seq'::text)::regclass)
 groupid           | integer  | 
 memberid          | integer  | 
 via               | integer  | 
 immediateparentid | integer  | 
 disabled          | smallint | not null default 0
Indexes:
    "cachedgroupmembers_pkey" PRIMARY KEY, btree (id)
    "cachedgroupmembers1" btree (memberid, immediateparentid)
    "cachedgroupmembers4" btree (memberid, groupid, disabled)
    "disgroumem" btree (groupid, memberid, disabled)
    "shredder_cgm2" btree (immediateparentid, memberid)
    "shredder_cgm3" btree (via, id)

\d tickets
                                   Table "public.tickets"
     Column      |            Type             |                       Modifiers
-----------------+-----------------------------+---------------------------------------------------------------
 id              | integer                     | not null default nextval(('tickets_id_seq'::text)::regclass)
 effectiveid     | integer                     | not null default 0
 queue           | integer                     | not null default 0
 type            | character varying(16)       | 
 owner           | integer                     | not null default 0
 subject         | character varying(200)      | default '[no subject]'::character varying
 initialpriority | integer                     | not null default 0
 finalpriority   | integer                     | not null default 0
 priority        | integer                     | not null default 0
 timeestimated   | integer                     | not null default 0
 timeworked      | integer                     | not null default 0
 status          | character varying(64)       | 
 timeleft        | integer                     | not null default 0
 told            | timestamp without time zone | 
 starts          | timestamp without time zone | 
 started         | timestamp without time zone | 
 due             | timestamp without time zone | 
 resolved        | timestamp without time zone | 
 lastupdatedby   | integer                     | not null default 0
 lastupdated     | timestamp without time zone | 
 creator         | integer                     | not null default 0
 created         | timestamp without time zone | 
 ismerged        | smallint                    | 
 sla             | character varying(64)       | 
Indexes:
    "tickets_pkey" PRIMARY KEY, btree (id)
    "tickets1" btree (queue, status)
    "tickets2" btree (owner)
    "tickets3" btree (effectiveid)
    "tickets4" btree (status)
    "tickets_merged_helper" btree ((effectiveid - id))

EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(DISTINCT main.id) FROM Tickets main 
JOIN CachedGroupMembers CachedGroupMembers_1  ON ( 
CachedGroupMembers_1.Disabled = '0' ) AND ( CachedGroupMembers_1.MemberId = 
main.Owner ) JOIN CachedGroupMembers CachedGroupMembers_2  ON ( 
CachedGroupMembers_2.Disabled = '0' ) AND ( CachedGroupMembers_2.MemberId = 
main.Owner ) LEFT JOIN Groups Groups_3  ON ( LOWER(Groups_3.Domain) = 
'rt::ticket-role' ) AND ( Groups_3.Instance = main.id ) LEFT JOIN 
CachedGroupMembers CachedGroupMembers_4  ON ( CachedGroupMembers_4.Disabled = 
'0' ) AND ( CachedGroupMembers_4.MemberId = '579312' ) AND ( 
CachedGroupMembers_4.GroupId = Groups_3.id )  WHERE ( ( main.Queue IN ('3', 
'3', '7', '7', '4', '4', '6', '6', '18', '18', '1', '1', '8', '8', '17', '17', 
'11', '11', '15', '15', '23', '23', '20', '20', '22', '22', '21', '21', '12', 
'12', '14', '14', '19', '19', '10', '10', '5', '5', '9', '9', '28', '28', '38', 
'38', '30', '30', '34', '34', '37', '37', '31', '31', '32', '32', '36', '36', 
'33', '33', '35', '35', '41', '41', '40', '40', '42', '42', '43', '43', '39', 
'39', '51', '51', '52', '52', '53', '53', '61', '63', '63', '64', '66', '68', 
'69', '69', '71', '71', '72', '72', '73', '73', '70', '70', '75', '75', '81', 
'81', '82', '82', '83', '83', '84', '84', '86', '86', '87', '89', '89', '90', 
'90', '92', '92', '93', '93', '94', '94', '91', '24', '24', '95', '95', '97', 
'97', '98', '98', '103', '103

Re: GiST index on INT8, possible bug in query planner?

2018-12-03 Thread Tom Lane
Jan Behrens  writes:
> However, the GiST index seems not to work as expected by me when
> 64-bit integers are involved. I tried to create a minimal
> proof-of-concept to demonstrate this. Consider the following setup:
> CREATE TABLE test8_gist (id SERIAL4, ctx INT8);
> CREATE INDEX ON test8_gist USING gist (ctx, id);
> EXPLAIN SELECT * FROM test8_gist WHERE ctx = 1 AND id = 2;
> -- uses Index Cond: (id = 2)
> The query planning for the select on table "test8_gist" does not
> include "ctx" in the "Index Cond".

Probably it would if you'd written "WHERE ctx = 1::int8".  Without
the cast, what you'll have is "int8 = int4", and I suspect that
btree_gist doesn't include cross-type operators in its opclasses.

regards, tom lane



Re: GiST index on INT8, possible bug in query planner?

2018-12-03 Thread Jan Behrens
On Mon, 03 Dec 2018 11:47:17 -0500
Tom Lane  wrote:

> Jan Behrens  writes:
> 
> > However, the GiST index seems not to work as expected by me when
> > 64-bit integers are involved. I tried to create a minimal
> > proof-of-concept to demonstrate this. Consider the following setup:
> > 
> > CREATE TABLE test8_gist (id SERIAL4, ctx INT8);
> > CREATE INDEX ON test8_gist USING gist (ctx, id);
> > EXPLAIN SELECT * FROM test8_gist WHERE ctx = 1 AND id = 2;
> > -- uses Index Cond: (id = 2)
> > 
> > The query planning for the select on table "test8_gist" does not
> > include "ctx" in the "Index Cond".
> 
> Probably it would if you'd written "WHERE ctx = 1::int8".  Without
> the cast, what you'll have is "int8 = int4", and I suspect that
> btree_gist doesn't include cross-type operators in its opclasses.
> 
>   regards, tom lane

You are right! I just tested it and ::int8 does the job.
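
For the record, the working form of the query from the earlier setup is
(the exact Index Cond text may differ slightly by version):

EXPLAIN SELECT * FROM test8_gist WHERE ctx = 1::int8 AND id = 2;
-- uses Index Cond: ((ctx = '1'::bigint) AND (id = 2))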

It might be good to add a short notice or warning in the documentation
at https://www.postgresql.org/docs/current/btree-gist.html; that might
help other people who run into the same problem.


Thanks for helping me,
Jan Behrens

-- 
Public Software Group e. V.
Johannisstr. 12, 10117 Berlin, Germany

www.public-software-group.org
vorst...@public-software-group.org

registered in the register of associations
of the district court (Amtsgericht) Charlottenburg
Registration number: VR 28873 B

Board members (each authorized to represent the association individually):
Jan Behrens
Axel Kistner
Andreas Nitsche
Björn Swierczek



Re: How to watch for schema changes

2018-12-03 Thread Adrian Klaver

On 12/3/18 8:16 AM, Igor Korot wrote:

Hi, Adrian,

On Mon, Dec 3, 2018 at 9:17 AM Adrian Klaver  wrote:


On 12/2/18 5:24 AM, Igor Korot wrote:

Hi, Adrian,
Sorry for the delay to come back to this. I was busy doing other things.

On Tue, Jul 3, 2018 at 12:32 PM Adrian Klaver  wrote:


On 07/03/2018 10:21 AM, Igor Korot wrote:

Hi, ALL,
Is there any trigger or some other means I can do on the server
which will watch for CREATE/ALTER/DROP TABLE command and after successful
execution of those will issue a NOTIFY statement?


https://www.postgresql.org/docs/10/static/event-triggers.html


So if I understand correctly, I should write the trigger for the event
I am interested in.
And in this trigger I write a little SQL that will write the DDL
command in some temporary table.

I'm just looking for a way to execute this trigger and a function from
my C++ code
on the connection (either ODBC or thru the libpq).


Event triggers are fired by database events not by external prompting,
so you do not need to have your code execute the trigger. You do have
the option of disabling/enabling then though:

https://www.postgresql.org/docs/10/sql-altereventtrigger.html

.
Yes, but the code to the event triogger needs to be written and then executed on
connection, right?

So, this is what I'm thingking:

[code]
ret = SQLExecute( m_hstmt, L"CREATE FUNCTION schema_watch_check()
RETURNS event_trigger AS schema_watch LANGUAGE C", SQL_NTS );
if( ret == SQL_SUCCESS || ret== SQL_SUCCESS_WITH_INFO )
{
 ret = SQLExecute( m_hstmt, L"CREATE EVENT TRIGGER schema_watch ON
ddl_command_end EXECUTE PROCEDURE schema_watch_check()", SQL_NTS );
}
[/code]

and something to that extent on the libpq connection.

Am I missing something here?


Yes this:

https://www.postgresql.org/docs/9.6/sql-createeventtrigger.html

"CREATE EVENT TRIGGER creates a new event trigger. Whenever the 
designated event occurs and the WHEN condition associated with the 
trigger, if any, is satisfied, the trigger function will be executed. 
For a general introduction to event triggers, see Chapter 38. The user 
who creates an event trigger becomes its owner."


So event triggers are associated with 
events(www.postgresql.org/docs/9.6/event-trigger-matrix.html) in a 
particular database. A rough description is that they are triggers on 
changes to the system catalogs.
You could, I guess, create and drop them for each connection. To me it 
would seem more efficient to create them once. You then have the choice 
of leaving them running or using the ALTER command I posted previously 
to ENABLE/DISABLE them.




Now the other question is - how to make it work?
I can write the function code, compile it and place in the folder
where my executable is (or it should be in some postgreSQL dir?) and
then executing above code
will be enough. Is this correct?





And then in my C++ code I will continuously query this temporary table.


Why a temporary table? They are session specific and if the session
aborts prematurely you will lose the information.


Is there a better alternative?

Thank you.





Or there is a better alternative on the algorithm?

Thank you.




Thank you.




--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: How to watch for schema changes

2018-12-03 Thread Igor Korot
On Mon, Dec 3, 2018 at 11:29 AM Adrian Klaver  wrote:
>
> On 12/3/18 8:16 AM, Igor Korot wrote:
> > Hi, Adrian,
> >
> > On Mon, Dec 3, 2018 at 9:17 AM Adrian Klaver  
> > wrote:
> >>
> >> On 12/2/18 5:24 AM, Igor Korot wrote:
> >>> Hi, Adrian,
> >>> Sorry for the delay to come back to this. I was busy doing other things.
> >>>
> >>> On Tue, Jul 3, 2018 at 12:32 PM Adrian Klaver  
> >>> wrote:
> 
>  On 07/03/2018 10:21 AM, Igor Korot wrote:
> > Hi, ALL,
> > Is there any trigger or some other means I can do on the server
> > which will watch for CREATE/ALTER/DROP TABLE command and after 
> > successful
> > execution of those will issue a NOTIFY statement?
> 
>  https://www.postgresql.org/docs/10/static/event-triggers.html
> >>>
> >>> So if I understand correctly, I should write the trigger for the event
> >>> I am interested in.
> >>> And in this trigger I write a little SQL that will write the DDL
> >>> command in some temporary table.
> >>>
> >>> I'm just looking for a way to execute this trigger and a function from
> >>> my C++ code
> >>> on the connection (either ODBC or thru the libpq).
> >>
> >> Event triggers are fired by database events not by external prompting,
> >> so you do not need to have your code execute the trigger. You do have
> >> the option of disabling/enabling then though:
> >>
> >> https://www.postgresql.org/docs/10/sql-altereventtrigger.html
> > .
> > Yes, but the code to the event triogger needs to be written and then 
> > executed on
> > connection, right?
> >
> > So, this is what I'm thingking:
> >
> > [code]
> > ret = SQLExecute( m_hstmt, L"CREATE FUNCTION schema_watch_check()
> > RETURNS event_trigger AS schema_watch LANGUAGE C", SQL_NTS );
> > if( ret == SQL_SUCCESS || ret== SQL_SUCCESS_WITH_INFO )
> > {
> >  ret = SQLExecute( m_hstmt, L"CREATE EVENT TRIGGER schema_watch ON
> > ddl_command_end EXECUTE PROCEDURE schema_watch_check()", SQL_NTS );
> > }
> > [/code]
> >
> > and something to that extent on the libpq connection.
> >
> > Am I missing something here?
>
> Yes this:
>
> https://www.postgresql.org/docs/9.6/sql-createeventtrigger.html
>
> "CREATE EVENT TRIGGER creates a new event trigger. Whenever the
> designated event occurs and the WHEN condition associated with the
> trigger, if any, is satisfied, the trigger function will be executed.
> For a general introduction to event triggers, see Chapter 38. The user
> who creates an event trigger becomes its owner."
>
> So event triggers are associated with
> events(www.postgresql.org/docs/9.6/event-trigger-matrix.html) in a
> particular database. A rough description is that they are triggers on
> changes to the system catalogs.
> You could, I guess, create and drop them for each connection. To me it
> would seem more efficient to create them once. You then have the choice
> of leaving them running or using the ALTER command I posted previously
> to ENABLE/DISABLE them.

OK, so how do I do it?
There is no "CREATE EVENT TRIGGER IF NOT EXIST".

As I say - I'm trying to make it work from both ODBC and libpq
connection (one at a time).

>
> >
> > Now the other question is - how to make it work?
> > I can write the function code, compile it and place in the folder
> > where my executable is (or it should be in some postgreSQL dir?) and
> > then executing above code
> > will be enough. Is this correct?

Also - what about this?

And why did you say that saving the SQL command is not a good idea?

What's better?

Thank you.

> >
> >>
> >>>
> >>> And then in my C++ code I will continuously query this temporary table.
> >>
> >> Why a temporary table? They are session specific and if the session
> >> aborts prematurely you will lose the information.
> >
> > Is there a better alternative?
> >
> > Thank you.
> >
> >>
> >>>
> >>> Or there is a better alternative on the algorithm?
> >>>
> >>> Thank you.
> >>>
> >>
> > Thank you.
> >
> 
> 
>  --
>  Adrian Klaver
>  adrian.kla...@aklaver.com
> >>>
> >>
> >>
> >> --
> >> Adrian Klaver
> >> adrian.kla...@aklaver.com
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com



Re: How to watch for schema changes

2018-12-03 Thread Adrian Klaver

On 12/3/18 9:53 AM, Igor Korot wrote:

So event triggers are associated with
events(www.postgresql.org/docs/9.6/event-trigger-matrix.html) in a
particular database. A rough description is that they are triggers on
changes to the system catalogs.
You could, I guess, create and drop them for each connection. To me it
would seem more efficient to create them once. You then have the choice
of leaving them running or using the ALTER command I posted previously
to ENABLE/DISABLE them.


OK, so how do I do it?
There is no "CREATE EVENT TRIGGER IF NOT EXIST".

As I say - I'm trying to make it work from both ODBC and libpq
connection (one at a time)


Why? Just create the trigger once in a script. Event triggers are an 
attribute of the database and stay with it until they are dropped. If 
you want to turn them on and off, use ALTER EVENT TRIGGER 
ENABLE/DISABLE. If you insist on recreating them on each connection, then:


https://www.postgresql.org/docs/10/sql-dropeventtrigger.html
DROP EVENT TRIGGER [ IF EXISTS ] name [ CASCADE | RESTRICT ]
CREATE EVENT TRIGGER ...
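
Concretely, with the names from your earlier snippet, that per-connection
pattern would be something like:

DROP EVENT TRIGGER IF EXISTS schema_watch;
CREATE EVENT TRIGGER schema_watch ON ddl_command_end
EXECUTE PROCEDURE schema_watch_check();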







Now the other question is - how to make it work?
I can write the function code, compile it and place in the folder
where my executable is (or it should be in some postgreSQL dir?) and
then executing above code
will be enough. Is this correct?


Also - what about this?

And why did you say that saving the SQL commend is not a good idea.

What's better?


See above.



Thank you.







And then in my C++ code I will continuously query this temporary table.


Why a temporary table? They are session specific and if the session
aborts prematurely you will lose the information.


Is there a better alternative?

Thank you.





Or there is a better alternative on the algorithm?

Thank you.




Thank you.




--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: How to watch for schema changes

2018-12-03 Thread David G. Johnston
On Mon, Dec 3, 2018 at 10:59 AM Adrian Klaver  wrote:
> > As I say - I'm trying to make it work from both ODBC and libpq
> > connection (one at a time)

IIUC what is being proposed is:

Once, on the server, the DBA issues:
CREATE EVENT TRIGGER (the function it runs issues NOTIFY
'channel-name' - even if no one is listening at any given point in
time)

Upon connection your client application executes:
LISTEN 'channel-name'
And then handles notification events from the channel normally.

The client itself never issues CREATE EVENT - nor, frankly, should it.
It shouldn't have sufficient permissions to do something like that.
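
A minimal sketch of that arrangement (hypothetical function and channel
names; the server-side part is run once by the DBA):

-- server side, once (requires superuser):
CREATE OR REPLACE FUNCTION notify_ddl() RETURNS event_trigger AS $$
BEGIN
  PERFORM pg_notify('schema_changes', tg_tag);
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER schema_watch ON ddl_command_end
  EXECUTE PROCEDURE notify_ddl();

-- client side, on every connection:
LISTEN schema_changes;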

David J.



Re: Query never completes with an OR condition

2018-12-03 Thread Andrew Gierth
> "Kenneth" == Kenneth Marshall  writes:

 Kenneth> The individual queries run as expected, but when the OR
 Kenneth> condition is added, it never finishes.

http://blog.rhodiumtoad.org.uk/2017/01/22/performance-issues-with-ored-conditions/

-- 
Andrew (irc:RhodiumToad)



DROP CASCADE transitive dependencies

2018-12-03 Thread C GG
...PostgreSQL 9.5...

`DROP SCHEMA blah;` reports all the dependent objects and advises to `DROP
SCHEMA blah CASCADE;` ...

Will DROP ... CASCADE traverse the entire dependency tree for each of the
dependent objects (potentially dropping something unintended), or will it
stop at the first level and balk at any new transitive dependencies?

All the dependent objects listed by `DROP SCHEMA blah` are fair game to be
dropped, but if I add CASCADE I don't necessarily want to have it go beyond
that initial list without my say-so.

The alternative is to drop each one in the list individually without
CASCADE-- a tedious task I'd rather short-cut, but in a non-reckless manner
;) ...

CG


Re: Query never completes with an OR condition

2018-12-03 Thread Stephen Frost
Greetings,

* Andrew Gierth (and...@tao11.riddles.org.uk) wrote:
> > "Kenneth" == Kenneth Marshall  writes:
> 
>  Kenneth> The individual queries run as expected, but when the OR
>  Kenneth> condition is added, it never finishes.
> 
> http://blog.rhodiumtoad.org.uk/2017/01/22/performance-issues-with-ored-conditions/

There's been work to improve on this and which I know that I, at least,
would love to see make it into PostgreSQL:

https://commitfest.postgresql.org/19/1001/

Would be great if someone could look at that and try to help determine
if it's possible to ensure that the results are entirely correct.

Thanks!

Stephen




Re: DROP CASCADE transitive dependencies

2018-12-03 Thread Tom Lane
C GG  writes:
> ...PostgreSQL 9.5...
> `DROP SCHEMA blah;` reports all the dependent objects and advises to `DROP
> SCHEMA blah CASCADE;` ...

> Will DROP ... CASCADE traverse the entire dependency tree for each of the
> dependent objects (potentially dropping something unintended), or will it
> stop at the first level and balk at any new transitive dependencies?

The former.  However, the list of dependencies it's showing you as
potentially dropped already includes transitive dependencies; there
aren't going to be "new" ones unless somebody is adding things
concurrently.

If you're feeling paranoid, you could always do

begin;
drop ... cascade;

and then look at the reported list of objects before deciding whether
to commit or roll back.

regards, tom lane



Re: DROP CASCADE transitive dependencies

2018-12-03 Thread C GG
On Mon, Dec 3, 2018 at 1:26 PM Tom Lane  wrote:

> C GG  writes:
> > ...PostgreSQL 9.5...
> > `DROP SCHEMA blah;` reports all the dependent objects and advises to
> `DROP
> > SCHEMA blah CASCADE;` ...
>
> > Will DROP ... CASCADE traverse the entire dependency tree for each of the
> > dependent objects (potentially dropping something unintended), or will it
> > stop at the first level and balk at any new transitive dependencies?
>
> The former.  However, the list of dependencies it's showing you as
> potentially dropped already includes transitive dependencies; there
> aren't going to be "new" ones unless somebody is adding things
> concurrently.
>

That's good news!


>
> If you're feeling paranoid, you could always do
>
> begin;
> drop ... cascade;
>
> and then look at the reported list of objects before deciding whether
> to commit or roll back.
>

Me, paranoid? Yes. Yes I am.

So I did that--

data=# begin;
BEGIN
data=# DROP SCHEMA blah CASCADE;
NOTICE:  drop cascades to 278 other objects
DETAIL:  drop cascades to type blah.timeclock_compute_hours_type
...
and 178 other objects (see server log for list)
data=# rollback;
ROLLBACK
data=#

and I can't see any of the other 178 objects in the server log. I did see
all the deadlock reports because I had left the transaction hanging open
while I went rubbernecking. ;) Maybe my log level isn't detailed enough.

Also-- it is interesting to note that the list that I was shown when I
executed `DROP SCHEMA blah;` is only 100 objects long. So that tells me
that there's 178 other entries I'm not seeing. Where's that tin-foil hat?

Any suggestions for getting the names of the other 178 dependent objects?


>
> regards, tom lane
>

Thanks Tom. I don't say it enough: I _really_ appreciate you and your
consistent excellent contributions to PostgreSQL and to the PostgreSQL
community.


Re: DROP CASCADE transitive dependencies

2018-12-03 Thread Pavel Stehule
On Mon, Dec 3, 2018 at 20:07, C GG wrote:

>
>
> On Mon, Dec 3, 2018 at 1:26 PM Tom Lane  wrote:
>
>> C GG  writes:
>> > ...PostgreSQL 9.5...
>> > `DROP SCHEMA blah;` reports all the dependent objects and advises to
>> `DROP
>> > SCHEMA blah CASCADE;` ...
>>
>> > Will DROP ... CASCADE traverse the entire dependency tree for each of
>> the
>> > dependent objects (potentially dropping something unintended), or will
>> it
>> > stop at the first level and balk at any new transitive dependencies?
>>
>> The former.  However, the list of dependencies it's showing you as
>> potentially dropped already includes transitive dependencies; there
>> aren't going to be "new" ones unless somebody is adding things
>> concurrently.
>>
>
> That's good news!
>
>
>>
>> If you're feeling paranoid, you could always do
>>
>> begin;
>> drop ... cascade;
>>
>> and then look at the reported list of objects before deciding whether
>> to commit or roll back.
>>
>
> Me, paranoid? Yes. Yes I am.
>
> So I did that--
>
> data=# begin;
> BEGIN
> data=# DROP SCHEMA blah CASCADE;
> NOTICE:  drop cascades to 278 other objects
> DETAIL:  drop cascades to type blah.timeclock_compute_hours_type
> ...
> and 178 other objects (see server log for list)
> data=# rollback;
> ROLLBACK
> data=#
>
> and I can't see any of the other 178 objects in the server log. I did see
> all the deadlock reports because I had left the transaction hanging open
> while I went rubbernecking. ;) Maybe my log level isn't detailed enough.
>
> Also-- it is interesting to note that the list that I was shown when I
> executed `DROP SCHEMA blah;` is only 100 objects long. So that tells me
> that there's 178 other entries I'm not seeing. Where's that tin-foil hat?
>
> Any suggestions for getting the names of the other 178 dependent objects?
>

you can use some of the queries mentioned here:
https://wiki.postgresql.org/wiki/Pg_depend_display
https://dba.stackexchange.com/questions/78301/postgresql-dependencies-on-a-schema

The dependency information is stored in the pg_depend catalog, so you just
need to iterate over this table.
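
For example, a minimal sketch that lists the objects depending directly on a
schema (hypothetical schema name "blah"; recursing through pg_depend is needed
to follow transitive dependencies):

SELECT pg_describe_object(d.classid, d.objid, d.objsubid) AS dependent_object
FROM pg_depend d
WHERE d.refclassid = 'pg_namespace'::regclass
  AND d.refobjid = 'blah'::regnamespace;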

Regards

Pavel


>
>>
>> regards, tom lane
>>
>
> Thanks Tom. I don't say it enough: I _really_ appreciate you and your
> consistent excellent contributions to PostgreSQL and to the PostgreSQL
> community.
>
>


Re: DROP CASCADE transitive dependencies

2018-12-03 Thread Alvaro Herrera
On 2018-Dec-03, C GG wrote:

> data=# begin;
> BEGIN
> data=# DROP SCHEMA blah CASCADE;
> NOTICE:  drop cascades to 278 other objects
> DETAIL:  drop cascades to type blah.timeclock_compute_hours_type
> ...
> and 178 other objects (see server log for list)
> data=# rollback;
> ROLLBACK
> data=#
> 
> and I can't see any of the other 178 objects in the server log. I did see
> all the deadlock reports because I had left the transaction hanging open
> while I went rubbernecking. ;) Maybe my log level isn't detailed enough.

If you do DROP without cascade, the objects are going to be listed
in the DETAIL field of the ERROR, so you need to have
log_error_verbosity set to verbose or default; if you have it as terse,
they won't appear in the server log.

If you DROP with CASCADE, the objects would appear in a NOTICE message,
which may be below your log_min_messages.  Try with
SET LOCAL log_min_messages TO notice;
in the same transaction, just before the drop.
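
Putting that together with the BEGIN/ROLLBACK approach suggested earlier in
the thread, something like:

BEGIN;
SET LOCAL log_min_messages TO notice;  -- superuser only
DROP SCHEMA blah CASCADE;              -- the full object list is now logged as a NOTICE
ROLLBACK;                              -- nothing is actually dropped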

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: How to watch for schema changes

2018-12-03 Thread Igor Korot
Hi, Adrian,

On Mon, Dec 3, 2018 at 11:59 AM Adrian Klaver  wrote:
>
> On 12/3/18 9:53 AM, Igor Korot wrote:
> >> So event triggers are associated with
> >> events(www.postgresql.org/docs/9.6/event-trigger-matrix.html) in a
> >> particular database. A rough description is that they are triggers on
> >> changes to the system catalogs.
> >> You could, I guess, create and drop them for each connection. To me it
> >> would seem more efficient to create them once. You then have the choice
> >> of leaving them running or using the ALTER command I posted previously
> >> to ENABLE/DISABLE them.
> >
> > OK, so how do I do it?
> > There is no "CREATE EVENT TRIGGER IF NOT EXIST".
> >
> > As I say - I'm trying to make it work from both ODBC and libpq
> > connection (one at a time)
>
> Why? Just create the trigger once in a script. Event triggers are an
> attribute of the database and stay with it until they are dropped. If
> you want to turn then on and off use the ALTER  EVENT TRIGGER
> ENABLE/DISABLE. If you insist on recreating them on each connection then:
>
> https://www.postgresql.org/docs/10/sql-dropeventtrigger.html
> DROP EVENT TRIGGER [ IF EXISTS ] name [ CASCADE | RESTRICT ]
> CREATE EVENT TRIGGER ...

I was hoping to create software that is just "install-and-use":
no additional scripts to run or database changes required.

But I will probably create it on every connection and delete it on
disconnect (see above).

>
> >
> >>
> >>>
> >>> Now the other question is - how to make it work?
> >>> I can write the function code, compile it and place in the folder
> >>> where my executable is (or it should be in some postgreSQL dir?) and
> >>> then executing above code
> >>> will be enough. Is this correct?
> >
> > Also - what about this?
> >
> > And why did you say that saving the SQL commend is not a good idea.
> >
> > What's better?
>
> See above.
>
> >
> > Thank you.
> >
> >>>
> 
> >
> > And then in my C++ code I will continuously query this temporary table.
> 
>  Why a temporary table? They are session specific and if the session
>  aborts prematurely you will lose the information.
> >>>
> >>> Is there a better alternative?
> >>>
> >>> Thank you.
> >>>
> 
> >
> > Or there is a better alternative on the algorithm?
> >
> > Thank you.
> >
> 
> >>> Thank you.
> >>>
> >>
> >>
> >> --
> >> Adrian Klaver
> >> adrian.kla...@aklaver.com
> >
> 
> 
>  --
>  Adrian Klaver
>  adrian.kla...@aklaver.com
> >>>
> >>
> >>
> >> --
> >> Adrian Klaver
> >> adrian.kla...@aklaver.com
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com



Re: How to watch for schema changes

2018-12-03 Thread Alvaro Herrera
On 2018-Dec-03, Igor Korot wrote:

> But I will probably create it on every connection and delete on the
> disconnect (see above).

This sounds certain to create a mess eventually, when a connection drops
unexpectedly. (Also, what will happen to connections that run
concurrently with yours?)

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: How to watch for schema changes

2018-12-03 Thread Adrian Klaver

On 12/3/18 1:53 PM, Igor Korot wrote:

Hi, Adrian,




Why? Just create the trigger once in a script. Event triggers are an
attribute of the database and stay with it until they are dropped. If
you want to turn then on and off use the ALTER  EVENT TRIGGER
ENABLE/DISABLE. If you insist on recreating them on each connection then:

https://www.postgresql.org/docs/10/sql-dropeventtrigger.html
DROP EVENT TRIGGER [ IF EXISTS ] name [ CASCADE | RESTRICT ]
CREATE EVENT TRIGGER ...


I was hoping to create a software which will be just "install-and-use".
No additional script running or database changes is required.


Out of curiosity more than anything else:

The database schema this is running against never changes?

The triggers cannot be included in the initial database setup?




But I will probably create it on every connection and delete on the
disconnect (see above).




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: How to watch for schema changes

2018-12-03 Thread Adrian Klaver

On 12/3/18 1:53 PM, Igor Korot wrote:

Hi, Adrian,



Why? Just create the trigger once in a script. Event triggers are an
attribute of the database and stay with it until they are dropped. If
you want to turn then on and off use the ALTER  EVENT TRIGGER
ENABLE/DISABLE. If you insist on recreating them on each connection then:

https://www.postgresql.org/docs/10/sql-dropeventtrigger.html
DROP EVENT TRIGGER [ IF EXISTS ] name [ CASCADE | RESTRICT ]
CREATE EVENT TRIGGER ...


I was hoping to create a software which will be just "install-and-use".
No additional script running or database changes is required.


After I hit reply my subconscious kicked in and pointed out something:)

If there are no database changes why do you need to track schema changes?



But I will probably create it on every connection and delete on the
disconnect (see above).



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: How to watch for schema changes

2018-12-03 Thread Igor Korot
Hi, Adrian,

On Mon, Dec 3, 2018 at 4:10 PM Adrian Klaver  wrote:
>
> On 12/3/18 1:53 PM, Igor Korot wrote:
> > Hi, Adrian,
>
> >> Why? Just create the trigger once in a script. Event triggers are an
> >> attribute of the database and stay with it until they are dropped. If
> >> you want to turn then on and off use the ALTER  EVENT TRIGGER
> >> ENABLE/DISABLE. If you insist on recreating them on each connection then:
> >>
> >> https://www.postgresql.org/docs/10/sql-dropeventtrigger.html
> >> DROP EVENT TRIGGER [ IF EXISTS ] name [ CASCADE | RESTRICT ]
> >> CREATE EVENT TRIGGER ...
> >
> > I was hoping to create a software which will be just "install-and-use".
> > No additional script running or database changes is required.
>
> After I hit reply my subconscious kicked in and pointed out something:)
>
> If there are no database changes why do you need to track schema changes?

That was a bad word selection. ;-)

What I meant to say was "no schema changes/server changes that come
independently
of the program install". Or something to that effect.

Sorry, ESL person here.

Thank you.

>
> >
> > But I will probably create it on every connection and delete on the
> > disconnect (see above).
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com



Re: How to watch for schema changes

2018-12-03 Thread Adrian Klaver

On 12/3/18 3:00 PM, Igor Korot wrote:

Hi, Adrian,

On Mon, Dec 3, 2018 at 4:10 PM Adrian Klaver  wrote:


On 12/3/18 1:53 PM, Igor Korot wrote:

Hi, Adrian,



Why? Just create the trigger once in a script. Event triggers are an
attribute of the database and stay with it until they are dropped. If
you want to turn then on and off use the ALTER  EVENT TRIGGER
ENABLE/DISABLE. If you insist on recreating them on each connection then:

https://www.postgresql.org/docs/10/sql-dropeventtrigger.html
DROP EVENT TRIGGER [ IF EXISTS ] name [ CASCADE | RESTRICT ]
CREATE EVENT TRIGGER ...


I was hoping to create a software which will be just "install-and-use".
No additional script running or database changes is required.


After I hit reply my subconscious kicked in and pointed out something:)

If there are no database changes why do you need to track schema changes?


That was a bad word selection. ;-)

What I mean to say was "no schema changes/server changes that comes
independently
of the program install". Or something to that extent.


Which circles back around to the same question:

If there are to be no schema changes after the install why track them on 
subsequent connections?


Or maybe an explanation of what you are trying to achieve would make 
things clearer?






Sorry, ESL person here.

Thank you.





But I will probably create it on every connection and delete on the
disconnect (see above).



--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: How to watch for schema changes

2018-12-03 Thread Igor Korot
Hi, Adrian,

On Mon, Dec 3, 2018 at 5:15 PM Adrian Klaver  wrote:
>
> On 12/3/18 3:00 PM, Igor Korot wrote:
> > Hi, Adrian,
> >
> > On Mon, Dec 3, 2018 at 4:10 PM Adrian Klaver  
> > wrote:
> >>
> >> On 12/3/18 1:53 PM, Igor Korot wrote:
> >>> Hi, Adrian,
> >>
>  Why? Just create the trigger once in a script. Event triggers are an
>  attribute of the database and stay with it until they are dropped. If
>  you want to turn then on and off use the ALTER  EVENT TRIGGER
>  ENABLE/DISABLE. If you insist on recreating them on each connection then:
> 
>  https://www.postgresql.org/docs/10/sql-dropeventtrigger.html
>  DROP EVENT TRIGGER [ IF EXISTS ] name [ CASCADE | RESTRICT ]
>  CREATE EVENT TRIGGER ...
> >>>
> >>> I was hoping to create a software which will be just "install-and-use".
> >>> No additional script running or database changes is required.
> >>
> >> After I hit reply my subconscious kicked in and pointed out something:)
> >>
> >> If there are no database changes why do you need to track schema changes?
> >
> > That was a bad word selection. ;-)
> >
> > What I mean to say was "no schema changes/server changes that comes
> > independently
> > of the program install". Or something to that extent.
>
> Which circles back around to the same question:
>
> If there are to be no schema changes after the install why track them on
> subsequent connections?
>
> Or maybe an explanation of what you are trying to achieve would make
> things clearer?

OK, it probably will make things clearer.
So I install my program, perform some queries, and exit.
At the same time, if the user creates or deletes a table, the program should
pick up those changes and act accordingly.

I was hoping to do the watching initialization dynamically, but it looks like
it is more pain than it is worth, and I can probably execute the script during
the installation instead.

So basically I will write the function in C, compile it, and then
during the program installation
create a trigger with that function.
Then in my program I will execute the "LISTEN" command and act accordingly.

Am I right?

And executing LISTEN will also work over an ODBC connection, right?

Thank you.

>
>
>
> >
> > Sorry, ESL person here.
> >
> > Thank you.
> >
> >>
> >>>
> >>> But I will probably create it on every connection and delete on the
> >>> disconnect (see above).
> >>>
> >>
> >> --
> >> Adrian Klaver
> >> adrian.kla...@aklaver.com
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com



Re: How to watch for schema changes

2018-12-03 Thread Adrian Klaver

On 12/3/18 5:29 AM, Igor Korot wrote:

Hi, Adrian,

On Mon, Dec 3, 2018 at 5:15 PM Adrian Klaver  wrote:


On 12/3/18 3:00 PM, Igor Korot wrote:

Hi, Adrian,





What I mean to say was "no schema changes/server changes that comes
independently
of the program install". Or something to that extent.


Which circles back around to the same question:

If there are to be no schema changes after the install why track them on
subsequent connections?

Or maybe an explanation of what you are trying to achieve would make
things clearer?


Ok, it probably will make things clearer.
So I install my program perform some queries and exit.
At the same time if the user will create or delete a table the program should
pick up those changes and act accordingly.


So do you want the user to change the schema?

Or could you use permissions to stop that?

I am just seeing a difficult problem trying to divine user intent from 
schema changes.




I was hoping to do the watching initialization dynamically, but it looks as it
is more pain and I can probably try to execute the script during the
installation.

So basically I will create the function in C, compile it and then
during the program installation
create a trigger with that function.
Then in my program I will execute "LISTEN" command and act accordingly.

Am I right?


Not sure; I have never tried LISTEN on event trigger output.



And executing LISTEN will also work for ODBC connection, right?


Again not something I am familiar with. Someone else on this list may be 
able to answer that. Or there is:


https://www.postgresql.org/list/pgsql-odbc/



Thank you.







Sorry, ESL person here.

Thank you.





But I will probably create it on every connection and delete on the
disconnect (see above).



--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com