Re: Unexpected behavior with transition tables in update statement trigger

2018-02-27 Thread Tom Kazimiers

On Tue, Feb 27, 2018 at 03:58:14PM -0500, Tom Lane wrote:

Thomas Munro  writes:

Here's a new version with tuplestore_select_read_pointer() added in
another place where it was lacking, and a commit message.  Moving to
-hackers, where patches go.


Pushed, along with a regression test based on your example.
Unfortunately, this came in a bit too late for this week's releases :-(


Ah, so close. :-) Regardless, thanks to both of you for fixing this and 
committing the fix to master. I am looking forward to the release that 
includes it.


Cheers,
Tom



Re: Unexpected behavior with transition tables in update statement trigger

2018-02-27 Thread Tom Kazimiers

On Wed, Feb 28, 2018 at 10:27:23AM +1300, Thomas Munro wrote:

Tom K, if you need a workaround before 10.4 comes out in May[1], you
could try selecting the whole transition table into a CTE up front.
Something like WITH my_copy AS (SELECT * FROM new_table) SELECT * FROM
my_copy UNION ALL SELECT * FROM my_copy should work.

[1] https://www.postgresql.org/developer/roadmap/
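
Applied to the reproducer from earlier in this thread, the workaround would look roughly like this inside the trigger body (an untested sketch; new_test is the transition table name from the original example):

```sql
-- Untested sketch of the suggested workaround: copy the transition table
-- into a CTE once up front, then scan the CTE (rather than the named
-- tuplestore) as many times as needed.
WITH my_copy AS (
    SELECT * FROM new_test
)
SELECT row_to_json(a)::text FROM my_copy a
UNION ALL
SELECT ''
UNION ALL
SELECT row_to_json(b)::text FROM my_copy b;
```

Because the CTE is evaluated only once, both branches of the UNION ALL see all rows, sidestepping the shared read-pointer bug.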


Thanks, that's what I am going to do.

Cheers,
Tom



Updating PK and all FKs to it in CTE

2021-07-29 Thread Tom Kazimiers

Hi all,

I am on Postgres 13 and have a problem with updates in a CTE. While certainly
not generally recommended, I need to update the primary key in a table that is
referenced by a few other tables. The table definition is attached to the end
of this email [2]. I'd like to avoid dropping and recreating the constraints or
even columns, because these tables can become quite large. While I could define
the FK constraints as ON UPDATE CASCADE, I wonder why an alternative solution
using a CTE doesn't work:

According to [1], I should be able to update all FKs and the PK within a 
single CTE. My CTE looks like this (a few tables left out for readability; they
follow the same pattern):

WITH update_catmaid_deep_link AS (
UPDATE catmaid_deep_link
SET active_skeleton_id = 12
WHERE active_skeleton_id = 16150756
), update_class_instance_class_instance_a AS (
UPDATE class_instance_class_instance
SET class_instance_a = 12
WHERE class_instance_a = 16150756
),
[…]
), update_catmaid_skeleton_summary AS (
UPDATE catmaid_skeleton_summary
SET skeleton_id = 12
WHERE skeleton_id = 16150756
AND project_id = 1
)
UPDATE class_instance
SET id = 12
WHERE id = 16150756
AND project_id = 1;

However, when I try this, I still get an error about a conflict with a FK
constraint in a table updated in the CTE:

ERROR:  23503: update or delete on table "class_instance" violates foreign key constraint "catmaid_skeleton_summary_skeleton_id_fkey" on table "catmaid_skeleton_summary"
DETAIL:  Key (id)=(16150756) is still referenced from table "catmaid_skeleton_summary".


It seems like the updates performed earlier in the CTE weren't reflected when the constraint was checked (?).

As can be seen in the table definition [2], the primary key comes with an
INCLUDE statement. Is this potentially a problem? Is such an update maybe not
always possible without DDL?
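
For completeness: since all FK constraints in [2] are declared DEFERRABLE INITIALLY DEFERRED, I assume the checks could also simply be deferred to commit time by running plain statements in one transaction, e.g.:

```sql
-- Sketch (assumes the FK checks run at COMMIT because the constraints
-- are DEFERRABLE INITIALLY DEFERRED):
BEGIN;
UPDATE class_instance SET id = 12
    WHERE id = 16150756 AND project_id = 1;
UPDATE catmaid_skeleton_summary SET skeleton_id = 12
    WHERE skeleton_id = 16150756 AND project_id = 1;
-- ... same for the other referencing tables ...
COMMIT;  -- deferred FK checks fire here
```

I'd still prefer the single-statement CTE approach if it can be made to work, though.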

Thanks for any insight!

Cheers,
Tom

[1] https://stackoverflow.com/a/34383663/1665417
[2] Table definition:

# \d class_instance
                                      Table "public.class_instance"
    Column     |           Type           | Collation | Nullable |               Default
---------------+--------------------------+-----------+----------+-------------------------------------
 id            | bigint                   |           | not null | nextval('concept_id_seq'::regclass)
 user_id       | integer                  |           | not null |
 project_id    | integer                  |           | not null |
 creation_time | timestamp with time zone |           | not null | now()
 edition_time  | timestamp with time zone |           | not null | now()
 txid          | bigint                   |           | not null | txid_current()
 class_id      | bigint                   |           | not null |
 name          | character varying(255)   |           | not null |
Indexes:
    "class_instance_id_pkey" PRIMARY KEY, btree (id) INCLUDE (class_id, project_id)
    "class_instance_class_id" btree (class_id)
    "class_instance_name_trgm_idx" gin (name gin_trgm_ops)
    "class_instance_project_id" btree (project_id)
    "class_instance_upper_name_idx" btree (upper(name::text))
    "class_instance_user_id" btree (user_id)
Foreign-key constraints:
    "class_instance_class_id_fkey" FOREIGN KEY (class_id) REFERENCES class(id) DEFERRABLE INITIALLY DEFERRED
    "class_instance_project_id_fkey" FOREIGN KEY (project_id) REFERENCES project(id) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED
    "class_instance_user_id_fkey" FOREIGN KEY (user_id) REFERENCES auth_user(id) DEFERRABLE INITIALLY DEFERRED
Referenced by:
    TABLE "catmaid_deep_link" CONSTRAINT "catmaid_deep_link_active_skeleton_id_fkey" FOREIGN KEY (active_skeleton_id) REFERENCES class_instance(id) ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED
    TABLE "catmaid_sampler" CONSTRAINT "catmaid_sampler_skeleton_id_fkey" FOREIGN KEY (skeleton_id) REFERENCES class_instance(id) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED
    TABLE "catmaid_skeleton_summary" CONSTRAINT "catmaid_skeleton_summary_skeleton_id_fkey" FOREIGN KEY (skeleton_id) REFERENCES class_instance(id) DEFERRABLE INITIALLY DEFERRED
    TABLE "class_instance_class_instance" CONSTRAINT "class_instance_class_instance_class_instance_a_fkey" FOREIGN KEY (class_instance_a) REFERENCES class_instance(id) DEFERRABLE INITIALLY DEFERRED
    TABLE "class_instance_class_instance" CONSTRAINT "class_instance_class_instance_class_instance_b_fkey" FOREIGN KEY (class_instance_b) REFERENCES class_instance(id) DEFERRABLE INITIALLY DEFERRED
    TABLE "connector_class_instance" CONSTRAINT "connector_class_instance_class_instance_id_fkey" FOREIGN KEY (class_instance_id) REFERENCES class_instance(id) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED
    TABLE "point_class_instance" CONSTRAIN

Re: Updating PK and all FKs to it in CTE

2021-07-29 Thread Tom Kazimiers

On Thu, Jul 29, 2021 at 10:51:09AM -0400, Tom Lane wrote:

I tried to reproduce your problem and failed; the attached script
acts as I'd expect.  So there must be some moving part you've not
mentioned.  Can you create a similar self-contained example that
fails?


Thanks very much for trying this. I also failed to create a self-contained 
example, and just now realized what the problem is: a trigger function 
accidentally created an entry with the old ID (changing the primary key 
was not anticipated until now), which threw off the constraint check. I 
noticed this when running the update explicitly in a transaction and then 
inspecting the table that triggered the error; I also got the same error 
with ON UPDATE CASCADE. My apologies, it just took a while for me to 
realize where this entry came from and that it was neither the CTE nor ON 
UPDATE CASCADE causing this. It's all resolved now and I am glad the CTE 
behaves as expected.
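
For anyone finding this thread later, the pattern from [1] can be demonstrated with a minimal pair of hypothetical tables:

```sql
-- Minimal sketch of the CTE pattern (hypothetical parent/child tables):
-- all updates run as one statement, so the FK check performed at the end
-- of the statement already sees the new key on both sides.
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child (parent_id int REFERENCES parent(id));
INSERT INTO parent VALUES (16150756);
INSERT INTO child VALUES (16150756);

WITH fix_child AS (
    UPDATE child SET parent_id = 12 WHERE parent_id = 16150756
)
UPDATE parent SET id = 12 WHERE id = 16150756;
```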


Kind regards,
Tom




Get IEEE754 bytes of real and double types for msgpack encoding

2018-01-12 Thread Tom Kazimiers

Hi,

I am currently investigating whether I could have Postgres return its 
results in the msgpack [1] format, and it seems someone already did most 
of the work: Patrik Simek came up with encoder and decoder functions 
[2]. Unfortunately, they lack support for numbers of real and double 
precision type. To add support for these, real and double values need to 
be represented as their big-endian IEEE754 bytes [3]. Is there any way in 
Postgres to obtain these relatively quickly?


As an example, this is what [2] does to represent a 16-bit integer (here 
available as variable "_numeric"):


 _pack = E'\\315'::bytea
 || set_byte(E' '::bytea, 0, (_numeric::integer >> 8) & 255)
 || set_byte(E' '::bytea, 0, _numeric::integer & 255);
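
For real and double values, I suspect the binary "send" functions might already produce the needed bytes; a sketch (assuming float4send/float8send return the value's big-endian IEEE754 representation as bytea, and with _real/_double standing in for the values to encode, like _numeric above):

```sql
-- Sketch: msgpack float32 (prefix 0xca = octal 312) and float64
-- (prefix 0xcb = octal 313) per the float format family in [3],
-- assuming float4send/float8send emit big-endian IEEE754 bytes.
SELECT E'\\312'::bytea || float4send(_real)   AS packed_float32;
SELECT E'\\313'::bytea || float8send(_double) AS packed_float64;
```

I haven't verified whether the send functions are guaranteed to use network byte order on all platforms, though.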

Using the msgpack binary format would help me improve the performance of 
some of my heavier back-end calls: I have an aggregation table which acts 
more or less as a materialized view, where each data entry is currently 
stored as jsonb. Per client request I currently look up ~5 MB of data 
(mainly lists of numbers) and send it to the user's browser front-end.  
At the moment I encode msgpack in Python (which I use for my back-end), 
but some preliminary tests show that I could speed things up quite a bit 
by keeping the Postgres driver (psycopg2) from parsing the JSON result 
and by not having to convert to msgpack in Python myself.


Thanks,
Tom

[1] https://msgpack.org
[2] https://github.com/patriksimek/msgpack-postgres
[3] https://github.com/msgpack/msgpack/blob/master/spec.md#float-format-family



Unexpected behavior with transition tables in update statement trigger

2018-02-23 Thread Tom Kazimiers

Hi all,

I am on Postgres 10.2 and am trying to get a statement-level trigger to 
work that is executed after UPDATE statements on a particular table. The 
trigger references both the old and new transition table, and for some 
reason I am unable to reference either transition table multiple times in 
a CTE or subquery. E.g. forming a UNION ALL of the new transition table 
with itself only yields the new table's rows once. I don't understand why 
and would appreciate some insight.


My problem is probably better described with some SQL, so here is a 
little test setup:


 CREATE TABLE test (id  serial, data int);
 INSERT INTO test VALUES (0, 1);

 CREATE OR REPLACE FUNCTION on_edit() RETURNS trigger
 LANGUAGE plpgsql AS
 $$
 DECLARE
 tmp text;
 BEGIN
 WITH test AS (
 SELECT row_to_json(a)::text FROM new_test a
 UNION ALL
 SELECT ''
 UNION ALL
 SELECT row_to_json(b)::text FROM new_test b
 )
 SELECT array_to_string(array(SELECT row_to_json(t)::text FROM test t), ', ')::text INTO tmp;

 PERFORM pg_notify('update', tmp::text);

 WITH test AS (
 SELECT row_to_json(a)::text FROM new_test a
 UNION ALL
 SELECT ''
 UNION ALL
 SELECT row_to_json(b)::text FROM old_test b
 )
 SELECT array_to_string(array(SELECT row_to_json(t)::text FROM test t), ', ')::text INTO tmp;

 PERFORM pg_notify('update', tmp::text);

 RETURN NEW;
 END;
 $$;

 CREATE TRIGGER on_edit AFTER UPDATE ON test
 REFERENCING NEW TABLE AS new_test OLD TABLE as old_test
 FOR EACH STATEMENT EXECUTE PROCEDURE on_edit();

 LISTEN update;

 UPDATE test SET data = 2;

This creates a new table test with one entry, adds the statement-level 
trigger, registers a NOTIFY listener, and updates the table. The trigger 
first sends a NOTIFY with the result of a UNION ALL of the new transition 
table with itself. The second NOTIFY carries the result of the UNION ALL 
of the new and old transition tables as payload. This is the output:


 Asynchronous notification "update" with payload "{"id":0,"data":2}, " received from server process with PID 6695.
 Asynchronous notification "update" with payload "{"id":0,"data":2}, , {"id":0,"data":1}" received from server process with PID 6695.

I would have expected the first line to be

 Asynchronous notification "update" with payload "{"id":0,"data":2}, , {"id":0,"data":2}" received from server process with PID 6695.

Why isn't it? It's the same result with a subquery. What am I overlooking 
here?


Cheers,
Tom



Re: Unexpected behavior with transition tables in update statement trigger

2018-02-26 Thread Tom Kazimiers

Hi Thomas,

On Mon, Feb 26, 2018 at 11:15:44PM +1300, Thomas Munro wrote:

On Sat, Feb 24, 2018 at 4:47 PM, Tom Kazimiers  wrote:
Thanks for the reproducer.  Yeah, that seems to be a bug.
nodeNamedTuplestorescan.c allocates a new read pointer for each
separate scan of the named tuplestore, but it doesn't call
tuplestore_select_read_pointer() so that the two scans that appear in
your UNION ALL plan are sharing the same read pointer.  At first
glance the attached seems to fix the problem, but I'll need to look
more carefully tomorrow.


Thanks very much for investigating this. I can confirm that applying 
your patch results in the tuples I expected in both my test trigger and 
my actual trigger function.


It would be great if this or a similar fix made it into the next 
official release.


Cheers,
Tom