One of the issues I have run into in migrating from Oracle to PostgreSQL is the 
difference in temporary table behavior.  I've come up with what I think is a 
cunning plan, but I wonder if anyone with more PostgreSQL understanding can 
shoot holes in it.

Oracle defines its temporary tables statically, such that the first use in a 
session instantiates a private instance of the table.  PostgreSQL, by contrast, 
makes the temporary table definition itself session-local, which means you need 
to jump through flaming hoops in order to be able to use temporary tables in 
stored procedures and functions.
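For anyone unfamiliar with the difference, a rough sketch of the two models 
(the table and column names are just illustrative):

```sql
-- Oracle: the definition is permanent and shared; only the data is private.
-- Each session that touches ORDERS_TMP sees its own invisible set of rows.
CREATE GLOBAL TEMPORARY TABLE ORDERS_TMP (ID INTEGER) ON COMMIT DELETE ROWS;

-- PostgreSQL: the definition itself is session-local, and must be created
-- (typically at the start of each session or transaction) before use.
CREATE TEMPORARY TABLE ORDERS_TMP (ID INTEGER) ON COMMIT DROP;
```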

My idea was to create a "prototype" table (which will never have any rows in 
it) and, when I need to use the temporary table, create the temporary table 
such that it hides the prototype.  I have a simple example that seems to work, 
although before I invest more effort in using it for real work, I'd like to get 
a second opinion as to whether this approach is going to fail in a more complex 
scenario.


-- Note: ON COMMIT DROP only takes effect inside a transaction block, so this
-- script assumes autocommit is off (e.g. \set AUTOCOMMIT off in psql);
-- otherwise each temporary table below would be dropped as soon as it was
-- created.
drop function if exists bar;
drop table if exists foo;

-- Prototype table: NOT NULL plus CHECK (X IS NULL) guarantees it can never
-- hold a row, so any data seen through FOO must come from a temporary table
-- that hides it.
create table FOO(X INTEGER not null constraint PTCC_FOO check (X is NULL));
commit;

create function bar() returns INTEGER
as
$$
DECLARE
   result INTEGER;
begin
    select SUM(X) into result from FOO ;
    return result;
end;
$$ language plpgsql;

commit;

create temporary table FOO(X INTEGER) on commit drop;
alter table FOO add constraint PK_FOO primary KEY(X);
insert into FOO(X) VALUES(1);
insert into FOO(X) VALUES(2);
insert into FOO(X) VALUES(3);

select 'Expect to see 6' as Expectation, BAR();
commit;


create temporary table FOO(X INTEGER) on commit drop;
alter table FOO add constraint PK_FOO primary key (X);
insert into FOO(X) VALUES(1);
insert into FOO(X) VALUES(2);
insert into FOO(X) VALUES(3);
insert into FOO(X) VALUES(4);
select 'Expect to see 10' as Expectation, BAR();
rollback;

create temporary table FOO(X INTEGER) on commit drop;
insert into FOO(X) VALUES(1);
insert into FOO(X) VALUES(2);
select 'Expect to see 3' as Expectation, BAR();
commit;


What I am not clear on is what the rules are as to when a function/procedure is 
effectively recompiled.  Is there a danger that, given that the temporary table 
is created per session, one session might see another session's data because 
the function has effectively compiled a particular incarnation of the temporary 
table into its cached plan?
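For reference, the scenario I'm worried about would look like this (a sketch; 
the session labels are illustrative):

```sql
-- Session A:
create temporary table FOO(X INTEGER) on commit drop;
insert into FOO(X) values (1), (2);
select BAR();   -- should see only session A's private rows (3)

-- Session B, concurrently:
create temporary table FOO(X INTEGER) on commit drop;
insert into FOO(X) values (100);
select BAR();   -- the worry: could a cached plan make this see A's rows?
```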


While this approach does have the disadvantage of requiring the application to 
define the temporary table before using it (which could be as simple as using 
`CREATE TEMPORARY TABLE ... AS SELECT * FROM prototype_table`), it seems 
simpler and potentially more performant than the approach I found here: 
https://www.codeproject.com/Articles/1176045/Oracle-style-global-temporary-tables-for-PostgreSQ
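Concretely, the per-transaction setup could be a one-liner like this (a sketch, 
assuming the prototype lives in the public schema; LIKE copies the column 
definitions and not-null constraints but, by default, not the prototype's CHECK 
constraint):

```sql
-- Create the session-private table that hides the prototype; the check
-- constraint that keeps the prototype empty is NOT copied by default.
CREATE TEMPORARY TABLE FOO (LIKE public.FOO) ON COMMIT DROP;
```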

It is also in direct opposition to this post I found: 
https://www.cybertec-postgresql.com/en/using-temporary-tables-the-way-they-should-not-be-used/

So far, as long as I don't read or write the permanent table, I have not found 
a case where I get the wrong results from the above approach.  It also lets me 
minimize the impact on my application: basically, at the start of any 
transaction that might need a certain temporary table, I need to manually 
create it.  The number of places where I would need to do that is relatively 
small, so I'm willing to take that hit in exchange for not having to use 
dynamic SQL to refer to my temporary tables.
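One way to keep each of those call sites down to a single line would be a 
helper along these lines (the function name is hypothetical; IF NOT EXISTS 
makes it safe to call more than once in the same transaction):

```sql
-- Hypothetical convenience wrapper: ensure the session-private FOO exists,
-- hiding the (permanently empty) prototype in public.
create or replace function ensure_temp_foo() returns void
as
$$
begin
    create temporary table if not exists FOO (like public.FOO) on commit drop;
end;
$$ language plpgsql;

-- At the start of any transaction that needs FOO:
-- select ensure_temp_foo();
```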
