On Sat, Nov 23, 2019 at 4:47 PM Tom Lane <t...@sss.pgh.pa.us> wrote:

> Note that you pay a fairly substantial performance penalty for deferring
> the check, which is why it isn't the default, even though the SQL spec
> says it ought to be.
>

Do you know what the worst-case scenario is for the performance of
deferring the check to the end of the statement (with deferrable initially
immediate)?  Upon testing, I get a penalty of 2 to 5%, which seems pretty
small, but I might not be testing the most adverse situation.  See attached.
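
I suspect a more adverse case would be a statement where every row actually
creates a transient duplicate, so the end-of-statement recheck has to fire
for every row instead of being skipped.  Something along these lines (just a
sketch, not benchmarked):

psql <<'END'
create temp table jj_consec (jj bigint unique deferrable initially immediate);
insert into jj_consec select generate_series(1,100000);
-- in heap order, each updated row collides with its not-yet-updated
-- neighbor, so every row should take the recheck path; the statement
-- still succeeds because the values are unique again at statement end
update jj_consec set jj = jj + 1;
END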

The main "cost" that prevents from using DII routinely is that they can't
receive foreign key constraints.
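
For example, this is rejected outright (a minimal sketch against the jj_dii
table from the attached script; the point is just that the foreign key can't
reference the deferrable unique constraint):

psql <<'END'
-- expected to fail: PostgreSQL will not let a foreign key reference
-- a deferrable unique constraint
create table jj_child (jj bigint references jj_dii (jj));
END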

Cheers,

Jeff
psql <<'END'
create table jj_dii (id bigserial primary key, jj bigint unique deferrable initially immediate);
create table jj_did (id bigserial primary key, jj bigint unique deferrable initially deferred);
create table jj_nd (id bigserial primary key, jj bigint unique not deferrable);
insert into jj_nd select x,x::bigint*1000000 from generate_series(1,100000) f(x);
insert into jj_did select x,x::bigint*1000000 from generate_series(1,100000) f(x);
insert into jj_dii select x,x::bigint*1000000 from generate_series(1,100000) f(x);
END

for f in `seq 1 200`; do
  for mode in `shuf -e nd dii`; do
    echo "JJ $f $mode"
    pgbench -T30 -f <(echo "update jj_$mode set jj=jj+1") -n -M prepared
  done
done
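
To pull the numbers out afterwards, pairing each label with its tps line is
enough (assuming the loop's output was captured in a file, here called
bench.log):

grep -E '^(JJ|tps)' bench.log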
