Hello!

I am trying to keep the underlying sequence of a SERIAL column in sync by 
using a RETURNING clause in my INSERT statement: it checks whether the 
inserted column value is greater than or equal to the last_value of the 
sequence, and resets the sequence with setval() if needed.

When several client processes run this INSERT statement concurrently, I get 
duplicate primary key errors.

The SQL code of my INSERT looks like this:

        insert into tab1 (name) values ('aaa')
            returning tab1.pkey,
                      (select case
                         when tab1.pkey >= (select last_value
                                            from public.tab1_pkey_seq)
                              then setval('public.tab1_pkey_seq', tab1.pkey, true)
                         else 0
                       end)

To me, something is wrong with PostgreSQL's behavior here:

The whole INSERT statement (including the code in the RETURNING clause) 
should execute atomically.

Obviously, if several processes execute that code in parallel, and the 
select last_value / setval() calls get interleaved, we are in trouble...
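
As an illustration, here is one way I imagine the reads and writes could 
interleave between two sessions (session labels and values are made up, not 
taken from an actual trace):

        -- Session A: reads last_value = 50 from public.tab1_pkey_seq
        -- Session B: nextval() hands out 51 and commits a row with pkey = 51
        -- Session A: its pkey (50) >= the last_value it read (50),
        --            so it calls setval('public.tab1_pkey_seq', 50, true)
        -- Next nextval() returns 51 again -> duplicate primary key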

Can someone confirm this is a PostgreSQL bug, or help me find the right 
code to avoid the issue?

To reproduce:

  1.  Create the tab1 table and stored procedure for testing with 
inserts-tab1.sql
  2.  Run several psql processes in parallel, each calling the stored 
procedure: sh ./start2.sh (Linux/bash)
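
For reference, a minimal sketch of the table I am testing against (the 
actual DDL is in the attached inserts-tab1.sql; any column beyond pkey is 
just for illustration):

        create table tab1 (
            pkey serial primary key,  -- backed by sequence public.tab1_pkey_seq
            name text
        );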

Thanks!

Seb

Attachment: inserts-tab1.sql

Attachment: call-proc.sql

Attachment: start2.sh
