On 8/12/06, Jorge Godoy <[EMAIL PROTECTED]> wrote:
I was trying to solve a problem on an old system and realized that there might
be some better approach for doing what I need.

We have some documents that need to be ordered sequentially and without gaps.
I could use a sequence, but if the transaction fails and rolls back, the
sequence will already have been incremented.

So, today I have a control table and I acquire a SHARE ROW EXCLUSIVE lock on
it, read the value, increase it, do what I need and then COMMIT the
transaction, ensuring that the sequence has no gaps.

Is there a better way to guarantee that there will be no gaps in my sequence
if something goes wrong with my transaction?
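For reference, I read your current approach as something like this (just a
sketch; the doc_counter table and its last_value column are hypothetical
names):

BEGIN;
-- SHARE ROW EXCLUSIVE conflicts with itself, so concurrent writers
-- serialize on this lock:
LOCK TABLE doc_counter IN SHARE ROW EXCLUSIVE MODE;
SELECT last_value + 1 AS next_value FROM doc_counter;  -- read the counter
UPDATE doc_counter SET last_value = last_value + 1;    -- increase it
-- ...create the document using next_value...
COMMIT;  -- a ROLLBACK anywhere above undoes the update, so no gap appears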

Hmm, I would do it this way:

-- First prepare a table for keeping gapless sequence, say:
CREATE TABLE gapless_seq (
   gseq_name varchar(256) PRIMARY KEY,
   gseq_value integer NOT NULL
);
-- ...and populate it:
INSERT INTO gapless_seq VALUES ('tax_id', 0);  -- start at 0 so the first value handed out is 1

-- then create a function to retrieve the values:
CREATE FUNCTION gseq_nextval(t text) RETURNS integer AS $$
   DECLARE
      n integer;
   BEGIN
      -- Lock the counter row; concurrent callers queue up here until
      -- this transaction commits or rolls back.
      SELECT INTO n gseq_value + 1 FROM gapless_seq
         WHERE gseq_name = t FOR UPDATE;
      IF n IS NULL THEN
         RAISE EXCEPTION 'unknown gapless sequence "%"', t;
      END IF;
      UPDATE gapless_seq SET gseq_value = n WHERE gseq_name = t;
      RETURN n;
   END;
$$ LANGUAGE plpgsql VOLATILE;
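
-- You can sanity-check it by hand before wiring it into a table
-- (values assume the counter was seeded with 0 above):
SELECT gseq_nextval('tax_id');    -- returns 1
SELECT gseq_nextval('tax_id');    -- returns 2
SELECT gseq_nextval('no_such');   -- raises the exception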

-- ...and use it as the default in a table definition:
CREATE TABLE taxdata (
   tax_id integer PRIMARY KEY DEFAULT gseq_nextval('tax_id'),
   customer text,
   "when" timestamptz  -- WHEN is a reserved word, so it must be quoted
);
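
-- Inserts then pick up the gapless value through the column default
-- (sample data made up):
INSERT INTO taxdata (customer, "when") VALUES ('ACME', now());
-- tax_id is assigned the next gapless value automatically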

...etc.  SELECT ... FOR UPDATE would ensure a row lock on the "gapless
sequence" row, the PL/pgSQL function makes a nice wrapper for it (so it is
usable more or less like a real sequence), and it should work.
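
The gapless property follows from the locking: a second transaction calling
gseq_nextval() blocks on the FOR UPDATE until the first one finishes, and a
rollback also undoes the UPDATE on gapless_seq, so the same value is handed
out again.  For example:

BEGIN;
INSERT INTO taxdata (customer, "when") VALUES ('Foo', now());
-- ...something goes wrong...
ROLLBACK;  -- the counter update is rolled back too, so no gap is left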

I haven't tested this exact code, but I wrote something similar some time
ago and it worked fine.  Remember to VACUUM the gapless_seq table frequently
(every call leaves a dead row version behind), and don't expect stellar
performance from it: all inserts serialize on the counter row.

  Regards,
     Dawid
