Hi,

Could you please give me some hints how to optimize this in my DB? I have a 
main table and a lookup table as follows:

create table lookup (
  item_id bigserial primary key,
  item_name text unique not null
);

create table main (
  id bigserial primary key,
  item_id bigint references lookup (item_id)
);

When a new item arrives (in a temp table), I check whether it is already in
the lookup table and insert it if not. I do this in a trigger function using
the following snippet:
------
  begin
    insert into lookup values (default, NEW.item_name)
      returning item_id into itid;
  exception
    when unique_violation then
      select item_id into itid from lookup
        where item_name = NEW.item_name;
  end;
  NEW.item_id := itid;
------
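For context, the snippet sits inside a BEFORE INSERT trigger function roughly
like the following (the function and trigger names here are just placeholders,
not the real ones):

  create or replace function resolve_item_id() returns trigger as $$
  declare
    itid bigint;
  begin
    begin
      -- optimistic insert; the unique constraint catches duplicates
      insert into lookup values (default, NEW.item_name)
        returning item_id into itid;
    exception
      when unique_violation then
        -- item already exists, look up its id instead
        select item_id into itid from lookup
          where item_name = NEW.item_name;
    end;
    NEW.item_id := itid;
    return NEW;
  end;
  $$ language plpgsql;

  create trigger main_resolve_item
    before insert on main
    for each row execute procedure resolve_item_id();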

The problem is that even when the insert fails with a unique violation, the
bigserial's sequence has already been advanced (nextval() is called before the
constraint check, and sequence values are never rolled back), so I burn
through the bigint IDs much faster than one per distinct item. That's a lot
of waste for 100m+ records...

An example result for the main table where the second item arrives at the 4th 
record:

id | item_id
----------------
1 | 1
2 | 1
3 | 1
4 | 4
5 | 5
...

the lookup table becomes:

item_id | item_name
----------------------------
1 | apple
4 | orange
5 | banana
...
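The gaps can be reproduced directly in psql; each failed insert still consumes
a sequence value even though the row is rolled back (the comments show what I
observe on my setup):

  insert into lookup values (default, 'apple');   -- succeeds, item_id = 1
  insert into lookup values (default, 'apple');   -- fails, but sequence -> 2
  insert into lookup values (default, 'apple');   -- fails, sequence -> 3
  insert into lookup values (default, 'orange');  -- succeeds, item_id = 4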

Any thoughts?

Thanks,
Mark

PS: I'd like to keep the unique constraint on item_name, because it makes the
insert check fast and simple.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
