Tom Lane <tgl <at> sss.pgh.pa.us> writes:
> I was reminded again today of the problem that once a database has been
> in existence long enough for the OID counter to wrap around, people will
> get occasional errors due to OID collisions, eg
>
> http://archives.postgresql.org/pgsql-general/2005-08/msg00172.php
>
I'm a coworker of the original reporter. I tracked down the cause of the
toast-table unique constraint violation to the use of OIDs as chunk_id.
After the OID wraparound, it tried to use an already-used chunk_id. That
table has lots of toast records, which greatly increases the probability of
a collision in the current section of the OID counter.

> Getting rid of OID usage in user tables doesn't really do a darn thing
> to fix this.  It may delay wrap of the OID counter, but it doesn't stop
> it; and what's more, when the problem does happen it will be more
> serious (because the OIDs assigned to persistent objects will form a
> more densely packed set, so that you have a greater chance of collisions
> over a shorter time period).
>
> We've sort of brushed this problem aside in the past by telling people
> they could just retry their transaction ... but why don't we make the
> database do the retrying?  I'm envisioning something like the attached
> quick-hack, which arranges that the pg_class and pg_type rows for tables
> will never be given OIDs duplicating an existing entry.  It basically
> just keeps generating and discarding OIDs until it finds one not in the
> table.  (This will of course not work for user-table OIDs, since we
> don't necessarily have an OID index on them, but it will work for all
> the system catalogs that have OIDs.)

This will also be needed for toast tables. They have the necessary index.

- Ian

---------------------------(end of broadcast)---------------------------
TIP 6: explain analyze is your friend