> Possible drawbacks:
>
> * The ids will grow faster, and they will be large even on small
> tables. It may be a bit irritating if you have a table with just 5
> rows and the ids are 5, 6, 7, 12654, 345953 (see the sketch after
> this list).
> * Bottleneck? Using a single sequence was said to be a performance
> bottleneck.
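>
> For illustration, here is a minimal sketch of the shared-sequence setup
> that produces such sparse ids (the sequence and table names are made up):
>
>     -- one global sequence serving as the id default for every table
>     CREATE SEQUENCE global_id_seq;
>
>     CREATE TABLE a (id bigint PRIMARY KEY DEFAULT nextval('global_id_seq'));
>     CREATE TABLE b (id bigint PRIMARY KEY DEFAULT nextval('global_id_seq'));
>
>     -- inserts into a and b draw ids from the same sequence, so a table
>     -- with five rows can end up with ids like 5, 6, 7, 12654, 345953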
>
>
> Interesting. Can you nail down the software versions that were in
> use here? That'd be the old PG server version you upgraded from, the
> new server version you upgraded to, and the versions of pg_upgrade
> and pg_dump (these probably should match the new server version, but
> I'm not certain we enforce that).
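>
> (For reference, the server versions can be read with a one-liner; the
> client tool versions come from pg_dump --version and pg_upgrade
> --version on the shell:)
>
>     -- run on both the old and the new cluster
>     SELECT version();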
>
>
>
>> A possible theory is that pg_largeobject_metadata_oid_index has been
>> corrupt for a long time, allowing a lot of duplicate entries to be made.
>> However, unless pg_largeobject's pg_largeobject_loid_pn_index is *also*
>> corrupt, you'd think that creation of such duplicates would still be
>> blocked.
>
> Yipes. Did you verify that the TIDs are all distinct?
>
Yes, they were.
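
For the record, a query along these lines shows the duplicates together
with their TIDs (a sketch, not necessarily the exact one we ran):

    -- list every OID that appears more than once, with its item pointers
    SELECT oid, ctid
    FROM pg_largeobject_metadata
    WHERE oid IN (SELECT oid
                  FROM pg_largeobject_metadata
                  GROUP BY oid
                  HAVING count(*) > 1)
    ORDER BY oid;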
Hi Tom,
thanks for taking a look.
> hmm ... is this a reasonably up-to-date v10?
>
PostgreSQL 10.18, the latest packaged with Ubuntu 18.04.
> Delete by ctid.
>
> select ctid, oid, * from pg_largeobject_metadata where oid=665238;
> delete from pg_largeobject_metadata where ctid = 'pick one';
>
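(Presumably the suspect index should also be rebuilt once the duplicate
rows are gone, so that uniqueness is enforced again; that part is my
assumption, not something stated above:)

    -- rebuild the unique index after removing the duplicate rows
    REINDEX INDEX pg_largeobject_metadata_oid_index;
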
Well, it seems we have hit a bug in PostgreSQL 10.
We tried running vacuumlo on a database, and at some point it failed
with this message:

    Failed to remove lo 64985186: ERROR: large object 64985186 does not exist
    Removal from database "X" failed at object 26 of 100.

Yet, object 64985186 is