On 2021-09-29 10:21:00 -0600, Michael Lewis wrote:
> If your processes somehow allow updates on the wrong table, then fix that.
If your processes somehow allow inserting duplicate keys, then fix that
(so unique key constraints are unnecessary).
If your processes somehow allow deletion of records which are still
referenced, then fix that (so foreign key constraints are unnecessary).

On 2021-09-29 10:21:00 -0600, Michael Lewis wrote:
If your processes somehow allow updates on the wrong table, then fix that.
If you run out of space in whatever value range you choose initially, the
pain to upgrade to a type that allows larger values would seem to be very
large.
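
(For concreteness, an illustration with made-up table names: widening a
32-bit id column later means a full table rewrite under an ACCESS
EXCLUSIVE lock, and any other column that stores those ids needs the same
change.)

-- Hypothetical tables; both statements take ACCESS EXCLUSIVE locks and
-- rewrite the affected tables, which is slow and blocking on large data.
ALTER TABLE orders ALTER COLUMN id TYPE bigint;
ALTER TABLE order_items ALTER COLUMN order_id TYPE bigint;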
On Wed, 2021-09-29 at 11:26 +0200, Peter J. Holzer wrote:
> I discovered this technique back in my Oracle days but it dropped out of
> my toolbox when I switched to PostgreSQL. Recently I had reason to
> revisit it, so I thought I should share it (trivial though it is).
>
> So the solution is to use a single sequence for all tables.
On 2021-09-29 11:42:42 +0200, Tobias Meyer wrote:
> Possible drawbacks:
>
> * The ids will grow faster, and they will be large even on small
>   tables. It may be a bit irritating if you have a table with just 5
>   rows and the ids are 5, 6, 7, 12654, 345953.
> * Bottleneck? Using a single sequence was said to be a performance
>   bottleneck [...]
[...]
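
(An illustrative aside on the bottleneck point, with a hypothetical
sequence name: a sequence can pre-allocate a batch of values per session,
which cuts contention on a hot shared sequence at the cost of larger gaps
in the ids.)

-- Each session pre-allocates 50 values, so most nextval() calls are served
-- from a session-local cache; unused cached values are lost on disconnect.
ALTER SEQUENCE object_id_seq CACHE 50;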

On Wed, 2021-09-29 at 11:26 +0200, Peter J. Holzer wrote:
I discovered this technique back in my Oracle days but it dropped out of
my toolbox when I switched to PostgreSQL. Recently I had reason to
revisit it, so I thought I should share it (trivial though it is).
PostgreSQL makes it easy to generate unique ids. Just declare the column
as SERIAL (or IDENTITY) and each table draws its ids from its own
sequence.
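
(A minimal sketch of the single-sequence technique discussed in the
thread, with made-up table and sequence names: every id column defaults to
nextval() on one shared sequence, so ids are unique across all the tables,
not just within each one.)

-- One sequence shared by every table that needs a database-wide unique id.
CREATE SEQUENCE object_id_seq;

CREATE TABLE customers (
    id   bigint PRIMARY KEY DEFAULT nextval('object_id_seq'),
    name text
);

CREATE TABLE invoices (
    id          bigint PRIMARY KEY DEFAULT nextval('object_id_seq'),
    customer_id bigint REFERENCES customers(id)
);

-- An id handed out for invoices can never collide with one in customers.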