On Sat, Sep 30, 2023 at 11:37 PM Tom Lane <t...@sss.pgh.pa.us> wrote:

> James Healy <ja...@yob.id.au> writes:
> > However it doesn't really address the question of a gradual migration
> > process that can read 32bit ints but insert/update as 64bit bigints. I
> > remain curious about whether the postgres architecture just makes that
> > implausible, or if it could be done and just hasn't because the
> > options for a more manual migration are Good Enough.
>
> I think what you're asking for is a scheme whereby some rows in a
> table have datatype X in a particular column while other rows in
> the very same physical table have datatype Y in the same column.
> That is not happening, because there'd be no way to tell which
> case applies to any particular row.
>

Other databases do allow that sort of gradual migration.  One example
has an internal table of record descriptions, indexed by the table
identifier and a description number.  Each record includes a header with
various useful bits, including its description number.  When reading a record,
the system notes the description number and looks up the description
before parsing the record into columns.
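
To make that concrete, here is a toy sketch of the scheme in Python.
Everything below (the catalog layout, the struct formats, the names) is
mine, purely for illustration; it is not any real engine's on-disk format.

    import struct

    # Catalog of record descriptions, keyed by (table id, description
    # number).  Each description says how to unpack a stored record.
    CATALOG = {
        ("accounts", 1): struct.Struct("<i20s"),  # old rows: 32-bit id + name
        ("accounts", 2): struct.Struct("<q20s"),  # new rows: 64-bit id + name
    }

    # Every record starts with a small header carrying its description number.
    HEADER = struct.Struct("<H")

    def write_record(table_id, desc_no, *columns):
        desc = CATALOG[(table_id, desc_no)]
        return HEADER.pack(desc_no) + desc.pack(*columns)

    def read_record(table_id, raw):
        # Note the description number, look up the description, then
        # parse the rest of the record into columns.
        (desc_no,) = HEADER.unpack_from(raw)
        desc = CATALOG[(table_id, desc_no)]
        return desc.unpack_from(raw, HEADER.size)

    # Old and new rows coexist in the same table; readers pick the right
    # description per record, writers always use the newest one.
    old = write_record("accounts", 1, 42, b"alice".ljust(20))
    new = write_record("accounts", 2, 2**40, b"bob".ljust(20))
    print(read_record("accounts", old))
    print(read_record("accounts", new))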

The transition is made easier if the database indexes are generic:
for example, numbers rather than decimal[12,6], int32, etc., and strings
rather than varchar[12].  That way, increasing a column's size doesn't
require re-indexing.
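
A similar toy sketch for the index side, again with invented names:
store index keys in one generic, comparable encoding rather than in
the column's declared width, so widening the column leaves existing
entries valid.

    from decimal import Decimal

    def number_key(value):
        # Canonical key for any numeric column, whether it is declared
        # int32, int64, or decimal[12,6]: normalize to Decimal so the
        # ordering doesn't depend on the declared type.
        return Decimal(value)

    # Keys for old 32-bit values and new 64-bit values sort together,
    # so an ALTER that widens the column needs no re-index.
    keys = sorted([number_key(2_000_000_000), number_key(2**40),
                   number_key("3.5")])
    print(keys)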

But those are decisions that really had to be made early; making
a major format change 25+ years in would break too much.

Cheers,

Ann
