On Fri, Jun 5, 2015 at 2:23 PM, Melvin Davidson <melvin6...@gmail.com>
wrote:

> CAUTION: This is very dangerous and may cause corruption.
>          *** DO THIS IN A TEST DATABASE FIRST ***
>
> --1. Get the oid for int8 (bigint)
> SELECT t.oid
>   FROM pg_type t
>  WHERE typname = 'int8';
>
> --2. Get the oid for your table
> SELECT c.oid,
>        c.relname as table,
>        a.attname,
>        a.atttypid,
>        a.*
>   FROM pg_class c
>   JOIN pg_namespace n ON (n.oid = c.relnamespace)
>   JOIN pg_attribute a ON ( a.attrelid = c.oid )
>  WHERE c.relname = 'dogs'
>    AND n.nspname = 'public'
>    AND a.attname = 'good_watchdog'
>    AND NOT attisdropped;
>
> BEGIN;
>
> UPDATE pg_attribute a
>    SET atttypid = <t.oid from 1>
>  WHERE a.attrelid = <c.oid from 2>
>    AND attname = '<your column to change>';
>
> COMMIT;
>
>
Thanks for the idea.  Since I'm planning to dump the database first anyway
(using Michael's suggestion), I'm thinking I'll try this on the live
database after I get it safely dumped.  It seemed to work on a test
database.
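
For reference, I gather the supported way to do the same conversion is a
plain ALTER TABLE, at the cost of a full table rewrite under an ACCESS
EXCLUSIVE lock (which is presumably what the catalog hack is trying to
avoid); the table and column names below are from the example above:

```sql
-- Supported route: rewrites every row to the new 8-byte format,
-- holding an ACCESS EXCLUSIVE lock for the duration.
ALTER TABLE public.dogs
  ALTER COLUMN good_watchdog TYPE bigint;
```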

Being unfamiliar with the internals, what's the risk here?  If Postgres
thinks a column is a bigint, but the values were previously stored as an
int, does that mean it will try to read data beyond the boundary of some of
the (old) 32-bit values and potentially throw off the offsets of other
values in the row?
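
From poking at pg_type myself, the two types really do disagree on the
stored width and alignment the server expects, which is what worries me
about flipping atttypid alone:

```sql
-- int4 and int8 are both fixed-width, but with different lengths and
-- alignments, so tuples written as int4 would be misread as int8.
SELECT typname, typlen, typalign
  FROM pg_type
 WHERE typname IN ('int4', 'int8');
--  typname | typlen | typalign
-- ---------+--------+----------
--  int4    |      4 | i
--  int8    |      8 | d
```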

Casey
