On 08/10/2012 10:26 PM, Rob Sargent wrote:
Seems like
for i = 1 to 10
select into table-with-bigint from table-with-int where <id mod 10 = i>
delete from table-with-int where <id mod 10 = i>

or any such fractionation scheme might do the trick

For that matter, you can ALTER TABLE .. ADD COLUMN the bigint column without a default, then do batches of:

BEGIN;
UPDATE thetable SET bigintcol = intcol WHERE id > lastbatchmax AND id <= batchmax;
COMMIT;
VACUUM thetable;
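Spelled out, the first few batches would look like this (table, column, and batch-size names are placeholders; adjust the ranges to your key distribution):

ALTER TABLE thetable ADD COLUMN bigintcol bigint;  -- instant: no table rewrite

BEGIN;
UPDATE thetable SET bigintcol = intcol WHERE id > 0 AND id <= 100000;
COMMIT;
VACUUM thetable;

BEGIN;
UPDATE thetable SET bigintcol = intcol WHERE id > 100000 AND id <= 200000;
COMMIT;
VACUUM thetable;
-- ... and so on, until max(id) is covered

The VACUUM between batches is what lets later batches reuse the dead space left by earlier ones, so the table is rewritten roughly in place instead of doubling in size.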

Because adding a column without a DEFAULT or USING clause doesn't write anything to the table heap, this progressively rewrites the table in place. Other read-only clients can continue using the table so long as their transactions are short. Even read/write clients are OK, and you can add a trigger that sets the bigint column from the int column whenever a row is inserted or updated for any other reason.
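A minimal sketch of such a trigger, assuming the same placeholder table and column names as above:

CREATE OR REPLACE FUNCTION sync_bigintcol() RETURNS trigger AS $$
BEGIN
    -- Keep the new column in step with the old one on every write
    NEW.bigintcol := NEW.intcol;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER thetable_sync_bigintcol
    BEFORE INSERT OR UPDATE ON thetable
    FOR EACH ROW
    EXECUTE PROCEDURE sync_bigintcol();

With the trigger in place, any row touched by a live client is migrated for free, so the batch job only has to catch the rows nobody else writes to.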

Eventually there will be no NULL entries left in the bigint column, so you can ALTER TABLE ... ALTER COLUMN ... SET NOT NULL, drop the int column, and rename the bigint one, all within a single transaction. Active clients will only block briefly.
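The final swap might look like this (again with placeholder names; drop the sync trigger here too if you created one):

BEGIN;
ALTER TABLE thetable ALTER COLUMN bigintcol SET NOT NULL;
ALTER TABLE thetable DROP COLUMN intcol;
ALTER TABLE thetable RENAME COLUMN bigintcol TO intcol;
DROP TRIGGER IF EXISTS thetable_sync_bigintcol ON thetable;
COMMIT;

Everything here takes only brief ACCESS EXCLUSIVE locks (SET NOT NULL does scan the table to verify, but by this point every row is already populated), so the cutover is quick.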

--
Craig Ringer

Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
