> On 15 Aug 2024, at 14:15, Lok P <loknath...@gmail.com> wrote:

(…)

> Hello Greg, 
> 
> In terms of testing on sample data and extrapolating: I picked the average 
> partition size of the table (~20GB), created a non-partitioned table with 
> exactly the same columns, populated it with similar data, and created the same 
> set of indexes on it; the underlying hardware is exactly the same as in 
> production. Altering all four columns on this table takes ~5 minutes. We have 
> ~90 partitions in production with data in them; the remaining few are future 
> partitions and are blank. (Note: I executed the ALTER with "work_mem=4GB, 
> maintenance_work_mem=30GB, max_parallel_workers_per_gather=8, 
> max_parallel_maintenance_workers=16".)
> 
> So considering the above figures, can I safely assume it will take 
> ~90 * 5 minutes = ~7.5 hours in production, and thus that many hours of 
> downtime will be needed for this ALTER? Or do we need to consider any other 
> factors or activity here? 

Are all those partitions critical, or only a relative few?

If only a relative few of them are critical, you could:
        1) detach the non-critical partitions
        2) take the system down for maintenance
        3) update the critical partitions
        4) take the system up again
        5) update the non-critical partitions
        6) re-attach the non-critical partitions

That could shave a significant amount of time off your downtime. I would 
script the detach and re-attach steps first, to save some extra time.
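
A minimal sketch of what that could look like, assuming a range-partitioned 
"orders" table and a made-up type change (adjust names, bounds and the actual 
ALTER to your schema):

-- 1) Detach the non-critical partitions, so the parent ALTER won't rewrite them.
ALTER TABLE orders DETACH PARTITION orders_2019_01;
-- ...repeat for every non-critical partition...

-- 2) Take the system down for maintenance.

-- 3) Alter the parent; this recurses only into the partitions still attached,
--    i.e. the critical ones, and updates the parent's column definitions.
ALTER TABLE orders
    ALTER COLUMN amount TYPE numeric(15,2);

-- 4) Take the system up again.

-- 5) Alter the detached partitions at leisure; they are plain tables now.
ALTER TABLE orders_2019_01
    ALTER COLUMN amount TYPE numeric(15,2);

-- 6) Re-attach; the column types match the parent again. Note that ATTACH
--    scans the table to validate the partition bound unless a matching CHECK
--    constraint is added first.
ALTER TABLE orders ATTACH PARTITION orders_2019_01
    FOR VALUES FROM ('2019-01-01') TO ('2019-02-01');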

Admittedly, I haven’t actually tried that procedure, but I see no reason why it 
wouldn’t work.

Apart, perhaps, from inserts happening that should have gone to some of those 
detached partitions. Maybe those could be sent to a ‘default’ partition that 
gets detached in a new step 7, after which you can INSERT+SELECT those rows 
from the default into the appropriate partitions?
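
Sketched with the same made-up names:

-- Before step 1, add a default partition to catch inserts that would otherwise
-- target a detached partition:
CREATE TABLE orders_default PARTITION OF orders DEFAULT;

-- ...steps 1-6 as above...

-- 7) Detach the default and route its rows into the proper partitions.
--    Caveat: if rows belonging to a partition you re-attach in step 6 have
--    landed in the default, that ATTACH will refuse to proceed until they are
--    moved out, so the default may need to be detached just before step 6.
ALTER TABLE orders DETACH PARTITION orders_default;
INSERT INTO orders SELECT * FROM orders_default;
DROP TABLE orders_default;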

But you were going to test that first anyway, obviously.

Alban Hertroys
--
There is always an exception to always.