On Fri, 5 Jul 2024 at 17:07, Lok P <loknath...@gmail.com> wrote:
> Also I understand the technical limitation of the max number of columns per
> table is ~1600. But should you advise to restrict/stop us to some low number
> long before reaching that limit , such that we will not face any anomalies
> when we grow in future. And if we should maintain any specific order in the
> columns from start to end column in the specific table?
Something else you may wish to consider, depending on the column types of
your 900+ columns, is the possibility that some INSERTs may fail due to row
length while others with shorter variable-length values may be ok.

Here's a quick example with psql:

select 'create table a (' || string_agg('a'||x||' text not null default $$$$',',') || ')' from generate_series(1,1000)x;
\gexec
insert into a default values;
INSERT 0 1

Again, but with a larger DEFAULT to make the tuple larger:

select 'create table b (' || string_agg('a'||x||' text not null default $$hello world$$',',') || ')' from generate_series(1,1000)x;
\gexec
insert into b default values;
ERROR: row is too big: size 12024, maximum size 8160

There is a paragraph at the bottom of [1] with some warnings about things
relating to this.

The tuple length is only fixed when the columns use fixed-length types
defined as NOT NULL, so if all of your columns are like that, every row is
the same size and there should be no such surprises.
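As a rough sketch of that last point (the table name "c" and the bigint
columns are just illustrative, and this assumes the default 8kB block size):

select 'create table c (' || string_agg('a'||x||' bigint not null default 0',',') || ')' from generate_series(1,1000)x;
\gexec
-- each bigint always occupies 8 bytes, so every row of "c" is the same size
-- no matter which values are stored; 1000 * 8 bytes plus the tuple header
-- should fit under the 8160-byte limit seen in the error above
insert into c default values;
-- storing a large value does not grow the tuple the way a longer text
-- value would
update c set a1 = 9223372036854775807;

Since the row size never varies, any "row is too big" problem would show up
on the very first insert rather than only for rows that happen to carry
longer values.

David

[1] https://www.postgresql.org/docs/current/limits.html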