On Wed, 3 Mar 2021 at 21:29, Andrey Lepikhov <a.lepik...@postgrespro.ru> wrote:
>
> Playing with a large value of partitions I caught the limit with 65000
> table entries in a query plan:
>
> if (IS_SPECIAL_VARNO(list_length(glob->finalrtable)))
>         ereport(ERROR,
>                 (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
>                 errmsg("too many range table entries")));
>
> Postgres works well with so many partitions.
> The constants INNER_VAR, OUTER_VAR, INDEX_VAR are used as values for
> the variable 'var->varno', which is of integer type. As I see, they
> were introduced
> with commit 1054097464 authored by Marc G. Fournier, in 1996.
> Value 65000 was relevant to the size of the int type at that time.
>
> Maybe we will change these values to INT_MAX? (See the patch in attachment).

I don't really see any reason not to increase these a bit, but I'd
rather we kept them at some realistic maximum rather than going
all-out to INT_MAX.

I imagine the gap between 65000 and 65535 was left to allow space for
more special varnos in the future.  We did get INDEX_VAR since then,
so leaving a gap seems to have been a good idea.

The problem I see with going close to INT_MAX is that the ERROR you
mention is unlikely to ever fire, since a List will never get
close to having INT_MAX elements before palloc() would exceed
MaxAllocSize for the elements array.

Something like 1 million seems like a more realistic limit to me.
That might still be on the high side, but it'll likely mean we'd not
need to revisit this for quite a while.

David

