Hello,

My database is intermittently hitting an "ERROR: too many dynamic shared memory segments" error. It happens most often when traffic is high, and it is triggered by fairly simple SELECT statements that run a parallel query plan with a parallel bitmap heap scan. The server is currently on version 12.4.
Here is an example query plan: https://explain.depesz.com/s/aatA

I found a message in the archives on this topic, and the resolution there was to increase the max_connections setting. Since I run my database on Heroku I do not have access to that setting, and it also seems like an unrelated configuration change just to avoid this error. Link to the archived discussion: https://www.postgresql.org/message-id/CAEepm%3D2RcEWgES-f%2BHyg4931bOa0mbJ2AwrmTrabz6BKiAp%3DsQ%40mail.gmail.com

I think this block of code determines the capacity of the DSM control segment: https://github.com/postgres/postgres/blob/REL_13_1/src/backend/storage/ipc/dsm.c#L157-L159

    maxitems = 64 + 2 * MaxBackends

It seems the reason increasing max_connections helps is that MaxBackends is part of the equation that determines the maximum number of segment slots. I noticed that in version 13.1 there is a commit that changes the multiplier from 2 to 5: https://github.com/postgres/postgres/commit/d061ea21fc1cc1c657bb5c742f5c4a1564e82ee2

    maxitems = 64 + 5 * MaxBackends

Should this commit be back-ported to earlier versions of Postgres to prevent this error there as well?

Thank you,
Ben