On Thu, Jan 30, 2020 at 12:26 AM Thomas Munro <thomas.mu...@gmail.com> wrote:
> On Wed, Jan 29, 2020 at 11:24 PM Julian Backes <julianbac...@gmail.com> wrote:
> > we only had the "too many dynamic shared memory segments" 
> > error but no segmentation faults. The error started occurring after 
> > upgrading from postgres 10 to postgres 12 (server has 24 cores / 48 
> > threads, i.e. many parallel workers). The error itself was not that much of 
> > a problem but /dev/shm started filling up with orphaned files which 
> > probably (?) had not been cleaned up by postgres after the parallel workers 
> > died. In consequence, after some time, /dev/shm was full and everything 
> > crashed.
>
> Oh, thanks for the report.  I think I see what was happening there, and
> it's a third independent problem.  The code in dsm_create() does
> DSM_OP_DESTROY (ie cleans up) in the DSM_CREATE_NULL_IF_MAXSEGMENTS
> case, but in the case where you see "ERROR: too many dynamic shared
> memory segments" it completely fails to clean up after itself.  I can
> reproduce that here.  That's a terrible bug, and has been sitting in
> the tree for 5 years.

I committed a fix for that.  It'll be in the new releases that are due
out in a couple of weeks.

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=93745f1e019543fe7b742d0c5e971aad8d08fd56
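For anyone following along, the shape of the fix is roughly the
following (a paraphrased sketch of dsm_create() in
src/backend/storage/ipc/dsm.c, written from memory; the commit above is
the authoritative change): when no free control slot is available and
DSM_CREATE_NULL_IF_MAXSEGMENTS wasn't passed, tear down the segment we
just created before raising the error, instead of leaking it into
/dev/shm.

    /* Verify that we can support an additional mapping. */
    if (nitems >= dsm_control->maxitems)
    {
        ...
        /*
         * Sketch of the fix: release the lock and destroy the
         * just-created segment before erroring out, so nothing is
         * left behind in /dev/shm (same cleanup the
         * DSM_CREATE_NULL_IF_MAXSEGMENTS branch already did).
         */
        LWLockRelease(DynamicSharedMemoryControlLock);
        dsm_impl_op(DSM_OP_DESTROY, seg->handle, 0, &seg->impl_private,
                    &seg->mapped_address, &seg->mapped_size, WARNING);
        if (seg->resowner != NULL)
            ResourceOwnerForgetDSM(seg->resowner, seg);
        dlist_delete(&seg->node);
        pfree(seg);

        ereport(ERROR,
                (errcode(ERRCODE_INSUFFICIENT_RESOURCES),
                 errmsg("too many dynamic shared memory segments")));
    }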

> > Unfortunately, the only "solution" we found so far was to increase max 
> > connections from 100 to 1000. After that (about 2 months ago I think), the 
> > error had gone.
>
> I'll take that as a vote for increasing the number of slots.

I committed something to do this for 13 (due out end of year), but I'm
not game to back-patch it to the release branches.

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=d061ea21fc1cc1c657bb5c742f5c4a1564e82ee2
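To connect that with your workaround: the segment cap is derived from
MaxBackends, which is why raising max_connections also made the error
go away.  Roughly (quoting dsm.c from memory; the exact constants, and
the bigger per-backend multiplier for 13, are in the commit above):

    /* dsm_postmaster_startup(): sizing the control segment */
    maxitems = PG_DYNSHMEM_FIXED_SLOTS
        + PG_DYNSHMEM_SLOTS_PER_BACKEND * MaxBackends;

So on 12 with max_connections = 100 you get on the order of a couple of
hundred slots, which a parallel-query-heavy workload on a 48-thread box
can exhaust; going to 1000 connections raised the cap roughly tenfold
as a side effect.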
