On Wed, Jan 22, 2020 at 4:06 AM Nicola Contu <nicola.co...@gmail.com> wrote:
> after a few months, we started having this issue again.
> So we reverted the work_mem parameter to 600MB instead of 2GB.
> But the issue is still there. A query hit a segmentation fault, the DB went
> into recovery mode, and our app went read-only for a few minutes.

Hi Nicola,
Hmm, a segmentation fault sounds like a different problem.  Can you
please share the exact error messages from PostgreSQL and OS logs?

> I understand we can increase max_connections so we can have many more 
> segments.
>
> My question is: is there a way to see how many segments we have
> reached?

If you're on Linux, you can probably see them with "ls /dev/shm".
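
For a rough count of just the dynamic shared memory segments (assuming the
default dynamic_shared_memory_type = posix, where the files should be named
"PostgreSQL.<number>"), something like this ought to do it:

    ls /dev/shm | grep -c PostgreSQL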

> Currently we have max_connections = 220, so by your formula (64 + 2 * 220) we
> have about 500 shared segments.
> We would like to increase that to 300 or 400, but it would be great to know
> whether that will actually solve the issue, since changing it requires a
> restart of the service.
>
> I know you were also talking about redesigning this part of PostgreSQL. Do you
> know if anything has changed in any of the newer versions after 11.5?

It's possible that we should increase a couple of the constants used in the
formula -- I'll look into that again.  But first I'd like to see whether
we're even investigating the right problem here.
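
For reference, if the constants stay as they are, your proposed settings
would give roughly:

    64 + 2 * 300 = 664 slots
    64 + 2 * 400 = 864 slots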

