Hello,
I also get a high number of "too many dynamic shared memory segments" errors.
I upgraded Postgres to 12.2, but that did not help.
The server has 64 GB RAM / 16 CPUs. Postgres params:
"max_connections":500,
"shared_buffers":"16GB",
"effective_cache_size":"48GB",
"m
On Fri, Jan 31, 2020 at 11:05 PM Nicola Contu wrote:
> Do you still recommend to increase max_conn?
Yes, as a workaround of last resort. The best thing would be to
figure out why you are hitting the segment limit, and see if there is
something we could tune to fix that. If you EXPLAIN your queries
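A minimal sketch of that check (table name and filter here are hypothetical):
run EXPLAIN and count the Gather / Gather Merge nodes, since each one in a
concurrently running query consumes DSM segments:

  -- Look for plans with many separate Gather nodes.
  EXPLAIN (COSTS OFF)
  SELECT count(*)
  FROM some_partitioned_table        -- hypothetical table
  WHERE created_at > now() - interval '1 day';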
On Thu, Jan 30, 2020 at 12:26 AM Thomas Munro wrote:
> On Wed, Jan 29, 2020 at 11:24 PM Julian Backes wrote:
> > we only had the "too many dynamic shared memory segments" error but no
> > segmentation faults. The error started occurring after upgrading from
> > postgres 10 to postgres 12 (server has 24 cores / 48 threads, i.e. many
> > parallel workers).
Hi Thomas,
unfortunately I can't find any core dump to help you more.
Thanks for the fix. We are in the process of installing 12.1 in production,
so we can still wait on that release and go live with 12.2 instead.
I will let you know if I still get this after installing 12.2
trying to build a
On Wed, Jan 29, 2020 at 11:53 PM Thomas Munro wrote:
> On Wed, Jan 29, 2020 at 10:37 PM Nicola Contu wrote:
> > This is the error in the postgres log for the segmentation fault:
> >
> > 2020-01-21 14:20:29 GMT [] [4]: [108-1] db=,user= LOG: server process
> > (PID 2042) was terminated by signal 11: Segmentation fault
On Wed, Jan 29, 2020 at 11:24 PM Julian Backes wrote:
> we only had the "too many dynamic shared memory segments" error but no
> segmentation faults. The error started occurring after upgrading from
> postgres 10 to postgres 12 (server has 24 cores / 48 threads, i.e. many
> parallel workers).
On Wed, Jan 29, 2020 at 10:37 PM Nicola Contu wrote:
> This is the error in the postgres log for the segmentation fault:
>
> 2020-01-21 14:20:29 GMT [] [4]: [108-1] db=,user= LOG: server process
> (PID 2042) was terminated by signal 11: Segmentation fault
> 2020-01-21 14:20:29 GMT [] [4]: [109-1] db=,user= DETAIL: Failed
> process was running: select pid
Hi,
we only had the "too many dynamic shared memory segments"
error but no segmentation faults. The error started occurring after
upgrading from postgres 10 to postgres 12 (server has 24 cores / 48
threads, i.e. many parallel workers). The error itself was not that much of
a problem.
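For context, the degree of parallelism such a machine can reach is bounded by
a few settings; a quick way to inspect them:

  SHOW max_worker_processes;
  SHOW max_parallel_workers;
  SHOW max_parallel_workers_per_gather;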
This is the error in the postgres log for the segmentation fault:
2020-01-21 14:20:29 GMT [] [4]: [108-1] db=,user= LOG: server process
(PID 2042) was terminated by signal 11: Segmentation fault
2020-01-21 14:20:29 GMT [] [4]: [109-1] db=,user= DETAIL: Failed
process was running: select pid
On Wed, Jan 22, 2020 at 4:06 AM Nicola Contu wrote:
> after a few months, we started having this issue again.
> So we reverted the work_mem parameter to 600MB instead of 2GB.
> But the issue is still there. A query hit a segmentation fault, the DB went
> into recovery mode, and our app went read-only for a few minutes.
Hello, may I ask you for feedback?
Thanks a lot
On Tue, Jan 21, 2020 at 5:14 PM Nicola Contu <nicola.co...@gmail.com> wrote:
> We also reverted this param:
>
> cmdv3=# show max_parallel_workers_per_gather;
>  max_parallel_workers_per_gather
> ---------------------------------
>  2
> (1 row)
We also reverted this param:

cmdv3=# show max_parallel_workers_per_gather;
 max_parallel_workers_per_gather
---------------------------------
 2
(1 row)
It was set to 8.
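For reference, a hedged sketch of applying such a revert (one way of doing
it; the parameter could equally be set in postgresql.conf):

  -- ALTER SYSTEM writes postgresql.auto.conf; a reload suffices here.
  ALTER SYSTEM SET max_parallel_workers_per_gather = 2;
  SELECT pg_reload_conf();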
On Tue, Jan 21, 2020 at 4:06 PM Nicola Contu <nicola.co...@gmail.com> wrote:
> Hey Thomas,
> after a few months, we started having this issue again.
Hey Thomas,
after a few months, we started having this issue again.
So we reverted the work_mem parameter to 600MB instead of 2GB.
But the issue is still there. A query hit a segmentation fault, the DB
went into recovery mode, and our app went read-only for a few minutes.
I understand we can increase max_connections
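A sketch of the work_mem revert described above, assuming it is applied
globally rather than per role or per session:

  -- Reverting from 2GB back to 600MB; a reload is enough for work_mem.
  ALTER SYSTEM SET work_mem = '600MB';
  SELECT pg_reload_conf();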
On Wed, Sep 11, 2019 at 11:20 PM Nicola Contu wrote:
> If the error persists I will try to revert the work_mem.
> Thanks a lot
Hi Nicola,
It's hard to say exactly what the cause of the problem is in your case
and how to avoid it, without knowing what your query plans look like.
PostgreSQL allows
If the error persists I will try to revert the work_mem.
Thanks a lot
On Wed, Sep 11, 2019 at 10:10 AM Pavel Stehule <pavel.steh...@gmail.com> wrote:
> Hi
>
> On Wed, Sep 11, 2019 at 9:48 AM Nicola Contu wrote:
>
>> Hello,
>> We are running postgres 11.5 and in the last two weeks we did:
Hello,
We did not see any error in the logs, just that one.
Unfortunately we had problems installing updates on this machine, so we have
not installed any updates for a few months.
Do you think that could be the issue? We are running CentOS 7.
I will look into those parameters as well.
Thanks for your help
Hi
On Wed, Sep 11, 2019 at 9:48 AM Nicola Contu wrote:
> Hello,
> We are running postgres 11.5 and in the last two weeks we did:
>
> - upgrade of postgres to 11.5 from 11.4
> - increased shared_buffers to 1/3 of the memory
> - increased effective_cache_size to 160GB from 120GB
> - increased check
Thank You Thomas!
--
regards,
Jakub Glapa
On Thu, Dec 7, 2017 at 10:30 PM, Thomas Munro wrote:
> On Tue, Dec 5, 2017 at 1:18 AM, Jakub Glapa wrote:
> > I see that the segfault is under active discussion but just wanted to
> > ask if increasing the max_connections to mitigate the DSM slots shortage
> > is the way to go?
On Tue, Dec 5, 2017 at 1:18 AM, Jakub Glapa wrote:
> I see that the segfault is under active discussion but just wanted to ask if
> increasing the max_connections to mitigate the DSM slots shortage is the way
> to go?
Hi Jakub,
Yes. In future releases this situation will improve (maybe we'll
fi
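A hedged sketch of that workaround (the value is illustrative, not a
recommendation):

  -- More backends raise the DSM slot ceiling (64 + 2 * max_backends).
  -- Changing max_connections requires a server restart.
  ALTER SYSTEM SET max_connections = 200;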
I see that the segfault is under active discussion but just wanted to ask
if increasing the max_connections to mitigate the DSM slots shortage is the
way to go?
--
regards,
Jakub Glapa
On Mon, Nov 27, 2017 at 11:48 PM, Thomas Munro <
thomas.mu...@enterprisedb.com> wrote:
> On Tue, Nov 28, 2017 at 9:45 AM, Dilip Kumar wrote:
On Tue, Nov 28, 2017 at 9:45 AM, Dilip Kumar wrote:
>> I haven't checked whether this fixes the bug, but if it does, we can
>> avoid introducing an extra branch in BitmapHeapNext.
>
> With my test it's fixing the problem.
I tested it some more and found that, for me, it PARTIALLY fixes the
problem
On Tue, Nov 28, 2017 at 7:13 PM, Robert Haas wrote:
> On Tue, Nov 28, 2017 at 2:32 AM, Dilip Kumar wrote:
> > I think BitmapHeapScan should check whether the dsa is valid or not; if
> > the DSA is not valid then it should assume it's a non-parallel plan.
> >
> > Attached patch should fix the issue.
>
> So, create the pstate and then pretend we didn't? Why not just avoid
> creating
On Tue, Nov 28, 2017 at 2:32 AM, Dilip Kumar wrote:
> I think BitmapHeapScan should check whether the dsa is valid or not; if the
> DSA is not valid then it should assume it's a non-parallel plan.
>
> Attached patch should fix the issue.
So, create the pstate and then pretend we didn't? Why not just avoid
creating
On Tue, Nov 28, 2017 at 4:18 AM, Thomas Munro wrote:
> On Tue, Nov 28, 2017 at 10:05 AM, Jakub Glapa wrote:
> > As for the crash. I dug up the initial log and it looks like a
> > segmentation fault...
> >
> > 2017-11-23 07:26:53 CET:192.168.10.83(35238):user@db:[30003]: ERROR: too
> > many dynamic shared memory segments
On Tue, Nov 28, 2017 at 10:05 AM, Jakub Glapa wrote:
> As for the crash. I dug up the initial log and it looks like a segmentation
> fault...
>
> 2017-11-23 07:26:53 CET:192.168.10.83(35238):user@db:[30003]: ERROR: too
> many dynamic shared memory segments
Hmm. Well this error can only occur in
Hi Thomas,
doubling max_connections has made the problem go away for now! Yay!
As for the crash. I dug up the initial log and it looks like a segmentation
fault...
2017-11-23 07:26:53 CET:192.168.10.83(35238):user@db:[30003]: ERROR: too
many dynamic shared memory segments
2017-11-23 07:26:53 CET
Thomas Munro writes:
> Ah, so you have many Gather nodes under Append? That's one way to eat
> arbitrarily many DSM slots. We allow for 64 + 2 * max_backends. Does
> it help if you increase max_connections? I am concerned about the
> crash failure mode you mentioned in the first email though:
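For concreteness, a rough way to estimate that limit on a live server (the
max_backends breakdown here is approximate and version-dependent):

  -- 64 + 2 * max_backends, with max_backends roughly
  -- max_connections + autovacuum_max_workers + 1 + max_worker_processes.
  SELECT 64 + 2 * (
           current_setting('max_connections')::int
         + current_setting('autovacuum_max_workers')::int + 1
         + current_setting('max_worker_processes')::int
         ) AS approx_dsm_slot_limit;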
On Tue, Nov 28, 2017 at 1:13 AM, Jakub Glapa wrote:
> The queries are somewhat special.
> We are still using the old-style partitioning (list type) but we abuse it a
> bit when querying.
> When querying a set of partitions, instead of doing it via the parent table
> we stitch together the required tables
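A hypothetical sketch of that query shape (partition names invented): because
each branch is planned separately, every one of them can get its own Gather
node, which is one way to consume many DSM slots at once:

  -- Instead of: SELECT ... FROM parent_table WHERE user_id = 42
  SELECT * FROM part_2017_10 WHERE user_id = 42
  UNION ALL
  SELECT * FROM part_2017_11 WHERE user_id = 42
  UNION ALL
  SELECT * FROM part_2017_12 WHERE user_id = 42;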
Hi Thomas,
log excerpt:
...
2017-11-27 12:21:14 CET:192.168.10.83(33424):user@db:[27291]: ERROR: too
many dynamic shared memory segments
2017-11-27 12:21:14 CET:192.168.10.83(33424):user@db:[27291]: STATEMENT:
SELECT <<REMOVED>>
2017-11-27 12:21:14 CET:192.168.10.83(35182):user@db:[28281]: ERROR:
On Mon, Nov 27, 2017 at 10:54 PM, Jakub Glapa wrote:
> The DB enters recovery mode after that.
That's not good. So it actually crashes? Can you please show the
full error messages?
> 2017-11-23 07:20:39 CET::@:[24823]: ERROR: could not attach to dynamic
> shared area
From src/backend/utils/