We turned off NUMA in the BIOS on Jul 2nd and haven't seen the issue since
(though once last week the connection count did go up to 1000, but it
recovered on its own within a few seconds). Will keep you all posted when I
have more updates.
Appreciate everyone's help, comments and suggestions so far.
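For anyone wanting to double-check a similar box, a quick sketch of how the
NUMA state can be verified from the OS side (assumes the numactl package is
installed; nothing here is specific to our setup):

numactl --hardware                  # with NUMA off in the BIOS this reports a single node
cat /proc/sys/vm/zone_reclaim_mode  # 0 means no aggressive per-node page reclaim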
Erik van Zijst writes:
> On Thu, Jun 19, 2014 at 3:57 PM, Merlin Moncure wrote:
>> In your case user% is dominating system load. Along with the high cs
>> this is really suggesting spinlock contention. A 'perf top' is
>> essential for identifying the culprit. It's very possible that 9.4
>> will fix your problem...see:
On Fri, Jun 20, 2014 at 12:58 AM, Erik van Zijst
wrote:
> On Thu, Jun 19, 2014 at 10:10 PM, Erik van Zijst
> wrote:
>> On Thu, Jun 19, 2014 at 3:57 PM, Merlin Moncure wrote:
>>> In your case user% is dominating system load. Along with the high cs
>>> this is really suggesting spinlock contention. A 'perf top' is
>>> essential for identifying the culprit.
On Thu, Jun 19, 2014 at 10:10 PM, Erik van Zijst
wrote:
> On Thu, Jun 19, 2014 at 3:57 PM, Merlin Moncure wrote:
>> In your case user% is dominating system load. Along with the high cs
>> this is really suggesting spinlock contention. A 'perf top' is
>> essential for identifying the culprit. It's very possible that 9.4
>> will fix your problem...see:
On Thu, Jun 19, 2014 at 3:57 PM, Merlin Moncure wrote:
> In your case user% is dominating system load. Along with the high cs
> this is really suggesting spinlock contention. A 'perf top' is
> essential for identifying the culprit. It's very possible that 9.4
> will fix your problem...see:
> ht
We do record perf data. For each incident we've had, the data looks about
the same. Unfortunately, I can't read much into it beyond the fact that it
gets stuck on a spinlock. But why, and on what?
### from perf report
53.28% postmaster postgres [.] s_lock
6.22% postmaster
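For reference, a sketch of how a system-wide profile like the one above can
be captured during an incident (standard perf usage, nothing site-specific):

perf record -a -g -- sleep 30       # sample all CPUs with call graphs for 30s
perf report --sort comm,dso,symbol  # produces a breakdown like the one quoted above
perf top -u postgres                # or watch live, as Merlin suggested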
Hi Borislav - Thank you for the update and all the information. It does look
like we are in the same boat. And I feel the same way - maxing out on
max_connections is just a symptom. pgbouncer may help alleviate the problem
(though in your case it didn't) and is definitely good to have either way.
On Thu, Jun 19, 2014 at 5:12 PM, Borislav Ivanov wrote:
> However, most people on our team think that the number of connections is
> purely a symptom of the actual problem. We would love to be wrong about
> this. But for now we feel the high number of connections contributes to
> preserving the problem.
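A sketch of how the per-state breakdown can be watched while connections
climb, if anyone wants to compare numbers ('yourdb' is a placeholder):

psql -d yourdb -c "SELECT state, count(*) FROM pg_stat_activity GROUP BY state ORDER BY 2 DESC;"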
Hi Ramya,
We experience exactly the same problem here at Bitbucket. From what I can
tell, the major difference between your setup and ours is that you run 9.3.3
and we run 9.2.8. Our post about the issue is at
http://www.postgresql.org/message-id/CAJ+wzrb1qhz3xuoeSy5mo8i=E-5OO9Yvm6R+VxLBGaPB=ue...@m
On Thu, Jun 19, 2014 at 2:35 PM, Kevin Grittner wrote:
> "Vasudevan, Ramya" wrote:
>
>> On the waiting queries - When we reached 1500 connections, we
>> had 759 connections that were in an active state (116 COMMIT, 238
>> INSERT, 176 UPDATE, 57 AUTHENTICATION, 133 BIND). These active
>> INSERTS and
"Vasudevan, Ramya" wrote:
> On the waiting queries - When we reached 1500 connections, we
> had 759 connections that were in an active state (116 COMMIT, 238
> INSERT, 176 UPDATE, 57 AUTHENTICATION, 133 BIND). These active
> INSERTS and UPDATES also include the 80-90 waiting sessions (We
> checked
Merlin, Thank you for the response.
On the waiting queries - When we reached 1500 connections, we had 759
connections that were in an active state (116 COMMIT, 238 INSERT, 176 UPDATE,
57 AUTHENTICATION, 133 BIND). These active INSERTS and UPDATES also include
the 80-90 waiting sessions (We checked
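On 9.2/9.3, where pg_stat_activity still has the boolean waiting column, a
sketch of how the waiting sessions can be pulled out ('yourdb' is a
placeholder):

psql -d yourdb -c "SELECT pid, state, left(query, 60) AS query FROM pg_stat_activity WHERE waiting ORDER BY pid;"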
Merlin Moncure wrote:
> we have to be careful to rule out some underlying possible
> contributing factors before switching things up too much.
Agreed.
> THP compaction in particular has been plaguing servers throughout the
> company I work for;
I've seen many support tickets where turning off Transparent Huge Pages
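For completeness, the usual runtime toggle (the sysfs path differs between
the RHEL/CentOS 6 backport and mainline kernels, and neither setting survives
a reboot):

echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled   # RHEL/CentOS 6
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled          # mainline path
echo never > /sys/kernel/mm/transparent_hugepage/defrag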
On Thu, Jun 12, 2014 at 3:32 PM, Kevin Grittner wrote:
> Merlin Moncure wrote:
>
>> or something else entirely.
>
>
> It strikes me that this might be relevant:
> http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
Agreed. The stock advice for many, many problems of this sort is 'use
pgbouncer', but it can be hard to work into a lot of code bases and
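For anyone who does go that route, a minimal transaction-pooling sketch
(every name and number below is a placeholder, not a recommendation for this
particular workload):

cat > /etc/pgbouncer/pgbouncer.ini <<'EOF'
[databases]
; placeholder database entry
yourdb = host=127.0.0.1 port=5432

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; share server connections between transactions
pool_mode = transaction
; clients queue here instead of exhausting postgres max_connections
max_client_conn = 2000
; actual backends per database/user pair
default_pool_size = 50
EOF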
Merlin Moncure wrote:
> or something else entirely.
It strikes me that this might be relevant:
http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
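The sizing rule of thumb on that page works out to a pool far smaller than
1500 for this class of hardware. A back-of-the-envelope for the 24-core box
described downthread (the effective spindle count for SAN-backed storage is
a guess):

# connections = (core_count * 2) + effective_spindle_count
echo $(( 24 * 2 + 8 ))    # => 56, versus the 1500 being reached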
...quickly realized that we already had a high number (1500).
Thank you,
Ramya
-----Original Message-----
From: Merlin Moncure [mailto:mmonc...@gmail.com]
Sent: Wednesday, June 11, 2014 4:24 PM
To: Vasudevan, Ramya
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] max_connections reached in postgres 9.3.3
On Thu, Jun 12, 2014 at 1:23 PM, Vasudevan, Ramya
wrote:
> Thank you for the response.
>
> On further investigation, we found that select statements were executing
> normally. But DMLs (writes to
On Thu, Jun 12, 2014 at 1:51 PM, Vasudevan, Ramya
wrote:
> Thanks Merlin.
>
> We did look at the locks in the DB, and all we saw were RowExclusiveLock,
> AccessShareLock, RowShareLock and AccessExclusiveLock. The ExclusiveLocks we
> saw were all on virtualxids.
>
> I think the max_connections ma
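A sketch of the kind of summary that shows the lock-mode distribution at a
glance ('yourdb' is a placeholder):

psql -d yourdb -c "SELECT mode, granted, count(*) FROM pg_locks GROUP BY mode, granted ORDER BY 3 DESC;"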
On Thu, Jun 12, 2014 at 1:23 PM, Vasudevan, Ramya
wrote:
> Thank you for the response.
>
> On further investigation, we found that select statements were executing
> normally. But DMLs (writes to the DB) were hung for minutes at a time, and
> some of them went through. And we had 2 checkpoints
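Checkpoint pressure can be confirmed from the stats collector and the logs;
a sketch (the GUC change only needs a reload, and 'yourdb' is a placeholder):

psql -d yourdb -c "SELECT checkpoints_timed, checkpoints_req, buffers_checkpoint, buffers_backend FROM pg_stat_bgwriter;"
# plus, in postgresql.conf:
# log_checkpoints = on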
On Wed, Jun 11, 2014 at 1:24 PM, Vasudevan, Ramya
wrote:
> Our set up:
>
> · Db version: postgres 9.3.3
>
> · OS: CentOS 6.5
>
> · kernel Version - Linux 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3
> 21:39:27 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
>
> · cpu - 24 pro
On 06/11/2014 11:24 AM, Vasudevan, Ramya wrote:
Our set up:
· Db version: postgres 9.3.3
· OS: CentOS 6.5
· kernel Version - Linux 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3
21:39:27 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
· cpu - 24 proc
· memory - 768 GB
· The disks are SAN fiber.
· We have str