On Mon, Feb 11, 2013 at 4:29 PM, Will Platnick wrote:
> We will probably tweak this knob some more -- i.e., what is the sweet spot
> between 1 and 100? Would it be higher than 50 but less than 100? Or is it
> somewhere lower than 50?
>
I would love to know the answer to this as well. We have a similar situation,
pgbouncer with transaction log
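(For readers without the full thread: the knob being discussed appears to be how many server connections the pooler keeps open. Assuming pgbouncer's default_pool_size is the setting in question, a minimal pgbouncer.ini sketch with purely illustrative names and values would look something like:

[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 50     ; the "sweet spot between 1 and 100" being debated
max_client_conn = 1000

The interesting trade-off is that a larger pool hides queueing from the app but pushes more concurrency onto Postgres itself.)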
On Mon, Feb 11, 2013 at 7:57 AM, Charles Gomes wrote:
>
>
>> Date: Sat, 9 Feb 2013 14:03:35 -0700
>
>> Subject: Re: [PERFORM] postgresql.conf recommendations
>> From: scott.marl...@gmail.com
>> To: jeff.ja...@gmail.com
>> CC: charle...@outlook.com; st
On Sat, Feb 9, 2013 at 2:37 PM, Josh Krupka wrote:
> Johnny,
>
> Sure thing, here's the system tap script:
>
>
Thank you for this!
> - I think you already started looking at this, but the linux dirty memory
> settings may have to be tuned as well (see Greg's post
> http://notemagnet.blogspot.c
> Date: Sat, 9 Feb 2013 14:03:35 -0700
> Subject: Re: [PERFORM] postgresql.conf recommendations
> From: scott.marl...@gmail.com
> To: jeff.ja...@gmail.com
> CC: charle...@outlook.com; strahin...@nordeus.com; kgri...@ymail.com;
> johnnyd...@gmail.com; a...@hsk.hk; jk
On Saturday, February 9, 2013, Scott Marlowe wrote:
> On Sat, Feb 9, 2013 at 1:16 PM, Jeff Janes wrote:
> > On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe wrote:
> >> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes wrote:
> >>> I've benchmarked shared_buffers with high and l
On Sat, Feb 9, 2013 at 1:16 PM, Jeff Janes wrote:
> On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe wrote:
>> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes wrote:
>>> I've benchmarked shared_buffers with high and low settings, in a server
>>> dedicated to postgres with 48GB my settings are:
>>> sh
On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe wrote:
> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes wrote:
>> I've benchmarked shared_buffers with high and low settings, in a server
>> dedicated to postgres with 48GB my settings are:
>> shared_buffers = 37GB
>> effective_cache_size = 38GB
>>
>>
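(Charles's exact benchmark isn't shown in this preview. For anyone wanting to run a comparable high-vs-low shared_buffers test, a typical pgbench run might look like the following; database name, scale factor, client count and duration are illustrative only:

createdb bench
pgbench -i -s 1000 bench          # initialize at scale 1000, roughly 15 GB of data
pgbench -c 32 -j 8 -T 600 bench   # 32 clients, 8 worker threads, 10 minutes

Run the same workload once per shared_buffers setting, restarting Postgres in between, and compare tps and latency.)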
Johnny,
Sure thing, here's the system tap script:
#! /usr/bin/env stap
global pauses, counts
probe begin {
printf("%s\n", ctime(gettimeofday_s()))
}
probe kernel.function("compaction_alloc@mm/compaction.c").return {
elapsed_time = gettimeofday_us() - @entry(gettimeofday_us())
key = spri
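(The archive preview cuts the script off mid-line. A plausible completion, keying a latency histogram by process and dumping totals at the end, is sketched below; the key format and the final report are assumptions, not necessarily Josh's original script.

#! /usr/bin/env stap
# Hypothetical completion: measure time spent in THP compaction, per process.
global pauses, counts
probe begin {
    printf("%s\n", ctime(gettimeofday_s()))
}
probe kernel.function("compaction_alloc@mm/compaction.c").return {
    elapsed_time = gettimeofday_us() - @entry(gettimeofday_us())
    key = sprintf("%s (%d)", execname(), pid())   # assumed key; the preview stops at "spri"
    pauses[key] <<< elapsed_time                  # histogram of compaction pause times
    counts[key]++
}
probe end {
    printf("%s\n", ctime(gettimeofday_s()))
    foreach (key in pauses) {
        printf("%s: %d pauses, total %d us, max %d us\n",
               key, counts[key], @sum(pauses[key]), @max(pauses[key]))
    }
}
)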
On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes wrote:
> I've benchmarked shared_buffers with high and low settings, in a server
> dedicated to postgres with 48GB my settings are:
> shared_buffers = 37GB
> effective_cache_size = 38GB
>
> Having a small number and depending on OS caching is unpredict
On Thu, Feb 7, 2013 at 11:16 PM, Tony Chan wrote:
> Hi,
>
> May I know what is your setting for OS cache?
>
>
Tony:
Wasn't sure if you were asking me, but here's the output from "free":
# free
             total        used        free      shared     buffers      cached
Mem:     198333224   187151280
Josh:
Are you able to share your systemtap script? Our problem will be to try and
regenerate the same amount of traffic/load that we see in production. We
could replay our queries, but we don't even capture a full set because it'd
be roughly 150GB per day.
johnny
On Thu, Feb 7, 2013 at 12:49 PM
Just as an update from my angle on the THP side... I put together a
systemtap script last night and so far it's confirming my theory (at least
in our environment). I want to go through some more data and make some
changes on our test box to see if we can make it go away before declaring
success -
I appreciate all the responses on this thread, even though some are
conflicting :). We are going to try these one at a time, but we'll likely
need a day or so in between each to see what impact (if any) it has, so it
will take time. But I will post back here with our findings.
We'll start with dirty_background
7 Feb 2013 13:06:53 +0100
Subject: Re: [PERFORM] postgresql.conf recommendations
To: kgri...@ymail.com
CC: johnnyd...@gmail.com; a...@hsk.hk; jkru...@gmail.com;
a...@paperlesspost.com; pgsql-performance@postgresql.org
As others suggested, having shared_buffers = 48GB is too large. You should
never need to go above 8GB. I have a similar server and mine has
shared_buffers = 8GB
checkpoint_completion_target = 0.9
This looks like a problem of dirty memory being flushed to the disk. You
should set your monitoring to
Johnny Tan wrote:
> Wouldn't this be controlled by our checkpoint settings, though?
Spread checkpoints made the issue less severe, but on servers with
a lot of RAM I've had to make the above changes (or even go lower
with shared_buffers) to prevent a burst of writes from overwhelming
the RAID c
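(Kevin's actual numbers are cut off in this preview. One common way to limit such write bursts at the OS level is to cap dirty memory via sysctl; the values below are purely illustrative, not Kevin's:

# /etc/sysctl.conf additions -- illustrative numbers only
vm.dirty_background_bytes = 268435456   # start background writeback once 256 MB is dirty
vm.dirty_bytes = 1073741824             # hard-throttle writers at 1 GB dirty

Apply with "sysctl -p". The goal is to keep the kernel from accumulating tens of gigabytes of dirty pages and then flushing them all at once.)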
I originally got started down that trail because running perf top while
having some of the slow query issues showed compaction_alloc at the top of
the list. That function is the THP page compaction which lead me to some
pages like:
http://www.olivierdoucet.info/blog/2012/05/19/debugging-a-mysql-st
On Tue, Feb 5, 2013 at 2:02 PM, Johnny Tan wrote:
> checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
I always set this to 0.9. I don't know why the default is 0.5.
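(For reference, the related checkpoint knobs in postgresql.conf on a 9.x server look roughly like this; the segment and timeout values are illustrative, not a recommendation for this particular box:

checkpoint_segments = 64               # more WAL between checkpoints
checkpoint_timeout = 10min             # force a checkpoint at least this often
checkpoint_completion_target = 0.9     # spread each checkpoint over 90% of the interval
log_checkpoints = on                   # log how much each checkpoint actually writes
)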
> But periodically, there are spikes in our app's db response time. Normally,
> the app's db response t
On Tue, Feb 5, 2013 at 11:23 PM, Josh Krupka wrote:
> I've been looking into something on our system that sounds similar to what
> you're seeing. I'm still researching it, but I'm suspecting the memory
> compaction that runs as part of transparent huge pages when memory is
> allocated... yet to
On Wed, Feb 6, 2013 at 2:13 PM, David Whittaker wrote:
> We disabled THP all together, with the thought that we might re-enable
> without defrag if we got positive results. At this point I don't think THP
> is the root cause though, so I'm curious to see if anyone else gets
> positive results fr
On Wed, Feb 6, 2013 at 7:49 AM, Kevin Grittner wrote:
> "a...@hsk.hk" wrote:
> > Johnny Tan wrote:
>
> >>shared_buffers = 48GB# min 128kB
>
> > From the postgresql.conf, I can see that the shared_buffers is
> > set to 48GB which is not small, it would be possible that the
> > large buffer cach
Hi Josh,
On Wed, Feb 6, 2013 at 1:20 PM, Josh Krupka wrote:
> David,
>
> Interesting observations. I had not been tracking the interrupts but
> perhaps I should take a look. How are you measuring them over a period of
> time, or are you just getting them real time?
>
I initially saw it happen
David,
Interesting observations. I had not been tracking the interrupts but
perhaps I should take a look. How are you measuring them over a period of
time, or are you just getting them real time?
Did you turn off THP all together or just the THP defrag?
On Wed, Feb 6, 2013 at 10:42 AM, David
Josh/Johnny,
We've been seeing a similar problem as well, and had also figured THP was
involved. We found this in syslog:
https://gist.github.com/davewhittaker/4723285, which led us to disable THP
2 days ago. At first the results seemed good. In particular, our issues
always seemed interrupt re
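(On the question raised above of measuring interrupts over a period of time rather than in real time: one simple approach, assuming sysstat is available, is to sample them periodically, e.g.:

sar -I SUM 5                 # total interrupt rate, sampled every 5 seconds
# or keep raw per-source counters around for later diffing:
while true; do date; cat /proc/interrupts; sleep 60; done >> /tmp/interrupts.log
)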
"a...@hsk.hk" wrote:
> Johnny Tan wrote:
>>shared_buffers = 48GB# min 128kB
> From the postgresql.conf, I can see that the shared_buffers is
> set to 48GB which is not small, it would be possible that the
> large buffer cache could be "dirty", when a checkpoint starts, it
> would cause a check
On 6 Feb 2013, at 12:23 PM, Josh Krupka wrote:
> On Tue, Feb 5, 2013 at 6:46 PM, Johnny Tan wrote:
> shared_buffers = 48GB # min 128kB
>
>
Hi,
From the postgresql.conf, I can see that the shared_buffers is set to 48GB
which is not small, it would be possible that the large
On Wed, Feb 6, 2013 at 3:32 AM, Johnny Tan wrote:
>
> maintenance_work_mem = 24GB # min 1MB
I'm quite astonished by this setting. Not that it explains the problem
at hand, but I wonder if this is a plain mistake in configuration.
Thanks,
Pavan
--
Pavan Deolasee
http://www.linkedin.com/in/pavan
I've been looking into something on our system that sounds similar to what
you're seeing. I'm still researching it, but I'm suspecting the memory
compaction that runs as part of transparent huge pages when memory is
allocated... yet to be proven. The tunable you mentioned controls the
compaction
# cat /sys/kernel/mm/redhat_transparent_hugepage/defrag
[always] never
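(For anyone wanting to test the THP theory, the behaviour can be changed at runtime on this RHEL-style kernel, e.g.:

echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag    # stop synchronous defrag only
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled   # or disable THP entirely
# add to /etc/rc.local (or equivalent) so the setting survives a reboot
)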
On Tue, Feb 5, 2013 at 5:37 PM, Josh Krupka wrote:
> Just out of curiosity, are you using transparent huge pages?
> On Feb 5, 2013 5:03 PM, "Johnny Tan" wrote:
>
>> Server specs:
>> Dell R610
>> dual E5645 hex-core 2.4GHz
>
Just out of curiosity, are you using transparent huge pages?
On Feb 5, 2013 5:03 PM, "Johnny Tan" wrote:
> Server specs:
> Dell R610
> dual E5645 hex-core 2.4GHz
> 192GB RAM
> RAID 1: 2x400GB SSD (OS + WAL logs)
> RAID 10: 4x400GB SSD (/var/lib/pgsql)
>
>
> /etc/sysctl.conf:
> kernel.msgmnb = 655
Server specs:
Dell R610
dual E5645 hex-core 2.4GHz
192GB RAM
RAID 1: 2x400GB SSD (OS + WAL logs)
RAID 10: 4x400GB SSD (/var/lib/pgsql)
/etc/sysctl.conf:
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
vm.overcommit_memory = 0
vm.swappiness = 0
vm