>> This is poor design on Linux's part, since even the distant RAM is
>> faster than disk. For now, we've been disabling zone_reclaim entirely.
>
> I haven't run into this, but we were running ubuntu 10.04 LTS. What
> kernel were you running when this happened? I'd love to see a test
> case on
On Fri, Aug 3, 2012 at 4:30 PM, Josh Berkus wrote:
> On 7/30/12 10:09 AM, Scott Marlowe wrote:
>> I think the zone_reclaim gets turned on with a high ratio. If the
>> inter node costs were the same, and the intranode costs dropped in
>> half, zone reclaim would likely get turned on at boot time.
On 7/30/12 10:09 AM, Scott Marlowe wrote:
> I think the zone_reclaim gets turned on with a high ratio. If the
> inter node costs were the same, and the intranode costs dropped in
> half, zone reclaim would likely get turned on at boot time.
We've been seeing a major problem with zone_reclaim and
> node distances:
> node 0 1 2 3
> 0: 10 11 11 11
> 1: 11 10 11 11
> 2: 11 11 10 11
> 3: 11 11 11 10
>
> When considering a hardware purchase, it might be wise to pay close
> attention to how "far" a core may need to go to get to the most
> "distant" RAM.
Yikes!
On Mon, Jul 30, 2012 at 10:43 AM, Kevin Grittner wrote:
> node distances:
> node 0 1 2 3
> 0: 10 11 11 11
> 1: 11 10 11 11
> 2: 11 11 10 11
> 3: 11 11 11 10
>
> When considering a hardware purchase, it might be wise to pay close
> attention to how "far" a core may need to go to get to the most
> "distant" RAM.
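For reference, the distance matrix quoted here is exactly what numactl --hardware prints, and it also drives the boot-time behaviour Scott Marlowe describes above: the kernel enables zone_reclaim_mode when the largest inter-node distance exceeds its RECLAIM_DISTANCE threshold (20 in kernels of that era, later raised to 30). A quick way to look at a candidate machine, assuming the numactl package is installed:

# Full NUMA topology: nodes, per-node memory, and the distance table
numactl --hardware

# Just the distance matrix
numactl --hardware | grep -A6 'node distances'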
Greg Smith wrote:
> You can tell if this is turned on like this:
>
> cat /proc/sys/vm/zone_reclaim_mode
As a data point, the machine I used to benchmark some of the 9.2
scalability features does not appear to have this turned on:
# cat /proc/sys/vm/zone_reclaim_mode
0
Our Linux version:
Linux
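For anyone who wants to collect the same data points on their own box, a minimal check (standard procfs/sysctl interfaces, nothing distribution-specific assumed):

# Kernel version, since the behaviour discussed here showed up around 2.6.32
uname -r

# Current setting: 0 = zone reclaim disabled, non-zero = some form of
# zone reclaim is active
cat /proc/sys/vm/zone_reclaim_mode
# equivalent:
sysctl vm.zone_reclaim_mode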
On Tue, Jul 17, 2012 at 09:52:11PM -0400, Greg Smith wrote:
> I've taken to disabling /proc/sys/vm/zone_reclaim_mode on any Linux
> system where it's turned on now. I'm still working through whether
> it also makes sense in all cases to use the more complicated m
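For those who want to do the same, a minimal sketch of disabling it (the sysctl name is standard; where the persistent setting lives can vary by distribution):

# Turn zone reclaim off immediately
sysctl -w vm.zone_reclaim_mode=0

# Make it stick across reboots
echo "vm.zone_reclaim_mode = 0" >> /etc/sysctl.conf
# (or a file under /etc/sysctl.d/ on distributions that use it)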
On Tue, Jul 24, 2012 at 6:23 PM, John Lister wrote:
> Cheers, I'll give it a go, I wonder if this is likely to be integrated into
> the main code? As has been mentioned here before, postgresql isn't as badly
> affected as mysql for example, but I'm wondering if the trend to larger
> memory and mor
My experience is that disabling swap and turning off zone_reclaim_mode
gets rid of any real problem for a large-memory PostgreSQL database
server. While it would be great to have a NUMA-aware pgsql, I
question the solidity and reliability of the current Linux kernel
implementation in a NUMA environment.
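A sketch of the swap half of that advice (assuming you really do want swap gone rather than merely discouraged):

# Turn off all active swap devices now
swapoff -a
# ...and comment out the swap entries in /etc/fstab so they stay off
# after a reboot. A softer alternative is to keep swap but make the
# kernel very reluctant to use it:
sysctl -w vm.swappiness=0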
On 24/07/2012 21:12, Claudio Freire wrote:
On Tue, Jul 24, 2012 at 3:41 PM, Claudio Freire wrote:
On Tue, Jul 24, 2012 at 3:36 PM, John Lister wrote:
Do you have a suggestion about how to do that? I'm running Ubuntu 12.04 and
PG 9.1, I've modified pg_ctlcluster to cause pg_ctl to use a wrapper script
On Tue, Jul 24, 2012 at 5:12 PM, Claudio Freire wrote:
> Something like the attached patch (untested)
Sorry, on that patch, MPOL_INTERLEAVE should be MPOL_DEFAULT
On Tue, Jul 24, 2012 at 3:41 PM, Claudio Freire wrote:
> On Tue, Jul 24, 2012 at 3:36 PM, John Lister wrote:
>> Do you have a suggestion about how to do that? I'm running Ubuntu 12.04 and
>> PG 9.1, I've modified pg_ctlcluster to cause pg_ctl to use a wrapper script
>> which starts the postmaster using a numactl wrapper
On Tue, Jul 24, 2012 at 3:36 PM, John Lister wrote:
> Do you have a suggestion about how to do that? I'm running Ubuntu 12.04 and
> PG 9.1, I've modified pg_ctlcluster to cause pg_ctl to use a wrapper script
> which starts the postmaster using a numactl wrapper, but all subsequent
> client process
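For illustration, a wrapper of the kind described might look roughly like this (untested; the binary path is just what a stock Ubuntu 12.04 / PostgreSQL 9.1 install would have):

#!/bin/sh
# Start the postmaster with its allocations -- including shared memory --
# interleaved across all NUMA nodes instead of concentrated on one
exec numactl --interleave=all /usr/lib/postgresql/9.1/bin/postgres "$@"

The wrinkle being discussed is that this only covers the postmaster itself; the processes it forks inherit the same interleave policy unless they reset it themselves.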
On Tue, Jul 18, 2012 at 2:38 AM, Claudio Freire wrote:
> It must have been said already, but I'll repeat it just in case:
> I think postgres has an easy solution. Spawn the postmaster with
> "interleave", to allocate shared memory, and then switch to "local" on
> the backends.
Do you have a suggestion about how to do that?
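That suggestion really has two halves: start the postmaster under an interleaved policy (which can be done externally with numactl, as in the wrapper sketch above), and have each backend switch back to node-local allocation after fork(), which is what the untested patch mentioned above does with MPOL_DEFAULT. From the shell you can at least verify what policy the postmaster's memory actually got (data directory path is just an example):

# The first line of postmaster.pid is the postmaster's PID
PGDATA=/var/lib/postgresql/9.1/main
head -3 /proc/$(head -1 "$PGDATA"/postmaster.pid)/numa_maps
# Each line of numa_maps begins with that mapping's policy, e.g.
# "interleave:0-3" or "default", so you can see whether the interleave
# policy took effect and whether backends inherited it.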
On Tue, Jul 17, 2012 at 11:00 PM, Scott Marlowe wrote:
>
> Thanks for the link, I'll read up on it. I do have access to large
> (24 to 40 core) NUMA machines so I might try some benchmarking on them
> to see how they work.
It must have been said already, but I'll repeat it just in case:
I think postgres has an easy solution. Spawn the postmaster with
"interleave", to allocate shared memory, and then switch to "local" on
the backends.
On the larger, cellular Itanium systems with multiple motherboards (rx6600
to Superdome) Oracle has done a lot of tuning with the HP-UX kernel calls
to optimize for NUMA issues. Will be interesting to see what they bring to
Linux.
On Jul 17, 2012 9:01 PM, "Scott Marlowe" wrote:
> On Tue, Jul 17, 2012 at 7:52 PM, Greg Smith wrote:
On Tue, Jul 17, 2012 at 7:52 PM, Greg Smith wrote:
> Newer Linux systems with lots of cores have a problem I've been running into
> a lot more lately I wanted to share initial notes on. My "newer" means
> running the 2.6.32 kernel or later, since I mostly track "enterprise" Linux
> distributions