On Tue, Oct 11, 2011 at 5:33 PM, Arjen van der Meijden
wrote:
> That really depends on the chipset/server. The current Intel E56xx chips
> (and the previous E55xx) basically just expect groups of 3 modules per
> processor, but it doesn't really matter whether that's 3x2+3x4 or 6x4 in
> terms of performance.
On 11-10-2011 20:05 Claudio Freire wrote:
On Tue, Oct 11, 2011 at 3:02 PM, alexandre - aldeia digital
wrote:
2) Replace all the memory chips with new ones, instead of keeping the
old (16 GB) + the new (32 GB).
Of course, mixing disables double/triple/whatuple channel, and makes
your memory subsystem correspondingly slower.
On Tue, Oct 11, 2011 at 5:02 PM, alexandre - aldeia digital
wrote:
> The initial change (adding more memory) was made by a Dell technician,
> and he told us that he used the same specification of memory chips.
> But, you know how "it works"... ;)
Yeah, but different size == different specs
On 11-10-2011 15:05, Claudio Freire wrote:
On Tue, Oct 11, 2011 at 3:02 PM, alexandre - aldeia digital
wrote:
2) Replace all the memory chips with new ones, instead of keeping the
old (16 GB) + the new (32 GB).
Of course, mixing disables double/triple/whatuple channel, and makes
your memory subsystem correspondingly slower.
On Tue, Oct 11, 2011 at 3:02 PM, alexandre - aldeia digital
wrote:
> 2) Replace all the memory chips with new ones, instead of keeping the
> old (16 GB) + the new (32 GB).
Of course, mixing disables double/triple/whatuple channel, and makes
your memory subsystem correspondingly slower.
By a lot.
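As a hedged aside on checking this in practice: `dmidecode` is a real Linux tool for dumping the DMI/SMBIOS tables, and one way to verify how the mixed old + new modules actually ended up populating the channels is to list the DIMM sizes and speeds per slot (exact output format varies by vendor, and the tool needs root):

```shell
# List how the DIMM slots are actually populated (sizes and speeds).
# dmidecode needs root and may be absent, hence the fallback message.
dimms=$(dmidecode -t memory 2>/dev/null | grep -E 'Size|Speed' \
        || echo "dmidecode unavailable (needs root)")
echo "$dimms"
```

Mismatched sizes or speeds within one channel group would confirm the interleaving concern raised above.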
Hi,
About 3 hours ago, the client contacted Dell and they suggested 2
things:
1) Update the baseboard firmware (the only component that hadn't been
updated yesterday).
2) Replace all the memory chips with new ones, instead of keeping the
old (16 GB) + the new (32 GB).
After doing this, until now, the
On 10/11/2011 04:57 AM, Leonardo Francalanci wrote:
In fact, shouldn't those things be explained in the "WAL
Configuration" section of the manual? They look as important as
configuring PostgreSQL itself...
And: that applies to Linux. What about other OSes, such as Solaris and
FreeBSD?
There's
On Mon, Oct 10, 2011 at 3:26 PM, alexandre - aldeia digital
wrote:
> Hi,
>
> Yesterday, a customer increased the server memory from 16GB to 48GB.
A shot in the dark... what is the content of /proc/mtrr?
Luca
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
On 11-10-2011 03:42, Greg Smith wrote:
On 10/10/2011 01:31 PM, alexandre - aldeia digital wrote:
I dropped checkpoint_timeout to 1min and turned on log_checkpoints:
<2011-10-10 14:18:48 BRT >LOG: checkpoint complete: wrote 6885 buffers
(1.1%); 0 transaction log file(s) added, 0 removed, 1 recycled
On 10-10-2011 23:19, Claudio Freire wrote:
On Tue, Oct 11, 2011 at 12:02 AM, Samuel Gendler
wrote:
The original question doesn't actually say that performance has gone down,
only that cpu utilization has gone up. Presumably, with lots more RAM, it is
blocking on I/O a lot less, so it isn't necessarily surprising that
CPU utilization has gone up.
On 11/10/2011 00:02, Samuel Gendler wrote:
> The original question doesn't actually say that performance has gone down,
> only that cpu utilization has gone up. Presumably, with lots more RAM, it is
> blocking on I/O a lot less, so it isn't necessarily surprising that CPU
> utilization has gone up
> checkpoint_completion_target spreads out the writes to disk. PostgreSQL
> doesn't make any attempt yet to spread out the sync calls. On a busy
> server, what can happen is that the whole OS write cache fills with dirty
> data--none of which is written out to disk because of the high kernel
On 10/10/2011 12:14 PM, Leonardo Francalanci wrote:
database makes the fsync call, and suddenly the OS wants to flush 2-6GB of data
straight to disk. Without that background trickle, you now have a flood that
only the highest-end disk controller or a backing-store full of SSDs or PCIe
NVRAM could ever hope to absorb.
On 10/10/2011 01:31 PM, alexandre - aldeia digital wrote:
I dropped checkpoint_timeout to 1min and turned on log_checkpoints:
<2011-10-10 14:18:48 BRT >LOG: checkpoint complete: wrote 6885
buffers (1.1%); 0 transaction log file(s) added, 0 removed, 1
recycled; write=29.862 s, sync=28.466 s, total=58.651 s
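For context, the checkpoint behaviour in the log line above is governed by a handful of postgresql.conf settings; a sketch with illustrative values (the GUC names are real for the 8.3-9.1 era this thread dates from, but the values are not a recommendation):

```
# postgresql.conf (values illustrative, not a recommendation)
log_checkpoints = on               # produces the lines quoted above
checkpoint_timeout = 5min          # time-based checkpoint trigger
checkpoint_segments = 32           # WAL-volume trigger (pre-9.5 name)
checkpoint_completion_target = 0.5 # spreads the writes, not the syncs
```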
On Tue, Oct 11, 2011 at 12:02 AM, Samuel Gendler
wrote:
> The original question doesn't actually say that performance has gone down,
> only that cpu utilization has gone up. Presumably, with lots more RAM, it is
> blocking on I/O a lot less, so it isn't necessarily surprising that CPU
> utilization has gone up.
On Mon, Oct 10, 2011 at 1:52 PM, Kevin Grittner wrote:
> alexandre - aldeia digital wrote:
>
> > I came to the list to see if anyone else has experienced the same
> > problem
>
> A high load average or low idle CPU isn't a problem, it's a
> potentially useful bit of information in diagnosing a problem.
alexandre - aldeia digital wrote:
> I came to the list to see if anyone else has experienced the same
> problem
A high load average or low idle CPU isn't a problem, it's a
potentially useful bit of information in diagnosing a problem. I
was hoping to hear what the actual problem was, since I'
On 10-10-2011 16:39, Kevin Grittner wrote:
alexandre - aldeia digital wrote:
From the point of view of the client, the question is simple:
until last Friday (with 16 GB of RAM), the load average of the
server rarely surpassed 4. Nothing changed in normal database use.
Really? The application still performs as well or better, and
On 10/10/2011 12:31 PM, alexandre - aldeia digital wrote:
<2011-10-10 14:18:48 BRT >LOG: checkpoint complete: wrote 6885 buffers
(1.1%); 0 transaction log file(s) added, 0 removed, 1 recycled;
write=29.862 s, sync=28.466 s, total=58.651 s
28.466s sync time?! That's horrifying. At this point, I
alexandre - aldeia digital wrote:
> From the point of view of the client, the question is simple:
> until last Friday (with 16 GB of RAM), the load average of the
> server rarely surpassed 4. Nothing changed in normal database use.
Really? The application still performs as well or better, and
On 10-10-2011 14:46, Kevin Grittner wrote:
alexandre - aldeia digital wrote:
Notice that we have no idle % in the cpu column.
So they're making full use of all the CPUs they paid for. That in
itself isn't a problem. Unfortunately you haven't given us nearly
enough information to know whether there is indeed a problem, or if
so, what.
alexandre - aldeia digital wrote:
> Notice that we have no idle % in the cpu column.
So they're making full use of all the CPUs they paid for. That in
itself isn't a problem. Unfortunately you haven't given us nearly
enough information to know whether there is indeed a problem, or if
so, what.
On 10-10-2011 11:04, Shaun Thomas wrote:
That's not entirely surprising. The problem with having lots of memory
is... that you have lots of memory. The operating system likes to cache,
and this includes writes. Normally this isn't a problem, but with 48GB
of RAM, the defaults (for CentOS 5.5 in particular) are to use up to
> Then the
> database makes the fsync call, and suddenly the OS wants to flush 2-6GB of
> data
> straight to disk. Without that background trickle, you now have a flood that
> only the highest-end disk controller or a backing-store full of SSDs or PCIe
> NVRAM could ever hope to absorb.
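A back-of-the-envelope sketch of why that flood can be so large, plus the Linux knobs involved (the sysctls named here are real; the defaults shown are the CentOS 5.5-era ones discussed in this thread, and the lowered values at the end are illustrative only):

```shell
# CentOS 5.5-era defaults: vm.dirty_ratio=40, vm.dirty_background_ratio=10.
# On a 48 GB box, up to 40% of RAM may sit as unwritten dirty cache:
dirty_gb=$(( 48 * 40 / 100 ))
echo "worst-case dirty cache: ${dirty_gb} GB"   # 19 GB

# Inspect the live values (present on any Linux kernel):
cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio \
    2>/dev/null || true

# Lowering them makes the kernel trickle writes out sooner, e.g.:
#   sysctl -w vm.dirty_background_ratio=1
#   sysctl -w vm.dirty_ratio=10
```

With the defaults, the background trickle only starts at 10% of 48 GB, which matches the multi-gigabyte fsync stall described above.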
Isn
On 10/10/2011 10:04 AM, Shaun Thomas wrote:
The problem with having lots of memory is... that you have lots of
memory. The operating system likes to cache, and this includes writes.
Normally this isn't a problem, but with 48GB of RAM, the defaults (for
CentOS 5.5 in particular) are to use up to
On 10/10/2011 10:14 AM, Leonardo Francalanci wrote:
I don't understand: don't you want postgresql to issue the fsync
calls when it "makes sense" (and configure them), rather than having
the OS decide when it's best to flush to disk? That is: don't you
want all the memory to be used for caching,
> That's not entirely surprising. The problem with having lots of memory is...
> that you have lots of memory. The operating system likes to cache, and this
> includes writes. Normally this isn't a problem, but with 48GB of RAM, the
> defaults (for CentOS 5.5 in particular) are to use up to 40
On 10/10/2011 08:26 AM, alexandre - aldeia digital wrote:
Yesterday, a customer increased the server memory from 16GB to 48GB.
Today, the load of the server hit 40 ~ 50 points.
With 16 GB, the load did not surpass 5 ~ 8 points.
That's not entirely surprising. The problem with having lots of memory
is... that you have lots of memory.
alexandre - aldeia digital wrote:
> Yesterday, a customer increased the server memory from 16GB to
> 48GB.
That's usually for the better, but be aware that on some hardware
adding RAM beyond a certain point causes slower RAM access. Without
knowing more details, it's impossible to say whether