On 04.11.2011 10:43, Albe Laurenz wrote:
Marti Raudsepp wrote:
>> Disabling OpenSSL compression in the source (which
>> is possible since OpenSSL 1.0.0) does not give me any performance
>> improvement.
> If it doesn't give you any performance improvement then you haven't
> disabled compression. Modern CPUs can easily saturate 1 GbitE
On Fri, Nov 4, 2011 at 3:54 PM, Robert Haas wrote:
> On Fri, Nov 4, 2011 at 2:45 PM, Claudio Freire wrote:
>> I don't think 1 second can be such a big difference for the bgwriter,
>> but I might be wrong.
>
> Well, the default value is 200 ms. And I've never before heard of
> anyone tuning it up, except maybe to save on power consumption on a
> system with
On 11/04/2011 01:45 PM, Claudio Freire wrote:
> I think you're misinterpreting the value.
> It's in microseconds, so that's 10 *milli*seconds.
Wow. My brain totally skimmed over that section. Everything else is in
milliseconds, so I never even considered it. Sorry about that!
I stand by everything
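For reference, pg_settings makes exactly this unit mix-up easy to catch, since it reports each parameter's native unit (a quick sketch, runnable on any recent release):

    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('bgwriter_delay', 'wal_writer_delay', 'commit_delay');
    -- bgwriter_delay and wal_writer_delay report unit 'ms';
    -- commit_delay reports no unit: it is a bare integer in microseconds.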
On Fri, Nov 4, 2011 at 2:45 PM, Claudio Freire wrote:
> I don't think 1 second can be such a big difference for the bgwriter,
> but I might be wrong.
Well, the default value is 200 ms. And I've never before heard of
anyone tuning it up, except maybe to save on power consumption on a
system with
On Fri, Nov 4, 2011 at 12:14 PM, Sorbara, Giorgio (CIOK) wrote:
>> How fast do you expect this to run? It's aggregating 125 million
>> rows, so that's going to take some time no matter how you slice it.
>> Unless I'm misreading this, it's actually taking only about 4
>> microseconds per row, which
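The arithmetic behind that per-row figure, taken from the plan quoted elsewhere in this thread (a back-of-the-envelope check, not part of the original mail):

    SELECT 550943.592 * 1000 / 125595932 AS usec_per_row;
    -- total runtime in ms, times 1000, divided by the 125595932 rows
    -- produced by the Append node: roughly 4.4 microseconds per row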
On Fri, Nov 4, 2011 at 3:26 PM, Shaun Thomas wrote:
> On 11/04/2011 12:22 PM, Claudio Freire wrote:
>
>> bgwriter_delay = 1000ms
>> wal_writer_delay = 2000ms
>> commit_delay = 10000
>
> !?
> [snip]
> "Setting commit_delay can only help when there are many concurrently
> committing transactions, and it is
On 11/04/2011 12:22 PM, Claudio Freire wrote:
bgwriter_delay = 1000ms
wal_writer_delay = 2000ms
commit_delay = 10000
!?
Maybe someone can back me up on this, but my interpretation of these
settings suggests they're *way* too high. That commit_delay especially
makes me want to cry. From the manual:
On Fri, Nov 4, 2011 at 2:07 PM, Kevin Grittner wrote:
> Before anything else, you might want to make sure you've spread your
> checkpoint activity as much as possible by setting
> checkpoint_completion_target = 0.9.
We have:
shared_buffers = 2G
bgwriter_delay = 1000ms
effective_io_concurrency = 8
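Spelled out, Kevin's suggestion amounts to something like this (a sketch; edit postgresql.conf, then reload rather than restart):

    -- in postgresql.conf: checkpoint_completion_target = 0.9
    SELECT pg_reload_conf();            -- as superuser, after saving the file
    SHOW checkpoint_completion_target;  -- should now report 0.9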
Claudio Freire wrote:
> On Fri, Nov 4, 2011 at 1:26 PM, Kevin Grittner wrote:
>> As already pointed out, SELECT FOR UPDATE will require a disk
>> write of the tuple(s) read. If these are glutting, increasing
>> shared_buffers would tend to make things worse.
>
> I thought shared_buffers improved write caching.
On Fri, Nov 4, 2011 at 1:26 PM, Kevin Grittner wrote:
> As already pointed out, SELECT FOR UPDATE will require a disk write
> of the tuple(s) read. If these are glutting, increasing
> shared_buffers would tend to make things worse.
I thought shared_buffers improved write caching.
We do tend to w
Claudio Freire wrote:
> Now, I'm thinking those writes are catching the DB at a bad moment -
> we do have regular very write-intensive peaks.
>
> Maybe I should look into increasing shared buffers?
As already pointed out, SELECT FOR UPDATE will require a disk write
of the tuple(s) read. If these are glutting, increasing
shared_buffers would tend to make things worse.
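One way to see where the write glut actually comes from is pg_stat_bgwriter (a sketch; these columns all exist in 9.0). A high buffers_backend relative to buffers_clean and buffers_checkpoint means backends are writing dirty pages themselves, which is the symptom under discussion:

    SELECT checkpoints_timed, checkpoints_req,
           buffers_checkpoint, buffers_clean, buffers_backend
    FROM pg_stat_bgwriter;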
> -----Original Message-----
> From: Robert Haas [mailto:robertmh...@gmail.com]
> Sent: 04 November 2011 5:07 PM
> To: Sorbara, Giorgio (CIOK)
> Cc: Tomas Vondra; pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Strange query plan
>
> On Mon, Oct 31, 2011 at 9:52 AM, Sorbara, Giorgio (CI
On Fri, Nov 4, 2011 at 12:07 PM, Claudio Freire wrote:
> What are those writes about? HOT vacuuming perhaps?
Every tuple lock requires dirtying the page. Those writes are all
those dirty pages getting flushed out to disk. It's possible that the
OS is allowing the writes to happen asynchronously
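To make the mechanism concrete, a minimal sketch (table and column names are hypothetical): each row returned FOR UPDATE gets the locking transaction's ID stamped into its tuple header, so every locked row dirties its page even though the statement looks read-only.

    BEGIN;
    SELECT id FROM job_queue
    WHERE state = 'pending'
    LIMIT 100
    FOR UPDATE;  -- stamps xmax on each returned tuple, dirtying those pages
    -- ... process the claimed rows ...
    COMMIT;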
On Thu, Nov 3, 2011 at 8:45 PM, Tom Lane wrote:
> But before pursuing that idea, probably first you should
> back up and confirm whether the process is actually waiting, or running,
> or just really slow due to CPU contention. It might be useful to see
> what strace has to say about it.
Thanks
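Before reaching for strace, the same waiting-or-running question can be asked from inside the database (a sketch using the 9.0-era column names; the pid is a placeholder). A backend blocked on a lock shows waiting = true; a busy one shows its query and no wait:

    SELECT procpid, waiting, current_query
    FROM pg_stat_activity
    WHERE procpid = 12345;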
On Mon, Oct 31, 2011 at 9:52 AM, Sorbara, Giorgio (CIOK) wrote:
> Group (cost=0.00..4674965.80 rows=200 width=17) (actual time=13.375..550943.592 rows=1 loops=1)
>   -> Append (cost=0.00..4360975.94 rows=125595945 width=17) (actual time=13.373..524324.817 rows=125595932 loops=1)
>
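For readers skimming: a plan of this shape, a Group node directly over an Append of partition scans, comes from a query of roughly this form (names are hypothetical stand-ins; the original query is upthread):

    SELECT some_dimension
    FROM partitioned_fact_table   -- the Append node scans every partition
    GROUP BY some_dimension;      -- rows=1: one distinct value in ~125M rows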
Bhakti Ghatkar writes:
> Hi,
> While performing full vacuum we encountered the error below:
> INFO: vacuuming "pg_catalog.pg_index"
> vacuumdb: vacuuming of database "" failed: ERROR: duplicate key value
> violates unique constraint "c"
> DETAIL: Key (indexrelid)=(2678) already exists.
Hi,
Maybe a corrupt index; have you tried REINDEX?
(BTW, I fail to see how this is related to performance.)
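Spelled out, that suggestion would look something like the following (a sketch; reindexing system catalogs requires a superuser, and a badly corrupted shared index can require a standalone backend instead):

    REINDEX TABLE pg_catalog.pg_index;
    -- or rebuild every system catalog index in the current database:
    REINDEX SYSTEM dbname;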
Bhakti Ghatkar writes:
> Hi,
>
> While performing full vacuum we encountered the error below:
>
>
> INFO: vacuuming "pg_catalog.pg_index"
> vacuumdb: vacuuming of database "" failed:
Hi,
While performing full vacuum we encountered the error below:
INFO: vacuuming "pg_catalog.pg_index"
vacuumdb: vacuuming of database "" failed: ERROR: duplicate key value
violates unique constraint "c"
DETAIL: Key (indexrelid)=(2678) already exists.
We are using Postgres 9.0.1
Marti Raudsepp wrote:
>> Disabling OpenSSL compression in the source (which
>> is possible since OpenSSL 1.0.0) does not give me any performance
>> improvement.
>
> If it doesn't give you any performance improvement then you haven't
> disabled compression. Modern CPUs can easily saturate 1 GbitE
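As an aside for anyone re-checking this on a later release (not applicable to the 9.0-era servers in this thread, since the view only appeared in 9.5): pg_stat_ssl reports per connection whether SSL compression is actually in effect.

    SELECT pid, ssl, compression
    FROM pg_stat_ssl;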