Re: [PERFORM] SSL encryption makes bytea transfer slow

2011-11-04 Thread Albe Laurenz
Marti Raudsepp wrote:
>> Disabling OpenSSL compression in the source (which
>> is possible since OpenSSL 1.0.0) does not give me any performance
>> improvement.
> 
> If it doesn't give you any performance improvement then you haven't
> disabled compression. Modern CPUs can easily saturate 1 GbitE with
> AES256-encrypted connections. Compression is usually the bottleneck,
> at 20-30 MB/s.

Hmm, my knowledge of OpenSSL is so limited that it is quite possible
that I did it wrong. I have attached the small patch I used; can you see
where I went wrong?

Yours,
Laurenz Albe


[Attachment: ssl.patch]



[PERFORM] Error while vacuuming

2011-11-04 Thread Bhakti Ghatkar
Hi,

While performing a full vacuum, we encountered the error below:


INFO:  vacuuming "pg_catalog.pg_index"
vacuumdb: vacuuming of database "" failed: ERROR:  duplicate key value
violates unique constraint "c"
DETAIL:  Key (indexrelid)=(2678) already exists.

We are using Postgres 9.0.1

Can you please help us out in understanding the cause of this error?

Regards,
Bhakti


Re: [PERFORM] Error while vacuuming

2011-11-04 Thread Guillaume Cottenceau
Hi,

Maybe a corrupt index; have you tried REINDEX?

(BTW, I fail to see how this is related to performance.)
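In case it helps, a minimal sketch of what I mean (untested; reindexing
pg_index itself may have to be done from a standalone backend started
with -P so that system indexes are ignored; "mydb" is just a placeholder
for your database name):

    REINDEX TABLE pg_catalog.pg_index;
    -- or, to rebuild all system catalog indexes, from a standalone
    -- backend (postgres --single -P -D <datadir> mydb):
    REINDEX SYSTEM mydb;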


Bhakti Ghatkar  writes:

> Hi,
>
> While performing a full vacuum, we encountered the error below:
>
>
> INFO:  vacuuming "pg_catalog.pg_index"
> vacuumdb: vacuuming of database "" failed: ERROR:  duplicate key value
> violates unique constraint "c"
> DETAIL:  Key (indexrelid)=(2678) already exists.
>
> We are using Postgres 9.0.1
>
> Can you please help us out in understanding the cause of this error?
>
> Regards,
> Bhakti
>

-- 
Guillaume Cottenceau



Re: [PERFORM] Error while vacuuming

2011-11-04 Thread Tom Lane
Bhakti Ghatkar  writes:
> Hi,
> While performing a full vacuum, we encountered the error below:


> INFO:  vacuuming "pg_catalog.pg_index"
> vacuumdb: vacuuming of database "" failed: ERROR:  duplicate key value
> violates unique constraint "c"
> DETAIL:  Key (indexrelid)=(2678) already exists.

> We are using Postgres 9.0.1

> Can you please help us out in understanding the cause of this error?

Try updating ... that looks suspiciously like a bug that was fixed a few
months ago.
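A minor-release update is just a binary swap, no dump/reload needed; you
can confirm what the server is actually running first with:

    SELECT version();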

regards, tom lane



Re: [PERFORM] Strange query plan

2011-11-04 Thread Robert Haas
On Mon, Oct 31, 2011 at 9:52 AM, Sorbara, Giorgio (CIOK)
 wrote:
> Group  (cost=0.00..4674965.80 rows=200 width=17) (actual time=13.375..550943.592 rows=1 loops=1)
>   ->  Append  (cost=0.00..4360975.94 rows=125595945 width=17) (actual time=13.373..524324.817 rows=125595932 loops=1)
>         ->  Index Scan using f_suipy_pkey on f_suipy  (cost=0.00..5.64 rows=1 width=58) (actual time=0.019..0.019 rows=0 loops=1)
>               Index Cond: ((fk_theme)::text = 'main_py_six_scxc'::text)
>         ->  Seq Scan on f_suipy_main_py_six_scxc f_suipy  (cost=0.00..4360970.30 rows=125595944 width=17) (actual time=13.352..495259.117 rows=125595932 loops=1)
>               Filter: ((fk_theme)::text = 'main_py_six_scxc'::text)
>  Total runtime: 550943.699 ms

How fast do you expect this to run?  It's aggregating 125 million
rows, so that's going to take some time no matter how you slice it.
Unless I'm misreading this, it's actually taking only about 4
microseconds per row, which does not obviously suck.
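For reference, the arithmetic from the plan above:

    550,943.592 ms / 125,595,932 rows = ~0.0044 ms, i.e. ~4.4 microseconds per row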

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [PERFORM] Blocking excessively in FOR UPDATE

2011-11-04 Thread Claudio Freire
On Thu, Nov 3, 2011 at 8:45 PM, Tom Lane  wrote:
>  But before pursuing that idea, probably first you should
> back up and confirm whether the process is actually waiting, or running,
> or just really slow due to CPU contention.  It might be useful to see
> what strace has to say about it.

Thanks for the tip; it seems locks had nothing to do with it.

strace suggests those queries get blocked on write().

This is an explain analyze without for update:

"Index Scan using idx_track_logs_track_id on track_logs
(cost=0.00..26.75 rows=1 width=8) (actual time=0.056..38.119 rows=1
loops=1)"
"  Index Cond: (track_id = )"
"  Filter: ((track_status_id = 1) AND (date >= (now() - '01:00:00'::interval)))"
"  Buffers: shared hit=140 read=3127"
"Total runtime: 38.147 ms"

This is with for update that goes fast:

"LockRows  (cost=0.00..26.76 rows=1 width=14) (actual
time=0.075..37.420 rows=1 loops=1)"
"  Buffers: shared hit=63 read=3205"
"  ->  Index Scan using idx_track_logs_track_id on track_logs
(cost=0.00..26.75 rows=1 width=14) (actual time=0.058..37.402 rows=1
loops=1)"
"Index Cond: (track_id = )"
"Filter: ((track_status_id = 1) AND (date >= (now() -
'01:00:00'::interval)))"
"Buffers: shared hit=62 read=3205"
"Total runtime: 37.462 ms"

I haven't managed to catch one that goes slow yet, but when I did
(and didn't capture the output :( ) it looked roughly like:

"LockRows  (cost=0.00..26.76 rows=1 width=14) (actual time=0.075..37.420 rows=1 loops=1)"
"  Buffers: shared hit=63 read=3205"
"  ->  Index Scan using idx_track_logs_track_id on track_logs  (cost=0.00..26.75 rows=1 width=14) (actual time=0.058..37.402 rows=1 loops=1)"
"        Index Cond: (track_id = )"
"        Filter: ((track_status_id = 1) AND (date >= (now() - '01:00:00'::interval)))"
"        Buffers: shared hit=62 read=3205 written=135"
"Total runtime: 37000.462 ms"

Now, I'm thinking those writes are catching the DB at a bad moment -
we do have regular, very write-intensive peaks.

Maybe I should look into increasing shared_buffers?
Checkpoints are well spread and very fast.

What are those writes about? HOT vacuuming perhaps?



Re: [PERFORM] Blocking excessively in FOR UPDATE

2011-11-04 Thread Robert Haas
On Fri, Nov 4, 2011 at 12:07 PM, Claudio Freire  wrote:
> What are those writes about? HOT vacuuming perhaps?

Every tuple lock requires dirtying the page.  Those writes are all
those dirty pages getting flushed out to disk.  It's possible that the
OS is allowing the writes to happen asynchronously for a while, but
then when you get too much dirty data in the cache, it starts
blocking.
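If you want to confirm where the writes are coming from on the database
side, one way is to sample 9.0's pg_stat_bgwriter view before and after
one of the slow queries (a sketch):

    SELECT buffers_backend,    -- pages written directly by backends (this case)
           buffers_clean,      -- pages written by the background writer
           buffers_checkpoint  -- pages written by checkpoints
    FROM pg_stat_bgwriter;

If buffers_backend jumps, the backends themselves are doing the flushing.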

The only thing I'm fuzzy about is why it's locking so many rows, given
that the output says rows=1.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [PERFORM] Strange query plan

2011-11-04 Thread Sorbara, Giorgio (CIOK)

> -----Original Message-----
> From: Robert Haas [mailto:robertmh...@gmail.com]
> Sent: 04 November 2011 5:07 PM
> To: Sorbara, Giorgio (CIOK)
> Cc: Tomas Vondra; pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Strange query plan
> 
> On Mon, Oct 31, 2011 at 9:52 AM, Sorbara, Giorgio (CIOK)
>  wrote:
> > Group  (cost=0.00..4674965.80 rows=200 width=17) (actual time=13.375..550943.592 rows=1 loops=1)
> >   ->  Append  (cost=0.00..4360975.94 rows=125595945 width=17) (actual time=13.373..524324.817 rows=125595932 loops=1)
> >         ->  Index Scan using f_suipy_pkey on f_suipy  (cost=0.00..5.64 rows=1 width=58) (actual time=0.019..0.019 rows=0 loops=1)
> >               Index Cond: ((fk_theme)::text = 'main_py_six_scxc'::text)
> >         ->  Seq Scan on f_suipy_main_py_six_scxc f_suipy  (cost=0.00..4360970.30 rows=125595944 width=17) (actual time=13.352..495259.117 rows=125595932 loops=1)
> >               Filter: ((fk_theme)::text = 'main_py_six_scxc'::text)
> >  Total runtime: 550943.699 ms
> 
> How fast do you expect this to run?  It's aggregating 125 million
> rows, so that's going to take some time no matter how you slice it.
> Unless I'm misreading this, it's actually taking only about 4
> microseconds per row, which does not obviously suck.

Well, the problem is not how long it takes to process one row, but rather the
best query plan I can expect to get. I don't mean the planner is wrong; I was
expecting a feature that is not there (yet): we don't have pure index scans.
Fair enough. So I have approached the problem in a different way: getting rid
of the degenerate dimensions and exploiting a "useless" dimension table.
It's a workaround, but it actually seems to work :) Now I have a ~350 million
row fact table and no partitions, and I am happy to get the data I want in
1 second or less.
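Roughly like this, with hypothetical names (the point being that the fact
table now carries a small surrogate key instead of the repeated text
value):

    CREATE TABLE d_theme (
        theme_id  smallint PRIMARY KEY,
        theme     text NOT NULL UNIQUE   -- the old fk_theme text
    );

    SELECT count(*)                      -- or whatever aggregate you compute
    FROM f_suipy f
    JOIN d_theme d ON d.theme_id = f.theme_id
    WHERE d.theme = 'main_py_six_scxc';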


> 
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company



Re: [PERFORM] Blocking excessively in FOR UPDATE

2011-11-04 Thread Kevin Grittner
Claudio Freire  wrote:
 
> Now, I'm thinking those writes are catching the DB at a bad moment -
> we do have regular, very write-intensive peaks.
>
> Maybe I should look into increasing shared_buffers?
 
As already pointed out, SELECT FOR UPDATE will require a disk write
of the tuple(s) read.  If these are glutting, increasing
shared_buffers would tend to make things worse.  You might want to
make the background writer more aggressive or *reduce*
shared_buffers to better spread the output.
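For example, as starting points only (these are the standard background
writer knobs; the right values depend on your hardware and workload):

    bgwriter_delay = 200ms          # the default; don't go higher
    bgwriter_lru_maxpages = 1000    # allow more writes per round (default 100)
    bgwriter_lru_multiplier = 4.0   # clean further ahead of demand (default 2.0)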
 
Hopefully you have a BBU RAID controller configured for write-back. 
If not, you should.  If you do, another thing which might help is
increasing the cache on that controller.  Or you could move WAL to a
separate file system on a separate controller with its own BBU
write-back cache.
 
-Kevin



Re: [PERFORM] Blocking excessively in FOR UPDATE

2011-11-04 Thread Claudio Freire
On Fri, Nov 4, 2011 at 1:26 PM, Kevin Grittner
 wrote:
> As already pointed out, SELECT FOR UPDATE will require a disk write
> of the tuple(s) read.  If these are glutting, increasing
> shared_buffers would tend to make things worse.

I thought shared_buffers improved write caching.
We do tend to write onto the same rows over and over.

> Hopefully you have a BBU RAID controller configured for write-back.
> If not you should.  If you do, another thing which might help is
> increasing the cache on that controller.

Write-back on the RAID is another issue.
The controller is old and has a very small cache, we're already in the
process of securing a replacement.

> Or you could move WAL to a
> separate file system on a separate controller with its own BBU
> write-back cache.

WAL is already on a separate controller and array.



Re: [PERFORM] Blocking excessively in FOR UPDATE

2011-11-04 Thread Kevin Grittner
Claudio Freire  wrote:
> On Fri, Nov 4, 2011 at 1:26 PM, Kevin Grittner
>  wrote:
>> As already pointed out, SELECT FOR UPDATE will require a disk
>> write of the tuple(s) read.  If these are glutting, increasing
>> shared_buffers would tend to make things worse.
> 
> I thought shared_buffers improved write caching.
> We do tend to write onto the same rows over and over.
 
PostgreSQL is very aggressive about holding on to dirty buffers as
long as possible, in hopes of reducing duplicate page writes.  This
can work against the goal of consistent latency.  In our shop we
needed to make the background writer more aggressive and keep shared
buffers in the 0.5GB to 2GB range (depending on hardware and
workload) to prevent write gluts leading to latency spikes.  In the
mildly surprising department, the OS seemed to do a decent job of
spotting pages receiving repeated writes and holding back on an OS-level
write for them, while pushing other pages out in a more timely fashion
-- there was no discernible increase in OS write activity from
making these changes.  I know other people have had other
experiences, based on their workloads (and OS versions?).
 
Before anything else, you might want to make sure you've spread your
checkpoint activity as much as possible by setting
checkpoint_completion_target = 0.9.
 
-Kevin



Re: [PERFORM] Blocking excessively in FOR UPDATE

2011-11-04 Thread Claudio Freire
On Fri, Nov 4, 2011 at 2:07 PM, Kevin Grittner
 wrote:
> Before anything else, you might want to make sure you've spread your
> checkpoint activity as much as possible by setting
> checkpoint_completion_target = 0.9.

We have

shared_buffers = 2GB
bgwriter_delay = 1000ms
effective_io_concurrency = 8
synchronous_commit = off
wal_buffers = 16MB
wal_writer_delay = 2000ms
commit_delay = 10000
checkpoint_segments = 72
checkpoint_timeout = 60min
checkpoint_completion_target = 0.8

I'm thinking bgwriter_delay and wal_writer_delay might not be working
as I expected, and that maybe checkpoint_segments=72 is a bit too
high, but we were having much worse I/O storms before I pushed it that
high. Looking at checkpoint logging for the last few days, it almost
always goes like:

checkpoint complete: wrote 589 buffers (3.6%); 0 transaction log
file(s) added, 0 removed, 8 recycled; write=590.325 s, sync=0.055 s,
total=590.417 s

590s seems an awful lot for 589 buffers.



Re: [PERFORM] Blocking excessively in FOR UPDATE

2011-11-04 Thread Shaun Thomas

On 11/04/2011 12:22 PM, Claudio Freire wrote:

> bgwriter_delay = 1000ms
> wal_writer_delay = 2000ms
> commit_delay = 10000

!?

Maybe someone can back me up on this, but my interpretation of these 
settings suggests they're *way* too high. That commit_delay especially 
makes me want to cry. From the manual:


"Setting commit_delay can only help when there are many concurrently 
committing transactions, and it is difficult to tune it to a value that 
actually helps rather than hurt throughput."


Meaning it may hold all of your commits for up to *ten seconds* if it
doesn't think there was enough activity to warrant a write. Ouch.


Your bgwriter_delay and wal_writer_delay settings are equally odd. 
You've made the background writer so passive that when it does finally 
run, it's going to have a ton of work to do, causing giant write spikes. 
I'm not sure whether or not this also causes compounding problems with 
locks and backend write delays.



> checkpoint complete: wrote 589 buffers (3.6%); 0 transaction log
> file(s) added, 0 removed, 8 recycled; write=590.325 s, sync=0.055 s,
> total=590.417 s
>
> 590s seems an awful lot for 589 buffers.


You're misinterpreting this. The checkpoint completion target is 
multiplied against your checkpoint timeout: with checkpoint_timeout=60min 
and a target of 0.8, it had *up to* 48 minutes to spread the writes out. 
590 seconds is roughly ten minutes, and for 589 buffers, it wrote one per 
second. That's about as slowly as it can possibly write that many 
buffers, and if it had more buffers available to write, it would have 
written them. The number you really care about is "sync=0.055 s", which 
is how much time the controller spent syncing that data to disk.


If you're having real problems writing or lock delays due to IO stalls, 
you'll see that sync parameter shoot way up. This can also be elevated 
in certain NVRAM-based solutions. Once you start seeing whole seconds, 
or minutes, it might actually matter.


--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com




Re: [PERFORM] Blocking excessively in FOR UPDATE

2011-11-04 Thread Claudio Freire
On Fri, Nov 4, 2011 at 3:26 PM, Shaun Thomas  wrote:
> On 11/04/2011 12:22 PM, Claudio Freire wrote:
>
>> bgwriter_delay = 1000ms
>> wal_writer_delay = 2000ms
>> commit_delay = 10000
>
> !?
>snip
> "Setting commit_delay can only help when there are many concurrently
> committing transactions, and it is difficult to tune it to a value that
> actually helps rather than hurt throughput."
>
> Meaning it may hold all of your commits for up to *ten seconds* if it
> doesn't think there was enough activity to warrant a write. Ouch.

I think you're misinterpreting the value.
It's in microseconds; that's 10 *milli*seconds.

> Your bgwriter_delay and wal_writer_delay settings are equally odd. You've
> made the background writer so passive that when it does finally run, it's
> going to have a ton of work to do, causing giant write spikes. I'm not sure
> whether or not this also causes compounding problems with locks and backend
> write delays.

I don't think 1 second can make such a big difference for the bgwriter,
but I might be wrong.

The wal_writer_delay makes me doubt, though: if logged activity were
higher than 8 MB/s (enough to fill the 16 MB of wal_buffers between
wakeups), that setting would block it all. I guess I really should
lower it.

> The number you
> really care about is "sync=0.055 s" which is how much time the controller
> spent syncing that data to disk.
>
> If you're having real problems writing or lock delays due to IO stalls,
> you'll see that sync parameter shoot way up. This can also be elevated in
> certain NVRAM-based solutions. Once you start seeing whole seconds, or
> minutes, it might actually matter.

Nice to know, I thought so, now I know so.
:-)

So... I'm leaning towards lowering wal_writer_delay and see how it goes.



Re: [PERFORM] Strange query plan

2011-11-04 Thread Robert Haas
On Fri, Nov 4, 2011 at 12:14 PM, Sorbara, Giorgio (CIOK)
 wrote:
>> How fast do you expect this to run?  It's aggregating 125 million
>> rows, so that's going to take some time no matter how you slice it.
>> Unless I'm misreading this, it's actually taking only about 4
>> microseconds per row, which does not obviously suck.
>
> Well, the problem is not how long it takes to process one row, but rather
> the best query plan I can expect to get. I don't mean the planner is wrong;
> I was expecting a feature that is not there (yet): we don't have pure index
> scans. Fair enough. So I have approached the problem in a different way:
> getting rid of the degenerate dimensions and exploiting a "useless"
> dimension table.
> It's a workaround, but it actually seems to work :) Now I have a ~350
> million row fact table and no partitions, and I am happy to get the data
> I want in 1 second or less.

Am I misreading the EXPLAIN ANALYZE output?  I'm reading that to say
that there were 125 million rows in the table that matched your filter
condition.  If that's correct, I don't think index-only scans (which
will be in 9.2) are going to help you much - it might be faster, but
it's definitely not going to be anything like instantaneous.

On the flip side, if I *am* misreading the output and the number of
rows needed to compute the aggregate is actually some very small
number, then you ought to be getting an index scan even in older
versions.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [PERFORM] Blocking excessively in FOR UPDATE

2011-11-04 Thread Robert Haas
On Fri, Nov 4, 2011 at 2:45 PM, Claudio Freire  wrote:
> I don't think 1 second can be such a big difference for the bgwriter,
> but I might be wrong.

Well, the default value is 200 ms.  And I've never before heard of
anyone tuning it up, except maybe to save on power consumption on a
system with very low utilization.  Nearly always you want to reduce
it.

> The wal_writer makes me doubt, though. If logged activity was higher
> than 8MB/s, then that setting would block it all.
> I guess I really should lower it.

Here again, you've set it to ten times the default value.  That
doesn't seem like a good idea.  I would start with the default and
tune down.
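That is, for reference, the stock values:

    bgwriter_delay = 200ms
    wal_writer_delay = 200ms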

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [PERFORM] Blocking excessively in FOR UPDATE

2011-11-04 Thread Shaun Thomas

On 11/04/2011 01:45 PM, Claudio Freire wrote:

> I think you're misinterpreting the value.
> It's in microseconds; that's 10 *milli*seconds.


Wow. My brain totally skimmed over that section. Everything else is in 
milliseconds, so I never even considered it. Sorry about that!


I stand by everything else though. ;)


--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com




Re: [PERFORM] Blocking excessively in FOR UPDATE

2011-11-04 Thread Claudio Freire
On Fri, Nov 4, 2011 at 3:54 PM, Robert Haas  wrote:
> On Fri, Nov 4, 2011 at 2:45 PM, Claudio Freire  wrote:
>> I don't think 1 second can be such a big difference for the bgwriter,
>> but I might be wrong.
>
> Well, the default value is 200 ms.  And I've never before heard of
> anyone tuning it up, except maybe to save on power consumption on a
> system with very low utilization.  Nearly always you want to reduce
> it.

Will try

>> The wal_writer makes me doubt, though. If logged activity was higher
>> than 8MB/s, then that setting would block it all.
>> I guess I really should lower it.
>
> Here again, you've set it to ten times the default value.  That
> doesn't seem like a good idea.  I would start with the default and
> tune down.

Already did that. Waiting to see how it turns out.



Re: [PERFORM] SSL encryption makes bytea transfer slow

2011-11-04 Thread Heikki Linnakangas

On 04.11.2011 10:43, Albe Laurenz wrote:
> Marti Raudsepp wrote:
>>> Disabling OpenSSL compression in the source (which
>>> is possible since OpenSSL 1.0.0) does not give me any performance
>>> improvement.
>>
>> If it doesn't give you any performance improvement then you haven't
>> disabled compression. Modern CPUs can easily saturate 1 GbitE with
>> AES256-encrypted connections. Compression is usually the bottleneck,
>> at 20-30 MB/s.
>
> Hmm, my knowledge of OpenSSL is so limited that it is quite possible
> that I did it wrong. I have attached the small patch I used; can you see
> where I went wrong?


That only works with OpenSSL 1.0.0 - did you upgrade? I thought you were 
using 0.9.7a earlier.


FWIW, it would be better to test "#ifdef SSL_OP_NO_COMPRESSION" 
directly, rather than the version number.
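Something along these lines (a sketch; SSL_context stands for whatever 
the backend's SSL_CTX pointer is named in be-secure.c):

    #ifdef SSL_OP_NO_COMPRESSION
        SSL_CTX_set_options(SSL_context, SSL_OP_NO_COMPRESSION);
    #endif

That way it keeps compiling against older OpenSSL versions that lack 
the option.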


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
