[PERFORM] caching written values?

2009-01-22 Thread Thomas Finneid

Hi

A quick question: when pg receives data to be written to a table, does
it cache that data in memory in case a subsequent request/query would
need it?


As I understand it, data is stored in pages and those pages have to be
retrieved in order to write or read data from them. So my assumption is
that a page used to write data would not be replaced until memory is low
and different pages need to be retrieved. Is this approximately correct?


Thomas



Re: [PERFORM] caching written values?

2009-01-22 Thread Pavan Deolasee
On Thu, Jan 22, 2009 at 4:42 PM, Thomas Finneid  wrote:

>
> As I understand it, data is stored in pages and those pages have to be
> retrieved in order to write or read data from them. So my assumption is that
> a page used to write data would not be replaced until memory is low and
> different pages need to be retrieved. Is this approximately correct?
>

Yes. That's how it works.

Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com



Re: [PERFORM] caching written values?

2009-01-22 Thread Glyn Astill
> 
> A quick question, when pg receives data to be written to a
> table, does it cache that data in memory in case a
> subsequent request/query would need it?
> 

AFAIK all pages are modified in memory, so the modified data would still be
cached.
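
If you want to see what is actually sitting in shared_buffers, the
contrib/pg_buffercache module can show it (a sketch, assuming the module
is installed; the query follows the module's documented example):

  SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c ON b.relfilenode = c.relfilenode
   GROUP BY c.relname
   ORDER BY buffers DESC
   LIMIT 10;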






Re: [PERFORM] caching written values?

2009-01-22 Thread Thomas Finneid

Pavan Deolasee wrote:

On Thu, Jan 22, 2009 at 4:42 PM, Thomas Finneid  wrote:


As I understand it, data is stored in pages and those pages have to be
retrieved in order to write or read data from them. So my assumption is that
a page used to write data would not be replaced until memory is low and
different pages needs to be retrieved. Is this approximately correct?



Yes. That's how it works.


Is there any possibility of telling pg to save that memory-cached data
and state to disk when the server is shut down, so that when the server
starts up again, it reads it back into memory?


regards

thomas



Re: [PERFORM] caching written values?

2009-01-22 Thread Thomas Finneid

(Sorry, did not include the list in the reply)

Pavan Deolasee wrote:


Yes. That's how it works.


Is that how it works for an index as well? I just found out that I have
an index that is 35GB, and the table is 85GB. (I will look into the
index; it works fine, but an index that is almost one third of the size
of the table seems a little bit strange.)
So if it works the same way and the index uses a B-tree, I assume it
only loads the pages that contain the parts of the index that are
relevant, is this correct?


thomas



Re: [PERFORM] caching indexes and pages?

2009-01-22 Thread Robert Haas
> I tried work_mem and maintenance_work_mem but it does not seem to make much
> difference yet. Admittedly I had set them to 100M and 80M, so after reading a
> little bit more I have found that I could easily set them to several GBs. But
> I am not sure those are the correct config parameters to use for this. I
> haven't found any others that are relevant so far.

You probably want to increase shared_buffers by quite a large amount
and maybe make work_mem not quite so big.

If you have 2GB of memory you might set shared_buffers to 1GB,
work_mem 16MB, maintenance_work_mem 64MB?
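
In postgresql.conf terms that would be something like (a sketch of the
values above, only a starting point; shared_buffers needs a restart and
a big enough kernel SHMMAX):

  shared_buffers = 1GB          # needs restart
  work_mem = 16MB               # per sort/hash operation, so keep it modest
  maintenance_work_mem = 64MB   # VACUUM, CREATE INDEX, etc.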

...Robert



Re: [PERFORM] caching written values?

2009-01-22 Thread Robert Haas
> Is that how it works for an index as well? I just found out that I have an
> index that is 35GB, and the table is 85GB. (I will look into the index; it
> works fine, but an index that is almost one third of the size of the table
> seems a little bit strange.)
> So if it works the same way and the index uses a B-tree, I assume it only
> loads the pages that contain the parts of the index that are relevant, is
> this correct?

Yes.

See shared_buffers:

http://www.postgresql.org/docs/8.3/static/runtime-config-resource.html
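
For example, to compare table and index sizes (a sketch with made-up
names):

  SELECT pg_size_pretty(pg_relation_size('my_table'))     AS table_size,
         pg_size_pretty(pg_relation_size('my_table_idx')) AS index_size;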

...Robert



[PERFORM] Slow HashAggregate : How to optimize ?

2009-01-22 Thread Bruno Baguette

Hello !

I'm having a problem with a query that takes more or less 3.2 seconds
to execute.


This query uses a view which encapsulates some calculations (in order to
avoid duplicating these calculations at several places in the project).


In order to keep this post readable, I've put the view details on that
page: 


Here's the EXPLAIN ANALYZE of a query using that view: 



As I read the query plan, the HashAggregate takes nearly 3 seconds,
which represents 90% of the query duration.


How can I see which part of the query causes the HashAggregate to be so 
slow ?


How can I optimize that view to reduce the execution time?

To be accurate, I'm working on PostgreSQL 8.3.5.

Many thanks in advance for any tips about that ! :-)

Best Regards,

--
Bruno Baguette - bruno.bague...@gmail.com



[PERFORM] postgresql 8.3 tps rate

2009-01-22 Thread Ibrahim Harrani
Hi,

I am running postgresql 8.3.5 on FreeBSD with a dual-core Intel(R)
Xeon(R) CPU 3065 @ 2.33GHz, 2GB RAM and Seagate Technology -
Barracuda 7200.10 SATA 3.0Gb/s (RAID 1).

I made several benchmark tests with pgbench; the TPS rate is about 40 +/- 5.
$ pgbench -i pgbench -s 50 -U pgsql

[pg...@$ pgbench -c 200 -t 2 -U pgsql -d pgbench

transaction type: TPC-B (sort of)
scaling factor: 10
number of clients: 200
number of transactions per client: 2
number of transactions actually processed: 400/400
tps = 39.044088 (including connections establishing)
tps = 41.528795 (excluding connections establishing)

[pg...@$ pgbench -c 100 -t 5 -U pgsql -d pgbench

transaction type: TPC-B (sort of)
scaling factor: 10
number of clients: 100
number of transactions per client: 5
number of transactions actually processed: 500/500
tps = 30.162271 (including connections establishing)
tps = 30.643256 (excluding connections establishing)

Is this rate normal or not? What can I do to improve tps and insert
performance?

Here are some changes made in postgresql.conf and sysctl.conf:

/etc/sysctl.conf
#1024 MB shmmax
kern.ipc.shmmax=1073741824
#shmall = shmmax / 4096 (page size)
kern.ipc.shmall=262144
kern.ipc.semmsl=512
kern.ipc.semmap=256

postgresql.conf

shared_buffers = 800MB  # min 128kB or max_connections*16kB
work_mem = 2MB  # min 64kB
maintenance_work_mem = 32MB # min 1MB
max_connections = 600   # (change requires restart)
max_fsm_relations = 2000# min 100, ~70 bytes each
synchronous_commit = off
wal_buffers = 1024kB# min 32kB
checkpoint_segments = 32# in logfile segments, min 1, 16MB each
#checkpoint_timeout = 5min  # range 30s-1h
checkpoint_completion_target = 0.9  # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s   # 0 is off



Re: [PERFORM] Slow HashAggregate : How to optimize ?

2009-01-22 Thread Robert Haas
> I'm having a problem with a query that takes more or less 3.2 seconds to
> execute.
>
> This query uses a view which encapsulates some calculations (in order to
> avoid duplicating these calculations at several places in the project).
>
> In order to keep this post readable, I've put the view details on that page
> : 
>
> Here's the EXPLAIN ANALYZE of a query using that view :
> 
>
> As I read the query plan, the HashAggregate takes nearly 3 seconds, which
> represents 90% of the query duration.

Well, it IS aggregating almost 80,000 rows.  That doesn't sound that
bad to me.  What kind of hardware are you running this on?
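
If you want to see where the time goes, one approach (a sketch; the
names are made up, since the view definition is only on the linked page)
is to run the view's defining query under EXPLAIN ANALYZE once with and
once without the aggregation, and compare the timings:

  -- with aggregation: should show the HashAggregate node
  EXPLAIN ANALYZE SELECT grp_col, sum(some_col) FROM base_table GROUP BY grp_col;

  -- without aggregation: times just the row production
  EXPLAIN ANALYZE SELECT grp_col, some_col FROM base_table;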

...Robert



Re: [PERFORM] postgresql 8.3 tps rate

2009-01-22 Thread Claus Guttesen
> I am running postgresql 8.3.5 on FreeBSD with Dual core Intel(R)
> Xeon(R) CPU 3065  @ 2.33GHz, 2GB RAM and Seagate Technology -
> Barracuda 7200.10 SATA 3.0Gb/ (RAID 1).
>
> I made several benchmark test with pgbench, TPS rate is almost 40 +/- 5.
> $ pgbench -i pgbench -s 50 -U pgsql
>
> [pg...@$ pgbench -c 200 -t 2 -U pgsql -d pgbench
>
> transaction type: TPC-B (sort of)
> scaling factor: 10
> number of clients: 200
> number of transactions per client: 2
> number of transactions actually processed: 400/400
> tps = 39.044088 (including connections establishing)
> tps = 41.528795 (excluding connections establishing)
>
> [pg...@$ pgbench -c 100 -t 5 -U pgsql -d pgbench
>
> transaction type: TPC-B (sort of)
> scaling factor: 10
> number of clients: 100
> number of transactions per client: 5
> number of transactions actually processed: 500/500
> tps = 30.162271 (including connections establishing)
> tps = 30.643256 (excluding connections establishing)
>
> Is this rate normal or not? What can I do to improve tps and insert
> performance?

You add more and faster disks.

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentler gamester is the soonest winner.

Shakespeare



Re: [PERFORM] postgresql 8.3 tps rate

2009-01-22 Thread Joshua D. Drake
On Thu, 2009-01-22 at 17:47 +0200, Ibrahim Harrani wrote:
> Hi,
> 
> I am running postgresql 8.3.5 on FreeBSD with Dual core Intel(R)
> Xeon(R) CPU 3065  @ 2.33GHz, 2GB RAM and Seagate Technology -
> Barracuda 7200.10 SATA 3.0Gb/ (RAID 1).
> 
> I made several benchmark test with pgbench, TPS rate is almost 40 +/- 5.
> $ pgbench -i pgbench -s 50 -U pgsql
> 
> [pg...@$ pgbench -c 200 -t 2 -U pgsql -d pgbench
> 
> transaction type: TPC-B (sort of)
> scaling factor: 10
> number of clients: 200
> number of transactions per client: 2
> number of transactions actually processed: 400/400
> tps = 39.044088 (including connections establishing)
> tps = 41.528795 (excluding connections establishing)
> 
> [pg...@$ pgbench -c 100 -t 5 -U pgsql -d pgbench
> 
> transaction type: TPC-B (sort of)
> scaling factor: 10
> number of clients: 100
> number of transactions per client: 5
> number of transactions actually processed: 500/500
> tps = 30.162271 (including connections establishing)
> tps = 30.643256 (excluding connections establishing)
> 
> Is this rate normal or not? What can I do to improve tps and insert
> performance?

Run a real benchmark. Running 400/500 transactions doesn't give you any
real indication of what is going on. Run a much larger number of
transactions and see how it looks.

Joshua D. Drake

-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997




Re: [PERFORM] linux, memory (mis)accounting/reporting, and the planner/optimizer

2009-01-22 Thread Greg Smith

On Wed, 21 Jan 2009, M. Edward (Ed) Borasky wrote:


Re the OOM killer -- maybe a patch to the kernel could make things
"better"??


People have tried to raise awareness of it; sample:

http://lkml.org/lkml/2007/2/9/275

without much success.  The Linux kernel hackers dislike the whole approach 
PostgreSQL uses to allocate shared memory anyway--witness the backlash 
against any attempt to raise SHMMAX.


I found the long thread that beats this issue to death in the archives 
again:


http://archives.postgresql.org/pgsql-hackers/2008-02/msg00026.php

That discussion should get raised to a higher profile eventually, maybe a 
summary on the wiki.


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD



Re: [PERFORM] postgresql 8.3 tps rate

2009-01-22 Thread Merlin Moncure
On 1/22/09, Ibrahim Harrani  wrote:
>
>  Is this rate normal or not? What can I do to improve tps and insert
>  performance?
>
>  postgresql.conf
>
>  shared_buffers = 800MB  # min 128kB or max_connections*16kB
>  work_mem = 2MB  # min 64kB
>  maintenance_work_mem = 32MB # min 1MB

I would raise maintenance_work_mem, although it's unrelated to your issue.

>  max_connections = 600   # (change requires restart)
>  max_fsm_relations = 2000# min 100, ~70 bytes each
>  synchronous_commit = off

Something is very wrong.  40tps is low, even for sata raid 1, but
absolutely terrible with synchronous_commit = off.  This suggests your
problems are read-related and not sync-related.  Can you rerun
pgbench using the custom sql feature, passing insert statements?
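
Something along these lines, maybe (a sketch; the table and file names
are made up, and the table has to be created first):

  $ psql -U pgsql -d pgbench -c "CREATE TABLE insert_test (aid int, filler text);"

insert.sql:

  \setrandom aid 1 100000
  INSERT INTO insert_test (aid, filler) VALUES (:aid, 'xxxx');

then:

  $ pgbench -n -f insert.sql -c 10 -t 1000 -U pgsql pgbench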

Are you sure nothing else is going on/wrong with the box? Can we see a
bonnie++ run?

merlin



Re: [PERFORM] caching written values?

2009-01-22 Thread Simon Riggs

On Thu, 2009-01-22 at 13:11 +0100, Thomas Finneid wrote:

> Is there any possibility of telling pg to save that memory-cached data
> and state to disk when the server is shut down, so that when the
> server starts up again, it reads it back into memory?

It's possible, but not by any directly supported mechanism.

You have to consider whether the data you saved would still be required
when the server restarts.
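
The crude version of that is just a warm-up query run right after
startup (a sketch; the table name is made up). Note that on 8.3 a large
sequential scan mostly repopulates the OS cache rather than
shared_buffers, because big scans go through a small ring of buffers:

  SELECT count(*) FROM my_big_table;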

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support




Re: [PERFORM] linux, memory (mis)accounting/reporting, and the planner/optimizer

2009-01-22 Thread M. Edward (Ed) Borasky
Greg Smith wrote:
> On Wed, 21 Jan 2009, M. Edward (Ed) Borasky wrote:
> 
>> Re the OOM killer -- maybe a patch to the kernel could make things
>> "better"??
> 
> People have tried to raise awareness of it; sample:
> 
> http://lkml.org/lkml/2007/2/9/275
> 
> without much success.  The Linux kernel hackers dislike the whole
> approach PostgreSQL uses to allocate shared memory anyway--witness the
> backlash against any attempt to raise SHMMAX.
> 
> I found the long thread that beats this issue to death in the archives
> again:
> 
> http://archives.postgresql.org/pgsql-hackers/2008-02/msg00026.php
> 
> That discussion should get raised to a higher profile eventually, maybe
> a summary on the wiki.
> 
> -- 
> * Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD
> 
Yes, please collect as much detail as you can in some centralized place.
For recent kernels (2.6.25+) the memory accounting is much better, and
if nothing else, there might be some things PostgreSQL could do to
minimize the probability of getting hit, at the cost of some
platform-dependent (/proc reading) code. The problem is that
"enterprise" Linux distros aren't running 2.6.25+ yet. :(

-- 
M. Edward (Ed) Borasky

I've never met a happy clam. In fact, most of them were pretty steamed.



Re: [PERFORM] postgresql 8.3 tps rate

2009-01-22 Thread Alvaro Herrera
Ibrahim Harrani wrote:

> I made several benchmark test with pgbench, TPS rate is almost 40 +/- 5.
> $ pgbench -i pgbench -s 50 -U pgsql
> 
> [pg...@$ pgbench -c 200 -t 2 -U pgsql -d pgbench

Try with 1000 transactions per client or more, instead of 2.

Also, I think you should set the "scale" in the prepare step (-i) at
least as high as the number of clients you're going to use.  (I dimly
recall some recent development in this area that might mean I'm wrong.)
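
So for the 100-client run above, something like this (a sketch):

  $ pgbench -i -s 100 -U pgsql pgbench
  $ pgbench -c 100 -t 1000 -U pgsql -d pgbench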

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



[PERFORM] Question about clustering indexes and restores

2009-01-22 Thread Harold A. Giménez Ch.
Hi list,

Clustering my indexes dramatically improves the query performance of many of
my queries. Also, the actual clustering takes a very long time for big
databases, roughly 20 hours. I have three questions about how to improve this:

1. I've tweaked maintenance_work_mem and effective_cache_size to a point
where the cluster operation uses a good chunk of my physical RAM, and the OS
does not start swapping. Is there any other parameter I should look at?

2. Reading the documentation for cluster at
http://www.postgresql.org/docs/8.3/interactive/sql-cluster.html, I see that
all clustering does is reorder the data on disk to 'match' the order of the
clustered index. My question is, if I dump a clustered database using
pg_dump in custom format, is it necessary to cluster after restoring it? Or
does a dump/restore not guarantee that the order of the data restored is the
same as the original dumped database?

3. Somewhat related to #2, what is the best way to move data from a staging
database on one server, to the production environment on a different server?
I've been using pg_dump/pg_restore, but there must be a better way...


Thanks for any pointers,

-Harold


Re: [PERFORM] Question about clustering indexes and restores

2009-01-22 Thread Kenneth Marshall
On Thu, Jan 22, 2009 at 02:52:12PM -0500, Harold A. Giménez Ch. wrote:
> Hi list,
> 
> Clustering my indexes dramatically improves the query performance of many of
> my queries. Also, the actual clustering takes a very long time for big
> databases, roughly 20 hours. I have three questions about how to improve this:
> 
> 1. I've tweaked maintenance_work_mem and effective_cache_size to a point
> where the cluster operation uses a good chunk of my physical RAM, and the OS
> does not start swapping. Is there any other parameter I should look at?
> 
> 2. Reading the documentation for cluster at
> http://www.postgresql.org/docs/8.3/interactive/sql-cluster.html, I see that
> all clustering does is reorder the data on disk to 'match' the order of the
> clustered index. My question is, if I dump a clustered database using
> pg_dump in custom format, is it necessary to cluster after restoring it? Or
> does a dump/restore not guarantee that the order of the data restored is the
> same as the original dumped database?
> 
> 3. Somewhat related to #2, what is the best way to move data from a staging
> database on one server, to the production environment on a different server?
> I've been using pg_dump/pg_restore, but there must be a better way...
> 
> 
> Thanks for any pointers,
> 
> -Harold

Harold,

There have been discussions on the hackers list about the pessimal
cluster performance. Here is a pointer to the discussion; it seems
that a faster way is to build a new table with the desired order with
"CREATE TABLE AS ... ORDER BY ...":

http://www.mail-archive.com/pgsql-hack...@postgresql.org/msg121205.html
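
A sketch of that approach (names made up; indexes, constraints,
permissions and anything else depending on the table have to be
recreated by hand afterwards):

  BEGIN;
  CREATE TABLE mytable_sorted AS
      SELECT * FROM mytable ORDER BY cluster_key;
  DROP TABLE mytable;
  ALTER TABLE mytable_sorted RENAME TO mytable;
  CREATE INDEX mytable_cluster_key_idx ON mytable (cluster_key);
  COMMIT;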

Cheers,
Ken



Re: [PERFORM] Question about clustering indexes and restores

2009-01-22 Thread Harold A. Giménez Ch.
Many thanks for your answer. I did see a comment about this in the
documentation on the link I posted below.

My main question remains though: Is it necessary to cluster after a restore?

Thanks again!

On Thu, Jan 22, 2009 at 2:58 PM, Kenneth Marshall  wrote:

> On Thu, Jan 22, 2009 at 02:52:12PM -0500, Harold A. Giménez Ch. wrote:
> > Hi list,
> >
> > Clustering my indexes dramatically improves the query performance of many
> of
> > my queries. Also, the actual clustering takes a very long time for big
> > databases, roughly 20 hours. I have two questions about how to improve
> this:
> >
> > 1. I've tweaked maintenance_mem_max and effective_cache_size to a point
> > where the cluster operation uses a good chunk of my physical RAM, and the
> OS
> > does not start swapping. Is there any other parameter I should look at?
> >
> > 2. Reading the documentation for cluster at
> > http://www.postgresql.org/docs/8.3/interactive/sql-cluster.html, I see
> that
> > all clustering does is reorder the data on disk to 'match' the order of
> the
> > clustered index. My question is, if I dump a clustered database using
> > pg_dump in custom format, is it necessary to cluster after restoring it?
> Or
> > does a dump/restore not guarantee that the order of the data restored is
> the
> > same as the original dumped database?
> >
> > 3. Somewhat related to #2, what is the best way to move data from a
> staging
> > database on one server, to the production environment on a different
> server?
> > I've been using pg_dump/pg_restore, but there must be a better way...
> >
> >
> > Thanks for any pointers,
> >
> > -Harold
>
> Harold,
>
> There have been discussions on the hackers list about the pessimal
> cluster performance. Here is a pointer to the discussion; it seems
> that a faster way is to build a new table with the desired order with
> "CREATE TABLE AS ... ORDER BY ...":
>
> http://www.mail-archive.com/pgsql-hack...@postgresql.org/msg121205.html
>
> Cheers,
> Ken
>


Re: [PERFORM] postgresql 8.3 tps rate

2009-01-22 Thread Ibrahim Harrani
Hi Merlin,

Here are the bonnie++ and new pgbench results with higher transaction numbers.


$ pgbench -i -s 30 -U pgsql pgbench
$ pgbench -c 100 -t 1000 -U pgsql -d pgbench

transaction type: TPC-B (sort of)
scaling factor: 30
number of clients: 100
number of transactions per client: 1000
number of transactions actually processed: 100000/100000
tps = 45.145051 (including connections establishing)
tps = 45.162367 (excluding connections establishing)

$ bonnie++
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.

Version 1.93d       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
myserver       300M   391  97  9619   1  8537   2   673  99 +++++ +++  1196  16
Latency               211ms     388ms     325ms   27652us     722us    6720ms
Version 1.93d       ------Sequential Create------ --------Random Create--------
myserver            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  9004  25 +++++ +++ +++++ +++  8246  20 +++++ +++ +++++ +++
Latency               592ms     208us     102us     673ms     179us     100us
1.93c,1.93d,myserver,1,1232710758,300M,,391,97,9619,1,8537,2,673,99,+++++,+++,1196,16,16,9004,25,+++++,+++,+++++,+++,8246,20,+++++,+++,+++++,+++,211ms,388ms,325ms,27652us,722us,6720ms,592ms,208us,102us,673ms,179us,100us

When I compare my bonnie++ result with the one at
http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm,
it seems that there is something wrong with the disks!?

On Thu, Jan 22, 2009 at 7:14 PM, Merlin Moncure  wrote:
> On 1/22/09, Ibrahim Harrani  wrote:
>>
>>  Is this rate normal or not? What can I do to improve tps and insert
>>  performance?
>>
>>  postgresql.conf
>>
>>  shared_buffers = 800MB  # min 128kB or max_connections*16kB
>>  work_mem = 2MB  # min 64kB
>>  maintenance_work_mem = 32MB # min 1MB
>
> I would raise maintenance_work_mem, although it's unrelated to your issue.
>
>>  max_connections = 600   # (change requires restart)
>>  max_fsm_relations = 2000# min 100, ~70 bytes each
>>  synchronous_commit = off
>
> Something is very wrong.  40tps is low, even for sata raid 1, but
> absolutely terrible with synchronous_commit = off.  This suggests your
> problems are read-related and not sync-related.  Can you rerun
> pgbench using the custom sql feature, passing insert statements?
>
> Are you sure nothing else is going on/wrong with the box? Can we see a
> bonnie++ run?
>
> merlin
>



Re: [PERFORM] postgresql 8.3 tps rate

2009-01-22 Thread Ibrahim Harrani
This is another bonnie++ test result, with version 1.03:

Delete files in random order...done.
Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
myserver       300M 13150   7 12713   1 13067   4 72426  53 +++++ +++ +++++ +++
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1048   2 +++++ +++  2322   3   985   2 +++++ +++  1797   3
myserver,300M,13150,7,12713,1,13067,4,72426,53,+++++,+++,+++++,+++,16,1048,2,+++++,+++,2322,3,985,2,+++++,+++,1797,3

Also, I attached the bon_csv2html output for both versions of bonnie++.

Thanks in advance.


On Thu, Jan 22, 2009 at 7:14 PM, Merlin Moncure  wrote:
> On 1/22/09, Ibrahim Harrani  wrote:
>>
>>  Is this rate normal or not? What can I do to improve tps and insert
>>  performance?
>>
>>  postgresql.conf
>>
>>  shared_buffers = 800MB  # min 128kB or max_connections*16kB
>>  work_mem = 2MB  # min 64kB
>>  maintenance_work_mem = 32MB # min 1MB
>
> I would raise maintenance_work_mem, although it's unrelated to your issue.
>
>>  max_connections = 600   # (change requires restart)
>>  max_fsm_relations = 2000# min 100, ~70 bytes each
>>  synchronous_commit = off
>
> Something is very wrong.  40tps is low, even for sata raid 1, but
> absolutely terrible with synchronous_commit = off.  This suggests your
> problems are read-related and not sync-related.  Can you rerun
> pgbench using the custom sql feature, passing insert statements?
>
> Are you sure nothing else is going on/wrong with the box? Can we see a
> bonnie++ run?
>
> merlin
>
[bon_csv2html HTML attachments (the 1.93d and 1.03e runs) omitted; they
contain the same figures as the bonnie++ output shown above.]


Re: [PERFORM] postgresql 8.3 tps rate

2009-01-22 Thread David Rees
On Thu, Jan 22, 2009 at 1:27 PM, Ibrahim Harrani
 wrote:
> Version 1.93d       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> myserver       300M   391  97  9619   1  8537   2   673  99 +++++ +++  1196  16
> Latency               211ms     388ms     325ms   27652us     722us    6720ms
> Version 1.93d       ------Sequential Create------ --------Random Create--------
> myserver            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16  9004  25 +++++ +++ +++++ +++  8246  20 +++++ +++ +++++ +++
> Latency               592ms     208us     102us     673ms     179us     100us

You should be testing bonnie with a file size that is at least double
the amount of memory in your machine - in this case, 4GB files, not
300MB files.

> When I compare my  bonnie++ result with the one at
> http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm,
> It seems that there is something wrong with the disks!?

Yes, your machine appears to be very slow.  You should be able to
write in the order of 30-50MB/s+ and read in the order of 40-80MB/s+.
Random IO should be in the 200 tps range for a 7200rpm SATA RAID1.

-Dave



Re: [PERFORM] caching indexes and pages?

2009-01-22 Thread Thomas Finneid

Thomas Markus wrote:

try to reorganize your data with CLUSTER and create appropriate indexes
(don't forget to check statistics).


One question. Assume I have clustered and new data has been added after
that; according to the docs that data is added "outside" of the
clustered data. What happens when I run cluster again? I would assume
it's smart enough to only cluster the new data, i.e. adding it to the
already created clusters as appropriate, so the execution time would be
a lot lower, right? Or would it run through and recluster everything
from scratch again?


thomas




Re: [PERFORM] caching indexes and pages?

2009-01-22 Thread Kenneth Marshall
On Thu, Jan 22, 2009 at 10:58:25PM +0100, Thomas Finneid wrote:
> Thomas Markus wrote:
>
>> try to reorganize your data with CLUSTER and create appropriate indexes
>> (don't forget to check statistics).
>
> One question. Assume I have clustered and new data has been added after
> that; according to the docs that data is added "outside" of the clustered
> data. What happens when I run cluster again? I would assume it's smart
> enough to only cluster the new data, i.e. adding it to the already created
> clusters as appropriate, so the execution time would be a lot lower,
> right? Or would it run through and recluster everything from scratch again?
>
> thomas
>
It reclusters again from scratch. You do get better performance on the
reads from the data that is already clustered.

Cheers,
Ken



Re: [PERFORM] postgresql 8.3 tps rate

2009-01-22 Thread Ibrahim Harrani
Hi David,

I ran the test again with the following options. Also I added the
HTML output of the result.

$ bonnie++  -u pgsql -n 128 -r 2048 -s 4096 -x 1
Using uid:70, gid:70.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
myserver,4G,8028,5,8118,1,5079,1,36055,28,32950,3,128.4,0,128,5620,11,142084,99,88739,99,12880,26,109150,99,90362,99

What about this result?
This is an Intel server with onboard RAID. I will check the RAID
configuration again tomorrow, especially the Write Cache and Read Ahead
values mentioned at
http://www.intel.com/support/motherboards/server/sb/CS-021019.htm

On Thu, Jan 22, 2009 at 11:41 PM, David Rees  wrote:
> On Thu, Jan 22, 2009 at 1:27 PM, Ibrahim Harrani
>  wrote:
>> Version 1.93d       ------Sequential Output------ --Sequential Input- --Random-
>> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
>> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
>> myserver       300M   391  97  9619   1  8537   2   673  99 +++++ +++  1196  16
>> Latency               211ms     388ms     325ms   27652us     722us    6720ms
>> Version 1.93d       ------Sequential Create------ --------Random Create--------
>> myserver            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>>                  16  9004  25 +++++ +++ +++++ +++  8246  20 +++++ +++ +++++ +++
>> Latency               592ms     208us     102us     673ms     179us     100us
>
> You should be testing bonnie with a file size that is at least double
> the amount of memory in your machine - in this case, 4GB files, not
> 300MB files.
>
>> When I compare my  bonnie++ result with the one at
>> http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm,
>> It seems that there is something wrong with the disks!?
>
> Yes, your machine appears to be very slow.  You should be able to
> write in the order of 30-50MB/s+ and read in the order of 40-80MB/s+.
> Random IO should be in the 200 tps range for a 7200rpm SATA RAID1.
>
> -Dave
>
[bon_csv2html HTML attachment omitted; it contains the same figures as
the CSV line above.]






Re: [PERFORM] postgresql 8.3 tps rate

2009-01-22 Thread Scott Marlowe
On Thu, Jan 22, 2009 at 3:29 PM, Ibrahim Harrani
 wrote:

> This is an Intel server with onboard RAID. I will check the RAID
> configuration again tomorrow, especially the Write Cache and Read Ahead
> values mentioned at
> http://www.intel.com/support/motherboards/server/sb/CS-021019.htm

It would be good to use software RAID to see what kind of numbers you
get without the built-in RAID.  Most built-in RAID solutions are only
suitable for holding the OS and such.



Re: [PERFORM] postgresql 8.3 tps rate

2009-01-22 Thread Craig James

David Rees wrote:

On Thu, Jan 22, 2009 at 1:27 PM, Ibrahim Harrani
 wrote:

Version 1.93d       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
myserver       300M   391  97  9619   1  8537   2   673  99 +++++ +++  1196  16
Latency               211ms     388ms     325ms   27652us     722us    6720ms
Version 1.93d       ------Sequential Create------ --------Random Create--------
myserver            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  9004  25 +++++ +++ +++++ +++  8246  20 +++++ +++ +++++ +++
Latency               592ms     208us     102us     673ms     179us     100us


You should be testing bonnie with a file size that is at least double
the amount of memory in your machine - in this case, 4GB files, not
300MB files.


When I compare my  bonnie++ result with the one at
http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm,
It seems that there is something wrong with the disks!?


Yes, your machine appears to be very slow.  You should be able to
write in the order of 30-50MB/s+ and read in the order of 40-80MB/s+.
Random IO should be in the 200 tps range for a 7200rpm SATA RAID1.


Have you tried the really basic speed test?

 time (dd if=/dev/zero of=bigfile bs=8192 count=1000000; sync)

 time dd if=bigfile of=/dev/null bs=8192

Divide 8.2GB by the times reported.  On a single 10K SATA drive, I get about 
55MB/sec write and 61 MB/sec read.

If you can't get similar numbers, then something is wrong.

Craig



Re: [PERFORM] dbt-2 tuning results with postgresql-8.3.5

2009-01-22 Thread Mark Wong
On Mon, Dec 22, 2008 at 12:59 AM, Greg Smith  wrote:
> On Sat, 20 Dec 2008, Mark Wong wrote:
>
>> Here are links to how the throughput changes when increasing
>> shared_buffers: http://pugs.postgresql.org/node/505 My first glance takes
>> tells me that the system performance is quite erratic when increasing the
>> shared_buffers.
>
> If you smooth that curve out a bit, you have to throw out the 22528MB figure
> as meaningless--particularly since it's way too close to the cliff where
> performance dives hard.  The sweet spot looks to me like 11264MB to 17408MB.
>  I'd say 14336MB is the best performing setting that's in the middle of a
> stable area.
>
>> And another series of tests to show how throughput changes when
>> checkpoint_segments are increased: http://pugs.postgresql.org/node/503 I'm
>> also not sure what to gather from increasing the checkpoint_segments.
>
> What was shared_buffers set to here?  Those two settings are not completely
> independent, for example at a tiny buffer size it's not as obvious there's a
> win in spreading the checkpoints out more.  It's actually a 3-D graph, with
> shared_buffers and checkpoint_segments as two axes and the throughput as the
> Z value.
>
> Since that's quite time consuming to map out in its entirety, the way I'd
> suggest navigating the territory more efficiently is to ignore the defaults
> altogether.  Start with a configuration that someone familiar with tuning
> the database would pick for this hardware:  8192MB for shared_buffers and
> 100 checkpoint segments would be a reasonable base point.  Run the same
> tests you did here, but with the value you're not changing set to those much
> larger values rather than the database defaults, and then I think you'd end
> with something more interesting. Also, I think the checkpoint_segments
> values >500 are a bit much, given what level of recovery time would come
> with a crash at that setting. Smaller steps from a smaller range would be
> better there I think.

Sorry for the long delay.  I have a trio of results (that I actually
ran about four weeks ago) setting the shared_buffers to 7680MB (I
don't remember why it wasn't set to 8192MB :( ) and
checkpoint_segments to 100:

http://pugs.postgresql.org/node/517

I'm also capturing the PostgreSQL parameters as suggested so we can
see what's set in the config file, default, command line etc.  It's
the "Settings" link in the "System Summary" section on the report web
page.

So about a 7% change for this particular workload:

http://pugs.postgresql.org/node/502

We're re-running some filesystem tests for an upcoming conference, so
we'll get back to it shortly...

Regards,
Mark



Re: [PERFORM] dbt-2 tuning results with postgresql-8.3.5

2009-01-22 Thread Greg Smith

On Thu, 22 Jan 2009, Mark Wong wrote:


I'm also capturing the PostgreSQL parameters as suggested so we can
see what's set in the config file, default, command line etc.  It's
the "Settings" link in the "System Summary" section on the report web
page.


Those look good, much easier to pick out the stuff that's been customized.
I note that the Linux "Settings" link seems to be broken though.


To recap a summary here, what you had before were:

shared_buffers=24MB checkpoint_segments=100 notpm=7527
shared_buffers=8192MB checkpoint_segments=3 notpm=7996

And the new spots show:
shared_buffers=7680MB checkpoint_segments=100 notpm=9178

What's neat about your graphs now is that I think you can see the 
checkpoints happening in the response time graphs.  For example, if you 
look at 
http://207.173.203.223/~markwkm/community6/dbt2/pgtune.1000.100.1/report/rt_d.png 
and you focus on what happens just before each 10 minute mark, I'm 
guessing that response time spike is the fsync phase at the end of the 
checkpoint.  That's followed by a period where response time is really 
fast.  That's because those writes are all pooling into the now cleared 
out Linux buffer cache, but pdflush isn't really being aggressive about 
writing them out yet.  On your server that can absorb quite a few writes 
before clients start blocking on them, which is when response time climbs 
back up.


A particularly interesting bit is to compare against the result with the 
peak notpm you had in your earlier tests, where shared_buffers=15360MB: 
http://207.173.203.223/~markwkm/community6/dbt2/shared_buffers/shared_buffers.15360MB/report/rt_d.png


While the average speed was faster on that one, the worst-case response 
time was much worse.  You can really see this by comparing the response 
time distribution.


Big shared buffers but low checkpoint_segments:
http://207.173.203.223/~markwkm/community6/dbt2/shared_buffers/shared_buffers.15360MB/report/dist_d.png

Medium shared buffers and medium checkpoint_segments:
http://207.173.203.223/~markwkm/community6/dbt2/pgtune.1000.100.1/report/dist_d.png

The checkpoint spreading logic is making a lot more transactions suffer 
moderate write delays in order to get a big improvement in worst-case 
behavior.


The next fine-tuning bit I'd normally apply in this situation is to see if 
increasing checkpoint_completion_target from the default (0.5) to 0.9 does 
anything to flatten out that response time graph.  I've seen a modest 
increase in wal_buffers (from the default to, say, 1MB) help smooth out 
the rough spots too.
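
In postgresql.conf terms (a sketch, relative to the 8.3 defaults of 0.5
and 64kB):

  checkpoint_completion_target = 0.9
  wal_buffers = 1MB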


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD



Re: [PERFORM] postgresql 8.3 tps rate

2009-01-22 Thread Greg Smith

On Thu, 22 Jan 2009, Alvaro Herrera wrote:


Also, I think you should set the "scale" in the prepare step (-i) at
least as high as the number of clients you're going to use.  (I dimly
recall some recent development in this area that might mean I'm wrong.)


The idea behind that maxim (clients>=scale) is that locking on the smaller 
tables will bottleneck results if you don't follow that advice.  It's a bit
messier than that though.  Increasing the scale will also make the 
database larger, and once it gets bigger than available RAM your results 
are going to dive hard because of that, more so than the locking would 
have held you back.


All kind of irrelevant for Ibrahim's case, because if you're not getting 
more than 50MB/s out of your disks the pgbench results are kind of moot 
anyway--there's a larger problem to sort out first.


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD



Re: [PERFORM] caching indexes and pages?

2009-01-22 Thread Craig Ringer
Thomas Finneid wrote:
> Thomas Markus wrote:
> 
>> try to reorganize your data with CLUSTER and create appropriate
>> indexes (don't forget to check statistics).
> 
> One question. Assume I have clustered and new data has been added after
> that, according to the docs that data is added "outside" of the
> clustered data.

Check into FILLFACTOR (on both tables and indexes).
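
For example (a sketch with made-up names; the setting only affects pages
written afterwards, e.g. by new inserts or a later CLUSTER):

  ALTER TABLE my_table SET (fillfactor = 90);
  ALTER INDEX my_table_pkey SET (fillfactor = 90);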

--
Craig Ringer



Re: [PERFORM] postgresql 8.3 tps rate

2009-01-22 Thread Ibrahim Harrani
Hi Craig,

Here is the result. It seems that disk write is terrible!

r...@myserver /usr]#  time (dd if=/dev/zero of=bigfile bs=8192
count=1000000; sync)


1000000+0 records in
1000000+0 records out
8192000000 bytes transferred in 945.343806 secs (8665630 bytes/sec)

real15m46.206s
user0m0.368s
sys 0m15.560s
[r...@myserver /usr]#

[r...@myserver /usr]#  time dd if=bigfile of=/dev/null bs=8192
1000000+0 records in
1000000+0 records out
8192000000 bytes transferred in 174.646798 secs (46906099 bytes/sec)

real2m54.663s
user0m0.246s
sys 0m9.307s


On Fri, Jan 23, 2009 at 12:44 AM, Craig James
 wrote:
> David Rees wrote:
>>
>> On Thu, Jan 22, 2009 at 1:27 PM, Ibrahim Harrani
>>  wrote:
>>>
>>> Version 1.93d       ------Sequential Output------ --Sequential Input- --Random-
>>> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
>>> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
>>> myserver       300M   391  97  9619   1  8537   2   673  99 +++++ +++  1196  16
>>> Latency               211ms     388ms     325ms   27652us     722us    6720ms
>>> Version 1.93d       ------Sequential Create------ --------Random Create--------
>>> myserver            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>>>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>>>                  16  9004  25 +++++ +++ +++++ +++  8246  20 +++++ +++ +++++ +++
>>> Latency               592ms     208us     102us     673ms     179us     100us
>>
>> You should be testing bonnie with a file size that is at least double
>> the amount of memory in your machine - in this case, 4GB files, not
>> 300MB files.
>>
>>> When I compare my  bonnie++ result with the one at
>>> http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm,
>>> It seems that there is something wrong with the disks!?
>>
>> Yes, your machine appears to be very slow.  You should be able to
>> write in the order of 30-50MB/s+ and read in the order of 40-80MB/s+.
>> Random IO should be in the 200 tps range for a 7200rpm SATA RAID1.
>
> Have you tried the really basic speed test?
>
>  time (dd if=/dev/zero of=bigfile bs=8192 count=1000000; sync)
>
>  time dd if=bigfile of=/dev/null bs=8192
>
> Divide 8.2GB by the times reported.  On a single 10K SATA drive, I get about
> 55MB/sec write and 61 MB/sec read.
>
> If you can't get similar numbers, then something is wrong.
>
> Craig
>



Re: [PERFORM] dbt-2 tuning results with postgresql-8.3.5

2009-01-22 Thread Mark Wong
On Thu, Jan 22, 2009 at 7:44 PM, Greg Smith  wrote:
> On Thu, 22 Jan 2009, Mark Wong wrote:
>
>> I'm also capturing the PostgreSQL parameters as suggested so we can
>> see what's set in the config file, default, command line etc.  It's
>> the "Settings" link in the "System Summary" section on the report web
>> page.
>
> Those look good, much easier to pick out the stuff that's been customized. I
> note that the Linux "Settings" link seems to be broken though.

Oh fudge, I think I see where my scripts are broken.  We're running
with a different Linux kernel now than before so I don't want to grab
the parameters yet.  I'll switch to the previous kernel to get the
parameters after the current testing is done, and fix the scripts in
the meantime.

Regards,
Mark



Re: [PERFORM] postgresql 8.3 tps rate

2009-01-22 Thread Craig James

Ibrahim Harrani wrote:

Hi Craig,

Here is the result. It seems that disk write is terrible!

r...@myserver /usr]#  time (dd if=/dev/zero of=bigfile bs=8192
count=1000000; sync)


1000000+0 records in
1000000+0 records out
8192000000 bytes transferred in 945.343806 secs (8665630 bytes/sec)

real15m46.206s
user0m0.368s
sys 0m15.560s


So it's nothing to do with Postgres.  I'm no expert solving this sort of 
problem, but I'd start by looking for:

 - a rogue process that's using disk bandwidth (use vmstat when the system is 
idle)
 - system logs, maybe there are a zillion error messages
 - if you have a second disk, try its performance
 - if you don't have a second disk, buy one, install it, and try it
 - get another SATA controller and try that

Or do the reverse: Put the disk in a different computer (one that you've tested 
beforehand and verified is fast) and see if the problem follows the disk.  Same 
for the SATA card.

It could be your SATA controller, the disk, some strange hdparm setting ... who 
knows?

I ran into this once a LONG time ago with a kernel that didn't recognize the
disk or driver or something, and disabled the DMA (direct-memory access) 
feature, which meant the CPU had to handle every single byte coming from the 
disk, which of course meant SLOW, plus you couldn't even type while the disk 
was busy.  A simple manual call to hdparm(1) to force DMA on fixed it.  Weird 
stuff like that can be very hard to find.

I also saw very low write speed once on a RAID device with a battery-backed 
cache, when the battery went dead.  The RAID controller went into its 
conservative mode, which for some reason was much slower than the disk's raw 
performance.

Craig
