Re: [PERFORM] Low CPU Usage

2007-09-20 Thread brauagustin-susc
Hi Beto.
I have no idea where to look for that configuration or settings.
Yesterday I read about some driver problems with SATA disks working together 
with IDE devices using DMA.

My server is a Pentium IV 3.3 with hyper-threading (enabled in BIOS), an HP 
ProLiant ML110.

Then I went into the BIOS and saw, under IDE Configuration: 
  ATA/IDE Configuration[Enhanced]
  Configure SATA as   [IDE]  => it has RAID option too

I have no idea how to continue! I don't know if this is a SATA problem, a 
configuration problem, or something else. I have installed several servers starting 
with postgres 6.4 and I've never had this kind of problem (always with IDE 
disks). I think this is a problem with SATA disk I/O, but I don't see how to 
measure that (I have already set postgresql.conf).

Regards
Agustin


----- Original Message -----
From: Norberto Meijome <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
CC: pgsql-performance@postgresql.org
Sent: Thursday, September 20, 2007, 7:53:05
Subject: Re: [PERFORM] Low CPU Usage

On Wed, 19 Sep 2007 12:13:33 -0700 (PDT)
[EMAIL PROTECTED] wrote:

> max_stack_depth = 7MB # on the old server it is 8MB, but if I set that here it 
> gives me the ulimit error

Hi Agustin :)
another Argentine abroad around here ;) 

anyway, back to English ;)

a long shot but...

check if you have any limits set on the host for CPU usage... you may be
limited to x number of secs / % by the OS scheduler. When you query your CPU,
it will say you are only using 5% or so... 
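
(For example -- a quick sketch of where to look, assuming a Linux host and that the
server runs as the postgres user; the limits.conf path is the usual PAM location but
may differ by distro:)

  ulimit -a                        # per-shell limits; check "cpu time" and "max user processes"
  cat /etc/security/limits.conf    # persistent per-user limits that could cap the postgres user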

bye,
Beto

_
Norberto Meijome
Octantis Pty Ltd

Intelligence: Finding an error in a Knuth text.
Stupidity: Cashing that $2.56 check you got.





Re: [PERFORM] Low CPU Usage

2007-09-20 Thread Jean-David Beyer
[EMAIL PROTECTED] wrote:
> Hola Beto.
> I have no idea where to look for that configuration or settings.

In postgreSQL, the main settings are in .../pgsql/data/postgresql.conf
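
(If in doubt about which file the running server actually loaded, you can ask it from
psql -- I believe the config_file setting has been available since 8.0:)

  SHOW config_file;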

> Yesterday I read about some driver problems with SATA disks working
> together with IDE devices using DMA.
> 
> My server is a Pentium IV 3.3 with hyper-threading (enabled in
> BIOS), an HP ProLiant ML110.
> 
> Then I went into the BIOS and saw, under IDE Configuration:
>   ATA/IDE Configuration[Enhanced]
>   Configure SATA as   [IDE]  => it has RAID
> option too
> 
> I have no idea how to continue! I don't know if this is a SATA problem,
> a configuration problem, or something else. I have installed several servers
> starting with postgres 6.4 and I've never had this kind of problem
> (always with IDE disks). I think this is a problem with SATA disk I/O,
> but I don't see how to measure that (I have already set postgresql.conf).

Are you sure you are really having a problem with insufficient CPU time
being devoted to your program(s)? When I run postgreSQL and do the initial
populating of my database, which takes several hours due to the nature of
the input data, it runs at just 25% to 50% of one CPU, even though I have two
3.06 GHz hyperthreaded Xeon processors and six 10,000 rpm Ultra/320 SCSI
hard drives on two SCSI controllers. If I look at the results of the Linux
top command, and iostat and vmstat, I see that I am in io-wait state 100% of
the time. The transfer rate to the hard drives averages about 2
Megabytes/second even though I have seen 90 Megabytes/second at times (when
doing a database restore). So the IO system can be quite fast when it is not
waiting (for seeks, no doubt).

If the postgreSQL processes wanted more CPU time, they could have it, as the
machine does not do much else most of the time. Actually, it runs four BOINC
processes, but they run at nice level 19, so they run only if no other process
wants processing time. When I do a database backup, it will run at more than
100% of a CPU (remember I have two or four processors, depending on how you
count them) for extended periods, so the OS is certainly capable of supplying
CPU power when I need it. And postgreSQL runs multiple processes at once, so
in theory they could get 400% of a processor if they needed it. They do not
seem to need to do this for me.
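
(Concretely, the kind of checks I mean -- a sketch, assuming the sysstat package is
installed; run these while the load is happening:)

  vmstat 10          # the "wa" column is the percentage of time spent waiting on I/O
  iostat -x 10       # per-device utilization and average wait; sustained high %util = disk-bound
  top                # postgres processes sitting in state "D" are blocked on disk, not CPU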


-- 
  .~.  Jean-David Beyer  Registered Linux User 85642.
  /V\  PGP-Key: 9A2FC99A Registered Machine   241939.
 /( )\ Shrewsbury, New Jersey  http://counter.li.org
 ^^-^^ 08:15:01 up 6 days, 42 min, 1 user, load average: 4.24, 4.25, 4.14



Re: [PERFORM] Tablespaces and NFS

2007-09-20 Thread Carlos Moreno




About 5 months ago, I did an experiment serving tablespaces out of
AFS, another shared file system.

You can read my full post at
http://archives.postgresql.org/pgsql-admin/2007-04/msg00188.php


Thanks for the pointer!   I had done a search on the archives, but
didn't find this one  (strange, since I included the keywords
tablespace and NFS, both of which show up in your message).

Anyway...  One detail I don't understand --- why do you claim that
"You can't take advantage of the shared file system because you can't
share tablespaces among clusters or servers" ???

With NFS, I could mount, say, /mnt/nfs/fs1 to be served by NFS
server #1, and then create tablespace nfs1 location '/mnt/nfs/fs1' ...
Why wouldn't that work??  (or was the comment specific to AFS?)
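
(In SQL terms, what I have in mind is simply the following -- the path and names are
just examples:)

  CREATE TABLESPACE nfs1 LOCATION '/mnt/nfs/fs1';
  CREATE TABLE big_table (id integer, payload text) TABLESPACE nfs1;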

BTW, I'm not too worried by the lack of security with NFS, since
both the "main" postgres machine and the potential NFS servers
that I would use would be completely "private" machines (in that
there are no users and no other services running on them).
I would set up a strict firewall policy so that the NFS server
only accepts connections from the main postgres machine.

Back to your comment:


On the whole, you're not going to see a performance improvement
running tablespaces on NFS (unless the disk system on the NFS server
is a lot faster) 


This seems to be the killer point --- mainly because the network
connection is 100Mbps  (around 10 MB/sec --- less than 1/4 of
the performance we'd expect from an internal hard drive).  If at
least it was a Gigabit connection, I might still be tempted to
retry the experiment.  I was thinking that *maybe* the latencies
and contention due to head movements (on the order of milliseconds)
would take precedence and thus a network-distributed cluster of
hard drives would end up winning.



P.S. Why not just set up those servers you're planning on using as NFS
shares as your postgres server(s)?


We're clear that that would be the *optimal* solution --- problem
is, there's a lot of client-side software that we would have to
change;  I'm first looking for a "transparent" solution in which
I could distribute the load at a hardware level, seeing the DB
server as a single entity --- the ideal solution, of course,
being the use of tablespaces with 4 or 6 *internal* hard disks
(but that's not an option with our current web hoster).

Anyway, I'll keep working on alternative solutions --- I think
I have enough evidence to close this NFS door.

Thanks!




[PERFORM] REPOST: Nested loops row estimates always too high

2007-09-20 Thread Carlo Stonebanks

(SORRY FOR THE REPOST, I DON'T SEE MY ORIGINAL QUESTION OR ANY ANSWERS HERE)

I am noticing that my queries are spending a lot of time in nested loops. 
The table/index row estimates are not bad, but the nested loops can be off 
by a factor of 50. In any case, they are always too high.


Are the over-estimations below significant, and if so, is this an indication 
of a general configuration problem?


Carlo


select
  pp.provider_id,
  pp.provider_practice_id,
  nearby.distance
from mdx_core.provider_practice as pp
join mdx_core.facility as f
  on f.facility_id = pp.facility_id
join (select * from mdx_core.zips_in_mile_range('08820', 10) where zip > '') as nearby
  on f.default_country_code = 'US'
  and f.default_postal_code = nearby.zip
  and pp.facility_address_id is NULL
union
select
  pp.provider_id,
  pp.provider_practice_id,
  nearby.distance
from mdx_core.provider_practice as pp
join mdx_core.facility_address as fa
  on fa.facility_address_id = pp.facility_address_id
join mdx_core.address as a
  on a.address_id = fa.address_id
join (select * from mdx_core.zips_in_mile_range('08820', 10) where zip > '') as nearby
  on a.country_code = 'US'
  and a.postal_code = nearby.zip
Unique  (cost=67605.91..67653.18 rows=4727 width=16) (actual time=8634.618..8637.918 rows=907 loops=1)
  ->  Sort  (cost=67605.91..67617.73 rows=4727 width=16) (actual time=8634.615..8635.651 rows=907 loops=1)
        Sort Key: provider_id, provider_practice_id, distance
        ->  Append  (cost=0.00..67317.41 rows=4727 width=16) (actual time=176.056..8632.429 rows=907 loops=1)
              ->  Nested Loop  (cost=0.00..38947.07 rows=3143 width=16) (actual time=176.054..7867.962 rows=872 loops=1)
                    ->  Nested Loop  (cost=0.00..11520.79 rows=8121 width=12) (actual time=169.372..3041.010 rows=907 loops=1)
                          ->  Function Scan on zips_in_mile_range  (cost=0.00..15.00 rows=333 width=40) (actual time=151.479..151.671 rows=66 loops=1)
                                Filter: (zip > ''::text)
                          ->  Index Scan using facility_country_postal_code_idx on facility f  (cost=0.00..34.25 rows=24 width=15) (actual time=4.969..43.740 rows=14 loops=66)
                                Index Cond: ((f.default_country_code = 'US'::bpchar) AND ((f.default_postal_code)::text = zips_in_mile_range.zip))
                    ->  Index Scan using provider_practice_facility_idx on provider_practice pp  (cost=0.00..3.36 rows=1 width=12) (actual time=4.915..5.316 rows=1 loops=907)
                          Index Cond: (f.facility_id = pp.facility_id)
                          Filter: (facility_address_id IS NULL)
              ->  Nested Loop  (cost=0.00..28323.07 rows=1584 width=16) (actual time=170.310..762.472 rows=35 loops=1)
                    ->  Nested Loop  (cost=0.00..7791.77 rows=1579 width=12) (actual time=170.289..612.579 rows=36 loops=1)
                          ->  Nested Loop  (cost=0.00..2595.96 rows=712 width=12) (actual time=167.017..354.261 rows=29 loops=1)
                                ->  Function Scan on zips_in_mile_range  (cost=0.00..15.00 rows=333 width=40) (actual time=150.188..150.312 rows=66 loops=1)
                                      Filter: (zip > ''::text)
                                ->  Index Scan using address_country_postal_code_address_idx on address a  (cost=0.00..7.73 rows=2 width=17) (actual time=2.483..3.086 rows=0 loops=66)
                                      Index Cond: ((a.country_code = 'US'::bpchar) AND ((a.postal_code)::text = zips_in_mile_range.zip))
                          ->  Index Scan using facility_address_address_idx on facility_address fa  (cost=0.00..7.15 rows=12 width=8) (actual time=7.652..8.901 rows=1 loops=29)
                                Index Cond: (a.address_id = fa.address_id)
                    ->  Index Scan using provider_practice_facility_address_idx on provider_practice pp  (cost=0.00..12.80 rows=16 width=12) (actual time=4.156..4.158 rows=1 loops=36)
                          Index Cond: (fa.facility_address_id = pp.facility_address_id)

Total runtime: 8639.066 ms





[PERFORM] REPOST: Performance improves only after repeated VACUUM/ANALYZE

2007-09-20 Thread Carlo Stonebanks

(SORRY FOR THE REPOST, I DON'T SEE MY ORIGINAL QUESTION OR ANY ANSWERS HERE)

My client "publishes" an "edition" of their DB from his production site to 
his hosted web/db server. This is done by FTPing a backup of the DB to his 
hosting provider.


Immediately after a "publication" (restore to the web/db server) we run VACUUM 
ANALYZE to make sure the statistics and row estimates are correct.


The problem is, after this initial VACUUM ANALYZE, the row estimates in 
query plans are off by several orders of magnitude. For example, a 
disastrous plan was created because the planner estimated 4K rows when in 
fact it returned 980K rows.


Sometimes - a day or two later - the plans return to "normal" and row 
estimates are closer to realistic values. Guessing that there may be 
background events that are correcting the row estimates over time, I ran an 
ANALYZE on the DB - and sure enough - the row estimates corrected 
themselves. The puzzling thing is, there have been no writes of any sort to 
the data - there is no reason for the stats to have changed.


I believe that a VACUUM may not be necessary for a newly restored DB, but I 
assumed that VACUUM ANALYZE and ANALYZE have the same net result. Am I 
wrong?


If I am not wrong (i.e. VACUUM ANALYZE and ANALYZE should produce the same 
results) why would the performance improve on a DB that has seen no 
transactional activity only after the SECOND try?


PG 8.2.4 on RH LINUX 1GB RAM SCSI RAID 1

Carlo




Re: [PERFORM] Low CPU Usage

2007-09-20 Thread brauagustin-susc
The postgresql.conf on my new server is identical to the old one's. I'm starting to 
wonder whether this is a hardware issue.
Googling for my HP ProLiant ML110 G3 hardware I saw that the ICH7 controller has some 
problems, but looking at it and testing with hdparm it looks OK.
hdparm -tT /dev/sda
Timing cached reads: 1722 MB in 2.00 seconds = 860.38 MB/sec
Timing buffered disk reads: 164 MB in 3.01 seconds = 54.53 MB/sec

Running hdparm -I /dev/sda shows DMA with * at udma5.
Which other tests can I run to find out whether this is a hardware, kernel, or 
postgres issue?
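
(For example -- a rough sketch of further checks; the device and file paths are only
examples, the write test creates a 1 GB file, and conv=fdatasync needs a reasonably
recent GNU dd:)

  dd if=/dev/sda of=/dev/null bs=1M count=1024                    # raw sequential read
  dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync  # sequential write, flushed
  dmesg | grep -i -e ata -e dma                                   # any controller/DMA errors logged?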

Regards
Agustin


Re: [PERFORM] Tablespaces and NFS

2007-09-20 Thread Peter Koczan
> Anyway...  One detail I don't understand --- why do you claim that
> "You can't take advantage of the shared file system because you can't
> share tablespaces among clusters or servers" ???

I say that because you can't set up two servers to point to the same
tablespace (i.e. you can't have server A and server B both point to
the tablespace in /mnt/nfs/postgres/), which basically defeats one of
the main purposes of using a shared file system: seeing, using, and
editing files from anywhere.

This is ill-advised and probably won't work, for two reasons.

- Postgres tablespaces require empty directories for
initialization. If you create a tablespace on server A, it puts files
in the previously empty directory. If you then try to create a
tablespace on server B pointing to the same location, it won't work
since the directory is no longer empty. You can get around this, in
theory, but you'd either have to directly mess with system tables or
fool Postgres into thinking that each server independently created
that tablespace (to which anyone will say, NO).

- If you do manage to fool postgres into having two servers pointing
at the same tablespace, the servers really, REALLY won't play nice
with these shared resources, since they have no knowledge of each
other (I mean, even two clusters on the same server don't play nice with
memory). Basically, if they compete for the same file, either I/O will
be EXTREMELY slow because of file-locking mechanisms in the file
system, or you open things up to race conditions and data corruption.
In other words: BAD.

I know this doesn't fully apply to you, but I thought I should explain
my points better since you asked so nicely :-)

> This seems to be the killer point --- mainly because the network
> connection is a 100Mbps  (around 10 MB/sec --- less than 1/4 of
> the performance we'd expect from an internal hard drive).  If at
> least it was a Gigabit connection, I might still be tempted to
> retry the experiment.  I was thinking that *maybe* the latencies
> and contention due to heads movements (in the order of the millisec)
> would take precedence and thus, a network-distributed cluster of
> hard drives would end up winning.

If you get decently fast disks, or put some slower disks in RAID 10,
you'll easily get >100 MB/sec (and that's a conservative estimate).
Even with a Gbit network you'll get, in theory, 128 MB/sec, and that's
assuming that the NFS'd disks aren't a bottleneck.

> We're clear that that would be the *optimal* solution --- problem
> is, there's a lot of client-side software that we would have to
> change;  I'm first looking for a "transparent" solution in which
> I could distribute the load at a hardware level, seeing the DB
> server as a single entity --- the ideal solution, of course,
> being the use of tablespaces with 4 or 6 *internal* hard disks
> (but that's not an option with our current web hoster).

I sadly don't know enough networking to tell you how to tell the client
software "no really, I'm over here." However, one of the things I'm
fond of is using a module to store connection strings and dynamically
loading said module on the client side. For instance, with Perl I
use...

use DBI;
use DBD::Pg;
use My::DBs;

my $dbh = DBI->connect($My::DBs::mydb);

Assuming that the module and its entries are kept up to date, it will
"just work." That way, there's only 1 module to change instead of n
client apps. I can have a new server with a new name up without
changing any client code.
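
(My::DBs above is just my own local module; purely as an illustration, a minimal
version of it might look like this -- the host and database names are made up:)

  package My::DBs;
  use strict;
  use warnings;

  # One DSN per logical database.  Moving a database to another host means
  # editing this one line; every client that does "use My::DBs" picks it up.
  our $mydb = "dbi:Pg:dbname=mydb;host=db1.example.com;port=5432";

  1;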

> Anyway, I'll keep working on alternative solutions --- I think
> I have enough evidence to close this NFS door.

That's probably for the best.



[PERFORM] Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!

2007-09-20 Thread smiley2211

Hello all,

The old servers that ran 7.4 performed better than the 8.1.4 version...are there
any MAJOR performance hits with this version???

I set the postgresql.conf settings to equal those of 7.4 and queries still run
SLOW on 8.1.4...

I have to perform maintenance tonight on the 8.1.4 server - any ideas what
actions I should take???

default stats target is set to 50 (in postgresql.conf)

1) Restart instance
2) Dump / reload database
3) Vacuum analyze
4) Rebuild database indexes

I keep doing these same steps and nothing seems to work...I've read where
some are saying to VACUUM several times - then reindex (???)
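
(In command-line form, those steps correspond roughly to the following -- a sketch
only; the database name and data directory are placeholders, and vacuumdb/reindexdb
ship with 8.1:)

  pg_ctl restart -D /path/to/data    # 1) restart instance
  pg_dumpall > all.sql               # 2) dump ...
  psql -f all.sql postgres           # ... and reload
  vacuumdb --analyze mydb            # 3) vacuum analyze
  reindexdb mydb                     # 4) rebuild indexes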

Can someone tell me what they do during a NORMAL maintenance window on their
servers???

All this is NEW to me.

Thanks,
Michelle.




Re: [PERFORM] Tablespaces and NFS

2007-09-20 Thread Carlos Moreno


Thanks again, Peter, for expanding on these points.

Peter Koczan wrote:

Anyway...  One detail I don't understand --- why do you claim that
"You can't take advantage of the shared file system because you can't
share tablespaces among clusters or servers" ???



I say that because you can't set up two servers to point to the same
tablespace 


My bad!  Definitely --- I was only looking at it from the point of view of my
current problem at hand, so I misinterpreted what you said;  it is clear and
unambiguous, and I agree that there is little debate about it.  In my mind, since
I'm talking about *one* postgres server spreading its storage across several
filesystems, I didn't understand why you seemed to be claiming that that can
not be combined with tablespaces ...


I know this doesn't fully apply to you, but I thought I should explain
my points betters since you asked so nicely :-)
  


:-)   It's appreciated!


If you get decently fast disks, or put some slower disks in RAID 10,
you'll easily get >100 MB/sec (and that's a conservative estimate).
Even with a Gbit network, you'll get, in theory 128 MB/sec, and that's
assuming that the NFS'd disks aren't a bottleneck.
  


But still, with 128MB/sec  (modulo some possible NFS bottlenecks), I would
be a bit more optimistic, and would actually be tempted to retry your experiment
with my setup.  After all, with the setup that we have *today*, I don't think I
get a sustained transfer rate above 80 or 90MB/sec from the hard drives  (as
far as I know, they're plain vanilla Enterprise-grade SATA2 drives, which
I believe don't get further than 90MB/sec S.T.R.)


I sadly don't know enough networking to tell you tell the client
software "no really, I'm over here." However, one of the things I'm
fond of is using a module to store connection strings, and dynamically
loading said module on the client side. For instance, with Perl I
use...

use DBI;
use DBD::Pg;
use My::DBs;

my $dbh = DBI->connect($My::DBs::mydb);

Assuming that the module and its entries are kept up to date, it will
"just work." That way, there's only 1 module to change instead of n
client apps. 


Oh no, but the problem we'd have would be at the level of the database design
and access --- for instance, some of the tables that I think are bottlenecking (the
ones I would like to spread with tablespaces) are quite interconnected to each
other --- foreign keys come and go;  on the client applications, many transaction
blocks include several of those tables --- if I were to spread those tables across
several backends, I'm not sure the changes would be easy  :-(


I can have a new server with a new name up without
changing any client code.
  


But then, you're talking about replicating data so that multiple client apps
can pick one out of the several available "quasi-read-only" servers, I'm
guessing?



Anyway, I'll keep working on alternative solutions --- I think
I have enough evidence to close this NFS door.



That's probably for the best.
  
Yep --- still closing that door!!  The points I'm arguing in this message are
just in the spirit of discussing and better understanding the issue.  I'm still
convinced by your evidence.

Thanks,

Carlos
--






Re: [PERFORM] Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!

2007-09-20 Thread Carlos Moreno

smiley2211 wrote:

Hello all,

Old servers that housed 7.4 performed better than 8.1.4 version...are there
any MAJOR performance hits with this version???
  


Are you using the default UNICODE encoding for your databases??
This could potentially translate into a performance hit  (considerable?
Maybe, depending on what your applications do)

A related question:  why not update to the latest, 8.2.x ??


I set the postgresql.conf setting to equal that of 7.4 and queries still run
SLOW on 8.1.4...
  


Hmmm, I don't think the settings should be the same --- search the
archives for discussions on performance tuning and the informal
documentation of the postgresql.conf file.
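
(One way to get a side-by-side picture -- a sketch; the pg_settings view and its
"source" column exist in 8.x, so this can be run on both servers and the output diffed:)

  SELECT name, setting, source
  FROM pg_settings
  WHERE source <> 'default'
  ORDER BY name;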


3) vacuum analyze
  


Am I understanding correctly that you did this??  Just to double check,
yes, it is *very* important that you analyze the database *after loading 
it*.


You could probably check the postgres log file to see if there are any
obvious red flags in there.

HTH,

Carlos
--




Re: [PERFORM] Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!

2007-09-20 Thread smiley2211

No, I didn't UPGRADE it but that's what I inherited :( ...not sure of the
code page stuff because I am not the one who did the upgrade...I'm not sure
I know ENOUGH about POSTGRESQL to mess around with the codepage...

Yes, I use vacuum analyze...

Yes, I used the postgresql.conf of 7.4 and tried to match the 8.1.4 to
that...I didn't know where else to start...The users have been complaining
since DAY1 as I am told...

Thanks,
Michelle





Re: [PERFORM] Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!

2007-09-20 Thread Scott Marlowe
On 9/20/07, smiley2211 <[EMAIL PROTECTED]> wrote:
>
> No, I didn't UPGRADE it but that's what I inherited :( ...not sure of the
> code page stuff because I am not the one who did the upgrade...I'm not sure
> I know ENOUGH about POSTGRESQL to mess around with the codepage...
>
> Yes, I use vacuum analyze...
>
> Yes, I used the postgresql.conf of 7.4 and tried to match the 8.1.4 to
> that...I didn't know where else to start...The users have been complaining
> since DAY1 as I am told...

OK, a few things you need to look into.

Do you have horrendous bloating in the db?  Run vacuum verbose on your
db and see what it says.  You should probably turn on the autovacuum
daemon either way.  If your database has gotten bloated you may need
to vacuum full / reindex to get your space back.

Which queries are slow, specifically?  You can set the server to log
long-running queries in postgresql.conf.  Find the longest running
ones and run them by hand with explain analyze at the front, like:

explain analyze select ...

lastly, run

vmstat 10

from the command line while the machine is running slow and see where
effort is going.  I'm guessing you'll see a lot of id in there.
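
(For the logging step above, the postgresql.conf setting I mean is, for example --
the threshold is in milliseconds and the value here is only an example:)

  log_min_duration_statement = 1000    # log every statement that runs longer than 1s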



Re: [PERFORM] Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!

2007-09-20 Thread Scott Marlowe
On 9/20/07, Scott Marlowe <[EMAIL PROTECTED]> wrote:

> effort is going.  I'm guessing you'll see a lot of id in there.

sorry, meant wa (wait)



[PERFORM] query io stats and finding a slow query

2007-09-20 Thread Kamen Stanev
Hi everybody,

Is there a way to find which query is doing large IO operations, and/or which
is using cached data and which is reading from disk? I need to see this on a
production server to localize a slow and resource-eating query. The
pg_statio* tables are very handy, but don't help me at all in finding the
most disk-intensive query, just the most used table, which I already knew.
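
(What I have been looking at so far is along these lines -- it only gives per-table
read/hit counts, not per-query ones, and the block-level counters need
stats_block_level = on:)

  SELECT relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
  FROM pg_statio_user_tables
  ORDER BY heap_blks_read DESC
  LIMIT 10;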

And besides that, please share your experience on how you decide which
queries to optimize and how to reorganize your database. Are there any tools
that you use to profile your database?

Regards,
Kamen Stanev


[PERFORM] Linux mis-reporting memory

2007-09-20 Thread Decibel!
Sorry, I know this is probably more a linux question, but I'm guessing
that others have run into this...

I'm finding this rather interesting report from top on a Debian box...

Mem:  32945280k total, 32871832k used,    73448k free,   247432k buffers
Swap:  1951888k total,    42308k used,  1909580k free, 30294300k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
12492 postgres  15   0 8469m 8.0g 8.0g S    0 25.6   3:52.03 postmaster
 7820 postgres  16   0 8474m 4.7g 4.7g S    0 15.1   1:23.72 postmaster
21863 postgres  15   0 8472m 3.9g 3.9g S    0 12.4   0:30.61 postmaster
19893 postgres  15   0 8471m 2.4g 2.4g S    0  7.6   0:07.54 postmaster
20423 postgres  17   0 8472m 1.4g 1.4g S    0  4.4   0:04.61 postmaster
26395 postgres  15   0 8474m 1.1g 1.0g S    1  3.4   0:02.12 postmaster
12985 postgres  15   0 8472m 937m 930m S    0  2.9   0:05.50 postmaster
26806 postgres  15   0 8474m 787m 779m D    4  2.4   0:01.56 postmaster

This is a machine that's been up some time and the database is 400G, so
I'm pretty confident that shared_buffers (set to 8G) should be
completely full, and that's what that top process is indicating.

So how is it that linux thinks that 30G is cached?
-- 
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828




Re: [PERFORM] Linux mis-reporting memory

2007-09-20 Thread Tom Lane
Decibel! <[EMAIL PROTECTED]> writes:
> I'm finding this rather interesting report from top on a Debian box...

> Mem:  32945280k total, 32871832k used,    73448k free,   247432k buffers
> Swap:  1951888k total,    42308k used,  1909580k free, 30294300k cached

> So how is it that linux thinks that 30G is cached?

Why would you think that a number reported by the operating system has
something to do with Postgres' shared memory?

I might be mistaken, but I think that in this report "cached" indicates
the amount of memory in use for kernel disk cache.  (No idea what the
separate "buffers" entry means, but it's obviously not all of the disk
buffers the kernel has got.)  It appears that the kernel is doing
exactly what it's supposed to do and using any not-currently-called-for
memory for disk cache ...

regards, tom lane



Re: [PERFORM] Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!

2007-09-20 Thread smiley2211

How do I know if there is BLOATING??? I just ran vacuum verbose; 

Yes, autovacuum is on.

Thanks...Michelle





Re: [PERFORM] Linux mis-reporting memory

2007-09-20 Thread Adam Tauno Williams
> Sorry, I know this is probably more a linux question, but I'm guessing
> that others have run into this...
> I'm finding this rather interesting report from top on a Debian box...
> Mem:  32945280k total, 32871832k used,    73448k free,   247432k buffers
> Swap:  1951888k total,    42308k used,  1909580k free, 30294300k cached
> This is a machine that's been up some time and the database is 400G, so
> I'm pretty confident that shared_buffers (set to 8G) should be
> completely full, and that's what that top process is indicating.

Nope,  use "ipcs" to show allocated shared memory segments.
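
(For example -- a quick sketch; the size of the big segment should roughly match
shared_buffers plus some overhead:)

  ipcs -m        # lists shared memory segments, their owners and sizes in bytes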

One of the better articles on LINUX & memory management -
http://virtualthreads.blogspot.com/2006/02/understanding-memory-usage-on-linux.html

-- 
Adam Tauno Williams, Network & Systems Administrator
Consultant - http://www.whitemiceconsulting.com
Developer - http://www.opengroupware.org




Re: [PERFORM] Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!

2007-09-20 Thread Scott Marlowe
On 9/20/07, smiley2211 <[EMAIL PROTECTED]> wrote:
>
> How do I know if there is BLOATING??? I just ran vacuum verbose;
>
> Yes, autovacuum is on.

Post the last 4 or 5 lines from vacuum verbose.



Re: [PERFORM] Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!

2007-09-20 Thread smiley2211

Here are the requested lines...

There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  free space map contains 786 pages in 297 relations
DETAIL:  A total of 5408 page slots are in use (including overhead).
5408 page slots are required to track all free space.
Current limits are:  4 page slots, 1000 relations, using 341 KB.
VACUUM






Re: [PERFORM] Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!

2007-09-20 Thread db
> Old servers that housed 7.4 performed better than 8.1.4 version...are
> there any MAJOR performance hits with this version???
>
> I set the postgresql.conf setting to equal that of 7.4 and queries still
> run
> SLOW on 8.1.4...

We need to find a specific query that is slow now that was fast before,
and see the EXPLAIN ANALYZE of that query.

If you have the old server still around then showing the EXPLAIN ANALYZE
of the same query on that server would be a lot of help.
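
(For example -- the table and column names here are just placeholders for whatever
query is slow for you; run the same statement on both servers and compare the plans:)

  EXPLAIN ANALYZE
  SELECT count(*) FROM some_slow_table WHERE some_column = 42;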

/Dennis


