[PERFORM] SQL Monitoring

2007-10-09 Thread Campbell, Lance
PGSQL 8.2.4

 

I have noticed a slight spike in the amount of CPU usage in the last few
weeks.  I am sure it has to do with a change or two that was made to
some queries.  What is the best way to log the SQL that is being
executed?  I would prefer to limit the size of the log file to 2 GB.
Is there a way to do this?


Thanks for any help,

Lance Campbell
Project Manager/Software Architect
Web Services at Public Affairs
University of Illinois
217.333.0382
http://webservices.uiuc.edu



Re: [PERFORM] SQL Monitoring

2007-10-09 Thread Marcin Stępnicki
On 10/9/07, Campbell, Lance <[EMAIL PROTECTED]> wrote:
> I have noticed a slight spike in the amount of CPU usage in the last few
> weeks.  I am sure it has to do with a change or two that was made to some
> queries.  What is the best way to log the SQL that is being executed?  I
> would prefer to limit the size of the log file to 2 G.  Is there a way to do
> this?
>

Use http://pgfouine.projects.postgresql.org/.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] SQL Monitoring

2007-10-09 Thread Heikki Linnakangas
Campbell, Lance wrote:
> I have noticed a slight spike in the amount of CPU usage in the last few
> weeks.  I am sure it has to do with a change or two that was made to
> some queries.  What is the best way to log the SQL that is being
> executed?  

Take a look at statement_timeout and log_statement configuration variables.

> I would prefer to limit the size of the log file to 2 G.  Is
> there a way to do this?

log_rotation_size, together with an external tool to delete old log
files. Or use log_truncate_on_rotation and log_rotation_age instead of
log_rotation_size.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PERFORM] SQL Monitoring

2007-10-09 Thread Gregory Stark

"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:

> Campbell, Lance wrote:
>> I have noticed a slight spike in the amount of CPU usage in the last few
>> weeks.  I am sure it has to do with a change or two that was made to
>> some queries.  What is the best way to log the SQL that is being
>> executed?  
>
> Take a look at statement_timeout and log_statement configuration variables.

I suspect he meant log_min_duration_statement which lets you log only queries
which take too long and not statement_timeout which would actually kill your
query if it took too long.
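For what it's worth, that setting looks like this (the threshold is illustrative):

```ini
# postgresql.conf -- log only statements that run longer than 500 ms
log_min_duration_statement = 500
```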


-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



[PERFORM] hashjoin chosen over 1000x faster plan

2007-10-09 Thread Kevin Grittner
I have a situation where a query is running much slower than I would
expect.  EXPLAIN ANALYZE showed that it is hashing some information
which is rarely needed.  When I set enable_hashjoin = off for the
connection, the query runs in 1/1000th the time.
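The per-connection experiment described above can be sketched like this (the SELECT is abbreviated; the full Charge/CaseHist query appears below):

```sql
-- disable hash joins for this session only, then compare plans
SET enable_hashjoin = off;
EXPLAIN ANALYZE SELECT ... ;   -- the query shown below
RESET enable_hashjoin;         -- restore the planner default
```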
 
This isn't a debilitating level of performance, but it would be nice
to clean it up, and we haven't yet come up with a viable solution.
 
The runs below are after several identical runs to create a fully
cached situation.  Autovacuum is aggressive and there is a nightly
vacuum analyze of the whole database.  This box has 4 x 2 GHz Xeon
CPUs, 6 GB RAM, RAID 5 with 13 spindles on 256 MB BBU controller.
 
I simplified the original a bit; sorry it's still kinda big.
 
-Kevin
 
listen_addresses = '*' 
max_connections = 200
shared_buffers = 160MB
temp_buffers = 50MB
work_mem = 10MB
maintenance_work_mem = 160MB
max_fsm_pages = 80
bgwriter_lru_percent = 20.0
bgwriter_lru_maxpages = 200
bgwriter_all_percent = 10.0
bgwriter_all_maxpages = 600
wal_buffers = 160kB
checkpoint_segments = 10
random_page_cost = 2.0
effective_cache_size = 5GB
redirect_stderr = on
log_line_prefix = '[%m] %p %q<%u %d %r> '
stats_block_level = on
stats_row_level = on
autovacuum = on
autovacuum_naptime = 10s
autovacuum_vacuum_threshold = 1
autovacuum_analyze_threshold = 1
datestyle = 'iso, mdy'
lc_messages = 'C'
lc_monetary = 'C'
lc_numeric = 'C'
lc_time = 'C'
escape_string_warning = off
standard_conforming_strings = on
sql_inheritance = off
 
bigbird=> select version();
                                      version
------------------------------------------------------------------------------------
 PostgreSQL 8.2.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.3 (SuSE Linux)
(1 row)

bigbird=> explain analyze
SELECT
"CH"."caseNo",
"CH"."countyNo",
"CH"."chargeNo",
"CH"."statuteCite",
"CH"."sevClsCode",
"CH"."modSevClsCode",
"CH"."descr",
"CH"."offenseDate",
"CH"."pleaCode",
"CH"."pleaDate",
"CH"."chargeSeqNo",
"CHST"."eventDate" AS "reopEventDate",
"CTHE"."descr" AS "reopEventDescr"
  FROM "Charge" "CH"
  LEFT OUTER JOIN "CaseHist" "CHST"
  ON ( "CHST"."countyNo" = "CH"."countyNo"
   AND "CHST"."caseNo" = "CH"."caseNo"
   AND "CHST"."histSeqNo" = "CH"."reopHistSeqNo"
 )
  LEFT OUTER JOIN "CaseTypeHistEvent" "CTHE"
  ON ( "CHST"."eventType" = "CTHE"."eventType"
   AND "CHST"."caseType" = "CTHE"."caseType"
   AND "CHST"."countyNo" = "CTHE"."countyNo"
 )
  WHERE (
("CH"."caseNo" = '2005CF01')
AND ("CH"."countyNo" = 13))
  ORDER BY
"chargeNo",
"chargeSeqNo"
;

                                   QUERY PLAN
--------------------------------------------------------------------------------
 Sort  (cost=2554.50..2554.52 rows=7 width=146) (actual time=443.068..443.070 rows=3 loops=1)
   Sort Key: "CH"."chargeNo", "CH"."chargeSeqNo"
   ->  Hash Left Join  (cost=2318.91..2554.40 rows=7 width=146) (actual time=443.004..443.039 rows=3 loops=1)
         Hash Cond: ((("CHST"."eventType")::bpchar = ("CTHE"."eventType")::bpchar) AND (("CHST"."caseType")::bpchar = ("CTHE"."caseType")::bpchar))
         ->  Nested Loop Left Join  (cost=0.00..208.13 rows=7 width=131) (actual time=0.062..0.093 rows=3 loops=1)
               ->  Index Scan using "Charge_pkey" on "Charge" "CH"  (cost=0.00..15.37 rows=7 width=112) (actual time=0.052..0.059 rows=3 loops=1)
                     Index Cond: ((("countyNo")::smallint = 13) AND (("caseNo")::bpchar = '2005CF01'::bpchar))
               ->  Index Scan using "CaseHist_pkey" on "CaseHist" "CHST"  (cost=0.00..27.46 rows=6 width=41) (actual time=0.002..0.002 rows=0 loops=3)
                     Index Cond: ((("CHST"."countyNo")::smallint = 13) AND (("CHST"."caseNo")::bpchar = '2005CF01'::bpchar) AND (("CHST"."histSeqNo")::smallint = ("CH"."reopHistSeqNo")::smallint))
         ->  Hash  (cost=2084.80..2084.80 rows=15607 width=98) (actual time=442.919..442.919 rows=15607 loops=1)
               ->  Subquery Scan "CTHE"  (cost=1630.43..2084.80 rows=15607 width=98) (actual time=331.665..411.390 rows=15607 loops=1)
                     ->  Merge Right Join  (cost=1630.43..1928.73 rows=15607 width=89) (actual time=331.661..391.999 rows=15607 loops=1)
                           Merge Cond: (((d."countyNo")::smallint = "inner"."?column9?") AND ((d."caseType")::bpchar = "inner"."?column10?") AND ((d."eventType")::bpchar = "inner"."?column11?"))
                           ->  Index Scan using "CaseTypeHistEventD_pkey" on "CaseTypeHistEventD" d  (cost=0.00..87.77 rows=2051 width=21) (actual time=0.026..0.730 rows=434 loops=1)
                           ->  Sort  (cost=1630.43..1669.45 rows=15607 width=76) (actual time=331.022

[PERFORM] Shared Buffer setting in postgresql.conf

2007-10-09 Thread Radhika S
Hi,
Along with my previous message (slow postgres), I noticed the
shared_buffers setting for our production database is set to 1000.
How much higher can I go?  I don't know how much my kernel can take.

I am running postgres 7.4.6 on Redhat enterprise 3 server.

Thanks,
Radhika

-- 
It is all a matter of perspective. You choose your view by choosing where to
stand. --Larry Wall


Re: [PERFORM] Postgres running Very slowly

2007-10-09 Thread Bill Moran
In response to "Radhika S" <[EMAIL PROTECTED]>:

> Hi -,
> I have a very peculiar situation.
> 
> I am running a postgres 7.4.6 database. It is running slow... .

7.4.6 is very old.  You're lucky it hasn't corrupted your data.  At
least upgrade to the latest 7.4.18 (yes, that's 12 patches ahead of
you).  Optimally, upgrade to 8.2.5, which has a huge number of
performance improvements.

> I vacuum --analyze daily. I just did again.
> I did a vacuum full last night.
> 
> But to no avail. CPU usage and memory are normal, but the system is
> crawling.

You need to specifically define "crawling" before anyone will be able
to provide any useful advice.  What queries are running slow?  What
does the explain output look like?  The answers are in the details,
so we can't provide the answers unless you provide the details.  Like
the OS you're running it on, for example.
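A sketch of the kind of detail Bill is asking for (the table and column here are hypothetical; substitute any query that is actually slow):

```sql
-- run the suspect query under EXPLAIN ANALYZE and post the full output
EXPLAIN ANALYZE
SELECT *
FROM orders                -- hypothetical table
WHERE customer_id = 42;    -- hypothetical predicate
```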

> Here is the info from vacuum.
> 
> CPU 0.02s/0.01u sec elapsed 0.02 sec.
> INFO:  free space map: 167 relations, 1412 pages stored; 3440 total pages needed
> DETAIL:  Allocated FSM size: 1000 relations + 2 pages = 178 kB shared memory.
> VACUUM

This doesn't look problematic, so I doubt your vacuum policy is to blame.

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

[EMAIL PROTECTED]
Phone: 412-422-3463x4023



[PERFORM] Postgres running Very slowly

2007-10-09 Thread Radhika S
Hi -,
I have a very peculiar situation.

I am running a postgres 7.4.6 database. It is running slowly.
I vacuum --analyze daily, and I just did so again.
I did a vacuum full last night.

But to no avail. CPU usage and memory are normal, but the system is
crawling.

Here is the info from vacuum.

CPU 0.02s/0.01u sec elapsed 0.02 sec.
INFO:  free space map: 167 relations, 1412 pages stored; 3440 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 2 pages = 178 kB shared memory.
VACUUM


Is there anything else I should be looking at like FSM configuration in the
conf file?
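If the FSM numbers from VACUUM VERBOSE ever do outgrow what is allocated (in the output above, 3440 pages needed still fits), the knobs involved look like this; the values are illustrative:

```ini
# postgresql.conf (7.4) -- illustrative values
max_fsm_pages = 20000       # must exceed "total pages needed" from VACUUM VERBOSE
max_fsm_relations = 1000    # must exceed the relation count from VACUUM VERBOSE
```

Both require a server restart to take effect.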

Any help would be appreciated.

Thanks.
Radhika

-- 
It is all a matter of perspective. You choose your view by choosing where to
stand. --Larry Wall


Re: [PERFORM] Shared Buffer setting in postgresql.conf

2007-10-09 Thread Marcin Stępnicki
On 10/9/07, Radhika S <[EMAIL PROTECTED]> wrote:
> Hi,
> Along with my previous message (slow postgres), I notice the shared buffer
> setting for our production database is set to 1000.
> How much higher can I go?  I don't know how much my kernel can take?

A lot higher. How much memory do you have?
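In 7.4, each shared_buffers unit is one 8 kB page, and the whole segment must fit under the kernel's SHMMAX limit. A quick sketch of the arithmetic (10000 is just an example value):

```shell
# rough size of the shared memory segment for a given shared_buffers
buffers=10000                  # example value, in 8 kB pages
bytes=$((buffers * 8192))
echo "$bytes"                  # compare against: sysctl -n kernel.shmmax
```

If the result exceeds kernel.shmmax, raise the limit in /etc/sysctl.conf before increasing shared_buffers.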

> I am running postgres 7.4.6 on Redhat enterprise 3 server.

Unless you've got a very good reason, do yourself a favour and upgrade to 8.2.5.



Re: [PERFORM] hashjoin chosen over 1000x faster plan

2007-10-09 Thread Simon Riggs
On Tue, 2007-10-09 at 15:09 -0500, Kevin Grittner wrote:

> I have a situation where a query is running much slower than I would
> expect.  The ANALYZE showed that it is hashing some information which
> is rarely needed.  When I set enable_hashjoin = off for the
> connection the query run in 1/1000 the time.

Can you confirm the two queries give identical outputs? It isn't clear
to me why the second sort is (never executed) in your second plan, which
I would only expect to see for an inner merge join.

Can you show the details for ControlRecord also.

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com

