I'm using Postgres 7.2.1 on a dual-Athlon box running RedHat 7.3
bigmem with 2 Gig of RAM and a 240 Gig RAID 5 array (3ware IDE RAID).
I just did a 'vacuum analyze' on the database, yet the same query
against two similar tables produces quite different plans. The two
tables differ only in that one ("center_out_cell") has an extra int2
field called "target", which can take up to 8 different values.

Here are the queries:

db02=# explain select distinct area from center_out_cell where subject
= 'M' and arm = 'R' and rep = 10 and success = 1 and direction = 1;
NOTICE:  QUERY PLAN:

Unique  (cost=87795.47..87795.80 rows=13 width=5)
  ->  Sort  (cost=87795.47..87795.47 rows=131 width=5)
        ->  Seq Scan on center_out_cell  (cost=0.00..87790.87 rows=131 width=5)

EXPLAIN
db02=# explain select distinct area from circles_cell where subject =
'M' and arm = 'R' and rep = 10 and success = 1 and direction = 1;
NOTICE:  QUERY PLAN:

Unique  (cost=258.36..258.52 rows=6 width=5)
  ->  Sort  (cost=258.36..258.36 rows=64 width=5)
        ->  Index Scan using pk1circles_cell on circles_cell  (cost=0.00..256.43 rows=64 width=5)

EXPLAIN
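
(In case it helps, EXPLAIN ANALYZE, which is new in 7.2, would show
actual row counts and times next to these estimates, something like:

explain analyze select distinct area from center_out_cell where
subject = 'M' and arm = 'R' and rep = 10 and success = 1
and direction = 1;

but I haven't included that output here.)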


Here are the definitions of the two tables:

db02=# \d center_out_cell
           Table "center_out_cell"
   Column   |        Type        | Modifiers
------------+--------------------+-----------
 subject    | text               |
 arm        | character(1)       |
 target     | smallint           |
 rep        | integer            |
 direction  | smallint           |
 success    | smallint           |
 hemisphere | character(1)       |
 area       | text               |
 filenumber | integer            |
 dsp_chan   | text               |
 num_spikes | integer            |
 spike_data | double precision[] |
Unique keys: pk0center_out_cell,
             pk1center_out_cell

where: 
db02=# \d pk1center_out_cell
Index "pk1center_out_cell"
   Column   |     Type
------------+--------------
 subject    | text
 arm        | character(1)
 target     | smallint
 rep        | integer
 hemisphere | character(1)
 area       | text
 filenumber | integer
 dsp_chan   | text
 direction  | smallint
unique btree
Index predicate: (success = 1)


and
 
db02=# \d pk0center_out_cell
Index "pk0center_out_cell"
   Column   |     Type
------------+--------------
 subject    | text
 arm        | character(1)
 target     | smallint
 rep        | integer
 hemisphere | character(1)
 area       | text
 filenumber | integer
 dsp_chan   | text
 direction  | smallint
unique btree
Index predicate: (success = 0)


db02=# \d circles_cell
            Table "circles_cell"
   Column   |        Type        | Modifiers
------------+--------------------+-----------
 subject    | text               |
 arm        | character(1)       |
 rep        | integer            |
 direction  | smallint           |
 success    | smallint           |
 hemisphere | character(1)       |
 area       | text               |
 filenumber | integer            |
 dsp_chan   | text               |
 num_spikes | integer            |
 spike_data | double precision[] |
Unique keys: pk0circles_cell,
             pk1circles_cell

where:

db02=# \d pk1circles_cell
  Index "pk1circles_cell"
   Column   |     Type
------------+--------------
 subject    | text
 arm        | character(1)
 rep        | integer
 direction  | smallint
 hemisphere | character(1)
 area       | text
 filenumber | integer
 dsp_chan   | text
unique btree
Index predicate: (success = 1)

db02=# \d pk0circles_cell
  Index "pk0circles_cell"
   Column   |     Type
------------+--------------
 subject    | text
 arm        | character(1)
 rep        | integer
 direction  | smallint
 hemisphere | character(1)
 area       | text
 filenumber | integer
 dsp_chan   | text
unique btree
Index predicate: (success = 0)


Now I know that, because of the extra field "target",
"center_out_cell" can be as much as 8 times the size of
"circles_cell", but according to the planner's estimates the
statement is about 340 times more costly. I think this is because the
planner is using the index in the circles_cell case but not in the
center_out_cell case; however, I don't understand the intricacies of
the planner well enough to make an intelligent guess. I've been
making more or less random changes to postgresql.conf (increasing the
shared memory size, changing random_page_cost, and so on) but would
like some help in trying to speed things up.
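
One experiment I can think of (just a sketch; I haven't drawn any
conclusions from it yet) is to disable sequential scans for the
session and see what the planner thinks an index scan on
center_out_cell would cost:

set enable_seqscan to off;
explain select distinct area from center_out_cell where subject = 'M'
  and arm = 'R' and rep = 10 and success = 1 and direction = 1;
set enable_seqscan to on;

If the resulting index-scan estimate comes out close to the seq-scan
one, the problem is presumably in the cost settings rather than the
index itself.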

Here are some relevant settings from my postgresql.conf (made in an
attempt to max out buffers):

shared_buffers = 9000      # 2*max_connections, min 16 
wal_buffers = 32            # min 4
sort_mem = 64000         # min 32 
vacuum_mem = 16384          # min 1024
wal_files = 32
effective_cache_size = 1000  # default in 8k pages
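
If I'm reading the units right, those work out to roughly:

shared_buffers        = 9000 * 8kB pages  = ~70 MB
sort_mem              = 64000 kB          = ~64 MB (per sort)
effective_cache_size  = 1000 * 8kB pages  = ~8 MB

so effective_cache_size in particular looks small next to the 2 Gig
of RAM in the box, unless I've miscalculated.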


Thanks in advance.
-Tony
