Re: [PERFORM] Weird performance drop after VACUUM

2005-08-26 Thread asif ali
Hi,
I have the same issue. After doing "VACUUM ANALYZE"
the performance of the query dropped.

Here is the query:

explain select * from conversion_table c
where c.conversion_date BETWEEN '2005-06-07' and '2005-08-17'

Before "VACUUM ANALYZE"

"Index Scan using conversion_table_pk on
keyword_conversion_table c  (cost=0.00..18599.25
rows=4986 width=95)"
"  Index Cond: ((conversion_date >=
'2005-06-07'::date) AND (conversion_date <=
'2005-08-17'::date))"


After "VACUUM ANALYZE"


"Seq Scan on conversion_table c  (cost=0.00..29990.83
rows=1094820 width=66)"
"  Filter: ((conversion_date >= '2005-06-07'::date)
AND (conversion_date <= '2005-08-17'::date))"


I don't know why the system is doing a "Seq Scan" now.

Thanks

asif ali







--- Ümit Öztosun <[EMAIL PROTECTED]> wrote:

> Hello,
> 
> We are using PostgreSQL for our business
> application. Recently, during
> testing of our application with large volumes of
> data, we faced a weird
> problem. Our query performance dropped
> *dramatically* after "VACUUM FULL
> ANALYZE" command. We have encountered a similar
> problem listed on
> mailing list archives, but the submitter solved his
> problem by rewriting his query, which is unfortunately
> very hard for us.
> 
> I am attaching two EXPLAIN ANALYZE outputs, first
> one is just before the
> VACUUM FULL ANALYZE command and the other is the one
> after. Also
> attached is the SQL query, which is simplified to
> clarify the problem. In the example, query time
> increases from 1.8 seconds to > 4.0 seconds. The
> difference for the complete query is much bigger,
> query time increases
> from 7.8 seconds to > 110 seconds.
> 
> Any help is appreciated, we were unable to identify
> what causes the
> query planner to choose a different/poor performing
> plan.
> 
> Notes:
> Our production platform is Ubuntu Linux Hoary on
> i386, PostgreSQL 8.0.3,
> compiled from sources. Same tests were carried on
> Windows XP
> Professional and PostgreSQL 8.0.1 with similar
> results. The queries use
> little IO, high CPU. The largest table involved in
> the sample query has
> about 1 rows. Indexes are used intensively, some
> tables use > 4
> indexes.
> 
> Best regards,
> Umit Oztosun
> 
> > SELECT * FROM (
> SELECT 
> COALESCE (
> (SELECT COALESCE (sum(irskal.anamiktar),
> 0)  
> * (SELECT 
> birim.fiyat2 * (SELECT kur1 
> FROM
> sis_doviz_kuru kur 
> WHERE
> birim._key_sis_doviz2 = kur._key_sis_doviz 
> ORDER BY tarih
> desc 
> LIMIT 1)
> FROM scf_stokkart_birimleri
> birim
> WHERE _key_scf_stokkart =
> stok._key
> AND anabirim = '1'
>   )
>  FROM scf_irsaliye irs,
> scf_irsaliye_kalemi irskal
>  WHERE irskal._key_kalemturu =
> stok._key
>  AND irskal._key_scf_irsaliye =
> irs._key
>  AND irs.karsifirma = 'KENDI'
>  AND (irs.turu='MAI' OR
> irs.turu='KGI' OR irs.turu='PS' OR irs.turu='TS' OR
> irs.turu='KC' OR irs.turu='KCO')
>  AND ( irs._key_sis_depo_dest =
> '003l$1$$'  OR  irs._key_sis_depo_dest =
> '0048$1$$'  OR  irs._key_sis_depo_dest =
> '004b$1$$'  OR  irs._key_sis_depo_dest =
> '004d$1$$' )
>  AND ((irskal._key LIKE '%' OR
> irskal._key LIKE '0101%' OR irskal._key LIKE '$$%'))
>  AND irs.tarih <= '2005-08-26'
> ), 0
> ) as arti_fiili_irs_karsifirma,
> stok.*
> FROM scf_stokkart stok
> ) AS _SWT WHERE (_key LIKE '00%' OR _key LIKE '01%'
> OR _key LIKE '$$%') ORDER BY _key desc
> > Before VACUUM FULL ANALYZE - Short Query
> ---
> Sort  (cost=9094.31..9094.40 rows=37 width=817)
> (actual time=1852.799..1877.738 rows=1 loops=1)
>   Sort Key: stok._key
>   ->  Seq Scan on scf_stokkart stok 
> (cost=0.00..9093.34 rows=37 width=817) (actual
> time=8.670..1575.586 rows=1 loops=1)
> Filter: (((_key)::text ~~ '00%'::text) OR
> ((_key)::text ~~ '01%'::text) OR ((_key)::text ~~
> '$$%'::t

Re: [PERFORM] Weird performance drop after VACUUM

2005-08-26 Thread asif ali
Thanks Michael for your reply.

Here is the performance on the database on which I did
"VACUUM ANALYZE":

explain analyze
select keyword_id,
       sum(daily_impressions) as daily_impressions,
       sum(daily_actions) as daily_actions
from conversion_table c
where c.conversion_date BETWEEN '2005-06-07' and '2005-08-17'
group by keyword_id

"GroupAggregate  (cost=195623.66..206672.52 rows=20132
width=16) (actual time=8205.283..10139.369 rows=55291
loops=1)"
"  ->  Sort  (cost=195623.66..198360.71 rows=1094820
width=16) (actual time=8205.114..9029.501 rows=863883
loops=1)"
"Sort Key: keyword_id"
"->  Seq Scan on keyword_conversion_table c 
(cost=0.00..29990.83 rows=1094820 width=16) (actual
time=0.057..1422.319 rows=863883 loops=1)"
"  Filter: ((conversion_date >=
'2005-06-07'::date) AND (conversion_date <=
'2005-08-17'::date))"
"Total runtime: 14683.617 ms"


Now see what happens if I change the query by
commenting out one column.

explain analyze
select keyword_id,
       sum(daily_impressions) as daily_impressions
--     ,sum(daily_actions) as daily_actions
from conversion_table c
where c.conversion_date BETWEEN '2005-06-07' and '2005-08-17'
group by keyword_id


"HashAggregate  (cost=27373.51..27373.52 rows=2
width=16) (actual time=3030.386..3127.073 rows=55717
loops=1)"
"  ->  Seq Scan on conversion_table c 
(cost=0.00..27336.12 rows=4986 width=16) (actual
time=0.050..1357.164 rows=885493 loops=1)"
"Filter: ((conversion_date >=
'2005-06-07'::date) AND (conversion_date <=
'2005-08-17'::date))"
"Total runtime: 3159.162 ms"


I noticed "GroupAggregate" changes to "HashAggregate",
and the runtime drops from about 14 seconds to 3 seconds.
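For what it's worth, the planner's choice between GroupAggregate and HashAggregate also depends on how much memory it is allowed to use per operation. A sketch of testing this on 8.0 (the 32768 value is an illustrative assumption, not a recommended setting):

```sql
-- work_mem (8.x; formerly sort_mem on 7.x) is specified in KB on 8.0.
-- Raising it for one session lets the planner consider an in-memory
-- HashAggregate instead of a disk-assisted Sort + GroupAggregate.
-- 32768 (32 MB) is an illustrative value only.
SET work_mem = 32768;

EXPLAIN ANALYZE
SELECT keyword_id,
       sum(daily_impressions) AS daily_impressions,
       sum(daily_actions)     AS daily_actions
FROM conversion_table c
WHERE c.conversion_date BETWEEN '2005-06-07' AND '2005-08-17'
GROUP BY keyword_id;
```

Run with and without the SET to compare the two plans side by side.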


On the other hand, another database on which I did not
run "VACUUM ANALYZE" is working fine.


explain analyze
select keyword_id,
       sum(daily_impressions) as daily_impressions,
       sum(daily_actions) as daily_actions
from conversion_table c
where c.conversion_date BETWEEN '2005-06-07' and '2005-08-17'
group by keyword_id


"HashAggregate  (cost=27373.51..27373.52 rows=2
width=16) (actual time=3024.289..3120.324 rows=55717
loops=1)"
"  ->  Seq Scan on conversion_table c 
(cost=0.00..27336.12 rows=4986 width=16) (actual
time=0.047..1352.212 rows=885493 loops=1)"
"    Filter: ((conversion_date >=
'2005-06-07'::date) AND (conversion_date <=
'2005-08-17'::date))"
"Total runtime: 3152.437 ms"


I am new to postgres. Thanks in advance.


asif ali






--- Michael Fuhr <[EMAIL PROTECTED]> wrote:

> On Fri, Aug 26, 2005 at 03:52:24PM -0700, asif ali
> wrote:
> > I have the same issue. After doing "VACUUM
> ANALYZE"
> > performance of the query dropped. 
> 
> Your EXPLAIN output doesn't show the actual query
> times -- could
> you post the EXPLAIN ANALYZE output?  That'll also
> show how accurate
> the planner's row count estimates are.
> 
> > Before "VACUUM ANALYZE"
> > 
> > "Index Scan using conversion_table_pk on
> > keyword_conversion_table c  (cost=0.00..18599.25
> > rows=4986 width=95)"
> > "  Index Cond: ((conversion_date >=
> > '2005-06-07'::date) AND (conversion_date <=
> > '2005-08-17'::date))"
> > 
> > After "VACUUM ANALYZE"
> > 
> > "Seq Scan on conversion_table c 
> (cost=0.00..29990.83
> > rows=1094820 width=66)"
> > "  Filter: ((conversion_date >=
> '2005-06-07'::date)
> > AND (conversion_date <= '2005-08-17'::date))"
> > 
> > I don't know why the system is doing a "Seq Scan" now.
> 
> Notice the row count estimates: 4986 in the "before"
> query and
> 1094820 in the "after" query.  In the latter, the
> planner thinks
> it has to fetch so much of the table that a
> sequential scan would
> be faster than an index scan.  You can see whether
> that guess is
> correct by disabling enable_seqscan to force an
> index scan.  It
> might be useful to see the output of the following:
> 
> SET enable_seqscan TO on;
> SET enable_indexscan TO off;
> EXPLAIN ANALYZE SELECT ...;
> 
> SET enable_seqscan TO off;
> SET enable_indexscan TO on;
> EXPLAIN ANALYZE SELECT ...;
> 
> You might also experiment with planner variables
> like effective_cache_size
> and random_page_cost to see how changing them
> affects the query
> plan.  However, be careful of tuning the system
> based on one query:
> make sure adjustments result in reasonable plans for
> many different
> queries.
> 
> -- 
> Michael Fuhr
> 
> ---(end of
> broadcast)---
> TIP 4: Have you searched our list archives?
> 
>http://archives.postgresql.org
> 








Re: [PERFORM] Weird performance drop after VACUUM

2005-08-29 Thread asif ali
Michael,
The database is on the same system.
What I am doing is only "VACUUM ANALYZE
conversion_table".

I did the same thing on a newly created database
and got the same result. So after "VACUUM ANALYZE" the
performance dropped.
Please see this: the runtime changes from 7755.115 ms to
14859.291 ms.


explain analyze
select keyword_id,
       sum(daily_impressions) as daily_impressions,
       sum(daily_clicks) as daily_clicks,
       COALESCE(sum(daily_cpc::double precision), 0) as daily_cpc,
       sum(daily_revenues) as daily_revenues,
       sum(daily_actions) as daily_actions,
       count(daily_cpc) as count
from conversion_table c
where c.conversion_date BETWEEN '2005-06-07' and '2005-08-17'
group by keyword_id

"HashAggregate  (cost=18686.51..18686.54 rows=2
width=52) (actual time=7585.827..7720.370 rows=55717
loops=1)"
"  ->  Index Scan using conversion_table_pk on
conversion_table c  (cost=0.00..18599.25 rows=4986
width=52) (actual time=0.129..2882.066 rows=885493
loops=1)"
"Index Cond: ((conversion_date >=
'2005-06-07'::date) AND (conversion_date <=
'2005-08-17'::date))"
"Total runtime: 7755.115 ms"


VACUUM ANALYZE conversion_table


explain analyze
select keyword_id,
       sum(daily_impressions) as daily_impressions,
       sum(daily_clicks) as daily_clicks,
       COALESCE(sum(daily_cpc::double precision), 0) as daily_cpc,
       sum(daily_revenues) as daily_revenues,
       sum(daily_actions) as daily_actions,
       count(daily_cpc) as count
from conversion_table c
where c.conversion_date BETWEEN '2005-06-07' and '2005-08-17'
group by keyword_id


"GroupAggregate  (cost=182521.76..200287.99 rows=20093
width=37) (actual time=8475.580..12618.793 rows=55717
loops=1)"
"  ->  Sort  (cost=182521.76..184698.58 rows=870730
width=37) (actual time=8475.246..9418.068 rows=885493
loops=1)"
"Sort Key: keyword_id"
"->  Seq Scan on conversion_table c 
(cost=0.00..27336.12 rows=870730 width=37) (actual
time=0.007..1520.788 rows=885493 loops=1)"
"  Filter: ((conversion_date >=
'2005-06-07'::date) AND (conversion_date <=
'2005-08-17'::date))"
"Total runtime: 14859.291 ms"






  


--- Michael Fuhr <[EMAIL PROTECTED]> wrote:

> On Fri, Aug 26, 2005 at 05:10:49PM -0700, asif ali
> wrote:
> > "GroupAggregate  (cost=195623.66..206672.52
> rows=20132
> > width=16) (actual time=8205.283..10139.369
> rows=55291
> > loops=1)"
> > "  ->  Sort  (cost=195623.66..198360.71
> rows=1094820
> > width=16) (actual time=8205.114..9029.501
> rows=863883
> > loops=1)"
> > "Sort Key: keyword_id"
> > "->  Seq Scan on keyword_conversion_table
> c 
> > (cost=0.00..29990.83 rows=1094820 width=16)
> (actual
> > time=0.057..1422.319 rows=863883 loops=1)"
> > "  Filter: ((conversion_date >=
> > '2005-06-07'::date) AND (conversion_date <=
> > '2005-08-17'::date))"
> > "Total runtime: 14683.617 ms"
> 
> What are your effective_cache_size and work_mem
> (8.x) or sort_mem (7.x)
> settings?  How much RAM does the machine have?  If
> you have enough
> memory then raising those variables should result in
> better plans;
> you might also want to experiment with
> random_page_cost.  Be careful
> not to set work_mem/sort_mem too high, though.  See
> "Run-time
> Configuration" in the "Server Run-time Environment"
> chapter of the
> documentation for more information about these
> variables.
> 
> -- 
> Michael Fuhr
> 
> 








Re: [PERFORM] Weird performance drop after VACUUM

2005-08-29 Thread asif ali
Michael,
The effective_cache_size, random_page_cost, and work_mem
settings were at their defaults (commented out).
I have changed these settings and now the
performance is better; see below.

"HashAggregate  (cost=42573.89..42925.52 rows=20093
width=37) (actual time=5273.984..5430.733 rows=55717
loops=1)"
"  ->  Seq Scan on keyword_conversion_table c 
(cost=0.00..27336.12 rows=870730 width=37) (actual
time=0.052..1405.576 rows=885493 loops=1)"
"Filter: ((conversion_date >=
'2005-06-07'::date) AND (conversion_date <=
'2005-08-17'::date))"
"Total runtime: 5463.764 ms"
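For context, the kind of postgresql.conf changes under discussion might look like the following. The values here are illustrative assumptions for unstated hardware, not the settings the poster actually used:

```
# postgresql.conf -- illustrative values only; tune to your RAM
# and re-check plans with EXPLAIN ANALYZE afterwards.
effective_cache_size = 65536   # in 8 kB pages (~512 MB of OS cache)
work_mem = 16384               # in KB, per sort/hash operation (8.x)
random_page_cost = 3           # default is 4; lower favors index scans
```

As Michael cautioned earlier in the thread, verify that such changes produce reasonable plans across many queries, not just this one.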



Thanks a lot



--- Michael Fuhr <[EMAIL PROTECTED]> wrote:

> On Mon, Aug 29, 2005 at 11:07:17AM -0700, asif ali
> wrote:
> > The database is on the same system.
> > What I am doing is only "VACUUM analyze 
> > conversion_table"
> > 
> > I did the the same thing on a newly created
> database.
> > And got the same result. So after "VACUUM analyze"
> > performance dropped.
> > Please see this. Runtime changes from "7755.115"
> to
> > "14859.291" ms
> 
> As has been pointed out a couple of times, you're
> getting a different
> plan after VACUUM ANALYZE because the row count
> estimates are more
> accurate.  Unfortunately the more accurate estimates
> result in a
> query plan that's slower than the plan for the less
> accurate
> estimates.  PostgreSQL *thinks* the plan will be
> faster but your
> results show that it isn't, so you might need to
> adjust some of the
> planner's cost constants.
> 
> A asked some questions that you didn't answer, so
> I'll ask them again:
> 
> What's your effective_cache_size setting?
> What's your work_mem (8.x) or sort_mem (7.x)
> setting?
> What's your random_page_cost setting?
> How much available RAM does the machine have?
> What version of PostgreSQL are you running?
> 
> Various tuning guides give advice on how to set the
> above and other
> configuration variables.  Here's one such guide:
> 
> http://www.powerpostgresql.com/PerfList/
> 
> -- 
> Michael Fuhr
> 
> 








[PERFORM] VACUUM FULL does not work...

2006-12-06 Thread asif ali
Hi,
  I have a "product" table having 350 records. It takes approx 1.8 seconds to 
get all records from this table. I copied this table to a "product_temp" table 
and ran the same query to select all records; it took 10 ms (much faster).
  I did "VACUUM FULL" on the "product" table but it did not help.
  
  I checked the file sizes of these two tables:
  the "product" table's file size is 32 MB and
  the "product_temp" table's file size is 72 kB.
  
  So, it seems that "VACUUM FULL" is not doing anything. 
  Please suggest.
  
  asif ali
  icrossing inc.
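A quick way to compare what the two tables look like on disk from inside the database (a sketch; relpages and reltuples are estimates that are only accurate after a recent VACUUM or ANALYZE):

```sql
-- relpages is in 8 kB blocks. A table whose relpages is large
-- relative to its live reltuples is carrying dead-row bloat that
-- plain VACUUM marks reusable but does not return to the OS.
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('product', 'product_temp');
```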
 

Re: [PERFORM] VACUUM FULL does not work...

2006-12-06 Thread asif ali
Thanks for the prompt reply...
 Here is the output of "VACUUM FULL VERBOSE"
 The postgres version is "8.0.3".
 
 Thanks
 asif ali
 icrossing inc
 
 INFO:  vacuuming "public.product_table"
 INFO:  "product_table": found 0 removable, 139178 nonremovable row versions in 
4305 pages
 DETAIL:  138859 dead row versions cannot be removed yet.
 Nonremovable row versions range from 152 to 273 bytes long.
 There were 26916 unused item pointers.
 Total free space (including removable row versions) is 4507788 bytes.
 249 pages are or will become empty, including 0 at the end of the table.
 746 pages containing 4286656 free bytes are potential move destinations.
 CPU 0.04s/0.06u sec elapsed 0.15 sec.
 INFO:  index "product_table_client_name_unique" now contains 139178 row 
versions in 3916 pages
 DETAIL:  0 index row versions were removed.
 2539 index pages have been deleted, 2055 are currently reusable.
 CPU 0.08s/0.02u sec elapsed 0.76 sec.
 INFO:  index "product_table_cpc_agent_id_unique" now contains 139178 row 
versions in 1980 pages
 DETAIL:  0 index row versions were removed.
 1162 index pages have been deleted, 950 are currently reusable.
 CPU 0.04s/0.02u sec elapsed 0.49 sec.
 INFO:  index "product_table_pk" now contains 139178 row versions in 3472 pages
 DETAIL:  0 index row versions were removed.
 2260 index pages have been deleted, 1870 are currently reusable.
 CPU 0.08s/0.02u sec elapsed 0.53 sec.
 INFO:  "product_table": moved 18631 row versions, truncated 4305 to 4299 pages
 DETAIL:  CPU 0.18s/1.14u sec elapsed 2.38 sec.
 INFO:  index "product_table_client_name_unique" now contains 157728 row 
versions in 3916 pages
 DETAIL:  81 index row versions were removed.
 2407 index pages have been deleted, 1923 are currently reusable.
 CPU 0.04s/0.01u sec elapsed 0.17 sec.
 INFO:  index "product_table_cpc_agent_id_unique" now contains 157728 row 
versions in 1980 pages
 DETAIL:  81 index row versions were removed.
 1100 index pages have been deleted, 888 are currently reusable.
 CPU 0.03s/0.01u sec elapsed 0.16 sec.
 INFO:  index "product_table_pk" now contains 157728 row versions in 3472 pages
 DETAIL:  81 index row versions were removed.
 2150 index pages have been deleted, 1760 are currently reusable.
 CPU 0.05s/0.01u sec elapsed 0.30 sec.
 INFO:  vacuuming "pg_toast.pg_toast_11891545"
 INFO:  "pg_toast_11891545": found 0 removable, 0 nonremovable row versions in 
0 pages
 DETAIL:  0 dead row versions cannot be removed yet.
 Nonremovable row versions range from 0 to 0 bytes long.
 There were 0 unused item pointers.
 Total free space (including removable row versions) is 0 bytes.
 0 pages are or will become empty, including 0 at the end of the table.
 0 pages containing 0 free bytes are potential move destinations.
 CPU 0.00s/0.00u sec elapsed 0.00 sec.
 INFO:  index "pg_toast_11891545_index" now contains 0 row versions in 1 pages
 DETAIL:  0 index pages have been deleted, 0 are currently reusable.
 CPU 0.00s/0.00u sec elapsed 0.00 sec.
 
 Query returned successfully with no result in 5201 ms.

Jean-Max Reymond <[EMAIL PROTECTED]> wrote: 2006/12/6, asif ali :
> Hi,
>  I have a "product" table having 350 records. It takes approx 1.8 seconds to
> get all records from this table. I copied this table to a "product_temp"
> table and ran the same query to select all records; it took 10 ms (much
> faster).
>  I did "VACUUM FULL" on the "product" table but it did not help.
>
>  I checked the file size of these two tables.
>  "product" table's file size is "32mb" and
>  "product_temp" table's file size is "72k".
>
>  So, it seems that "VACUUM FULL" is not doing anything.
>  Please suggest.

try VACUUM FULL VERBOSE and report the result.

-- 
Jean-Max Reymond
CKR Solutions Open Source
Nice France
http://www.ckr-solutions.com



 

Re: [PERFORM] VACUUM FULL does not work...

2006-12-06 Thread asif ali
Thanks everybody for helping me out.
 I checked "pg_stat_activity"/pg_locks, but I do not see any activity on the 
table. 
 How can I find an old running transaction?
 I saw this link, but it did not help:
 http://archives.postgresql.org/pgsql-hackers/2005-02/msg00760.php
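On 8.0, a query like the following can surface the offending backend (a sketch; it assumes stats_command_string is enabled in postgresql.conf, otherwise current_query is not populated):

```sql
-- Look for backends showing '<IDLE> in transaction'; their procpid
-- identifies the process holding back VACUUM's dead-row cleanup.
SELECT procpid, usename, current_query
FROM pg_stat_activity;
```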
 
 Thanks
 
 asif ali
 icrossing inc
 
Tom Lane <[EMAIL PROTECTED]> wrote: asif ali  writes:
>  INFO:  vacuuming "public.product_table"
>  INFO:  "product_table": found 0 removable, 139178 nonremovable row versions 
> in 4305 pages
>  DETAIL:  138859 dead row versions cannot be removed yet.

So Scott's guess was correct: you've got a whole lot of dead rows in
there that will eventually be removable, but not while there's still
an open transaction that might be able to "see" them.  Find your open
transaction and get rid of it (pg_stat_activity might help).

   regards, tom lane



 

Re: [PERFORM] VACUUM FULL does not work...

2006-12-06 Thread asif ali
Thanks Scott,
 It worked!
 We killed an old idle transaction, and now everything is fine.
 
 Thanks Again
 asif ali
 icrossing inc

Scott Marlowe <[EMAIL PROTECTED]> wrote: On Wed, 2006-12-06 at 15:53, asif ali 
wrote:
> Thanks Everybody for helping me out.
> I checked "pg_stat_activity"/pg_locks, but do not see any activity on
> the table. 
> How to find a old running transaction...
> I saw this link, but it did not help..
> http://archives.postgresql.org/pgsql-hackers/2005-02/msg00760.php

Sometimes just using top or ps will show you.

On Linux you can run top and then hit 'c' to show the command line, and look
for ones that are IDLE.

Or, try ps:

ps axw|grep postgres

On my machine normally:

 2408 ?S  0:00 /usr/local/pgsql/bin/postmaster -p 5432 -D 
/home/postgres/data
 2615 ?S  0:00 postgres: stats buffer process
 2616 ?S  0:00 postgres: stats collector process
 2857 ?S  0:00 postgres: writer process
 2858 ?S  0:00 postgres: stats buffer process
 2859 ?S  0:00 postgres: stats collector process

But with an idle transaction:

 2408 ?S  0:00 /usr/local/pgsql/bin/postmaster -p 5432 -D 
/home/postgres/data
 2615 ?S  0:00 postgres: stats buffer process
 2616 ?S  0:00 postgres: stats collector process
 2857 ?S  0:00 postgres: writer process
 2858 ?S  0:00 postgres: stats buffer process
 2859 ?S  0:00 postgres: stats collector process
 8679 ?S  0:00 postgres: smarlowe test [local] idle in transaction

Thar she blows!

Also, you can restart the database and vacuum it then too. Of course,
don't do that during regular business hours...



 

Re: [PERFORM] Slow update with simple query

2006-12-13 Thread asif ali
Arnaud,
 Have you run "ANALYZE" on the table after creating index?
 Also make sure that "#effective_cache_size" is set properly. A higher 
value makes it more likely to use index scans.
 
 Thanks
 asif ali
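The two suggestions above can be sketched as follows (the table name t1 is taken from Arnaud's thread; illustrative only):

```sql
-- Refresh planner statistics so the newly created index is weighed
-- with accurate row estimates.
ANALYZE t1;

-- Check the current value; if it is still the commented-out default,
-- raise it in postgresql.conf to reflect the OS disk cache size.
SHOW effective_cache_size;
```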

Arnaud Lesauvage <[EMAIL PROTECTED]> wrote: Ragnar wrote:
> On Wed, 2006-12-13 at 14:38 +0100, Arnaud Lesauvage wrote:
>> Jens Schipkowski wrote:
>> > On Wed, 13 Dec 2006 13:23:41 +0100, Arnaud Lesauvage   
>> >> Why is this query better than the other one? Because it runs the
>> >> "(field IN ('some','other') AND field2 = 'Y')" once and then executes
>> >> the join with the resulting set?
>> > True. The subselect in the FROM clause will be executed once and will be
>> > joined using the condition in the WHERE clause. So your condition on t2 is
>> > not executed for each row in t1 (2M records) but for each row in t2 (1k
>> > records). And the boolean value is already set during the update.
>> 
>> OK Jens, thanks for clarifying this.
>> I thought the planner could guess what to do in such cases.
> 
> don't worry, it will.
> this is not your problem

Indeed, the new query does not perform that well:

"Hash Join  (cost=112.75..307504.97 rows=2024869 width=355) (actual 
time=53.995..246443.811 rows=2020061 loops=1)"
"  Hash Cond: ("outer".uid = "inner".uid)"
"  ->  Seq Scan on t1 (cost=0.00..261792.01 rows=2033001 width=338) (actual 
time=19.342..234304.499 rows=2033001 loops=1)"
"  ->  Hash  (cost=110.20..110.20 rows=1020 width=53) (actual time=4.853..4.853 
rows=1020 loops=1)"
"->  Seq Scan on t2  (cost=0.00..110.20 rows=1020 width=53) (actual 
time=0.017..2.586 rows=1020 loops=1)"
"Total runtime: 2777844.892 ms"

I removed all unnecessary indexes on t1 before running the query (I left the 
index on uid and the multicolumn index containing the updated field).
I believe the multicolumn-functional-index computation is taking some time 
here, isn't it?

Regards
--
Arnaud



 