Yes, I also thought of this method and tested it before I got your mail, and
the solution seems workable. Thanks for the help.

On Feb 12, 2008 9:18 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Linux Guru" <[EMAIL PROTECTED]> writes:
> > Analyzing did not help, here is the out of EXPLAIN ANALYZE of
"Linux Guru" <[EMAIL PROTECTED]> writes:
> Analyzing did not help, here is the out of EXPLAIN ANALYZE of update query
> "Seq Scan on dummy (cost=0.00..56739774.24 rows=23441 width=275) (actual
> time=18.927..577929.014 rows=22712 loops=1)"
> " SubPlan"
> "-> Aggregate (cost=2420.41..2420.43

See, it's calculating the sum by grouping on the product field. Here is an
example:

Product   GP
-------   --
A         30
B         40
A         30
C         50
C         50

Now the query calculates the aggregated sum grouped by product and divides by
it, so the same per-product sum is recomputed for every row.
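
Concretely, the grouping step over the example rows amounts to the following
(table and column names here are illustrative, not taken from the thread):

-- Per-product totals for the sample rows above; "sample", "product"
-- and "gp" are hypothetical names used only for this illustration
SELECT product, sum(gp) AS total_gp
FROM sample
GROUP BY product;

-- yields: A -> 60, B -> 40, C -> 100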

Analyzing did not help, here is the output of EXPLAIN ANALYZE of the update
query:

"Seq Scan on dummy  (cost=0.00..56739774.24 rows=23441 width=275) (actual
time=18.927..577929.014 rows=22712 loops=1)"
"  SubPlan"
"    ->  Aggregate  (cost=2420.41..2420.43 rows=1 width=19) (actual
time=25.423..25.425 row
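
The SubPlan node is the telling part: PostgreSQL re-executes that Aggregate
once for every row the Seq Scan emits, and at roughly 25 ms per execution over
22712 rows that alone accounts for the ~578 seconds of runtime. The statement
itself is not preserved in this excerpt, but an update of roughly this shape
(all names are guesses) would produce such a plan:

-- Correlated subquery: the sum is recomputed for each row of "dummy"
-- (table and column names are hypothetical)
UPDATE dummy d
SET gp_sum = (SELECT sum(gp)
              FROM dummy i
              WHERE i.product = d.product);
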
On Feb 11, 2008 5:06 AM, Linux Guru <[EMAIL PROTECTED]> wrote:
> We have a large data warehouse stored in Postgres, and temp tables are
> created based on user queries. The process of temp table creation involves
> selecting data from the main fact table; this includes several select and
> update statements, and the following update statement is having a
> performance issue:
"Linux Guru" <[EMAIL PROTECTED]> writes:
> We have a large datawarehouse stored in postgres and temp tables are created
> based on user query. The process of temp table creation involves selecting
> data from main fact table, this includes several select and update
> statements and one of the follo

We have a large data warehouse stored in Postgres, and temp tables are created
based on user queries. The process of temp table creation involves selecting
data from the main fact table; this includes several select and update
statements, and the following update statement is having a performance issue:
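
The statement referred to above does not appear in this excerpt. As a sketch
of the usual fix for this kind of per-row aggregate (which may or may not be
the method discussed at the top of the thread): compute each group's sum once
in a derived table and join it back, so the aggregate runs once per product
rather than once per row. All names below are hypothetical:

-- Join against a pre-grouped derived table instead of running a
-- correlated subquery per row
UPDATE dummy d
SET gp_sum = s.total_gp
FROM (SELECT product, sum(gp) AS total_gp
      FROM dummy
      GROUP BY product) s
WHERE d.product = s.product;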