'UPDATE is waiting
...' on the list. I killed them all, backed up the current database, dropped
the database, and restored it from the backup file I had just made.
I don't really know why this happened, but thankfully everything is back to normal now.
Thank you, guys.
should I do? It looks like there is an index pointing at the wrong rows here.
Any help would be very much appreciated.
Regards,
Jenny Tania
_id;
END;
' IMMUTABLE LANGUAGE 'plpgsql';
create index i_item_order on item (item_order(i_subject));
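For context, the fragment above is only the tail of the CREATE FUNCTION; a
function used in a functional index like this has to be marked IMMUTABLE.
A minimal sketch of the overall shape (the argument type and body here are
purely hypothetical, since the real ones were cut off):

create or replace function item_order(text) returns integer as '
declare
    result integer;
begin
    -- hypothetical IMMUTABLE computation over the subject string
    result := length($1);
    return result;
end;
' immutable language 'plpgsql';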
TIA,
--
Jenny Zhang
Open Source Development Lab
12725 SW Millikan Way, Suite 400
Beaverton, OR 97005
(503)626-2455 ext 31
Oops, I named the variable the same as the column. Renaming it to
something else solved the problem.
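For the archives, this is the kind of collision involved (the function and
column names below are made up, not the original ones): pl/pgsql silently
substitutes a declared variable for a column of the same name, so a clause
like "where sc_id = sc_id" degenerates to "$1 = $1" and matches every row.

-- broken: the variable sc_id shadows the shopping_cart.sc_id column,
-- so the WHERE clause is always true and the whole table is scanned
create or replace function cart_total(numeric) returns numeric as '
declare
    sc_id numeric;      -- bad: same name as the column
    total numeric;
begin
    sc_id := $1;
    select sc_total into total from shopping_cart where sc_id = sc_id;
    return total;
end;
' language 'plpgsql';

-- fixed: rename the variable so the column reference is left alone
create or replace function cart_total(numeric) returns numeric as '
declare
    p_sc_id numeric;
    total numeric;
begin
    p_sc_id := $1;
    select sc_total into total from shopping_cart where sc_id = p_sc_id;
    return total;
end;
' language 'plpgsql';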
Thanks,
Jenny
On Tue, 2003-12-16 at 15:54, Stephan Szabo wrote:
> On Tue, 16 Dec 2003, Jenny Zhang wrote:
>
> > I have a stored procedure written in pl/pgsql which takes abou
 on shopping_cart  (cost=0.00..5.01 rows=1 width=144) (actual time=0.22..0.37 rows=1 loops=1)
   Index Cond: (sc_id = 260706::numeric)
Total runtime: 1.87 msec
(3 rows)
Is it true that using pl/pgsql increases the overhead that much?
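One rough way to separate the function-call overhead from the query itself
(cart_total here is just a placeholder name, not the real function):

-- runtime of the plain query, as in the plan above
explain analyze select * from shopping_cart where sc_id = 260706;

-- total runtime of the same lookup wrapped in a pl/pgsql function;
-- the difference between the two "Total runtime" figures is roughly the
-- pl/pgsql overhead (the first call also pays for planning the cached query)
explain analyze select cart_total(260706);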
TIA,
Jenny
--
Jenny Zhang
Open Source Development Lab
12725 SW Millikan Way, Suite 400
The index is created by:
create index i_l_partkey on lineitem (l_partkey);
I do not have any foreign key defined. Does the spec require foreign
keys?
When you create a foreign key reference, does PG create an index
automatically?
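As far as I know, PostgreSQL only requires a unique index on the referenced
columns; it does not create an index on the referencing column for you, so
that one has to be added by hand. A sketch, assuming the usual part/lineitem
relationship (the constraint name is made up):

alter table lineitem
    add constraint fk_l_partkey
    foreign key (l_partkey) references part (p_partkey);

-- the referencing side gets no implicit index; create it explicitly
create index i_l_partkey on lineitem (l_partkey);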
Can you try with the index?
Jenny
On Thu, 2003-09-25 at 14:39
1)
  ->  Index Scan using i_l_partkey on lineitem  (cost=0.00..124.32 rows=30 width=11)
        Index Cond: (l_partkey = $0)
(11 rows)
Hope this helps,
Jenny
On Thu, 2003-09-25 at 12:40, Oleg Lebedev wrote:
> I am running TPC-R benchmarks with a scale factor of
> erage standard deviation of the time required for each
> step.
>
I created a page with the execution time (in seconds), average, and
stddev for each query and each step. The data is collected from 6 dbt3
runs.
http://developer.osdl.org/~jenny/pgsql-optimizer/exetime.html
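(The averages and stddevs on that page are just per-query aggregates over the
runs; in SQL terms something like the following, with a hypothetical table
holding one row per run and query:)

select query_id,
       avg(exec_seconds)    as avg_seconds,
       stddev(exec_seconds) as stddev_seconds
  from dbt3_query_times
 group by query_id
 order by query_id;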
> Higher st
On Thu, 2003-09-18 at 20:20, Tom Lane wrote:
> Jenny Zhang <[EMAIL PROTECTED]> writes:
> > ... It seems to me that small
> > effective_cache_size favors the choice of nested loop joins (NLJ),
> > while a big effective_cache_size favors merge joins (MJ).
>
I posted more results as you requested:
On Fri, 2003-09-19 at 08:08, Manfred Koizar wrote:
> On Thu, 18 Sep 2003 15:36:50 -0700, Jenny Zhang <[EMAIL PROTECTED]>
> wrote:
> >We thought the large effective_cache_size should lead us to better
> >plans. But we found the op
> n't usually that important compared to the actual runtime. The links you
> give show the output of 'explain' but not 'explain analyze', so it's not
> clear which plan is actually _faster_.
>
I put the EXPLAIN ANALYZE output at:
http://developer.osdl.
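For anyone reproducing the comparison: effective_cache_size is just a
per-session setting (measured in disk pages, 8KB by default, on 7.3/7.4), so
both plans can be captured in one psql session. The query below is only a
placeholder for the actual DBT-3 query:

-- small setting: planner assumes little of the data is cached
set effective_cache_size = 1000;
explain analyze select count(*) from lineitem where l_partkey = 12345;

-- large setting: ~3GB assumed cached (393216 * 8KB pages)
set effective_cache_size = 393216;
explain analyze select count(*) from lineitem where l_partkey = 12345;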
stream part of the test (as opposed to the single stream part).
We would like to reduce the variation to less than 1% so that a
2% change between two different kernels would be significant.
Is there anything else we can do?
query: http://developer.osdl.org/~jenny/11.sql
plan with small
> be interesting to
> see..:-)
>
>
>
Let me know if you have any suggestions about how to improve the test
kit (parameters, reported information, etc.), or how to make it more
useful to the PG community.
Thanks,
--
Jenny Zhang
Open Source Development Lab Inc
12725 SW Millikan Way
Suite 400
only. We plan to improve it so that
it can run against PostgreSQL patches. To find more information about
STP, visit: http://www.osdl.org/stp/.
A sample OSDL-DBT3 test result report can be found at:
http://khack.osdl.org/stp/276912/
Your comments are welcome,
Regards,
Jenny
--
Jenny Zhang
Open Source Development Lab