On Thu, 18 Sep 2003 15:36:50 -0700, Jenny Zhang <[EMAIL PROTECTED]>
wrote:
>We thought the large effective_cache_size should lead us to better
>plans. But we found the opposite.
The common structure of your query plans is:
Sort
  Sort Key: sum((partsupp.ps_supplycost * partsupp.ps_availqty))
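
For readers who don't follow the links, here is a rough sketch of the kind of
query that produces that plan top: an ORDER BY over an aggregated value,
loosely modeled on the DBT-3/TPC-H Q11 shape. The exact query text below is an
assumption, not the one from the actual run:

    -- Hypothetical Q11-style query: the top-level Sort node corresponds to
    -- the ORDER BY over the aggregate, hence the sort key
    -- sum(ps_supplycost * ps_availqty) seen in all the plans.
    SELECT ps_partkey,
           sum(ps_supplycost * ps_availqty) AS value
    FROM partsupp
    GROUP BY ps_partkey
    ORDER BY value DESC;
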
Tom Lane <[EMAIL PROTECTED]> writes:
> I think this is a pipe dream. Variation in where the data gets laid
> down on your disk drive would alone create more than that kind of delta.
> I'm frankly amazed you could get repeatability within 2-3%.
I think the reason he gets good repeatability is because the database is
rebuilt before each run, so the on-disk layout is essentially the same from
run to run.

Jenny Zhang <[EMAIL PROTECTED]> writes:
> ... It seems to me that small
> effective_cache_size favors the choice of nested loop joins (NLJ)
> while the big effective_cache_size is in favor of merge joins (MJ).
No, I wouldn't think that, because a nestloop plan will involve repeated
fetches of the same pages, and a larger effective_cache_size tells the planner
that those repeated fetches will mostly be satisfied from cache. That makes
the nestloop look cheaper, so if anything a big effective_cache_size ought to
favor NLJ over MJ, not the other way around.
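
One way to see which direction the planner actually moves is to compare plans
for the same query under two settings. A sketch only: the join below and both
values are arbitrary examples rather than the DBT-3 query, and
effective_cache_size is counted in 8 kB disk pages:

    -- Small cache estimate (10000 pages, roughly 80 MB): repeated index
    -- fetches inside a nestloop are costed as mostly hitting disk.
    SET effective_cache_size = 10000;
    EXPLAIN
    SELECT s.s_name, sum(ps.ps_supplycost * ps.ps_availqty)
    FROM partsupp ps
    JOIN supplier s ON s.s_suppkey = ps.ps_suppkey
    GROUP BY s.s_name;

    -- Large cache estimate (1000000 pages, roughly 8 GB): the planner
    -- assumes repeated fetches are already cached, which makes index scans
    -- and therefore nestloops look cheaper, not more expensive.
    SET effective_cache_size = 1000000;
    EXPLAIN
    SELECT s.s_name, sum(ps.ps_supplycost * ps.ps_availqty)
    FROM partsupp ps
    JOIN supplier s ON s.s_suppkey = ps.ps_suppkey
    GROUP BY s.s_name;
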
> We thought the large effective_cache_size should lead us to better
> plans. But we found the opposite.
Maybe it's inappropriate for little old me to jump in here, but the plan
isn't usually that important compared to the actual runtime. The links you
give show the output of 'explain' but not 'explain analyze'.
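
For what it's worth, the difference in one sketch (the query here is just a
placeholder):

    -- EXPLAIN prints only the planner's estimated costs and row counts.
    EXPLAIN
    SELECT count(*) FROM partsupp;

    -- EXPLAIN ANALYZE actually runs the query and adds the measured time
    -- and row count for every plan node, which is what you need in order
    -- to tell whether a "worse" plan is really slower.
    EXPLAIN ANALYZE
    SELECT count(*) FROM partsupp;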