Can you force 8.4 to generate the same plan as 8.1? For example by running
SET enable_hashjoin = off;
before you run EXPLAIN on the query? If so, then we can compare the
numbers from the forced plan with the old plan and maybe figure out why it
didn't use the same old plan in 8.4 as it did in 8.1.
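The approach above can be sketched as follows (the query itself is a placeholder; only the enable_hashjoin setting comes from the mail):

```sql
-- Turn off hash joins for this session only, nudging the 8.4
-- planner toward the join strategy the 8.1 plan used.
SET enable_hashjoin = off;

-- EXPLAIN ANALYZE reports estimated and actual row counts, so the
-- forced plan's numbers can be compared with the old 8.1 plan.
EXPLAIN ANALYZE SELECT ...;  -- the query under investigation

-- Put the planner back to its defaults afterwards.
RESET enable_hashjoin;
```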
> > (invheadref, invprodref). Then it should not need to sort at all to do the
> > grouping and it should all be fast.
>
> Not sure if that would make a difference here, since the whole table is being
> read.
The goal was to avoid the sorting which should not be needed with that
index (I hope). So I still
ords, invheadref and
> invprodref are both char(10) and indexed.
For the above query, shouldn't you have one index for both columns
(invheadref, invprodref). Then it should not need to sort at all to do the
grouping and it should all be fast.
--
/Dennis Björklund
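A sketch of the suggestion, assuming a hypothetical table name (only the column names appear in the mail):

```sql
-- One composite index covering both grouping columns lets the
-- planner read rows already ordered by (invheadref, invprodref),
-- so the GROUP BY needs no separate sort step.
CREATE INDEX inv_head_prod_idx
    ON invoice_lines (invheadref, invprodref);
```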
On Tue, 26 Aug 2003, Bill Moran wrote:
> As with all performance tests/benchmarks, there are probably dozens or
> more reasons why these results aren't as accurate or wonderful as they
> should be. Take them for what they are and hopefully everyone can
> learn a few things from them.
What version
On Thu, 7 Aug 2003, Richard Huxton wrote:
> But this parameter controls how much memory can be allocated to sorts - I
> don't see how PG can figure out a reasonable maximum by itself.
One could have one setting for the total memory usage and pg could use
statistics or some heuristics to use the
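For reference, the per-sort parameter being discussed here is presumably sort_mem (renamed work_mem in 8.0); it can be raised for one session rather than globally:

```sql
-- Session-level override; the value is in kilobytes in these
-- releases, and each concurrent sort may use up to this much.
SET sort_mem = 16384;  -- 16 MB per sort operation
```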
On Tue, 12 Aug 2003, mixo wrote:
> that I am currently importing data into Pg which is about 2.9 Gigs.
> Unfortunately, to maintain data integrity, data is inserted into a table
> one row at a time.
So you don't put a number of inserts into one transaction?
If you don't do that then postgresql
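The batching being suggested, as a minimal sketch (table and values are placeholders):

```sql
BEGIN;
-- All rows share one commit, so the WAL is flushed to disk once
-- instead of once per inserted row.
INSERT INTO import_table VALUES (...);
INSERT INTO import_table VALUES (...);
-- ... a few thousand rows per transaction works well ...
COMMIT;
```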
On Sat, 19 Jul 2003, Jeremy M. Guthrie wrote:
> 100megs of new data each day. However, the instant the system finishes only
> a 'vacuum analyze', the whole thing slows WAY down to where each run can take
> 10-15 minutes.
Have you run EXPLAIN ANALYZE on the delete query before and after the
vacuum?
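The before/after comparison, sketched with a placeholder delete:

```sql
BEGIN;
-- EXPLAIN ANALYZE actually executes the DELETE, so wrap it in a
-- transaction and roll back to keep the rows in place.
EXPLAIN ANALYZE DELETE FROM some_table WHERE ...;
ROLLBACK;

VACUUM ANALYZE some_table;

BEGIN;
-- A changed plan or changed row estimates here show what the
-- refreshed statistics did to the planner's choice.
EXPLAIN ANALYZE DELETE FROM some_table WHERE ...;
ROLLBACK;
```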
On Fri, 18 Jul 2003, Tom Lane wrote:
> >> Adjusting the cpu_tuple_cost to 0.042 got the planner to choose the index.
>
> > Doesn't sound very good and it will most likely make other queries slower.
>
> Seems like a reasonable approach to me --- certainly better than setting
> random_page_cost to
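A per-session version of the change under discussion, so it does not affect other queries:

```sql
SET cpu_tuple_cost = 0.042;  -- the value from the mail
EXPLAIN SELECT ...;          -- check whether the index is chosen now
RESET cpu_tuple_cost;        -- back to the default (0.01)
```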
On Fri, 18 Jul 2003, Fabian Kreitner wrote:
> Adjusting the cpu_tuple_cost to 0.042 got the planner to choose the index.
Doesn't sound very good and it will most likely make other queries slower.
You could always turn off sequential scan before that query and turn it on
after.
> Anything I need
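The seqscan toggle described above, sketched:

```sql
SET enable_seqscan = off;  -- make sequential scans look very expensive
-- run the one query that should use the index here
SET enable_seqscan = on;   -- restore normal planning
```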
On Sun, 6 Jul 2003, Martin Foster wrote:
> The processor seems to be purposely sitting there twiddling its thumbs.
> Which leads me to believe that perhaps the nice levels have to be
> changed on the server itself?
It could also be all the usual things that affect performance. Are your
queries