Dave Cramer <[EMAIL PROTECTED]> writes:
> Any idea why it wouldn't choose the right index?
It thinks that way is cheaper. It's estimating its cost at 8381,
whereas selecting all the rows then sorting must take 10131 plus
some time to sort.
The reason why those estimates are so far off from real
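One way to sanity-check such an estimate (a rough sketch, not taken from this thread; the query and column name are placeholders) is to run the statement under EXPLAIN ANALYZE as-is, then again with the preferred path disabled for one transaction, and compare estimated cost against actual time:

-- Sketch: compare the planner's favourite plan with the
-- "select all the rows then sort" alternative. some_col is a placeholder.
EXPLAIN ANALYZE SELECT * FROM battles ORDER BY some_col;

BEGIN;
SET LOCAL enable_indexscan = off;  -- push the planner toward seqscan + sort
EXPLAIN ANALYZE SELECT * FROM battles ORDER BY some_col;
ROLLBACK;

If the forced plan turns out faster despite its higher estimate, the cost model's inputs (typically random_page_cost or stale statistics) are worth a look.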
Any idea why it wouldn't choose the right index?
This is 8.3
# \d battles
       Table "public.battles"
 Column | Type | Modifiers
--------+------+-----------
An index scan looks through the index and pulls in each page as it sees it.
A bitmap index scan looks through the index and makes a sorted list of all
the pages it needs, and then the bitmap heap scan reads all the pages.
If your data is scattered then you may as well do the index scan, but
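A quick way to see which of the two actually wins for a given query (a sketch; table and column names are placeholders) is to run it both ways in the same session:

-- Sketch: the same query with and without bitmap scans allowed.
EXPLAIN ANALYZE SELECT * FROM some_table WHERE some_col BETWEEN 1 AND 1000;

SET enable_bitmapscan = off;   -- planner falls back to a plain index scan
EXPLAIN ANALYZE SELECT * FROM some_table WHERE some_col BETWEEN 1 AND 1000;
RESET enable_bitmapscan;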
On Thu, Apr 24, 2008 at 12:56 PM, PFC <[EMAIL PROTECTED]> wrote:
>> Our ~600,000,000 row table is changed very infrequently and is on a 12 disk
>> software raid-6 for historical reasons using an LSI Logic / Symbios Logic
>> SAS1068 PCI-X Fusion-MPT SAS. Our ~50,000,000 row staging table is o
Do you mean that the overhead is an artefact of timing the query? In
that case, the query should run faster than its evaluation with EXPLAIN
ANALYZE, correct?
Is there a way to test this assumption regarding the speed of
gettimeofday? I'm on a Macbook and have no idea about the performan
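One rough test (just a sketch; use any representative query): time the bare statement with psql's \timing, then time the same statement under EXPLAIN ANALYZE. If the EXPLAIN ANALYZE runtime is much larger than the plain runtime, the per-row gettimeofday() calls are expensive on that platform.

\timing on
SELECT count(*) FROM some_table;                  -- plain execution time (some_table is a placeholder)
EXPLAIN ANALYZE SELECT count(*) FROM some_table;  -- same query with per-node timing
\timing off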
Our ~600,000,000 row table is changed very infrequently and is on a 12 disk
software raid-6 for historical reasons using an LSI Logic / Symbios Logic
SAS1068 PCI-X Fusion-MPT SAS. Our ~50,000,000 row staging table is on a 12 disk
hardware raid-10 using a Dell PowerEdge Expandable RAID con
On Thu, 24 Apr 2008 03:14:54 +0200, Vlad Arkhipov <[EMAIL PROTECTED]>
wrote:
I found a strange issue in a very simple query. Statistics for all columns
are at level 1000, but I also tried other levels.
create table g (
  id bigint primary key,
  isgroup boolean not null);
create table a (
  gr
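For reference, a minimal sketch of how such a per-column statistics target is usually raised (using the g table above); a fresh ANALYZE is needed for it to take effect:

-- Sketch: raise the statistics target for one column, then re-analyze.
ALTER TABLE g ALTER COLUMN isgroup SET STATISTICS 1000;
ANALYZE g;
-- Or raise the default for anything analyzed in this session:
SET default_statistics_target = 1000;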
On Thu, 24 Apr 2008, Nikolas Everett wrote:
> The setup is kind of a beast.
No kidding.
> When I run dstat I see only around 2M/sec and it is not consistent at all.
Well, it is having to seek over the disc a little. Firstly, your table may
not be wonderfully ordered for index scans, but goodne
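A sketch of how to check that ordering (table and index names below are placeholders): pg_stats.correlation shows how closely the heap order follows each column, and on a rarely-changed table CLUSTER can rewrite the heap in index order.

-- Correlation near +1/-1 means an index scan reads the heap almost
-- sequentially; values near 0 mean lots of seeking.
SELECT attname, correlation
FROM pg_stats
WHERE tablename = 'big_table';        -- placeholder name

-- 8.2-era syntax; takes an exclusive lock while it rewrites the table:
-- CLUSTER big_table_some_idx ON big_table;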
> Hi Tom,
> The cost of the inner-most Nested Loop is 27 ms, but the total cost of
> the Materialize operator is 666 ms (9250 loops * 0.072 ms per iteration).
> So, Materialize introduces more than 10x overhead.
Not hardly. Had the Materialize not been there, we'd have executed
the inner nestloop
I have a question about index use and IO and am seeking advice.
We are running postgres 8.2. We have two big big tables. Our ~600,000,000
row table is changed very infrequently and is on a 12 disk software raid-6
for historical reasons using an LSI Logic / Symbios Logic SAS1068 PCI-X
Fusion-MPT
"Albe Laurenz" <[EMAIL PROTECTED]> writes:
>> The email message sent to [EMAIL PROTECTED],
>> pgsql-performance@postgresql.org requires a confirmation to be delivered.
>> Please, answer this email informing the characters that you see in the
>> image below.
>
> Could somebody remove the latter ad
On Thu, 24 Apr 2008, Vlad Arkhipov wrote:
It was written below in my first post:
"These queries are part of a big query and the optimizer puts them at the
leaf of the query tree, so the row miscount causes a real problem."
The actual row count for the first query is 294, the estimate 11; for the
second, 283 and
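One common workaround when a misestimated leaf poisons the surrounding plan (only a sketch; the real leaf query is not shown here, so the SELECT below is a stand-in): materialize the leaf into a temp table, ANALYZE it, and join that instead, so the rest of the query is planned against real row counts.

-- Sketch: replace the badly-estimated leaf subquery with an analyzed temp table.
CREATE TEMP TABLE leaf AS
SELECT id FROM g WHERE isgroup;   -- stand-in for the real leaf query
ANALYZE leaf;                     -- gives the planner the true row count
-- ...then join leaf into the big query in place of the original subquery.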
Albe Laurenz writes:
> Vlad Arkhipov wrote:
>> I found a strange issue in a very simple query.
> You forgot to mention what your problem is.
> Yours,
> Laurenz Albe
It was written below in my first post:
"These queries are part of a big query and the optimizer puts them at the
leaf of the query tree, so the row
I've tried to communicate to the list admin address about this, so far
without any reaction.
Sorry to waste bandwidth here, but I don't know where else to turn:
Whenever I post to the -performance list, I get spammed by a
challenge-response bot from [EMAIL PROTECTED]:
> The email message sent to [
Vlad Arkhipov wrote:
> I found a strange issue in a very simple query.
You forgot to mention what your problem is.
Yours,
Laurenz Albe