Sorry to repost this, but I forgot the subject the first time around.
Hey All,
I am testing upgrading our database from version 8.1 to 8.2. I ran our
worst performing query on this table, an outer join with an "is null"
condition, and I was happy to see it ran over four times faster. I also
noticed the explain analyze showed the planner chose to do sequential scan
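The "outer join with an 'is null' condition" mentioned above is the classic anti-join pattern. A minimal sketch, using hypothetical tables a and b with join column a_id (names borrowed from a later message in this thread):

```sql
-- Anti-join: rows of a that have no matching row in b.
-- Before 8.2's planner improvements this form was often slow on large tables.
SELECT a.a_id
FROM a
LEFT OUTER JOIN b ON a.a_id = b.a_id
WHERE b.a_id IS NULL;
```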
Craig James wrote:
> Tyrrill, Ed wrote:
> > QUERY PLAN
> > ---
> > Merge L
Craig James wrote:
> Tyrrill, Ed wrote:
>
>> I have a table, let's call it A, whose primary key, a_id, is referenced
>> in a second table, let's call it B. For each unique A.a_id there are
>> generally many rows in B with the same a_id. My problem is that I wan
Hey All,
I have a table, let's call it A, whose primary key, a_id, is referenced
in a second table, let's call it B. For each unique A.a_id there are
generally many rows in B with the same a_id. My problem is that I want
to delete a row in A when the last row in B that references it is
deleted.
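One way to implement "delete the parent when its last child goes away" is an AFTER DELETE trigger on B. This is only a sketch under the thread's naming (tables a and b, key a_id); the function and trigger names are made up here, and it ignores concurrency (two sessions deleting the last two B rows at once may each still see a survivor):

```sql
-- Hypothetical trigger: after a row of b is deleted, remove the parent
-- row in a if no other row of b still references it.
CREATE OR REPLACE FUNCTION delete_orphaned_a() RETURNS trigger AS $$
BEGIN
    DELETE FROM a
     WHERE a_id = OLD.a_id
       AND NOT EXISTS (SELECT 1 FROM b WHERE b.a_id = OLD.a_id);
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER b_delete_orphan
    AFTER DELETE ON b
    FOR EACH ROW EXECUTE PROCEDURE delete_orphaned_a();
```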
Tom Lane <[EMAIL PROTECTED]> writes:
> Klint Gore <[EMAIL PROTECTED]> writes:
>> On Tue, 29 May 2007 17:16:57 -0700, "Tyrrill, Ed" <[EMAIL PROTECTED]> wrote:
>>> mdsdb=# explain analyze select backupobjects.record_id from backupobjects
Michael Glaesemann <[EMAIL PROTECTED]> writes:
> Off the cuff, when was the last time you vacuumed or ran ANALYZE?
> Your row estimates look off by a couple orders of magnitude. With up-
> to-date statistics the planner might do a better job.
>
> As for any other improvements, I'll leave that
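The fix suggested above is routine maintenance. A sketch, using the backupobjects table named elsewhere in the thread:

```sql
-- VACUUM ANALYZE reclaims dead row versions and refreshes the planner's
-- statistics; plain ANALYZE is cheaper when only the statistics are stale.
VACUUM ANALYZE backupobjects;
-- Re-run EXPLAIN ANALYZE afterwards and compare the estimated row counts
-- against the actual ones.
```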
Hi All,
I have a very slow left outer join that speeds up by more than 1000
times when I set enable_seqscan=off. This is not the query I
actually do in my application, but is a simplified one that singles out
the part that is really slow. All of the columns involved in the query
have indexes
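The usual way to make this comparison is a session-local setting (a sketch; enable_seqscan is a diagnostic planner setting, not something to leave off in production, and the query below is a hypothetical stand-in built from the tables named earlier in the thread):

```sql
-- Disable sequential scans for this session only and compare plans.
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT a.a_id
FROM a
LEFT OUTER JOIN b ON a.a_id = b.a_id  -- stand-in for the real slow query
WHERE b.a_id IS NULL;
SET enable_seqscan = on;
```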
Tom Lane <[EMAIL PROTECTED]> writes:
>> Thanks for the help guys! That was my problem. I actually need the
>> backup_location_rid index for a different query so I am going to keep
>> it.
>
> Well, you don't really *need* it; the two-column index on (record_id,
> backup_id) will serve perfectly
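The reason the extra index is redundant: a btree index can be scanned on any leading prefix of its columns, so the two-column index already covers lookups on record_id alone. A hedged sketch (the table name backup_location is inferred from the index name and may not match the real schema):

```sql
-- Hypothetical DDL: the composite index serves queries that filter only
-- on record_id, because record_id is its leading column.
CREATE INDEX backup_location_rid_bid
    ON backup_location (record_id, backup_id);

EXPLAIN SELECT * FROM backup_location WHERE record_id = 42;
-- The planner can use the composite index here; a separate single-column
-- index on record_id adds write overhead without adding capability.
```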
Tom Lane <[EMAIL PROTECTED]> writes:
>
> Scott Marlowe <[EMAIL PROTECTED]> writes:
> > Secondly, it might be more efficient for the planner to choose the
> > backup_location_rid index than the combination primary key index.
>
> Oh, I'm an idiot; I didn't notice the way the index was set up.
> Ye
I have a two column table with over 160 million rows in it. As the size
of the table grows queries on this table get exponentially slower. I am
using version 8.1.5 32-bit on Red Hat Enterprise Linux 3. The hardware
is an Intel 3 GHz Xeon with 4 GB RAM, and 6 disks in a RAID 5
configuration. For
11 matches