se really matters all
> that much.
A third option is to update first and, if no row was updated, insert.
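For illustration, that pattern could look something like this in plain SQL (the counts table and its id and total columns are only placeholders, not names from the thread):

-- Try the update first:
UPDATE counts SET total = total + 1 WHERE id = 123;
-- If the update matched no row, insert instead:
INSERT INTO counts (id, total)
SELECT 123, 1
WHERE NOT EXISTS (SELECT 1 FROM counts WHERE id = 123);

Note that without locking or a retry loop, two concurrent sessions can still both find no row and both attempt the insert, so one of them will fail on the unique key.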
Regards,
Stephen Denne.
be able to take a command line like:
ANALYZE CARTESIAN CORRELATION orders(order_date,order_fulfilled);
which would store the fraction of rows for each combination of most frequent values and domain buckets from order_date and order_fulfilled.
The difficulty is whether the planner can quickly and easily determine ...
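For a concrete example of the estimation problem (assuming order_fulfilled is a date column; the literal values are made up), consider:

EXPLAIN
SELECT count(*)
FROM orders
WHERE order_date < date '2007-01-01'
  AND order_fulfilled < date '2007-02-01';

With only per-column statistics, the planner multiplies the two selectivities as if the predicates were independent, which badly misestimates the row count when the columns are strongly correlated.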
> The strange thing of course is that the data is exactly the same for
> both runs, the tables have not been changed between runs, and I did
> them right after another. Even more strange is that the seq scan is
> faster than the index scan.
It is not strange at all, since both queries read all the rows ...
get 11 results, so I was expecting that the limit wasn't going to come into play, and
that every row would have to be read anyway.
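As a rough sketch of that situation (table and column names are placeholders):

EXPLAIN ANALYZE
SELECT *
FROM some_table
WHERE some_column = 42
LIMIT 20;   -- if only 11 rows match, the scan cannot stop early at the limit

Whichever plan is chosen, the scan cannot stop early at the limit when fewer matching rows exist than the limit asks for.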
Regards,
Stephen Denne.
Sean Leach wrote:
> On Feb 25, 2008, at 1:19 PM, Stephen Denne wrote:
> >
> >
> > Have you checked Scott Marlowe's note:
> >
> >>> unless you've got a long running transaction
> >
> > How come those 2 million dead rows are not removable?
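One way to check for a long running transaction is something like the following (assuming a reasonably recent PostgreSQL; older releases use different column names in pg_stat_activity):

-- List sessions with an open transaction, oldest first:
SELECT pid, xact_start, state, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start;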
Sean Leach wrote:
> On Feb 25, 2008, at 1:19 PM, Stephen Denne wrote:
> >
> >> So should I do a vacuum full and then hope this doesn't
> >> happen again?
> >> Or should I run a VACUUM FULL after each aggregation run?
> >
> > If your usage pattern ...
transaction is still alive.
Alternatively, it may be a different 2 million dead row versions now than
earlier, and may simply be a side effect of your particular usage, and nothing
to worry about. (Though it is exactly the same number of rows, which strongly
hints at being exactly the same rows.)
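One way to tell is to watch the dead-row count between aggregation runs, e.g. (u_counts is the table named elsewhere in this thread; n_dead_tup assumes a server new enough to track it in pg_stat_user_tables):

SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'u_counts';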
Regards,
Stephen Denne.
You might have a huge number of dead rows in your table, slowing down
the sequential scan.
(Likewise updating a third of the rows, changing an indexed field.)
What do you get from:
VACUUM VERBOSE u_counts;
Regards,
Stephen Denne.
2113544.412 rows=0 loops=1)
> Filter: (fklistingsourceid = 5525)
Would it help to have a combined index on fklistingsourceid, entrydate?
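For example (the table and index names here are placeholders, since the real table name isn't shown in this excerpt):

CREATE INDEX some_table_source_entrydate_idx
    ON some_table (fklistingsourceid, entrydate);

With fklistingsourceid as the leading column, an equality filter on it can be combined with a range condition or sort on entrydate in a single index scan.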
Regards,
Stephen Denne.
Tom Lane wrote:
> "Stephen Denne" <[EMAIL PROTECTED]> writes:
> > So dropping the fk constraint and index results in
> > successful query execution with constant memory usage. Does
> > this confirm that the memory leak you found is the one I was
> > suffering from?
>>"Stephen Denne" <[EMAIL PROTECTED]> writes:
>>> I altered the update statement slightly, and reran the query.
>>> The altered query has been running over 3 hours now,
>>> without using lots of memory (38M private bytes).
>>> 2046 temp ...
I don't have a PostgreSQL build environment.
It is now Friday night for me. I left the alternate query running, and will
find out on Monday what happened.
If I drop the fk constraint, and/or its index, would I still be affected by the
leak you found?
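For reference, dropping them would look roughly like this (constraint, index, and table names are placeholders):

ALTER TABLE child_table DROP CONSTRAINT child_table_parent_fk;
DROP INDEX child_table_parent_idx;

Both can be recreated later, though rebuilding the index and revalidating the constraint on a large table takes time.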
Regards,
Stephen
> "Stephen Denne" <[EMAIL PROTECTED]> writes:
> > A simple update query, over roughly 17 million rows,
> > populating a newly added column in a table, resulted in an
> > out of memory error when the process memory usage reached
> > 2GB. Could this be ...
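The statement described would have been roughly of this shape (table, column, and expression are placeholders; the real names aren't in this excerpt):

ALTER TABLE big_table ADD COLUMN new_value integer;
-- One UPDATE touching all ~17 million rows in a single transaction:
UPDATE big_table SET new_value = old_value * 2;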
(id = $0)
Stephen Denne