Andrew Sullivan wrote:
Probably the most severe objection to doing things this way is that the
selected plan could change unexpectedly as a result of the physical
table size changing. Right now the DBA can keep tight rein on actions
that might affect plan selection (ie, VACUUM and ANALYZE), but that
would go out the window.
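
For context, the table-size figures the planner works from today are the
relpages and reltuples statistics that VACUUM and ANALYZE record in
pg_class, so a DBA can inspect exactly what the planner will see. A
minimal check (the table name is just an example from this thread):

-- relpages/reltuples are the planner's size estimates,
-- refreshed by VACUUM and ANALYZE
select relname, relpages, reltuples
from pg_class
where relname = 'source_song_title';
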
Roger Ging <[EMAIL PROTECTED]> writes:
> update source_song_title set
> source_song_title_id = nextval('source_song_title_seq')
> ,licensing_match_order = (select licensing_match_order from
> source_system where source_system_id = ss.source_system_id)
> ,affiliation_match_order = (select affiliation_match_order from
> source_system where source_system_id = ss.source_system_id)
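
A correlated subquery like the two above is re-executed for every row
being updated, and if source_system_id is not indexed each execution can
mean a scan of source_system. A common rewrite is to fold both lookups
into one UPDATE ... FROM join. This is only a sketch: the original FROM
clause defining the ss alias is cut off above, so the assumption that
source_song_title itself carries a source_system_id column may not match
the real schema:

-- fetch both match orders with a single join instead of
-- two per-row subqueries
update source_song_title set
    source_song_title_id = nextval('source_song_title_seq'),
    licensing_match_order = ss.licensing_match_order,
    affiliation_match_order = ss.affiliation_match_order
from source_system ss
where source_song_title.source_system_id = ss.source_system_id;

Written this way the planner can hash or merge join against
source_system once, rather than probing it twice per row.
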
The following query has never finished; I have let it run for over 24
hours. This is a one-time update that is part of a conversion script
from MSSQL data. All of the tables are freshly built and populated. I
have not run EXPLAIN ANALYZE because it does not return in a
reasonable time.
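
Note that plain EXPLAIN, unlike EXPLAIN ANALYZE, only plans the
statement and never executes it, so it returns immediately even for a
query that would run forever, and it at least shows which plan the
update is stuck with. For example, against a trimmed version of the
statement above:

-- planning only; no rows are touched
explain
update source_song_title set
    source_song_title_id = nextval('source_song_title_seq');
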
Josh,

Your hardware setup would be useful too. It's surprising how slow some
big-name servers really are. If you are seriously considering memory
sizes over 4 GB, you may want to look at an Opteron.

Dave

Joshua Marsh wrote:

Hello everyone,

I am currently working on a data project that uses PostgreSQL.