Claudio Freire writes:
> On Thu, Apr 26, 2012 at 2:37 PM, Claudio Freire wrote:
>> Fun thing is, nothing in the CTE's execution really changed. The only
>> change is that now a sequential scan of overview was chosen instead
>> of the index.
>> Why could this be? The output (number of search values, even the
>> values themselves
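One quick check, sketched below with a stand-in query (only the table
name overview comes from the post above; the rest is assumed), is to
disable sequential scans for the session and compare the two plans:

SET enable_seqscan = off;          -- session-local; diagnostic only
EXPLAIN ANALYZE
SELECT count(*) FROM overview;     -- stand-in for the real CTE query
RESET enable_seqscan;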
On 04/26/2012 04:08 PM, Tom Lane wrote:
> Thomas Kellerer writes:
>> Tom Lane wrote on 26.04.2012 21:17:
>>> Um ... did you analyze all the tables, or just some of them? I get
>>> sub-millisecond runtimes if all four tables have been analyzed, but it
>>> does seem to pick lousy plans if, say, only a and b have been analyzed.
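For reference, a sketch of the analyze-everything step plus the check
that statistics now exist (only tables 'a' and 'b' are named in the
thread; 'c' and 'd' are assumed from "all four tables"):

ANALYZE;                                   -- statistics for every table
SELECT tablename, attname, n_distinct
FROM pg_stats
WHERE tablename IN ('a', 'b', 'c', 'd');   -- names c, d assumed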
Craig James wrote:
> An update to our system means I'm going to be rewriting every row
> of some large tables (20 million rows by 15 columns). In a
> situation like this, can auto-vacuum take care of it, or should I
> plan on vacuum-full/reindex to clean up?
>
> This is 8.4.4.
If there is any
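On 8.4 the usual cleanup after rewriting every row looks like the
sketch below (big_table and its primary-key index are placeholder
names). Note that pre-9.0 VACUUM FULL compacts the heap by moving
tuples, which bloats the indexes, hence the REINDEX:

VACUUM FULL big_table;    -- reclaims heap space; takes an exclusive lock
REINDEX TABLE big_table;  -- pre-9.0 VACUUM FULL leaves indexes bloated
-- Often faster: CLUSTER rewrites the heap and all indexes in one pass.
CLUSTER big_table USING big_table_pkey;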
Thomas Kellerer writes:
> Tom Lane wrote on 26.04.2012 21:17:
>> Um ... did you analyze all the tables, or just some of them? I get
>> sub-millisecond runtimes if all four tables have been analyzed, but it
>> does seem to pick lousy plans if, say, only a and b have been analyzed.
> Here it's similar
On 04/26/2012 12:49 PM, Craig James wrote:
> An update to our system means I'm going to be rewriting every row of
> some large tables (20 million rows by 15 columns). In a situation
> like this, can auto-vacuum take care of it, or should I plan on
> vacuum-full/reindex to clean up?
If you want to reclaim
On 04/26/2012 12:49 PM, Craig James wrote:
> An update to our system means I'm going to be rewriting every row of
> some large tables (20 million rows by 15 columns). In a situation
> like this, can auto-vacuum take care of it, or should I plan on
> vacuum-full/reindex to clean up?
If you rewrite the
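Another common pattern, sketched with placeholder names: do the rewrite
into a new table so the bloated heap is simply dropped rather than
vacuumed, then recreate the indexes:

BEGIN;
CREATE TABLE big_table_new AS
    SELECT * FROM big_table;   -- apply the row transformation here
-- recreate indexes, constraints, and permissions on big_table_new
DROP TABLE big_table;
ALTER TABLE big_table_new RENAME TO big_table;
COMMIT;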
An update to our system means I'm going to be rewriting every row of
some large tables (20 million rows by 15 columns). In a situation
like this, can auto-vacuum take care of it, or should I plan on
vacuum-full/reindex to clean up?
This is 8.4.4.
Thanks,
Craig
Tom Lane wrote on 26.04.2012 21:17:
> Richard Kojedzinszky writes:
>> Dear list,
>> We have a database schema, which looks the same as the attached script.
>> When filling the tables with data, and skipping analyze on the table (so
>> pg_stats contains no records for table 'a'), the first select in the
>> script runs fast, but after an
Richard Kojedzinszky writes:
> Dear list,
> We have a database schema, which looks the same as the attached script.
> When filling the tables with data, and skipping analyze on the table (so
> pg_stats contains no records for table 'a'), the first select in the
> script runs fast, but after an
Richard Kojedzinszky wrote:
> tuning our installation not to use sequence scans in this case?
Make sure effective_cache_size is set to the sum of shared_buffers
and whatever your OS shows as usable for caching. Try adjusting
cost factors: maybe random_page_cost between 1 and 2, and
cpu_tuple_cost
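A sketch of trying those settings per-session before editing
postgresql.conf (the values below are illustrative assumptions, not
measurements for this machine):

SET effective_cache_size = '6GB';  -- assumed: shared_buffers + OS cache
SET random_page_cost = 1.5;        -- within the suggested 1-2 range
SET cpu_tuple_cost = 0.03;         -- nudged up from the 0.01 default
-- Re-run EXPLAIN ANALYZE on the problem query after each change to see
-- whether the planner stops choosing the sequential scan.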
On Thu, Apr 26, 2012 at 2:37 PM, Claudio Freire wrote:
> Fun thing is, nothing in the CTE's execution really changed. The only
> change is that now a sequential scan of overview was chosen instead
> of the index.
> Why could this be? The output (number of search values, even the
> values themselves
On 24/04/2012 20:32, Shaun Thomas wrote:
> I'm not sure if you've done metrics or not, but XFS performance is
> highly dependent on your init and mount options. I can give you some
> guidelines there, but one of the major changes is that the Linux 3.X
> kernels have some impressive performance improvements
Dear list,
We have a database schema, which looks the same as the attached script.
When filling the tables with data, and skipping analyze on the table (so
pg_stats contains no records for table 'a'), the first select in the
script runs fast, but after an analyze the planner decides to sequence scan
Here's a strange thing.
Postgres 9.1.0 on a severely underpowered test machine
effective_cache_size = 128M
work_mem = 48M
This query:
WITH RECURSIVE subordinates AS (
    SELECT id, originator_id FROM partner_deliveries
    WHERE originator_id IN (225645)
    UNION ALL
    SELECT partn
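The archive truncates the query inside its recursive arm. The usual
shape of such a traversal, with the join condition and the final SELECT
assumed rather than taken from the original post, is:

WITH RECURSIVE subordinates AS (
    SELECT id, originator_id
    FROM partner_deliveries
    WHERE originator_id IN (225645)
    UNION ALL
    SELECT pd.id, pd.originator_id           -- assumed recursive arm
    FROM partner_deliveries pd
    JOIN subordinates s ON pd.originator_id = s.id
)
SELECT * FROM subordinates;                  -- assumed final query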
On Wed, Apr 25, 2012 at 12:52 PM, Venki Ramachandran
<venki_ramachand...@yahoo.com> wrote:
>
> Now I have to run the same pl/pgsql function on all possible combinations
> of employees, and with 542 employees that is, say, about 300,000 unique
> pairs.
>
> So (300,000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have
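The arithmetic checks out once the pair count is written in full
(taking, per the formula itself, roughly 40 ms per pair):

SELECT 542 * 541;                            -- 293222 ordered pairs, ~300,000
SELECT (300000 * 40.0) / (1000 * 60 * 60);   -- 12,000,000 ms = 3.33 hours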