On Mon, May 13, 2013 at 5:58 PM, Mike McCann wrote:
> We assume that steps taken to improve the worst-case query scenario will
> also improve this kind of query. If anything above pops out as needing
> better planning please let us know that too!
Bad assumption. If your real workload will be
On May 13, 2013, at 4:24 PM, Jeff Janes wrote:
> On Mon, May 13, 2013 at 3:36 PM, Mike McCann wrote:
>
> Increasing work_mem to 355 MB improves the performance by a factor of 2:
>
> stoqs_march2013_s=# set work_mem='355MB';
> SET
> stoqs_march2013_s=# explain analyze select * from stoqs_measuredparameter
On Mon, May 13, 2013 at 3:36 PM, Mike McCann wrote:
>
> Increasing work_mem to 355 MB improves the performance by a factor of 2:
>
> stoqs_march2013_s=# set work_mem='355MB';
> SET
> stoqs_march2013_s=# explain analyze select * from stoqs_measuredparameter
> order by datavalue;
>
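(A side note for readers tuning this: the quickest check on whether more work_mem will keep helping is the "Sort Method" line in the EXPLAIN ANALYZE output. A sketch against the table above — session-local, nothing here edits postgresql.conf:)

```sql
-- Raise work_mem for this session only and re-run the query under
-- EXPLAIN ANALYZE to see whether the sort still spills to disk.
set work_mem = '355MB';
explain analyze
select * from stoqs_measuredparameter order by datavalue;
-- "Sort Method: external merge  Disk: ...kB"  -> still spilling, more may help
-- "Sort Method: quicksort  Memory: ...kB"     -> fully in memory
```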
On 14/05/13 10:10, Marti Raudsepp wrote:
> On Mon, May 13, 2013 at 6:01 PM, ach wrote:
>> what I'm wondering is, since
>> the unique constraint already covers the whole table and all rows in
>> entirety, is it really necessary for statistics to be set that high on
>> those?
> AFAIK if there are exact-matching
On May 7, 2013, at 4:21 PM, Jeff Janes wrote:
> On Thu, May 2, 2013 at 6:35 PM, Scott Marlowe wrote:
> On Thu, May 2, 2013 at 5:11 PM, Mike McCann wrote:
> > Hello,
> >
> > We are in the fortunate situation of having more money than time to help
> > solve our PostgreSQL 9.1 performance problem.
On Mon, May 13, 2013 at 6:01 PM, ach wrote:
> what I'm wondering is, since
> the unique constraint already covers the whole table and all rows in
> entirety, is it really necessary for statistics to be set that high on
> those?
AFAIK if there are exact-matching unique constraints/indexes for a
qu
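(To see what Marti is describing: for a unique column, ANALYZE stores n_distinct = -1, meaning "all values distinct", independent of the statistics target. A quick check, with table and column names as placeholders:)

```sql
-- n_distinct = -1 tells the planner every value in the column is unique,
-- so a large statistics target adds little for equality lookups.
select attname, n_distinct
from pg_stats
where tablename = 'my_table'          -- placeholder
  and attname in ('col_a', 'col_b');  -- the unique-constrained columns
```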
Robert Haas writes:
> On Mon, May 13, 2013 at 4:14 PM, Tom Lane wrote:
>> You know, of course, that the join size estimate isn't arrived at that
>> way. Still, this point does make it seem more like a planner bug and
>> less like bad input stats. It would be nice to see a self-contained
>> example
On Mon, May 13, 2013 at 4:14 PM, Tom Lane wrote:
> Robert Haas writes:
>> The planner is estimating that the outer side of this nested loop will
>> produce 33 rows and that the inner side will produce 1. One would
>> assume that the row estimate for the join product couldn't be more
>> than 33 *
Robert Haas writes:
> The planner is estimating that the outer side of this nested loop will
> produce 33 rows and that the inner side will produce 1. One would
> assume that the row estimate for the join product couldn't be more
> than 33 * 1 = 33 rows, but the planner is estimating 62335 rows,
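(For anyone wanting to follow up on Tom's request: a self-contained test case for this kind of estimate report is usually just CREATE TABLE + data + ANALYZE + EXPLAIN ANALYZE. A hypothetical skeleton — not the reporter's schema — showing where the estimated vs. actual row counts appear:)

```sql
create table outer_t (id int primary key, ref int);
create table inner_t (id int primary key);
insert into outer_t select g, g % 100 from generate_series(1, 10000) g;
insert into inner_t select g from generate_series(1, 100) g;
analyze outer_t;
analyze inner_t;
-- Compare "rows=..." (estimate) against "actual ... rows=..." on the join node.
explain analyze
select *
from outer_t o
join inner_t i on i.id = o.ref
where o.id <= 33;
```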
On Tue, 30 Apr 2013 06:20:55 -0500, Christoph Berg wrote:
Hi,
this is more of a report than a question, because we thought this
would be interesting to share.
We recently (finally) migrated a Request Tracker 3.4 database running
on 8.1.19 to 9.2.4. The queries used by rt3.4 are sometimes weird
On Tue, Apr 30, 2013 at 7:20 AM, Christoph Berg wrote:
> -> Nested Loop (cost=24.57..844.83 rows=62335 width=4) (actual time=0.109..0.633 rows=23 loops=1)
>    -> Bitmap Heap Scan o
On 2013-05-13 13:21:54 -0400, Robert Haas wrote:
> On Sun, May 12, 2013 at 8:50 AM, Andres Freund wrote:
> > [ a response that I entirely agree with ]
>
> +1 to all that.
> It's maybe worth noting that it's probably fairly uncommon for vacuum
> to read a page and not dirty it, because if the page is all-visible,
> we won't read it.
On Sun, May 12, 2013 at 8:50 AM, Andres Freund wrote:
> [ a response that I entirely agree with ]
+1 to all that.
It's maybe worth noting that it's probably fairly uncommon for vacuum
to read a page and not dirty it, because if the page is all-visible,
we won't read it. And if it's not all-visible
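(The skipping Robert mentions can be seen directly in VACUUM VERBOSE, which reports how many of the table's pages it actually visited. A sketch, with a placeholder table name:)

```sql
-- Look for "... in N out of M pages" in the INFO output:
-- N < M means all-visible pages were skipped via the visibility map
-- rather than read and re-dirtied.
vacuum verbose my_table;
```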
I have a unique constraint on two columns of a supermassive table (est. 1.7
bn rows) that are the only way the table's ever queried - and it's
blindingly fast: 51 ms to retrieve any single row, even non-partitioned.
Anyway: right now statistics on the two unique-constrained columns are set
to 200 e
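(If it does turn out that a lower target is safe for these two columns, the per-column setting avoids touching default_statistics_target for everything else. A sketch — names are placeholders, and it only takes effect at the next ANALYZE:)

```sql
alter table big_table alter column col_a set statistics 10;
alter table big_table alter column col_b set statistics 10;
analyze big_table;  -- cheaper now: far fewer sampled values per column
```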
I have a problem with some files on a PostgreSQL 9.0 instance on Windows:
2013-05-10 12:49:08 EDT ERROR: could not stat file "base/773074/30352481":
Permission denied
2013-05-10 12:49:08 EDT STATEMENT: SELECT pg_database_size($1) AS size;
I know what it means: the statistic pooler can't access
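(One way to narrow a "could not stat file" report down is to map the OIDs in the path back to objects — the first number is the database OID, the second a relfilenode within it. A sketch; only the OIDs from the error above are real:)

```sql
-- base/773074/30352481: 773074 is the database OID ...
select datname from pg_database where oid = 773074;
-- ... and 30352481 is a relfilenode inside that database
select relname from pg_class where relfilenode = 30352481;
```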
15 matches