I have a problem with part of a big query because of an incorrect row
estimation. It's easy to emulate the case:
create table a (id bigint, id2 bigint);
create table b (id bigint, id2 bigint);
insert into a (id, id2)
select random() * 10, random() * 100
from generate_series(1, 10);
insert int
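The message is cut off here; a hedged sketch of how the setup
presumably continues and of how the misestimate could be observed (the
insert into b and the join condition below are assumptions, not text
from the original message):

-- Assumed completion: populate b the same way as a (row count is a guess).
insert into b (id, id2)
select random() * 10, random() * 100
from generate_series(1, 1000);

analyze a;
analyze b;

-- Compare the planner's estimated row count with the actual one;
-- the exact query under discussion is not preserved here.
explain analyze
select *
from a
join b on a.id = b.id and a.id2 = b.id2;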
PFC wrote:
On Thu, 24 Apr 2008 03:14:54 +0200, Vlad Arkhipov
<[EMAIL PROTECTED]> wrote:
I found a strange issue in a very simple query. The statistics target
for all columns is set to 1000, but I also tried other levels.
create table g (
id bigint primary key,
isgroup boolean not null);
create tab
On Thu, 24 Apr 2008 03:14:54 +0200, Vlad Arkhipov <[EMAIL PROTECTED]>
wrote:
I found a strange issue in a very simple query. The statistics target
for all columns is set to 1000, but I also tried other levels.
create table g (
id bigint primary key,
isgroup boolean not null);
create table a (
gr
On Thu, 24 Apr 2008, Vlad Arkhipov wrote:
It was written below in my first post:
"These queries are part of a big query and the optimizer puts them at
the leaves of the query tree, so the row miscount causes a real problem."
The actual row count for the first query is 294, the estimate is 11;
for the second, 283 and
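The reply is truncated here and the queries themselves are not
preserved; a hedged guess at the kind of leaf query being described,
using the g/a schema quoted above (the join condition and filter are
assumptions):

-- Assumed shape of one of the leaf queries: the point is to compare
-- the plan's estimated "rows" value with the actual row count.
explain analyze
select a.id
from a
join g on g.id = a.groupid
where g.isgroup;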
Albe Laurenz wrote:
Vlad Arkhipov wrote:
I found a strange issue in a very simple query.
You forgot to mention what your problem is.
Yours,
Laurenz Albe
It was written below in my first post:
"These queries are part of a big query and the optimizer puts them at
the leaves of the query tree, so row
Vlad Arkhipov wrote:
> I found a strange issue in a very simple query.
You forgot to mention what your problem is.
Yours,
Laurenz Albe
I found a strange issue in a very simple query. The statistics target
for all columns is set to 1000, but I also tried other levels.
create table g (
id bigint primary key,
isgroup boolean not null);
create table a (
groupid bigint references g(id),
id bigint,
unique(id, groupid));
analyze g;
analyze a;
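For reference, the per-column statistics level mentioned above is
normally set like this (a sketch; the exact commands are not part of
the preserved text):

-- Raise the per-column statistics target to 1000 and re-analyze,
-- matching the "level 1000" described in the post.
alter table g alter column id set statistics 1000;
alter table g alter column isgroup set statistics 1000;
alter table a alter column groupid set statistics 1000;
alter table a alter column id set statistics 1000;
analyze g;
analyze a;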