Arjen van der Meijden <[EMAIL PROTECTED]> writes:
> ... Rewriting it to something like this made the last iteration about as
> fast as the first:
> SELECT docid, (SELECT work to be done for each document)
> FROM documents
> WHERE docid IN (SELECT docid FROM documents
> ORDER BY docid ...)
>
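The quoted rewrite is cut off in the archive, but the pattern Arjen describes is to keep the expensive per-document work in the outer SELECT list and restrict the driving rows with an IN sub-query. A hedged sketch (the `expensive_work` function and the LIMIT/OFFSET values are placeholders, not from the thread):

```sql
-- Sketch: the per-document sub-select runs only for the rows that
-- survive the inner LIMIT/OFFSET, not for every row scanned.
-- expensive_work() and the paging values are hypothetical.
SELECT docid,
       (SELECT expensive_work(docid))  -- work to be done for each document
FROM documents
WHERE docid IN (SELECT docid
                FROM documents
                ORDER BY docid
                LIMIT 1000 OFFSET 9000);
```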
Alvaro Herrera wrote:
Performance analysis of strange queries is useful, but the input queries
have to be meaningful as well. Otherwise you end up optimizing bizarre
and useless cases.
I had a similar one a few weeks ago. I did some batch-processing over a
bunch of documents and discovered p...
Dave Dutcher wrote:
> -Original Message-
> From: [EMAIL PROTECTED]
> Nikolay Samokhvalov
>
> What should I do to make Postgres work properly in such cases (I have
> a lot of similar queries; surely, they are executed w/o seqscans, but
> overall picture is the same - I see that starting from sub-selects
> dra...
Nikolay Samokhvalov wrote:
2. explain analyze select
*,
(select typname from pg_type where pg_type.oid=pg_proc.prorettype limit 1)
from pg_proc offset 1500 limit 1;
"Limit (cost=8983.31..8989.30 rows=1 width=365) (actual
time=17.648..17.649 rows=1 loops=1)"
" -> Seq Scan on pg_proc (cost=0...
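A common workaround (a sketch, not taken from this thread) is to push the OFFSET/LIMIT into a derived table, so the paging is applied first and the sub-select in the outer SELECT list is evaluated only for the single surviving row:

```sql
-- Rewrite of example 2: page through pg_proc first, then run the
-- per-row sub-select only on the one row that remains.
SELECT p.*,
       (SELECT typname
        FROM pg_type
        WHERE pg_type.oid = p.prorettype
        LIMIT 1)
FROM (SELECT *
      FROM pg_proc
      OFFSET 1500 LIMIT 1) AS p;
```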
Hello,
I do not understand why Postgres so often starts execution from the
sub-select instead of doing the main select first and then proceeding to
the "lite" sub-selects. For example (the example is quite contrived, but
it demonstrates the problem):
1. explain analyze select * from pg_proc offset 1500 limit 1;
"Limi...