On 11/25/05, Rodrigo Madera <[EMAIL PROTECTED]> wrote:
> I have been reading all this technical talk about costs and such that
> I don't (_yet_) understand.
>
> Now I'm scared... what's the fastest way to do an equivalent of
> count(*) on a table to know how many items it has?
>
> Thanks,
> Rodrigo
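The short answer for PostgreSQL is that an exact count(*) always has to scan the
table, since no running row count is kept, but the planner's estimate in pg_class
is nearly free to read and is often close enough. A minimal sketch, assuming a
table named mytable (the name is only a placeholder):

    -- exact count: requires reading the whole table
    SELECT count(*) FROM mytable;

    -- approximate count: reads the statistics maintained by VACUUM/ANALYZE
    SELECT reltuples::bigint AS approx_rows
      FROM pg_class
     WHERE relname = 'mytable';

The estimate is only as fresh as the last VACUUM or ANALYZE on the table.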
What "same result"? You only ran it up to 2K rows, not 2M. In any
Sorry, I did repeat this over and over up to xxx.000 rows, but I did not include
those runs in the mail.
I did it again: initdb, create table, insert, vacuum full analyze,
explain analyze at each stage.
And there was no problem.
So I made a copy
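For anyone who wants to reproduce that sequence, a rough sketch of the SQL side
(the table definition and row counts are placeholders, initdb and createdb are
run from the shell first, and generate_series is just one convenient way to load
test rows):

    -- after initdb/createdb from the shell
    CREATE TABLE t (id integer, val text);

    -- load a batch of rows; rerun with larger counts at each stage
    INSERT INTO t
    SELECT i, 'row ' || i
      FROM generate_series(1, 100000) AS i;

    VACUUM FULL ANALYZE t;

    EXPLAIN ANALYZE SELECT count(*) FROM t;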
On Thu, Nov 24, 2005 at 09:15:44PM -0600, Kyle Cordes wrote:
> I have hit cases where I have a query for which there is a somewhat
> "obvious" (to a human...) query plan that should make it possible to get
> a query answer pretty quickly. Yet the query "never" finishes (or
> rather, after hours