On Tue, 2004-10-05 at 05:09, Dave Page wrote:
>  
> 
> > -----Original Message-----
> > From: [EMAIL PROTECTED] 
> > [mailto:[EMAIL PROTECTED] On Behalf Of Ian FREISLICH
> > Sent: 05 October 2004 09:57
> > To: Greg Sabino Mullane
> > Cc: [EMAIL PROTECTED]
> > Subject: Re: [HACKERS] PL/PgSQL for counting all rows in all tables. 
> > 
> > "Greg Sabino Mullane" wrote:
> > > ANALYZE;
> > >  
> > > SELECT n.nspname, relname, reltuples
> > > FROM pg_class c, pg_namespace n
> > > WHERE c.relnamespace=n.oid
> > > AND relkind='r'
> > > AND NOT n.nspname ~ '^pg_'
> > > ORDER BY 1,2;
> > 
> > Maybe this gem should be passed on to the pgAdmin folks.  When 
> > you click on a table name in the interface it does what I can 
> > only presume is a count(*) on the relation, which takes forever 
> > on enormous relations.  It then presents the result as a row 
> > estimate anyway, so they could probably drop the ANALYZE.
> 
> The 'Rows (counted)' value is taken from a count(*), but only if the
> 'Rows (estimated)' value (which comes from pg_class.reltuples, as above,
> but without the costly analyze) is less than the cut-off value set in
> the options dialogue. So, if you never want to wait for the exact row
> count, just set the appropriate option to zero.
> 
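
For anyone following along, the two numbers being compared come from
something like the queries below (the table name is just a placeholder):

-- Cheap estimate: the figure recorded at the last ANALYZE (or VACUUM)
SELECT reltuples
FROM pg_class
WHERE relname = 'mytable';

-- Exact count: requires a full scan, so it can be very slow on large tables
SELECT count(*) FROM mytable;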

How do you handle table growth that puts the reltuples value out of
whack since the last ANALYZE?  It seems unfortunate that people may not
realize that the numbers they are looking at are incorrect, but I don't
see much way to avoid it.

It seems new tables would have that problem too, since they would default
to 1000... do you analyze after table creation?
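
To illustrate (hypothetical table name, and assuming the default estimate
described above for freshly created tables):

CREATE TABLE growth_test (id integer);

-- Freshly created, never analyzed: reltuples still holds the default estimate
SELECT reltuples FROM pg_class WHERE relname = 'growth_test';

-- After ANALYZE the estimate reflects what is actually in the (empty) table
ANALYZE growth_test;
SELECT reltuples FROM pg_class WHERE relname = 'growth_test';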


Robert Treat
-- 
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL

