Hi,
Shridhar Daithankar wrote:
select relpages, reltuples from pg_class where relname = '<tablename>';
Assuming the stats are recent enough, it would be much faster and reasonably accurate.
This needs an ANALYZE before the select from pg_class, because only
ANALYZE (or VACUUM) will update the pg_class estimates.
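For reference, the full sequence suggested above might look like this (using a hypothetical table name `mytable`; remember `reltuples` is only an estimate, refreshed when the table is analyzed or vacuumed):

```sql
-- Refresh the planner statistics first; relpages/reltuples are only
-- estimates and go stale as the table is modified.
ANALYZE mytable;

-- Fast approximate row count from the catalog instead of a full scan.
SELECT relpages, reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'mytable';
```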
C.
"D'Arcy J.M. Cain" <[EMAIL PROTECTED]> writes:
> In any case, if I have to vacuum a 20,000,000 row table to get an accurate
> count then I may as well run count(*) on it.
> (*): Actually I only analyze but I understand that that should be sufficient.
ANALYZE without VACUUM will deliver a not-very
On January 6, 2004 07:20 am, Shridhar Daithankar wrote:
> On Tuesday 06 January 2004 17:48, D'Arcy J.M. Cain wrote:
> > On January 6, 2004 01:42 am, Shridhar Daithankar wrote:
> > cert=# select relpages,reltuples::bigint from pg_class where relname=
> > 'certificate';
> > relpages | reltuples
> >
if this situation persists after 'analyze certificate', then you need to:
increase the statistics target 'alter table certificate alter column
certificate_id set statistics 100'
or
'vacuum full certificate'
i.e.: there are lots of (dead) updated or deleted tuples in the
relation, distributed
Robert Treat wrote:
On Tue, 2004-01-06 at 07:20, Shridhar Daithankar wrote:
The numbers from pg_class are estimates updated by VACUUM/ANALYZE. Of course
you need to run vacuum frequently enough for those statistics to stay up to
date, or run the autovacuum daemon..
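One way to check how fresh those estimates are (a sketch; the `last_analyze`/`last_autoanalyze` columns of `pg_stat_user_tables` only exist in later PostgreSQL releases and require the stats collector to be enabled):

```sql
-- In modern releases, pg_stat_user_tables records when a table was
-- last analyzed manually or by the autovacuum daemon.
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'certificate';
```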
Ran into same problem on my m
On Tuesday 06 January 2004 17:48, D'Arcy J.M. Cain wrote:
> On January 6, 2004 01:42 am, Shridhar Daithankar wrote:
> cert=# select relpages,reltuples::bigint from pg_class where relname=
> 'certificate';
>  relpages | reltuples
> ----------+-----------
>    399070 |  24858736
> (1 row)
>
> But:
>
On January 6, 2004 01:42 am, Shridhar Daithankar wrote:
> On Tuesday 06 January 2004 01:22, Rod Taylor wrote:
> > Anyway, with Rules you can force this:
> >
> > ON INSERT UPDATE counter SET tablecount = tablecount + 1;
> >
> > ON DELETE UPDATE counter SET tablecount = tablecount - 1;
>
> That would
On Tuesday 06 January 2004 01:22, Rod Taylor wrote:
> Anyway, with Rules you can force this:
>
> ON INSERT UPDATE counter SET tablecount = tablecount + 1;
>
> ON DELETE UPDATE counter SET tablecount = tablecount - 1;
That would generate a lot of dead tuples in the counter table. How about
select relpag
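The rules-based counter being discussed could be sketched like this (hypothetical table and rule names; note that every INSERT or DELETE on the base table then performs an extra UPDATE on `counter`, which is exactly where the dead tuples mentioned above come from):

```sql
-- A single-row counter table tracking the row count of 'certificate'.
CREATE TABLE counter (tablecount bigint NOT NULL);
INSERT INTO counter VALUES (0);

-- Rules that keep the counter in step with inserts and deletes.
-- Each UPDATE leaves a dead tuple behind in 'counter', so it needs
-- frequent vacuuming -- the drawback raised in the reply above.
CREATE RULE count_ins AS ON INSERT TO certificate
    DO ALSO UPDATE counter SET tablecount = tablecount + 1;
CREATE RULE count_del AS ON DELETE TO certificate
    DO ALSO UPDATE counter SET tablecount = tablecount - 1;
```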
On Tuesday 06 January 2004 07:16, Christopher Browne wrote:
> Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Paul
Tuckfield) wrote:
> > Not that I'm offering to do the programming mind you, :) but . .
> >
> > In the case of select count(*), one optimization is to do a scan of the
>
[EMAIL PROTECTED] (Rod Taylor) wrote:
>> Especially with very large tables, hearing the disks grind as Postgres scans
>> every single row in order to determine the number of rows in a table or the
>> max value of a column (even a primary key created from a sequence) is pretty
>> painful. If the im
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Paul Tuckfield) wrote:
> Not that I'm offering to do the programming mind you, :) but . .
>
> In the case of select count(*), one optimization is to do a scan of the
> primary key, not the table itself, if the table has a primary key.
Paul Tuckfield <[EMAIL PROTECTED]> writes:
> In the case of select count(*), one optimization is to do a scan of the
> primary key, not the table itself, if the table has a primary key. In a
> certain commercial, lesser database, this is called an "index fast full
> scan". It would be important
Not that I'm offering to do the programming mind you, :) but . .
In the case of select count(*), one optimization is to do a scan of the
primary key, not the table itself, if the table has a primary key. In a
certain commercial, lesser database, this is called an "index fast full
scan". It wou
Oops! [EMAIL PROTECTED] (John Siracusa) was seen spray-painting on a wall:
> Speaking of special cases (well, I was on the admin list) there are two
> kinds that would really benefit from some attention.
>
> 1. The query "select max(foo) from bar" where the column foo has an
> index. Aren't indexe
John Siracusa <[EMAIL PROTECTED]> writes:
> 1. The query "select max(foo) from bar" where the column foo has an index.
> Aren't indexes ordered? If not, an "ordered index" would be useful in this
> situation so that this query, rather than doing a sequential scan of the
> whole table, would just "
On 1/5/04 2:52 PM, Rod Taylor wrote:
> max(foo) optimizations requires an extension to the aggregates system.
> It will likely happen within a few releases.
Looking forward to it.
> A work around can be accomplished today through the use of LIMIT and ORDER BY.
Wowzers, I never imagined that that
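The LIMIT/ORDER BY workaround mentioned above is simply the following (assuming an index on `bar.foo`, which lets the planner satisfy the query with a single index probe instead of a sequential scan):

```sql
-- Roughly equivalent to SELECT max(foo) FROM bar, but able to use an
-- index on foo: walk it from the high end and stop after one row.
-- The WHERE clause matters if foo is nullable: ORDER BY foo DESC
-- sorts NULLs first by default, while max() ignores them.
SELECT foo FROM bar
WHERE foo IS NOT NULL
ORDER BY foo DESC
LIMIT 1;
```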
> Especially with very large tables, hearing the disks grind as Postgres scans
> every single row in order to determine the number of rows in a table or the
> max value of a column (even a primary key created from a sequence) is pretty
> painful. If the implementation is not too horrendous, this i