> We avoid re-executing count(*) by passing its value as a query
> parameter in the page-number links.
Mallah, and others who mentioned caching the record count:
Yes, I will certainly do this. I can detect whether the query's filter has
been changed, or whet
> If you only need the count when you've got the results, most PG client
> interfaces will tell you how many rows you've got. What language is your app
> in?
PHP.
But I have only a subset of the results, retrieved via a query with a "LIMIT" clause, so $pg_numrows is m.
And retrieving all results
I understand that COUNT queries are expensive. So I'm looking for advice on
displaying paginated query results.
I display my query results like this:
Displaying 1 to 50 of 2905.
1-50 | 51-100 | 101-150 | etc.
I do this by executing two queries. One is of the form:
SELECT <fields> FROM <table> WHERE <conditions>
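For reference, the two-query pattern being described looks roughly like this (table, column, and filter names here are hypothetical):

```sql
-- Query 1: fetch one page of results (rows 51-100 of the filtered set).
SELECT employee_id, employee_name
FROM employees
WHERE department = 'widgets'
ORDER BY employee_name
LIMIT 50 OFFSET 50;

-- Query 2: count the full result set, used only to render
-- "Displaying x to y of N" and the page-number links.
SELECT count(*)
FROM employees
WHERE department = 'widgets';
```

Since the count changes only when the filter changes, it can be computed once and carried along in the page links, as suggested above, instead of re-running Query 2 on every page view.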
Dennis, Shridhar, and Neil,
Thanks for your input. Here are my responses:
I ran VACUUM FULL on the table in question. Although that did reduce "Pages"
and "UnUsed", the "SELECT *" query is still much slower on this installation
than in the new, restored one.
Old server:
# VACUUM FULL abc;
I backed up my database using pg_dump, and then restored it onto a different
server using psql. I see that the query "SELECT COUNT(*) FROM myTable"
executes immediately on the new server but takes several seconds on the old
one. (The servers are identical.)
What could account for this difference?
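The usual culprit is dead-tuple bloat: COUNT(*) must scan every heap page, and a table that has seen many updates or deletes can occupy far more pages than its freshly restored copy. One quick way to compare the two servers is the planner statistics in pg_class (the table name below is illustrative):

```sql
-- relpages: on-disk size in 8 KB pages; reltuples: estimated row count.
-- A much higher relpages-to-reltuples ratio on the old server than on
-- the restored one points to bloat that a plain VACUUM does not reclaim.
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname = 'mytable';
```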
into upgrading PostgreSQL.
-David
On Tuesday, December 16, 2003 2:51 PM, Neil Conway [SMTP:[EMAIL PROTECTED]]
wrote:
> "David Shadovitz" <[EMAIL PROTECTED]> writes:
> > I'm running PG 7.2.2 on RH Linux 8.0.
>
> Note that this version of PostgreSQL is quite old.
>
> I've tried to measure the duration of sql with printing out
> "localtimestamp" but for some reason during the same pg/plsql call
> it returns the same value:
Aram,
From http://www.postgresql.org/docs/current/static/functions-datetime.html:
There is also the function timeofday(), which for historical reasons returns a text string rather than a timestamp with time zone value.
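This explains the symptom: now() and localtimestamp are frozen at the start of the transaction, so every reference inside one PL/pgSQL call sees the same value, while timeofday() advances in real time. A minimal sketch (function and table names are made up; the quoted-body syntax matches the 7.2-era server under discussion):

```sql
CREATE OR REPLACE FUNCTION time_step() RETURNS text AS '
DECLARE
    t_start text;
    t_end   text;
BEGIN
    t_start := timeofday();           -- advances during the transaction
    PERFORM count(*) FROM big_table;  -- hypothetical slow statement
    t_end := timeofday();
    -- localtimestamp would yield the same value in both places;
    -- timeofday() shows the elapsed wall-clock time.
    RETURN t_start || '' .. '' || t_end;
END;
' LANGUAGE 'plpgsql';
```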
I'm running PG 7.2.2 on RH Linux 8.0.
I'd like to know why "VACUUM ANALYZE" is extremely slow (hours) for
certain tables. Here's what the log file shows when I run this command on
my "employees" table, which has just 5 columns and 55 records:
VACUUM ANALYZE employees
DEBUG: --Relation employe
Here are my query and schema. The ERD is at http://dshadovi.f2o.org/pg_erd.jpg
(sorry about its resolution).
-David
SELECT
zbr.zebra_name
, dog.dog_name
, mnk.monkey_name
, wrm.abbreviation || ptr.abbreviation as abbrev2
, whg.warthog_num
, whg.color
, rhn.rhino_name
, der.deer_name
,
> This is not very informative when you didn't show us the query nor
> the table schemas..
> BTW, what did you do with this, print and OCR it?
Tom,
I work in a classified environment, so I had to sanitize the query plan, print
it, and OCR it. I spent a lot of time fixing typos, but I guess at
Well, now that I have the plan for my slow-running query, what do I do? Where
should I focus my attention?
Thanks.
-David
Hash Join  (cost=16620.59..22331.88 rows=40133 width=266) (actual time=118773.28..580889.01 rows=57076 loops=1)
  ->  Hash Join  (cost=16619.49..21628.48 rows=40133
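As a starting point when reading such a plan: compare each node's estimated rows against its actual rows, and note where actual time balloons. Here the top node's row estimate (40133 vs. 57076 actual) is not wildly off, but roughly 580 seconds of actual time against a modest cost estimate suggests the planner's picture of the data is stale or the inner nodes deserve scrutiny. A common first step is to refresh statistics and regenerate the plan with per-node timings (table name is illustrative):

```sql
-- Refresh planner statistics for the tables involved in the query...
ANALYZE mytable;

-- ...then regenerate the plan with actual per-node timings.
EXPLAIN ANALYZE
SELECT ...;  -- the original slow query
```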