John Beaver wrote:
> - Trying the same constant a second time gave an instantaneous result,
> I'm guessing because of query/result caching.
AFAIK no query/result caching is in place in Postgres; what you are
experiencing is OS disk/memory caching.
Regards
Gaetano Mendola
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
Thanks a lot, all of you - this is excellent advice. With the data
clustered and statistics at a more reasonable value of 100, it now
reproducibly takes even less time - 20-57 ms per query.
1000x speedup with proper tuning - always impressive, lol.
IO seeks are always your w
On Thu, 10 Apr 2008, Bill Moran wrote:
If you install the pg_buffercache addon, you can actually look into
PostgreSQL's internals and see what tables are in the buffer in real
time.
The "Inside the PostgreSQL Buffer Cache" talk I did at the recent East
conference is now on-line at
http://www
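As a sketch of what that inspection can look like (assuming the pg_buffercache contrib module is installed in the current database; column names follow the contrib view):

```sql
-- Which relations currently occupy the most shared buffers?
-- Note: the relfilenode join only matches relations in the current
-- database whose storage file matches what pg_class records.
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```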
In response to John Beaver <[EMAIL PROTECTED]>:
> Thanks Eric and Gaetano - interesting, and both examples of my naiveté. :)
>
> I tried running large select count(*) queries on other tables followed by another
> try at the offending query, and it was still fast. Just to be absolutely sure
> this is
Thanks Eric and Gaetano - interesting, and both examples of my
naiveté. :)
I tried running large select count(*) queries on other tables followed by
another try at the offending query, and it was still fast. Just to be
absolutely sure this is a scalable solution, I'll try restarting my
computer in
On Apr 10, 2008, at 9:44 AM, John Beaver wrote:
Thanks a lot, all of you - this is excellent advice. With the data
clustered and statistics at a more reasonable value of 100, it now
reproducibly takes even less time - 20-57 ms per query.
After reading the section on "Statistics Used By the Planner" in the
manual, I was a little concerned th
On Thu, 10 Apr 2008, PFC wrote:
... Lots of useful advice ...
- If you often query rows with the same gene_ref, consider using
CLUSTER to physically group those rows on disk. This way you can get all rows
with the same gene_ref in 1 seek instead of 2000. Clustered tables also make
Bitmap sca
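As a sketch, PFC's CLUSTER suggestion could be applied like this (the index name is hypothetical; on releases before 8.4 the syntax was `CLUSTER indexname ON tablename`):

```sql
-- One-time physical reordering of the table by gene_ref.
-- CLUSTER takes an exclusive lock and is not maintained automatically,
-- so re-run it after large batches of inserts/updates.
CREATE INDEX gene_ref_idx ON gene_prediction_view (gene_ref);
CLUSTER gene_prediction_view USING gene_ref_idx;
ANALYZE gene_prediction_view;
```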
Perfect - thanks Arjen. Using your value of 200 decreased the time to
15 seconds, and using a value of 800 makes it almost instantaneous. I'm
really not concerned about space usage; if having more statistics
increases performance this much, maybe I'll just default it to 1000?
Strangely, th
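For reference, the statistics change discussed here can be scoped to just the filtered column rather than raising default_statistics_target globally (table and column names taken from the query in this thread):

```sql
-- Raise the statistics target for gene_ref only;
-- the new target takes effect at the next ANALYZE.
ALTER TABLE gene_prediction_view
    ALTER COLUMN gene_ref SET STATISTICS 800;
ANALYZE gene_prediction_view;
```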
Hi John,
You don't use the same 'gene_ref ='-value, so it's not a perfect
comparison. And obviously, there is the fact that the data can be in the
disk cache, the second time you run it, which would explain the almost
instantaneous result for the second query.
If repeating the query a few tim
Bill Moran wrote:
> This is a FAQ; it comes up on an almost weekly basis.
I don't think so - note the "where" clause:
- select count(*) from gene_prediction_view where gene_ref = 523
Cheers,
Jeremy
Hi, I've started my first project with Postgres (after several years of
using Mysql), and I'm having an odd performance problem that I was
hoping someone might be able to explain the cause of.
My query
- select count(*) from gene_prediction_view where gene_ref = 523
- takes
This is a FAQ; it comes up on an almost weekly basis. Please do a
little Googling on count(*) and PostgreSQL and you'll get all the
explanations and suggestions on how to fix the problem you could
ever want.
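The FAQ in question concerns unqualified count(*), which must scan the table under PostgreSQL's MVCC design. When only a rough total is needed, one common workaround is the planner's stored estimate:

```sql
-- Approximate row count from the planner's statistics;
-- cheap, but only as fresh as the last ANALYZE/VACUUM.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'gene_prediction_view';
```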
In response to Arjen van der Meijden <[EMAIL PROTECTED]>:
> First of all, there is the
First of all, there is the 'explain analyze' output, which is pretty
helpful in postgresql.
My guess is, postgresql decides to do a table scan for some reason. It
might not have enough statistics for this particular table or column, to
make a sound decision. What you can try is to increase the
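The diagnostic step Arjen describes amounts to comparing the planner's estimated row count against the actual one:

```sql
-- If the planner's estimated rows differ wildly from the actual rows,
-- the column statistics are likely too coarse; see SET STATISTICS.
EXPLAIN ANALYZE
SELECT count(*) FROM gene_prediction_view WHERE gene_ref = 523;
```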