hi,
I'm sorry for not posting this first.
The server is the following, and it is dedicated to this
PostgreSQL instance:
PostgreSQL 8.4.2 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.2.real (GCC)
4.2.4 (Ubuntu 4.2.4-1ubuntu4), 64-bit
Amazon EC2 Large Instance, 7.5GB memory, 64-bit
On Wed, 2 Jun 2010, Jori Jovanovich wrote:
(2) Making the query faster by making the string match LESS specific (odd,
seems like it should be MORE)
No, that's the way round it should be. The LIMIT changes it all. Consider
if you have a huge table, and half of the entries match your WHERE clause:
a backwards scan only needs to read a handful of rows before it has enough
matches to satisfy the LIMIT. The more specific the match, the sparser the
matching rows, and the further the scan has to run before it finds enough
of them.
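To see it, compare the plans for a dense and a sparse pattern (the table
and column names here are invented for illustration):

    -- dense pattern: the backwards scan finds 20 matches almost immediately
    EXPLAIN ANALYZE
    SELECT * FROM events
    WHERE message LIKE '%error%'
    ORDER BY event_time DESC LIMIT 20;

    -- sparse pattern: the same scan has to read far more rows first
    EXPLAIN ANALYZE
    SELECT * FROM events
    WHERE message LIKE '%very-rare-string%'
    ORDER BY event_time DESC LIMIT 20;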
Jori,
What is the PostgreSQL
version/shared_buffers/work_mem/effective_cache_size/default_statistics_target?
Are the statistics for the table up to date? (Run ANALYZE VERBOSE
to update them.) Table and index structure would be nice to know, too.
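You can pull all of those in one go, for example:

    SELECT version();
    SELECT name, setting
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'work_mem',
                   'effective_cache_size', 'default_statistics_target');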
If all else fails you can set enable_seqscan = off for the problem query
to see what plan the planner falls back to.
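Scope it with SET LOCAL inside a transaction so nothing else is affected,
for instance:

    BEGIN;
    SET LOCAL enable_seqscan = off;
    EXPLAIN ANALYZE SELECT ...;  -- the problem query goes here
    ROLLBACK;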
"Kevin Grittner" writes:
> Jori Jovanovich wrote:
>> what is the recommended way to solve this?
> The recommended way is to adjust your costing configuration to
> better reflect your environment.
Actually, it's probably not the costs so much as the row estimates.
For instance, that first query is most likely getting a poor selectivity
estimate for the pattern match, which is what steers the planner to the
wrong plan.
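A quick way to confirm is to compare the estimated and actual row counts
in the plan, and then raise the statistics target for the column if they
are far apart (names invented here):

    EXPLAIN ANALYZE SELECT ...;  -- compare "rows=" (estimate) to "actual rows="

    ALTER TABLE events ALTER COLUMN message SET STATISTICS 1000;
    ANALYZE events;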
2010/6/2 Jori Jovanovich
> hi,
>
> I have a problem space where the main goal is to search backward in time
> for events. Time can go back very far into the past, and so the table can
> get quite large. However, the vast majority of queries are satisfied by
> relatively recent data.
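For that access pattern the usual answer is an index that matches the
ORDER BY ... DESC LIMIT shape of the queries. A minimal sketch, assuming
an events table with a timestamp column (names invented):

    CREATE INDEX events_time_idx ON events (event_time DESC);

    SELECT *
    FROM events
    WHERE message LIKE '%something%'
    ORDER BY event_time DESC
    LIMIT 20;

With that index the planner can walk the newest entries first and stop as
soon as the LIMIT is satisfied, which is exactly what you want when most
queries only touch recent data.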
Jori Jovanovich wrote:
> what is the recommended way to solve this?
The recommended way is to adjust your costing configuration to
better reflect your environment. What version of PostgreSQL is
this? What do you have set in your postgresql.conf file? What does
the hardware look like? How big is the table?
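The usual knobs are random_page_cost, effective_cache_size, and friends.
As a very rough sketch for a dedicated 7.5GB machine (the numbers are
illustrative starting points, not a recommendation):

    shared_buffers = 1800MB          # ~25% of RAM is a common rule of thumb
    effective_cache_size = 5GB       # what the OS is likely caching for you
    work_mem = 32MB                  # per sort/hash operation, so keep it modest
    random_page_cost = 2.0           # below the 4.0 default if data is mostly cached
    default_statistics_target = 100  # 8.4's default; raise per column if estimates are off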