Hi Jose,

How much RAM is in the server? Can you also post the following values from 
your Postgres config:

work_mem
shared_buffers

When those are misconfigured, they can cause significant performance issues 
(like 40-second queries that should be sub-second). Out of the box, Postgres 
is configured very conservatively so that it can run on even the smallest of 
servers. 
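
If you have psql access to the server, a quick way to pull those (plus 
effective_cache_size, which is also worth checking) is:

  SHOW work_mem;
  SHOW shared_buffers;
  SHOW effective_cache_size;

or, all at once:

  SELECT name, setting, unit
  FROM pg_settings
  WHERE name IN ('work_mem', 'shared_buffers', 'effective_cache_size');

For what it's worth, the EXPLAIN (ANALYZE, BUFFERS) output below shows 
read=157167, i.e. roughly 157k 8kB pages (~1.2 GB) that had to come from 
outside shared_buffers for that one query, which is why the RAM and 
shared_buffers numbers matter here.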

Also, what operating system and version of Postgres are you running?
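You can get the version straight from the server with:

  SELECT version();

and the string it returns usually includes the build platform/OS as well.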

Thanks. 



...spike
(Sent via handheld, please pardon spelling errors)

On Dec 24, 2012, at 12:10 PM, Jose Martinez <jmarti...@opencrowd.com> wrote:

> Thanks for your responses. Sorry, I forgot to mention that the query 
> actually takes 46 seconds, despite what the earlier EXPLAIN ANALYZE showed 
> (I don't quite understand the output of EXPLAIN). We did perform a VACUUM 
> last Friday and it seemed to help, but not much. We'll also try to recreate 
> the indices. 
> 
> Here's the output of 
> EXPLAIN (ANALYZE, BUFFERS)  SELECT * FROM TICKET 
> WHERE CREATED BETWEEN '2012-12-19 00:00:00' AND  '2012-12-20 00:00:00'
> 
> "Index Scan using t_created_idx on ticket  (cost=0.00..127638.47 rows=206383 
> width=183) (actual time=0.065..46104.557 rows=212126 loops=1)"
> "  Index Cond: ((created >= '2012-12-19 00:00:00+00'::timestamp with time 
> zone) AND (created <= '2012-12-20 00:00:00+00'::timestamp with time zone))"
> "  Buffers: shared hit=44141 read=157167"
> "Total runtime: 46293.384 ms"
> 
> 
> Thanks.
> 
> 
> On Sat, Dec 22, 2012 at 7:26 AM, Andres Freund <and...@2ndquadrant.com> wrote:
>> On 2012-12-22 13:06:21 +0100, Alban Hertroys wrote:
>> > > and here's my query
>> > >
>> > > select * from ticket
>> > > where created between '2012-12-19 00:00:00' and  '2012-12-20 00:00:00'
>> > >
>> > > This was working fine until the number of records started to grow (about 
>> > > 5 million) and now it's taking forever to return.
>> > >
>> > > Explain analyze reveals this:
>> > >
>> > > "Index Scan using ticket_1_idx on ticket  (cost=0.00..10202.64 
>> > > rows=52543 width=1297) (actual time=0.109..125.704 rows=53340 loops=1)"
>> > > "  Index Cond: ((created >= '2012-12-19 00:00:00+00'::timestamp with 
>> > > time zone) AND (created <= '2012-12-20 00:00:00+00'::timestamp with time 
>> > > zone))"
>> > > "Total runtime: 175.853 ms"
>> >
>> > > Nothing works. What am I doing wrong? Why is it selecting a sequential 
>> > > scan? The indexes are supposed to make the query fast. Is there anything 
>> > > that can be done to optimize it?
>> 
>> What's the time you would need? Because the above isn't that slow. Perhaps
>> the timing you're seeing from your application includes transferring the
>> data over a not-too-fast link?
>> 
>> It would be interesting to see EXPLAIN (ANALYZE, BUFFERS) $query
>> 
>> > It is not selecting a sequential scan; you're looking at an index scan. That 
>> > should be pretty fast, and it isn't that slow - that's still sub-second 
>> > performance (0.176s).
>> > Is that EXPLAIN from the correct table? According to the results there are 
>> > only 53 thousand rows in it, not anywhere near 5 million.
>> 
>> Well, that's the estimate *after* applying the restriction, so that seems
>> sensible.
>> 
>> Greetings,
>> 
>> Andres Freund
>> 
>> --
>>  Andres Freund                     http://www.2ndQuadrant.com/
>>  PostgreSQL Development, 24x7 Support, Training & Services
> 
