On 14 Sep 2011, at 20:45, Brian Fehrle wrote:
>> That is only about 1/30th of your table. I don't think a seqscan makes sense
>> here unless your data is distributed badly.
>>
> Yeah, the more I look at it, the more I think it's postgres _thinking_ that
> it's faster to do a sequential scan. I'll
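A quick way to check what the planner is actually choosing is EXPLAIN ANALYZE. A minimal sketch (the table and column names here are placeholders taken from the query shape discussed in the thread, not the real schema):

```sql
-- Hypothetical names; substitute the real table and columns.
EXPLAIN ANALYZE
SELECT max(primary_key_column)
FROM some_table
GROUP BY column1, column2;
```

The output shows whether a Seq Scan or an Index Scan was picked, along with the estimated versus actual row counts, which is where bad statistics usually show up.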
On 09/14/2011 01:10 AM, Alban Hertroys wrote:
On 13 Sep 2011, at 23:44, Brian Fehrle wrote:
> These queries basically do a 'select max(primary_key_column) from table group
> by column1, column2'. Because of the group by, this results in a
> sequential scan of the entire table, which proves to be costly.
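For this query shape, a composite index covering the GROUP BY columns plus the aggregated key can sometimes let the planner satisfy the query from the index instead of scanning the whole table. A sketch, using the placeholder names from the quoted query:

```sql
-- Placeholder names from the quoted query; "some_table" stands in for
-- the real table. DESC on the key column matches the max() aggregate.
CREATE INDEX some_table_col1_col2_pk_idx
    ON some_table (column1, column2, primary_key_column DESC);
```

Whether the planner actually uses it depends on the data distribution and the cost settings; EXPLAIN ANALYZE before and after is the way to tell.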
That seems to suggest a row where
Hi,
On 14 September 2011 07:44, Brian Fehrle wrote:
> 2. I have appropriate indexes where they need to be. The issue is in the
> query planner not using them, due to it (I assume) just being faster to scan
> the whole table when the data set it needs is as large as it is.
Try to reduce random_page_cost.
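Lowering random_page_cost makes index scans look cheaper relative to sequential scans in the planner's cost model. A sketch for testing the effect at the session level (the value 1.1 is a common starting point for fast storage, not a recommendation from this thread):

```sql
-- Session-level override for experimentation; the default is 4.0.
-- Lower values bias the planner toward index scans.
SET random_page_cost = 1.1;
```

If the new plan is consistently better, the setting can be made persistent in postgresql.conf or per-tablespace rather than per-session.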