Robert Haas wrote:
On Mon, Aug 10, 2009 at 11:19 AM, Kevin Grittner wrote:
(2) Somehow use effective_cache_size in combination with some sort of
current activity metrics to dynamically adjust random access costs.
(I know, that one's total hand-waving, but it seems to have some
possibility of b…

On Mon, Aug 10, 2009 at 11:19 AM, Kevin Grittner wrote:
> Robert Haas wrote:
>
>> Just handling better the case where we pick a straight nested loop
>> rather than a hash join would help a lot of people. Some basic
>> conservatism about the number of outer rows would be useful here (in
>> particular, we should probably assume that there will be at least 2…

Robert Haas wrote:
> Just handling better the case where we pick a straight nested loop
> rather than a hash join would help a lot of people. Some basic
> conservatism about the number of outer rows would be useful here (in
> particular, we should probably assume that there will be at least 2…

On Fri, Aug 7, 2009 at 5:09 PM, Scott Carey wrote:
> On 8/7/09 5:53 AM, "Robert Haas" wrote:
>
>> On Fri, Aug 7, 2009 at 4:00 AM, Kees van Dieren wrote:
>>> Would it get attention if I submit this to
>>> http://www.postgresql.org/support/submitbug ? (in fact it is not really a
>>> bug, but an improvement request).

On 8/7/09 5:53 AM, "Robert Haas" wrote:
> On Fri, Aug 7, 2009 at 4:00 AM, Kees van Dieren wrote:
>> Would it get attention if I submit this to
>> http://www.postgresql.org/support/submitbug ? (in fact it is not really a
>> bug, but an improvement request).
>
> I think that many of the people…

On Fri, Aug 7, 2009 at 4:00 AM, Kees van Dieren wrote:
> Would it get attention if I submit this to
> http://www.postgresql.org/support/submitbug ? (in fact it is not really a
> bug, but an improvement request).
I think that many of the people who read that mailing list also read
this one, including…

Thanks for your response.
I think your analysis is correct: when there are more than 100 rows that
match this query, LIMIT 100 is fast.
However, we often have fewer than a hundred matching rows, so this is not
sufficient for us.
This suggestion (the 'OFFSET 0' trick) did not show differences in response
time (r…

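For readers unfamiliar with it: the 'OFFSET 0' trick wraps the ordered
query in a subquery ending in OFFSET 0, which PostgreSQL treats as an
optimization fence, so the inner query is planned without knowledge of
the outer LIMIT. The exact statement Kees tested is not quoted in the
thread; a sketch of the idea against this schema could look like:

    -- Sketch only; the statement actually tested is not shown in the
    -- thread. OFFSET 0 prevents the planner from flattening the subquery,
    -- so the LIMIT outside cannot influence the inner plan choice.
    SELECT sub.id
    FROM (
        SELECT events_events.id
        FROM events_events
        LEFT JOIN events_event_types
               ON events_events.eventType_id = events_event_types.id
        WHERE events_event_types.severity = 70
          AND events_events.cleared = 'f'
        ORDER BY events_events.dateTime DESC
        OFFSET 0
    ) AS sub
    LIMIT 100;
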
Kees van Dieren wrote:
> Hi Folks,
>
> Thanks for your response.
>
> I have added the following index (suggested in another post):
>
> CREATE INDEX events_events_cleared_eventtype
> ON events_events
> USING btree
> (eventtype_id, cleared)
> WHERE cleared = false;
>
> Also with columns in reversed order.

Hi Folks,

Thanks for your response.

I have added the following index (suggested in another post):

CREATE INDEX events_events_cleared_eventtype
    ON events_events
    USING btree
    (eventtype_id, cleared)
    WHERE cleared = false;

Also with columns in reversed order.
No changes in response time noti…

The query:

select events_events.id FROM events_events
left join events_event_types on events_events.eventType_id = events_event_types.id
where events_event_types.severity = 70
and events_events.cleared = 'f'
order by events_events.dateTime DESC

The main problem seems to be the lack of a suitable index…

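One possibility (an assumption on my part; the thread does not show it
being tried) is a partial index whose leading column matches the ORDER BY,
so that a LIMIT query can walk the index in dateTime order and stop early
instead of sorting every matching row:

    -- Hypothetical index, not taken from the thread. A btree can be
    -- scanned backwards, so ORDER BY dateTime DESC ... LIMIT n can read
    -- this index in order and stop after n uncleared rows.
    CREATE INDEX events_events_uncleared_datetime
        ON events_events
        USING btree
        (dateTime)
        WHERE cleared = false;
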
On Fri, Jul 31, 2009 at 1:11 PM, Kees van Dieren wrote:
> It takes 155ms to run this query (returning 2 rows)
>
> Query plan, without limit:
> "Sort (cost=20169.62..20409.50 rows=95952 width=16)"
Could you send the results of EXPLAIN ANALYZE for both queries?
Evidently the planner is expecting a…

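For anyone following along: EXPLAIN ANALYZE executes the statement and
prints the planner's row estimate next to the rows actually produced at
each plan node, which is exactly what exposes a misestimate such as
rows=95952 expected versus 2 returned. For example (LIMIT value taken
from earlier in the thread):

    -- Runs the query for real and reports estimated vs. actual rows
    -- for every node of the chosen plan.
    EXPLAIN ANALYZE
    SELECT events_events.id
    FROM events_events
    LEFT JOIN events_event_types
           ON events_events.eventType_id = events_event_types.id
    WHERE events_event_types.severity = 70
      AND events_events.cleared = 'f'
    ORDER BY events_events.dateTime DESC
    LIMIT 100;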