On Wed, Mar 25, 2009 at 10:28 PM, Tom Lane wrote:
> Bryan Murphy writes:
>> What I did was change seq_page_cost back to 1.0 and then changed
>> random_page_cost to 0.5
>
> [ squint... ] It makes no physical sense for random_page_cost to be
> less than seq_page_cost. Please set them the same.
>
Bryan Murphy writes:
> What I did was change seq_page_cost back to 1.0 and then changed
> random_page_cost to 0.5
[ squint... ] It makes no physical sense for random_page_cost to be
less than seq_page_cost. Please set them the same.
regards, tom lane
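[A quick aside for anyone reproducing this: a minimal sketch of what "set them the same" could look like in practice. The value 1.0 below is only illustrative, not a tuned recommendation for this particular box.]

  -- test per-session first, then re-check the problem query
  SET seq_page_cost = 1.0;
  SET random_page_cost = 1.0;
  -- if the plans improve, persist the same values in postgresql.conf
  -- and reload, e.g. with: SELECT pg_reload_conf();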
On Wed, Mar 25, 2009 at 9:15 PM, Tom Lane wrote:
> I think what you should be doing is messing with the cost parameters
> ... and not in the direction you tried before. I gather from
> effective_cache_size = 12GB
> that you have plenty of RAM on this machine. If the tables involved
> are
Bryan Murphy writes:
> I tried that already, but I decided to try again in case I messed up
> something last time. Here's what I ran. As you can see, it still
> chooses to do a sequential scan. Am I changing the stats for those
> columns correctly?
I think what you should be doing is messing with the cost parameters
... and not in the direction you tried before.
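[If the session-level SETs shown earlier improve the plan, a sketch of pinning the values for just this database rather than editing postgresql.conf for the whole cluster; "mydb" is a placeholder database name, not one from this thread.]

  ALTER DATABASE mydb SET seq_page_cost = 1.0;
  ALTER DATABASE mydb SET random_page_cost = 1.0;
  -- new sessions connecting to mydb pick these values up automatically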
On Wed, Mar 25, 2009 at 4:55 PM, Josh Berkus wrote:
> Oh, I see. It thinks that it'll need to pull 260,000 redundant rows in
> order to get 1800 unique ones. Only it's wrong; you're only pulling about
> 4000.
>
> Try increasing some stats still further: itemexperiencelog.visitorid and
> visitors
Bryan,
One thing to keep in mind, due to a lapse in our judgement at the
time, this itemexperiencelog table serves as both a current state
table, and a log table. Therefore, it potentially has multiple
redundant entries, but we typically only look at the most recent entry
to figure out the stat
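[Spelling out the "increase some stats" suggestion above as SQL; 1000 is only an example target, and the second column name is cut off in the quote, so only visitorid is shown.]

  ALTER TABLE itemexperiencelog ALTER COLUMN visitorid SET STATISTICS 1000;
  ANALYZE itemexperiencelog;
  -- then re-run EXPLAIN ANALYZE and see whether the row estimates improve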
On Wed, Mar 25, 2009 at 8:40 AM, Robert Haas wrote:
> On Tue, Mar 24, 2009 at 11:43 PM, Bryan Murphy wrote:
>> Looking through our configuration one more time, I see that at some
>> point I set random_page_cost to 2.0, but I don't see any other changes
>> to query planner settings from their default values.
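[Rather than eyeballing postgresql.conf, one way to list every planner-related setting that has been changed from its default; this is plain pg_settings, nothing specific to this setup.]

  SELECT name, setting, source
    FROM pg_settings
   WHERE source <> 'default'
     AND (name LIKE '%cost%' OR name LIKE 'enable%'
          OR name = 'effective_cache_size');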
On Tue, Mar 24, 2009 at 11:43 PM, Bryan Murphy wrote:
> Looking through our configuration one more time, I see that at some
> point I set random_page_cost to 2.0, but I don't see any other changes
> to query planner settings from their default values.
You don't by any chance have enable_ set to "
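[The snippet is cut off before it names the enable_* switch, but checking the whole family is cheap; enable_seqscan below is only one member of it, not necessarily the one meant here.]

  SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
  -- or check one individually:
  SHOW enable_seqscan;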
On Tue, Mar 24, 2009 at 9:30 PM, Josh Berkus wrote:
> For some reason, your first post didn't make it to the list, which is why
> nobody responded.
Weird... I've been having problems with gmail and google reader all week.
>>> I've got a query on our production system that isn't choosing a good
>
On Tue, Mar 24, 2009 at 10:04 PM, marcin mank wrote:
> There is one thing I don't understand:
>
>  ->  Nested Loop  (cost=0.00..180564.28 rows=1806 width=37) (actual time=0.192..60.214 rows=3174 loops=1)
>        ->  Index Scan using visitors_userid_index2 on visitors v
There is one thing I don't understand:
 ->  Nested Loop  (cost=0.00..180564.28 rows=1806 width=37) (actual time=0.192..60.214 rows=3174 loops=1)
       ->  Index Scan using visitors_userid_index2 on visitors v  (cost=0.00..2580.97 rows=1300 width=33) (actual time=0.052..2
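[What marcin quotes is the spot where estimate and actual diverge: the nested loop is estimated at 1806 rows but returns 3174, and the index scan is estimated at 1300. A sketch of where to look at what the planner believes; "userid" is inferred from the index name and is an assumption.]

  SELECT n_distinct, correlation
    FROM pg_stats
   WHERE tablename = 'visitors' AND attname = 'userid';
  -- if the estimates stay badly off after a fresh ANALYZE, raising the
  -- column's statistics target (as suggested earlier in the thread) is the usual next step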
Bryan,
I hate to nag, but could anybody help me with this? We have a few
related queries that are causing noticeable service delays in our
production system. I've tried a number of different things, but I'm
running out of ideas and don't know what to do next.
For some reason, your first post didn't make it to the list, which is why
nobody responded.
I hate to nag, but could anybody help me with this? We have a few
related queries that are causing noticeable service delays in our
production system. I've tried a number of different things, but I'm
running out of ideas and don't know what to do next.
Thanks,
Bryan
On Mon, Mar 23, 2009 at 2:03