On 30/09/10 01:09, Tobias Brox wrote:
With the most popular trans type it chose another plan and it took
more than 3s (totally unacceptable):
Try tweaking effective_cache_size up a bit and see what happens - I've
found these bitmap plans to be sensitive to it sometimes.
regards
Mark
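
Mark's suggestion can be tried per-session before touching postgresql.conf. A hedged sketch (the setting value and the stand-in query below are illustrative, not from the thread):

```sql
-- Raise effective_cache_size for this session only, then re-run the
-- problem query to see whether the planner switches back to the fast plan.
SET effective_cache_size = '48GB';   -- illustrative value, tune to your RAM

EXPLAIN ANALYZE
SELECT *                              -- stand-in for the slow acc_trans query
  FROM acc_trans
 WHERE customer_id = 224885           -- customer id taken from the thread
 ORDER BY created DESC
 LIMIT 20;
```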
On 29 September 2010 10:03, Mark Kirkwood
> Yeah, I think the idea of trying to have a few smaller indexes for the 'hot'
> customers is a good idea. However I am wondering if just using single column
> indexes and seeing if the bitmap scan/merge of smaller indexes is actually
> more efficient is w
On 29/09/10 19:41, Tobias Brox wrote:
I just got this crazy, stupid or maybe genius idea :-)
Now, my idea is to drop that fat index and replace it with conditional
indexes for a dozen of heavy users - like those:
acc_trans(trans_type, created) where customer_id=224885;
acc_trans(trans_ty
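
The idea sketched above, as concrete DDL (the customer id appears in the thread; the index name is made up here):

```sql
-- One small partial index per heavy customer, replacing the fat
-- three-column index over the whole table. Only rows matching the
-- WHERE clause are indexed, so each index stays tiny and cache-resident.
CREATE INDEX acc_trans_cust224885_idx
    ON acc_trans (trans_type, created)
 WHERE customer_id = 224885;
```

The planner will only use such an index when the query's WHERE clause provably implies the index predicate, e.g. `WHERE customer_id = 224885 AND trans_type = ...`.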
I just got this crazy, stupid or maybe genius idea :-)
One thing that I've learned in this thread is that fat indexes (i.e.
some index on some_table(a,b,c,d,e,f)) is to be avoided as much as
possible.
One of our biggest indexes looks like this:
acc_trans(customer_id, trans_type, created)
For th
On 25 September 2010 00:00, Greg Smith wrote:
> Overindexed tables containing more columns than are actually selective is a
> very popular source of PostgreSQL slowdowns. It's easy to say "oh, I look
> this data up using columns a,b,c, so lets put an index on a,b,c". But if an
> index on a alone
Tobias Brox wrote:
I have easily measurable improvements on client systems increasing
shared_buffers into the 4GB - 8GB range. Popular indexes move into there,
stay there, and only get written out at checkpoint time rather than all the
time.
Ours is at 12 GB, out of 70 GB total RAM.
Thanks for spending your time on this ... amidst all the useful
feedback I've received, I'd rate your post as the most useful post.
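
For reference, the settings being discussed, as a postgresql.conf fragment (values are the ones stated in the thread, not a recommendation):

```sql
-- postgresql.conf fragment; Greg's measurably useful range was 4GB - 8GB,
-- so the 12GB figure below is site-specific.
-- shared_buffers = 12GB        -- out of 70GB total RAM on this server
```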
>> 1) Are there any good ways to verify my hypothesis?
>
> You can confirm easily whether the contents of the PostgreSQL buffer cache
> contain what you think they do
On 24 September 2010 21:24, Brad Nicholson wrote:
>> The pertinent difference between pg_stat_user_indexes and
>> pg_statio_user_indexes is the latter shows the number of blocks read from
>> disk or found in the cache.
>
> I have a minor, but very important correction involving this point. The
>
On 24 September 2010 00:12, Mark Kirkwood wrote:
> Re index size, you could try indexes like:
>
> some_table(a)
> some_table(b)
>
> which may occupy less space, and the optimizer can bitmap and/or them to
> work like the compound index some_table(a,b).
Hm ... never considered that ... but is it c
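
Mark's suggestion, sketched out (table and index names are the thread's placeholders):

```sql
-- Two single-column indexes instead of the compound some_table(a,b).
CREATE INDEX some_table_a_idx ON some_table (a);
CREATE INDEX some_table_b_idx ON some_table (b);

-- A query filtering on both columns can then be planned roughly as:
--   Bitmap Heap Scan on some_table
--     ->  BitmapAnd
--           ->  Bitmap Index Scan on some_table_a_idx
--           ->  Bitmap Index Scan on some_table_b_idx
-- i.e. the two bitmaps are ANDed before the heap is touched.
```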
On 24 September 2010 21:06, Bob Lunney wrote:
> First off, what version of PostgreSQL are you running? If you have 8.4,
> nightly vacuuming shouldn't be necessary with properly tuned autovacuum jobs.
8.3. We'll upgrade to 9.0 during the December holidays fwiw. But
point taken, I will continue
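
For the tuning Bob alludes to, a hedged sketch of per-table autovacuum settings (this reloption syntax needs 8.4+; on 8.3, per-table overrides live in the pg_autovacuum catalog table instead):

```sql
-- Hypothetical thresholds for a large, write-heavy table; the numbers
-- are illustrative, not from the thread.
ALTER TABLE acc_trans
  SET (autovacuum_vacuum_scale_factor = 0.05,
       autovacuum_vacuum_threshold    = 5000);
```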
On 10-09-24 03:06 PM, Bob Lunney wrote:
The pertinent difference between pg_stat_user_indexes and
pg_statio_user_indexes is the latter shows the number of blocks read from disk
or found in the cache.
I have a minor, but very important correction involving this point. The
pg_statio tables s
m referring to. You discovered this independently yourself,
according to your anecdote about the overlapping indexes.
Bob Lunney
--- On Fri, 9/24/10, Tobias Brox wrote:
> From: Tobias Brox
> Subject: Re: [PERFORM] Memory usage - indexes
> To: "Bob Lunney"
> Cc: pgsql
On 10-09-24 01:41 PM, Tobias Brox wrote:
What do you mean, that you could run regular vacuum less frequently,
or that the regular vacuum would go faster?
It means that vacuums ran less frequently. With cron triggered vacuums,
we estimated when tables needed to be vacuumed, and vacuumed them
Tobias Brox wrote:
We do have some bloat-problems as well - every now and then we decide
to shut down the operation, use pg_dump to dump the entire database to
an sql file and restore it. The benefits are dramatic, the space
requirement goes down a lot, and often some of our
performance-problems
Tobias Brox wrote:
1) Are there any good ways to verify my hypothesis?
You can confirm easily whether the contents of the PostgreSQL buffer
cache contain what you think they do by installing pg_buffercache. My
paper and samples at
http://www.pgcon.org/2010/schedule/events/218.en.html
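
Once the pg_buffercache contrib module is installed, a query along these lines shows which relations occupy the buffer cache (a sketch; the relfilenode join works for ordinary relations in the current database):

```sql
-- Top relations by number of 8kB buffers currently cached.
SELECT c.relname,
       count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = c.relfilenode
 GROUP BY c.relname
 ORDER BY buffers DESC
 LIMIT 10;
```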
Tobias Brox wrote:
> If it was to me, we would have had autovacuum turned on. We've
> had one bad experience when the autovacuumer decided to start
> vacuuming one of the biggest table at the worst possible moment -
> and someone figured autovacuum was a bad idea. I think we
> probably still w
On 24 September 2010 19:16, Brad Nicholson wrote:
[Brad Nicholson]
> Why is the vacuum dragging out over time? Is the size of your data
> increasing, are you doing more writes that leave dead tuples, or are your
> tables and/or indexes getting bloated?
Digressing a bit here ... but the biggest r
On 10-09-24 12:46 PM, Tobias Brox wrote:
On 24 September 2010 18:23, Bob Lunney wrote:
Consult pg_statio_user_indexes to see which indexes have been used
and how much.
What is the main differences between pg_statio_user_indexes and
pg_stat_user_indexes?
The pg_stat_* views give you usage i
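
The distinction can be seen side by side with one query joining the two views (standard column names from these system views):

```sql
-- idx_scan (usage) comes from pg_stat_user_indexes;
-- idx_blks_read/idx_blks_hit (disk vs cache) come from pg_statio_user_indexes.
SELECT s.indexrelname,
       s.idx_scan,
       io.idx_blks_read,
       io.idx_blks_hit
  FROM pg_stat_user_indexes   s
  JOIN pg_statio_user_indexes io USING (indexrelid)
 ORDER BY s.idx_scan;
```

Indexes at the top of this ordering (few or zero scans) are candidates for dropping.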
On 24 September 2010 00:12, Mark Kirkwood wrote:
> All good questions! Before (or maybe as well as) looking at index sizes vs
> memory I'd check to see if any of your commonly run queries have suddenly
> started to use different plans due to data growth, e.g:
>
> - index scan to seq scan (perhaps
On 24 September 2010 18:23, Bob Lunney wrote:
> Consult pg_statio_user_indexes to see which indexes have been used
> and how much.
What is the main differences between pg_statio_user_indexes and
pg_stat_user_indexes?
> Indexes with comparatively low usage rates aren't helping you much and are
n one huge index on the
original table.
Good luck!
Bob Lunney
--- On Thu, 9/23/10, Tobias Brox wrote:
> From: Tobias Brox
> Subject: [PERFORM] Memory usage - indexes
> To: pgsql-performance@postgresql.org
> Date: Thursday, September 23, 2010, 5:50 PM
> We've come to a ti
Tobias Brox wrote:
> Sorry for all the stupid questions ;-)
I'm with Mark -- I didn't see any stupid questions there.
Where I would start, though, is by checking the level of bloat. One
long-running query under load, or one query which updates or deletes
a large number of rows, can put you
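
A first-pass size check (a sketch, not the poster's exact method): snapshot relation sizes before and after the dump/restore cycle described earlier in the thread, and large drops point at bloated relations.

```sql
-- Largest tables and indexes by on-disk size; pg_relation_size and
-- pg_size_pretty are available back to 8.1, so this works on the 8.3
-- server in question.
SELECT relname,
       pg_size_pretty(pg_relation_size(oid)) AS on_disk
  FROM pg_class
 WHERE relkind IN ('r', 'i')
 ORDER BY pg_relation_size(oid) DESC
 LIMIT 10;
```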
On 24/09/10 09:50, Tobias Brox wrote:
We've come to a tipping point with one of our database servers, it's
generally quite loaded but up until recently it was handling the load
well - but now we're seeing that it struggles to process all the
selects fast enough. Sometimes we're observing some we
We've come to a tipping point with one of our database servers, it's
generally quite loaded but up until recently it was handling the load
well - but now we're seeing that it struggles to process all the
selects fast enough. Sometimes we're observing some weird lock-like
behaviour (see my other po