> We now found (thanks Andres and Snow-Man in #postgresql) that in our
> tests, after the indexes get too large, performance drops significantly
> and our system limps forward due to disk reads (presumably for the
> indexes). If we remove the indexes, performance for our entire sample
> test is great.
> checkpoint_completion_target spreads out the writes to disk. PostgreSQL
> doesn't make any attempt yet to spread out the sync calls. On a busy
> server, what can happen is that the whole OS write cache fills with dirty
> data--none of which is written out to disk because of the high kernel
> dirty-memory thresholds. Then the database makes the fsync call, and
> suddenly the OS wants to flush 2-6GB of data straight to disk. Without
> that background trickle, you now have a flood that only the highest-end
> disk controller or a backing-store full of SSDs or PCIe NVRAM could ever
> hope to absorb.
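For readers wanting the knobs involved, a sketch (values are illustrative,
not tuning advice):

    # postgresql.conf -- illustrative values only
    checkpoint_timeout = 10min
    checkpoint_segments = 32             # 9.x-era setting; fewer, larger checkpoints
    checkpoint_completion_target = 0.9   # spread the write phase over ~90% of the interval
    log_checkpoints = on                 # log write/sync/total times per checkpoint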
> That's not entirely surprising. The problem with having lots of memory is...
> that you have lots of memory. The operating system likes to cache, and this
> includes writes. Normally this isn't a problem, but with 48GB of RAM, the
> defaults (for CentOS 5.5 in particular) are to use up to 40% of that for
> dirty-write caching (vm.dirty_ratio = 40) before forcing it out to disk.
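If that is the culprit, the usual mitigation is to lower the kernel's
dirty-memory thresholds so writeback starts far earlier; a sketch for a
RHEL/CentOS 5-era kernel (values are a starting point, not a recommendation):

    # /etc/sysctl.conf
    vm.dirty_background_ratio = 1   # kick off background writeback almost immediately
    vm.dirty_ratio = 10             # block writers long before 40% of 48GB is dirty
    # apply with: sysctl -p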
>> * Allow CLUSTER to sort the table rather than scanning the index
>> when it seems likely to be cheaper (Leonardo Francalanci)
>
> Looks like I owe Leonardo Francalanci a pizza.
Well, the patch started from work by Gregory Stark, and Tom fixed
a nasty bug; but I'll happily take the pizza.
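For anyone following along, the syntax is unchanged; the improvement is that
the planner may now choose a seqscan-plus-sort under the hood (table and
index names below are made up):

    -- Rewrites the table in index order; since 9.1 this may be done
    -- by sorting rather than by an index scan when that looks cheaper.
    CLUSTER orders USING orders_created_at_idx;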
>> Hash indexes have been improved since 2005 - their performance was
>> improved quite a bit in 9.0. Here's a more recent analysis:
>
>> http://www.depesz.com/index.php/2010/06/28/should-you-use-hash-index/
>
> The big picture though is that we're not going to remove hash indexes,
> even if their remaining limitations make them a niche choice today.
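For reference, creating one is trivial; the big caveat in this era is that
hash indexes are not WAL-logged, so they aren't crash-safe and don't
replicate (names below are made up):

    -- Equality lookups only; no ordering or range scans.
    CREATE INDEX users_email_hash_idx ON users USING hash (email);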
>1. How does database size affect insert performance?
>2. Why does number of written buffers increase when database size grows?
It might be related to the indexes: index size affects insert performance,
and once the indexes no longer fit in cache, each insert incurs random
disk reads.
>3. How can I further analyze this problem?
Try without indexes?
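A minimal way to test that hypothesis from psql (table and index names are
placeholders):

    \timing on
    -- Time a batch with the index in place...
    INSERT INTO big_table SELECT * FROM staging_batch;
    -- ...then the same batch without it.
    DROP INDEX big_table_some_idx;
    INSERT INTO big_table SELECT * FROM staging_batch;
    -- Recreate it afterwards without blocking writers:
    CREATE INDEX CONCURRENTLY big_table_some_idx ON big_table (some_col);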
> temp tables are not wal logged or
> synced. Periodically they can be flushed to a permanent table.
What do you mean by "periodically they can be flushed to
a permanent table"? Just doing

    insert into tabb select * from temptable

or is there a proper command specific to temporary tables?
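As far as I know there is no dedicated command; the usual pattern is exactly
that insert, followed by emptying the temp table, ideally in one transaction
(table names are placeholders):

    BEGIN;
    INSERT INTO perm_table SELECT * FROM temptable;
    TRUNCATE temptable;   -- cheap, and temp tables aren't WAL-logged anyway
    COMMIT;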
>>> I think this may be caused by GIN's habit of queuing index insertions
>>> until it's accumulated a reasonable-size batch:
So is it normal, in this case, that it takes 21s to sync 365 buffers?
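The queuing referred to is GIN's fastupdate pending list, which is flushed
in bulk (during VACUUM, or when the list outgrows work_mem in this era). If
those flush spikes are the problem, the behaviour can be disabled per index,
trading them for slower individual insertions; the index name below is made up:

    ALTER INDEX docs_fts_gin_idx SET (fastupdate = off);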
> 2010-10-21 16:39:15 CEST LOG: checkpoint complete: wrote 365 buffers
> (11.9%); 0 transaction log file(s) added, 0 removed, 3 recycled;
> write=0.403 s, sync=21.312 s, total=21.829 s
I'm no expert, but isn't 21s an awfully long time to sync 365 buffers?
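For scale: 365 buffers × 8 kB is only about 2.9 MB, so a 21 s sync is an
effective flush rate of roughly 140 kB/s. That strongly suggests the fsync
calls are also waiting on a large backlog of OS-cached dirty data (as
described earlier in the thread), not just those 365 buffers.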
> > does it use a BBU?)
Sorry, this was supposed to read "do you have cache on the controller"; of
course a battery alone can't change performance... but you got the point anyway.
> We are using PostgreSQL for storing data and full-text search indexes
> for the website of a daily newspaper. We are very happy overall with the
> results, but we have one "weird" behaviour that we would like to solve.
I think there's a lot of missing info worth knowing:
1) checkpoints -- their settings, and whether they line up with the slow
spells (see the sketch below for how to capture that)
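A minimal way to capture that, assuming access to postgresql.conf and
superuser:

    # postgresql.conf: timestamp every checkpoint in the server log
    log_checkpoints = on

    -- plus a coarse view of buffer-write activity:
    SELECT checkpoints_timed, checkpoints_req,
           buffers_checkpoint, buffers_backend
    FROM pg_stat_bgwriter;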