On Sun, Sep 28, 2008 at 9:08 PM, John Huttley <[EMAIL PROTECTED]> wrote:
> Ah yess... actually I can get the Kingston stuff locally.
> However at the moment I'm happily married and want to keep it that way!
>
> Everything is in pairs too. Actually it's a lot cheaper than when it first
> came out, but still a lot more than your corner shop DDR-2 stuff.
I've canned the db and got rid of my data.
I'm in the midst of doing some other benchmarking for a possible change
to the bacula database.
Loading up 1M records into a table of 60M records complete with indexes.
It's still going...
--john
Dan Langille wrote:
On Sep 28, 2008, at 10:01 PM, John Huttley wrote:
Greg Smith wrote:
On Mon, 29 Sep 2008, John Huttley wrote:
checkpoint_segments=16 is fine, going to 64 made no improvement.
You might find that it does *after* increasing shared_buffers. If
the buffer cache is really small, the checkpoints can't have very much
work to do, so their impact on performance is small.
Ah yess... actually I can get the Kingston stuff locally.
However at the moment I'm happily married and want to keep it that way!
Everything is in pairs too. Actually it's a lot cheaper than when it
first came out, but still
a lot more than your corner shop DDR-2 stuff.
--John
Scott Marlowe wrote:
On Sun, Sep 28, 2008 at 8:01 PM, John Huttley <[EMAIL PROTECTED]> wrote:
> Ahh bugger, I've just trashed my test setup.
> I've settled on 64MB shared memory since I've only got 1GB of RAM and the
> system impact of 256MB is severe.
> Also it uses FB-DIMMS which cost arm+leg+first born
http://www.c
Greg Smith wrote:
On Mon, 29 Sep 2008, John Huttley wrote:
checkpoint_segments=16 is fine, going to 64 made no improvement.
You might find that it does *after* increasing shared_buffers. If the
buffer cache is really small, the checkpoints can't have very much
work to do, so their impact on performance is small.
Thanks to everyone that responded.
I've done some benchmarking
checkpoint_segments=16 is fine, going to 64 made no improvement.
Using "update file set size=99" as a statement, but changing 99 on each
run.
With 32MB shared memory, time in sec, and leaving the system idle long
enough between runs.
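The benchmark method above, as a psql sketch (the `file` table and `size` column come from the thread; `\timing` is psql's client-side timer):

```sql
-- Enable psql's client-side timer, then force a rewrite of every row.
\timing on

-- Change the literal on each run; PostgreSQL rewrites every row version
-- either way under MVCC, but varying the value keeps runs comparable.
UPDATE file SET size = 99;
```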
Ahh! I've not dealt with that before. I'll look it up.
Thanks Tom.
Tom Lane wrote:
John Huttley <[EMAIL PROTECTED]> writes:
You are thinking of HOT?
I don't think it applies in the case of full table updates??
Sure, as long as there's enough free space on each page.
If you wanted t
I have had great success using FILLFACTOR on certain tables where big
updates like this occur, improving performance. It is still not as fast
as I would like, but there are significant gains. A big disk array won't
help you as much as it should -- yes it will be faster, but it will still be
ch
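A minimal sketch of that approach, assuming the thread's `file` table (the fillfactor value of 70 and the index name `file_pkey` are illustrative assumptions, not from the posts):

```sql
-- Reserve ~30% free space on each heap page so updated row versions can
-- land on the same page, enabling HOT updates on non-indexed columns.
ALTER TABLE file SET (FILLFACTOR = 70);

-- The setting applies only to newly written pages; rewriting the table
-- (CLUSTER takes an exclusive lock) applies it to existing data.
CLUSTER file USING file_pkey;
```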
John Huttley <[EMAIL PROTECTED]> writes:
> Scott Marlowe wrote:
>> was... was a part of the trade-offs.
> You are thinking of HOT?
> I don't think it applies in the case of full table updates??
Sure, as long as there's enough free space on each page.
If you wanted to make a table that was optim
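Whether updates actually take the HOT path can be checked afterwards in the statistics views (PostgreSQL 8.3 and later; table name from the thread):

```sql
-- n_tup_hot_upd counts updates that stayed on the same heap page and
-- skipped index maintenance; compare it against the total update count.
SELECT n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 'file';
```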
On Sat, Sep 27, 2008 at 4:33 PM, John Huttley <[EMAIL PROTECTED]> wrote:
>
> > > this is part of the trade-offs of MVCC.
>
> > was... was a part of the trade-offs.
>
> You are thinking of HOT?
> I don't think it applies in the case of full table updates??
Sure, you just need a table with plenty of free space on each page.
On Fri, Sep 26, 2008 at 5:03 PM, John Huttley <[EMAIL PROTECTED]> wrote:
> Hi Andrew,
> There are two problems.
> The first is that if there is a table with an index and an update is
> performed on a non-indexed field,
> the index is still re-indexed.
I assume you mean updated, not reindexed.
On Sat, 27 Sep 2008, John Huttley wrote:
I've got 32MB shared on a 1GB machine and 16 checkpoint segments.
I'll run some tests against 64 segments and see what happens.
Increase shared_buffers to 256MB as well. That combination should give
you much better performance with the type of update you're doing.
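As a postgresql.conf sketch, the combination being suggested (these are the values from the thread, for the poster's 1GB machine, not general-purpose settings):

```ini
# postgresql.conf -- 8.3-era settings discussed in this thread
shared_buffers = 256MB       # buffer cache; the poster had been running 32MB
checkpoint_segments = 64     # allow more WAL between checkpoints
```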
Hi Greg,
I've got 32MB shared on a 1GB machine and 16 checkpoint segments.
I'll run some tests against 64 segments and see what happens.
Your previous postings were extremely helpful wrt the MVCC issue.
I thank you!
-john
Greg Smith wrote:
On Fri, 26 Sep 2008, John Huttley wrote:
running "update file set perms='0664'" took about 10 mins
Hi Andrew,
There are two problems.
The first is that if there is a table with an index and an update is
performed on a non-indexed field,
the index is still re-indexed. This is part of the trade-offs of MVCC.
Apparently this is documented under 'MVCC' in the manual. It should be
documented u
Hi,
On Fri, Sep 26, 2008 at 07:24:55AM +1200, John Huttley wrote:
> I've just had an interesting encounter with the slow full table update
> problem that is inherent with MVCC
Quite apart from the other excellent observations in this thread, what
makes you think this is an MVCC issue exactly?
On Fri, 26 Sep 2008, John Huttley wrote:
running "update file set perms='0664'" took about 10 mins
What do you have checkpoint_segments and shared_buffers set to? If you
want something that's doing lots of updates to perform well, you need to
let PostgreSQL have a decent size chunk of memory to work with.
On Thursday 25 September 2008, John Huttley <[EMAIL PROTECTED]> wrote:
>
> Comments anyone?
Don't do full table updates? This is not exactly a news flash.
--
Alan
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.pos
On Thu, Sep 25, 2008 at 1:24 PM, John Huttley <[EMAIL PROTECTED]> wrote:
> I've just had an interesting encounter with the slow full table update
> problem that is inherent with MVCC
>
> The system is 64 bit linux with 2.6.25 kernel feeding scsi disks.
>
> the table is
>
> CREATE TABLE file (
> f