"Guillaume Smet" <[EMAIL PROTECTED]> wrote
> [EMAIL PROTECTED] root]# iostat 10
>
> Device:     tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
> sda        7.20         0.00        92.00          0        920
> sda1       0.00         0.00         0.00          0          0
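For reference, numbers like these come from sampling iostat while a query
runs; a minimal sketch (database and table names are placeholders):

$ iostat 10                                          # terminal 1: per-device throughput every 10s
$ psql -d mydb -c "SELECT count(1) FROM bigtable;"   # terminal 2: drive a sequential scan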
Luke Lonergan wrote:
So that leaves the question - why not more than 64% of the I/O scan rate?
And why is it a flat 64% as the I/O subsystem increases in speed from
333-400MB/s?
It might be interesting to see what effect reducing the CPU consumption
entailed by the count aggregation has ...
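A sketch of one way to measure that effect (database and table names are
placeholders): EXPLAIN ANALYZE times the plan with the aggregate, while
vmstat in a second terminal shows whether the CPU or the disk saturates
during the scan:

$ psql -d mydb -c "EXPLAIN ANALYZE SELECT count(1) FROM bigtable;"
$ vmstat 1        # watch the us/sy CPU columns against bi (blocks in)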
Alan,
Looks like Postgres gets sensible scan rate scaling as the filesystem speed
increases, as shown below. I'll drop my 120MB/s observation - perhaps CPUs
got faster since I last tested this.
The scaling looks like 64% of the I/O subsystem speed is available to the
executor - so as the I/O subsystem increases in speed ...
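For concreteness, that ratio predicts the executor's ceiling directly:

  0.64 * 333 MB/s ~= 213 MB/s
  0.64 * 400 MB/s ~= 256 MB/s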
On Mon, Nov 21, 2005 at 10:14:29AM -0800, Luke Lonergan wrote:
This has partly been a challenge to get others to post their results.
You'll find that people respond better if you don't play games with
them.
Alan,
Unless noted otherwise, all results posted are for block device readahead set
to 16M using "blockdev --setra=16384 <device>". All are using the
2.6.9-11 CentOS 4.1 kernel.
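For reference, a sketch of checking and setting readahead (the device name
is an example; blockdev counts in 512-byte sectors):

$ blockdev --getra /dev/sda          # current readahead, in 512-byte sectors
$ blockdev --setra 16384 /dev/sda    # raise it; run as root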
For those who don't have lmdd, here is a comparison of two results on an
ext2 filesystem: ...
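lmdd comes from the lmbench suite and is invoked much like dd, but reports
throughput itself; a sketch of a comparable pair of runs (the path and count
are the ones used elsewhere in this thread; per lmdd's documentation,
if=internal/of=internal generate and discard data internally, and sync=1
includes the final sync in the timing):

$ lmdd if=internal of=/fidb1/bigfile bs=8k count=800000 sync=1   # write test
$ lmdd if=/fidb1/bigfile of=internal bs=8k count=800000          # read test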
On Sat, 2005-11-19 at 06:29, Alex Wang wrote:
> Great information. I didn't know that an update is equal to delete+insert in
> Postgres. I would be more careful in designing the database access method in
> this case.
Just make sure you have regular vacuums scheduled (or run them from
within your app).
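A minimal sketch of scheduling that from cron (the database name and time
are examples):

$ crontab -e
# m h dom mon dow  command
0 3 * * *  vacuumdb --analyze mydb

PostgreSQL 8.1 also ships integrated autovacuum (autovacuum = on in
postgresql.conf), which can replace hand-scheduled vacuums for many
workloads.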
Luke,
it's time to back yourself up with some numbers. You're claiming the
need for a significant rewrite of portions of postgresql and you haven't
done the work to make that case.
You've apparently made some mistakes in the use of dd to benchmark a
storage system. Use lmdd, and umount the filesystem between runs ...
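The reason for the umount: remounting empties the OS page cache, so a read
test measures the disks rather than RAM. A sketch, reusing the file path
from elsewhere in the thread:

# umount /fidb1 && mount /fidb1                  # as root: drop cached pages for that fs
$ time dd if=/fidb1/bigfile of=/dev/null bs=8k   # now a genuine cold read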
Tom,
On 11/21/05 6:56 AM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> "Luke Lonergan" <[EMAIL PROTECTED]> writes:
>> OK - slower this time:
>
>> We've seen between 110MB/s and 120MB/s on a wide variety of fast CPU
>> machines with fast I/O subsystems that can sustain 250MB/s+ using dd, but
>> which all are capped at 120MB/s when doing sequential scans with different
>> versions of Postgres.
Alan,
On 11/21/05 6:57 AM, "Alan Stange" <[EMAIL PROTECTED]> wrote:
> $ time dd if=/dev/zero of=/fidb1/bigfile bs=8k count=800000
> 800000+0 records in
> 800000+0 records out
>
> real    0m13.780s
> user    0m0.134s
> sys     0m13.510s
>
> Oops. I just wrote 470MB/s to a file system that has ...
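The page cache is the culprit: dd returns as soon as the data is buffered
in RAM. Forcing the sync into the timed region gives an honest number; a
sketch:

$ time sh -c 'dd if=/dev/zero of=/fidb1/bigfile bs=8k count=800000 && sync'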
Would it be worth first agreeing on a common set of criteria to
measure? I see many data points going back and forth but not much
agreement on what's worth measuring and how to measure it.
I'm not necessarily trying to herd cats, but it sure would be swell to
have the several knowledgeable minds ...
On Mon, Nov 21, 2005 at 02:01:26PM -0500, Greg Stark wrote:
I also fear that heading in that direction could push Postgres even further
from the niche of software that works fine even on low end hardware into the
realm of software that only works on high end hardware. It's already suffering
a bit from that.
Greg Stark wrote:
> I also fear that heading in that direction could push Postgres even further
> from the niche of software that works fine even on low end hardware into the
> realm of software that only works on high end hardware. It's already suffering
> a bit from that.
What's high end hardware?
Alan Stange <[EMAIL PROTECTED]> writes:
> The point you're making doesn't match my experience with *any* storage or
> program I've ever used, including postgresql. Your point suggests that the
> storage system is idle and that postgresql is broken because it isn't able
> to use the resources ...
"Luke Lonergan" <[EMAIL PROTECTED]> writes:
> OK - slower this time:
> We've seen between 110MB/s and 120MB/s on a wide variety of fast CPU
> machines with fast I/O subsystems that can sustain 250MB/s+ using dd, but
> which all are capped at 120MB/s when doing sequential scans with different
> versions of Postgres.
Luke Lonergan wrote:
OK - slower this time:
We've seen between 110MB/s and 120MB/s on a wide variety of fast CPU
machines with fast I/O subsystems that can sustain 250MB/s+ using dd, but
which all are capped at 120MB/s when doing sequential scans with different
versions of Postgres.
Postgresql ...
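A sketch of how such a scan rate can be measured (database and table names
are placeholders; pg_relation_size() is new in 8.1):

$ psql -d mydb -c "SELECT pg_relation_size('bigtable');"   # table size in bytes
$ time psql -d mydb -c "SELECT count(1) FROM bigtable;"
# MB/s = (size in bytes / 1e6) / elapsed seconds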
Alan,
On 11/19/05 8:43 PM, "Alan Stange" <[EMAIL PROTECTED]> wrote:
> Device:     tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
> sdd      343.73    175035.73       277.55    5251072       8326
>
> while doing a select count(1) on the same large table as before.
> Subsequent ...
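For scale: the 175035.73 in the kB_read/s column is roughly 175MB/s of
sustained sequential read feeding that count(1).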