On Fri, Jan 06, 2006 at 09:00:06AM -0500, Ian Westmacott wrote:
> We use RAID50 instead of RAID0 for the tables for some fault-tolerance.
> We use RAID0 for the WAL for performance.
> I'm missing the implication of the question...
If you have the WAL on RAID 0 you have no fault tolerance, regardless of
what you do with the tables.
Ian Westmacott <[EMAIL PROTECTED]> writes:
> If the WAL is lost, can you lose more than the data since the last
> checkpoint?
The problem is that you might have partially-applied actions since the
last checkpoint, rendering your database inconsistent; then *all* the
data is suspect if not actually corrupt.
On Fri, 2006-01-06 at 10:37 -0600, Jim C. Nasby wrote:
> The problem is that if you lose WAL or the data, you've lost everything.
> So you might as well use raid0 for the data if you're using it for WAL.
> Or switch WAL to raid1. Actually, a really good controller *might* be
> able to do a good job [...]
On Wed, Jan 04, 2006 at 11:00:38AM -0500, Ian Westmacott wrote:
> The WAL is a 2-spindle (SATA) RAID0 with its own controller (ext3).
> The tables are on a 10-spindle (SCSI) RAID50 with dual U320
> controllers (XFS). This is overkill for writing and querying the data,
> but we need to constantly [...]
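In practice, a "WAL on its own controller" setup like the one quoted usually means the pg_xlog directory lives on the dedicated array. A minimal sketch of that layout, assuming a stock data directory and a hypothetical mount point /mnt/wal-raid0 (neither path is from the thread):

    # stop the server, move the WAL directory, symlink it back
    pg_ctl -D /var/lib/pgsql/data stop
    mv /var/lib/pgsql/data/pg_xlog /mnt/wal-raid0/pg_xlog
    ln -s /mnt/wal-raid0/pg_xlog /var/lib/pgsql/data/pg_xlog
    pg_ctl -D /var/lib/pgsql/data start
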
On Wed, 2006-01-04 at 09:29 -0500, Ron wrote:
> 2B is a lot of inserts. If you had to guess,
> what do you think is the maximum number of inserts you could do in a day?
It seems we are pushing it there. We intend to scale much further, but
at this point our plan is to distribute.
On 1/4/06, Steve Eckmann <[EMAIL PROTECTED]> wrote:
> Thanks, Steinar. I don't think we would really run with fsync off, but
> I need to document the performance tradeoffs. You're right that my
> explanation was confusing; probably because I'm confused about how to
> use COPY! I could batch multiple INSERT [...]
Steve Eckmann <[EMAIL PROTECTED]> writes:
> Thanks for the suggestion, Tom. Yes, I think I could do that. But I
> thought what I was doing now was effectively the same, because the
> PostgreSQL 8.0.0 Documentation says (section 27.3.1): "It is allowed to
> include multiple SQL commands (separated by semicolons) in the command
> string." [...]
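For illustration, a minimal sketch of the batching that documentation passage permits, as seen through libpq; the table, values, and connection string are hypothetical, not from the thread. All the commands travel in one network round trip and, per the same docs section, run as a single transaction unless the string contains explicit BEGIN/COMMIT:

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=logdb");   /* hypothetical DB */
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        /* several INSERTs in one command string: one round trip */
        PGresult *res = PQexec(conn,
            "INSERT INTO events VALUES (1, 'alpha');"
            "INSERT INTO events VALUES (2, 'beta');"
            "INSERT INTO events VALUES (3, 'gamma');");
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "batch failed: %s", PQerrorMessage(conn));

        PQclear(res);
        PQfinish(conn);
        return 0;
    }
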
On Wed, 2006-01-04 at 09:29 -0500, Ron wrote:
2B is a lot of inserts. If you had to guess,
what do you think is the maximum number of inserts you could do in a day?
How large is each record being inserted?
How much can you put in a COPY and how many COPYs
can you put into a transaction?
What values are you using for bgwriter* and checkpoint*?
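For reference, a sketch of the knobs that question is asking about, with hypothetical values for an 8.0-era server under heavy insert load (not the participants' actual settings):

    # postgresql.conf excerpt -- illustrative values only
    checkpoint_segments = 64   # fewer, larger checkpoints under write load
    checkpoint_timeout  = 900  # seconds between forced checkpoints
    wal_buffers         = 64   # 8kB pages of WAL buffered before write
    bgwriter_delay      = 200  # ms between background-writer rounds
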
Ian Westmacott wrote:
We have a similar application that's doing upwards of 2B inserts
per day. We have spent a lot of time optimizing this, and found the
following to be most beneficial:
1) use COPY (BINARY if possible; see the sketch below)
2) don't use triggers or foreign keys
3) put WAL and tables on different spindles (channels if possible)
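As a minimal sketch of point 1: bulk loading through libpq's COPY interface, in text format for brevity (COPY BINARY uses the same calls with a binary payload). The events(id, msg) table and the helper name are hypothetical:

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    /* assumes 'conn' is an open PGconn, e.g. from PQconnectdb() */
    static int bulk_load(PGconn *conn)
    {
        PGresult *res = PQexec(conn, "COPY events (id, msg) FROM STDIN");
        if (PQresultStatus(res) != PGRES_COPY_IN) {
            fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
            PQclear(res);
            return -1;
        }
        PQclear(res);

        /* text-format COPY: tab-separated columns, newline-ended rows */
        const char *rows = "1\talpha\n2\tbeta\n3\tgamma\n";
        if (PQputCopyData(conn, rows, (int) strlen(rows)) != 1 ||
            PQputCopyEnd(conn, NULL) != 1) {
            fprintf(stderr, "COPY data failed: %s", PQerrorMessage(conn));
            return -1;
        }

        res = PQgetResult(conn);   /* final status of the COPY command */
        int ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
        PQclear(res);
        return ok ? 0 : -1;
    }

One COPY per batch amortizes parsing and per-statement overhead across many rows, which is why it tops the list above.
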
On Tue, 3 Jan 2006, Tom Lane wrote:
> Steve Eckmann <[EMAIL PROTECTED]> writes:
> > We also found that we could improve MySQL performance significantly
> > using MySQL's "INSERT" command extension allowing multiple value-list
> > tuples in a single command; the rate for MyISAM tables improved to
> > about 2600 objects/second. [...]
On Tue, Jan 03, 2006 at 04:44:28PM -0700, Steve Eckmann wrote:
> Are there general guidelines for tuning the PostgreSQL server for this kind
> of application? The suggestions I've found include disabling fsync
> (done), [...]
Are you sure you really want this? The results could be catastrophic in
case of a crash.
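A sketch of the setting being warned about (hypothetical postgresql.conf excerpt); with fsync off, commits no longer wait for WAL to reach disk, so the failure mode is cluster corruption, not just a few lost recent transactions:

    fsync = off   # fast, but an OS crash or power failure can corrupt
                  # the entire cluster; benchmarking use only
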
Steve Eckmann <[EMAIL PROTECTED]> writes:
> We also found that we could improve MySQL performance significantly
> using MySQL's "INSERT" command extension allowing multiple value-list
> tuples in a single command; the rate for MyISAM tables improved to
> about 2600 objects/second. PostgreSQL doesn't [...]
On Tue, Jan 03, 2006 at 04:44:28PM -0700, Steve Eckmann wrote:
I have questions about how to improve the write performance of PostgreSQL for
logging data from a real-time simulation. We found that MySQL 4.1.3 could log
about 1480 objects/second using MyISAM tables or about 1225 objects/second
using InnoDB tables, but PostgreSQL 8.0.3 could log only about 5[...]