Scott Marlowe writes:
Spares are placed in service one at a time.
Ah.. that's your point. I know that. :-)
You don't need 2 spares for
RAID 10, trust me.
We bought the machine with 8 drives. At one point we were considering RAID
5, then we decided to give RAID 10 a try. We have a similar
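To illustrate Scott's point: hot spares are a global pool and are promoted
one at a time, so RAID 10 does not force an even spare count. A hedged
sketch with Linux software RAID (mdadm; device names are invented, and
hardware controllers expose the same idea through their own tools):

    # 6-disk RAID 10 with a single hot spare -- spares need not come in pairs
    mdadm --create /dev/md0 --level=10 --raid-devices=6 \
          --spare-devices=1 /dev/sd[b-h]
    mdadm --detail /dev/md0   # the lone spare stands in for any failed member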
On Fri, 2006-04-14 at 15:09, Francisco Reyes wrote:
> Michael Stone writes:
>
> > I still don't follow that. Why would the RAID level matter? IOW, are you
> actually wanting 2 spares, or are you just stuck with that because you
> > need a factor of two disks for your mirrors?
>
RAID 10 needs
Michael Stone writes:
I still don't follow that. Why would the RAID level matter? IOW, are you
actually wanting 2 spares, or are you just stuck with that because you
need a factor of two disks for your mirrors?
RAID 10 needs pairs.. so we can either have no spares or 2 spares.
Mmm, it's a bi
On Fri, Apr 14, 2006 at 02:01:56PM -0400, Francisco Reyes wrote:
Michael Stone writes:
I guess the first question is why 2 hot spares?
Because we are using RAID 10
I still don't follow that. Why would the RAID level matter? IOW, are you
actually wanting 2 spares, or are you just stuck with
Jignesh,
> Don't get me wrong. As Luke mentioned it took a while to get the
> potential of PostgreSQL on Solaris and people like me start doing other
> complex workarounds in Solaris like "forcedirectio", etc. (Yeah I did a
> test, if you force fsync as wal_sync_method while on Solaris, then
>
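For readers who haven't met the workaround Jignesh mentions: forcedirectio
is a Solaris UFS mount option that bypasses the filesystem page cache, and
it was a common tuning trick for the WAL partition. A hedged sketch (device
and mount point are invented):

    # mount the WAL filesystem with direct I/O
    mount -F ufs -o forcedirectio /dev/dsk/c0t1d0s0 /pgwal
    # or switch an already-mounted filesystem in place
    mount -F ufs -o remount,forcedirectio /pgwal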
Michael Stone writes:
I guess the first question is why 2 hot spares?
Because we are using RAID 10
larger array with more spindles will outperform a smaller one with
fewer, regardless of RAID level (assuming a decent battery-backed
cache).
Based on what I have read, RAID 10 is supposed to
Tom Lane writes:
That will help not at all, if the problem is too-short transactions
as it sounds to be.
How about commit_delay?
You really need to pester the authors of bacula
to try to wrap multiple inserts per transaction.
Like any volunteer project, I am sure it's more an issue of resources
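To make Tom's point concrete: the cost being amortized is the fsync at each
commit, so the fix is batching, not buffer sizes. A minimal SQL sketch
(table and columns are invented stand-ins for bacula's catalog):

    -- one fsync per row: what bacula does today
    INSERT INTO file_record (jobid, path) VALUES (1, '/etc/passwd');
    INSERT INTO file_record (jobid, path) VALUES (1, '/etc/hosts');

    -- one fsync for the whole batch
    BEGIN;
    INSERT INTO file_record (jobid, path) VALUES (1, '/etc/passwd');
    INSERT INTO file_record (jobid, path) VALUES (1, '/etc/hosts');
    COMMIT;

As for commit_delay: it only groups commits from concurrent backends, so a
single bacula connection committing serially gets no benefit from it.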
Ian Westmacott <[EMAIL PROTECTED]> writes:
> That's what I feared, thanks. But then if I simplify things a bit,
> such that the row counts are quite good, and PG still chooses a
> worse plan, can I conclude anything about my configuration settings
> (like random_page_cost)?
Well, the other thing
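If the estimates are right and the plan is still wrong, the cost settings
can be tested per-session before touching postgresql.conf. A hedged sketch
(the query is a placeholder):

    SET random_page_cost = 2.0;   -- default is 4.0; lower favors index scans
    EXPLAIN ANALYZE SELECT * FROM t WHERE id = 42;   -- hypothetical query
    RESET random_page_cost;       -- back to the server default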
On Fri, 2006-04-14 at 12:13 -0400, Tom Lane wrote:
> Ian Westmacott <[EMAIL PROTECTED]> writes:
> > I have this query, where PG (8.1.2) prefers Merge Join over Hash Join
> > over Nested Loop. However, that preference order turns out to run from
> > slowest to fastest. I was hoping someone might be able to shed
Francisco Reyes <[EMAIL PROTECTED]> writes:
> I think I am going to try increasing wal_buffers
That will help not at all, if the problem is too-short transactions
as it sounds to be. You really need to pester the authors of bacula
to try to wrap multiple inserts per transaction. Or maybe find so
Ian Westmacott <[EMAIL PROTECTED]> writes:
> I have this query, where PG (8.1.2) prefers Merge Join over Hash Join
> over Nested Loop. However, that preference order turns out to run from
> slowest to fastest. I was hoping someone might be able to shed some light on
> why PG chooses the plans in this order, and
Marc Cousin writes:
If I remember correctly (I already discussed this with Kern Sibbald a while
ago), bacula does each insert in its own transaction: that's how the program
is written
Thanks for the info.
For now, I could only get good performance with bacula and postgresql when
disabling fsync
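For completeness, the setting Marc means is the one below; it trades crash
safety for speed, so it is only defensible when the database (like bacula's
catalog) can be rebuilt from elsewhere:

    # postgresql.conf -- DANGEROUS: an OS crash can corrupt the cluster
    fsync = off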
I have this query, where PG (8.1.2) prefers Merge Join over Hash Join
over Nested Loop. However, that preference order turns out to run from
slowest to fastest. I was hoping someone might be able to shed some light on
why PG chooses the plans in this order, and what I might do to
influence it otherwise. Thanks
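One standard diagnostic is to switch the preferred join methods off one at a
time and compare EXPLAIN ANALYZE output: the gap between estimated cost and
actual time shows where the planner's model goes wrong. A sketch (the query
is a stand-in for Ian's):

    SET enable_mergejoin = off;   -- hide the plan PG likes best
    EXPLAIN ANALYZE SELECT * FROM a JOIN b USING (id);
    SET enable_hashjoin = off;    -- now only the Nested Loop is left
    EXPLAIN ANALYZE SELECT * FROM a JOIN b USING (id);
    RESET enable_mergejoin;
    RESET enable_hashjoin;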
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> In my case it would be helpful to break the heap access numbers out
> between seqscans and index scans, since each of those represents very
> different access patterns. Would adding that be a mess?
Yes; it'd require more counters-per-table than we now keep
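What 8.1 does track per table is the number of scans of each kind, plus heap
block counters that lump both access patterns together -- which is exactly
the split Jim is asking for:

    -- needs the stats_row_level / stats_block_level settings turned on
    SELECT relname, seq_scan, idx_scan          -- how often each path ran
      FROM pg_stat_user_tables;
    SELECT relname, heap_blks_read, heap_blks_hit  -- one lump, not per scan type
      FROM pg_statio_user_tables;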
Hi Bruce,
I saw on this alias too that people assumed that the default
wal_sync_method was fsync on Solaris.
I would select fsync or fdsync as the default on Solaris. (I prefer
fsync as it is already highlighted as default in postgresql)
Another thing to improve the defaults on Solaris
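Either way, the method is easy to inspect and to pin down explicitly rather
than trusting the platform default; a sketch:

    -- in psql: see what this build chose as its default
    SHOW wal_sync_method;

    # in postgresql.conf: set it explicitly
    wal_sync_method = fsync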
Hi everyone,
I've seen my pg_toast tables becoming bigger
and bigger. After googling, I would like to raise my max_fsm_pages parameter to
prevent that kind of problem. So I'm wondering whether changing this parameter is
enough, and after that, how can I reduce the size of these tables? By doin
I hope I'm not going to say stupid things, but here's what I know (or I think
I know :) ) about bacula + postgresql
If I remember correctly (I already discussed this with Kern Sibbald a while
ago), bacula does each insert in its own transaction: that's how the program
is written, and of course i
On Thu, Apr 13, 2006 at 02:59:23PM -0400, Francisco Reyes wrote:
In RAID 10, would it matter that WAL is in the same RAID set?
Would it be better:
4 disks in RAID10 Data
2 disks RAID 1 WAL
2 hot spares
I guess the first question is why 2 hot spares? You don't have many
spindles, so you don't
Jim C. Nasby writes:
On Thu, Apr 13, 2006 at 02:59:23PM -0400, Francisco Reyes wrote:
In RAID 10, would it matter that WAL is in the same RAID set?
Would it be better:
4 disks in RAID10 Data
2 disks RAID 1 WAL
2 hot spares
Well, benchmark it with your app and find out, but generally speaking
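If the separate RAID 1 pair for WAL wins the benchmark, the usual way to use
it on 8.1 is to move pg_xlog and leave a symlink behind (paths invented; the
server must be stopped first):

    $ mv /var/lib/pgsql/data/pg_xlog /wal_disks/pg_xlog
    $ ln -s /wal_disks/pg_xlog /var/lib/pgsql/data/pg_xlog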
On 4/13/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
> On Thu, Apr 13, 2006 at 06:26:00PM -0700, patrick keshishian wrote:
> > $ dropdb dbname
> > $ createdb dbname
> > $ pg_restore -vsOd dbname dbname.DUMP
>
> That step is pointless, because the next pg_restore will create the
> schema for you anyway
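That is, the schema pass is already implied; a fresh database plus a single
plain restore does both halves (same flags as in the thread, minus the -s):

    $ dropdb dbname
    $ createdb dbname
    $ pg_restore -vOd dbname dbname.DUMP    # schema and data in one pass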
On 4/13/06, Tom Lane <[EMAIL PROTECTED]> wrote:
> "patrick keshishian" <[EMAIL PROTECTED]> writes:
> > With these settings and running:
> > pg_restore -vaOd dbname dbname.DUMP
>
> If you had mentioned you were using random nondefault switches, we'd
Random?
With all due respect, I did.
I specified