Hi,
On 1 Jun 2007, at 01:39, Steinar H. Gunderson wrote:
On Wed, May 30, 2007 at 12:41:46AM -0400, Jonah H. Harris wrote:
> Yeah, I've never seen a way to RAID-1 more than 2 drives either.
pannekake:~> grep -A 1 md0 /proc/mdstat
md0 : active raid1 dm-20[2] dm-19[1] dm-18[0]
64128 blocks [3/3] [UUU]
It's not a big device, but I can assure you it
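The three-member array in the quoted /proc/mdstat output can be checked mechanically; a small Python sketch (the regex and device names are assumptions based on the quoted line, not a general mdstat parser):

```python
import re

def raid1_members(mdstat_line):
    """Return the component devices of an md array from a /proc/mdstat line.

    Members appear as "name[slot]", e.g. "dm-18[0]".
    """
    return re.findall(r"(\S+)\[\d+\]", mdstat_line)

line = "md0 : active raid1 dm-20[2] dm-19[1] dm-18[0]"
print(raid1_members(line))  # ['dm-20', 'dm-19', 'dm-18'] — a 3-way RAID1
```

Three members, confirming Linux md happily mirrors more than two devices.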
On 5/31/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
On Thu, May 31, 2007 at 01:28:58AM +0530, Rajesh Kumar Mallah wrote:
> i am still not clear what the best way is of throwing more
> disks into the system.
> do more stripes mean more performance (mostly)?
> also is there any rule of thumb about the best stripe size? (8k, 16k, 32k...)
It isn't that
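The striping question above can be made concrete; a minimal Python model of how a RAID0 stripe maps a logical byte offset onto its member disks (the chunk size and disk count are illustrative, not a recommendation):

```python
def stripe_target(offset, chunk_size, n_disks):
    """Map a byte offset to (disk index, offset within that disk) for a simple RAID0 stripe."""
    chunk = offset // chunk_size                  # which chunk the offset falls in
    disk = chunk % n_disks                        # chunks rotate round-robin across disks
    disk_offset = (chunk // n_disks) * chunk_size + offset % chunk_size
    return disk, disk_offset

# With a 64 KiB chunk on 4 disks, sequential I/O rotates across all spindles:
print([stripe_target(i * 65536, 65536, 4)[0] for i in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```

More stripe members mean a long sequential scan touches more spindles concurrently, which is where the "more stripes, more throughput" intuition comes from; the best chunk size depends on the workload's typical I/O size.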
Mark,
On 5/30/07 8:57 AM, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
> One part is corruption. Another is ordering and consistency. ZFS represents
> both RAID-style storage *and* journal-style file system. I imagine consistency
> and ordering is handled through journalling.
Yep, and versioning
Sorry for posting and disappearing.
i am still not clear what the best way is of throwing more
disks into the system.
do more stripes mean more performance (mostly)?
also is there any rule of thumb about the best stripe size? (8k, 16k, 32k...)
regds
mallah
On 5/30/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
"Michael Stone" <[EMAIL PROTECTED]> writes:
"Michael Stone" <[EMAIL PROTECTED]> writes:
> On Wed, May 30, 2007 at 07:06:54AM -0700, Luke Lonergan wrote:
>
> > Much better to get a RAID system that checksums blocks so that "good" is
> > known. Solaris ZFS does that, as do high end systems from EMC
> This is standard stuff, very well proven: try googling 'self healing zfs'.
The first hit on this search is a demo of ZFS detecting corruption of one of
the mirror pair using checksums, very cool:
http://www.opensolaris.org/os/community/zfs/demos/selfheal/;jsessionid=52508D464883F194061E341F
Oh by the way, I saw a nifty patch in the queue :
Find a way to reduce rotational delay when repeatedly writing last WAL page
Currently fsync of WAL requires the disk platter to perform a full
rotation to fsync again.
One idea is to write the WAL to different offsets that might reduce the rotational delay.
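The cost being described is easy to quantify; a small Python sketch using generic drive speeds (the RPM figures are illustrative, not from the thread):

```python
def max_fsyncs_per_second(rpm):
    """If each fsync of the same WAL page must wait one full platter rotation,
    the rotation time bounds the commit rate of a single WAL writer."""
    rotation_seconds = 60.0 / rpm   # one full platter rotation
    return 1.0 / rotation_seconds

for rpm in (7200, 10000, 15000):
    print(f"{rpm} rpm: at most {max_fsyncs_per_second(rpm):.0f} fsyncs/sec")
```

At 7200 rpm that is roughly 120 fsyncs per second, which is why rewriting the last WAL page at a different offset (or batching commits) can matter on spinning disks.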
On Wed, 30 May 2007 16:36:48 +0200, Luke Lonergan
<[EMAIL PROTECTED]> wrote:
I don't see how that's better at all; in fact, it reduces to
exactly the same problem: given two pieces of data which
disagree, which is right?
The one that matches the checksum.
- postgres tells OS "write
1:11 AM Eastern Standard Time
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] setting up raid10 with more than 4 drives
On Wed, May 30, 2007 at 10:36:48AM -0400, Luke Lonergan wrote:
I don't see how that's better at all; in fact, it reduces to
exactly the same problem: given two pieces of data which
disagree, which is right?
The one that matches the checksum.
And you know the checksum is good, how?
Mike Stone
> I don't see how that's better at all; in fact, it reduces to
> exactly the same problem: given two pieces of data which
> disagree, which is right?
The one that matches the checksum.
- Luke
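The checksum arbitration Luke describes can be sketched as a toy Python example, using zlib.crc32 purely as a stand-in checksum (ZFS actually stores per-block fletcher or SHA-256 checksums in the parent block pointer):

```python
import zlib

def pick_good_copy(copies, stored_checksum):
    """Given mirror copies that disagree, return the one matching the checksum
    stored alongside the data; None if no copy matches."""
    for data in copies:
        if zlib.crc32(data) == stored_checksum:
            return data
    return None

good = b"block contents"
bad = b"block cont\x00nts"        # simulated bit rot on one mirror
checksum = zlib.crc32(good)       # checksum written at the time of the original write

print(pick_good_copy([bad, good], checksum) == good)  # True
```

The point of the exchange: the checksum was computed when the block was written, so a later disagreement between mirrors is resolved by whichever copy still matches it.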
Hi Peter,
On 5/30/07 12:29 AM, "Peter Childs" <[EMAIL PROTECTED]> wrote:
> Good point, also if you had Raid 1 with 3 drives with some bit errors at least
> you can take a vote on whats right. Where as if you only have 2 and they
> disagree how do you know which is right other than pick one and hope...
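Peter's three-drive vote can be sketched as a toy model (hypothetical: Linux md RAID1 does not actually vote on mismatches, it reads from one mirror):

```python
from collections import Counter

def vote(copies):
    """Pick the value held by a strict majority of mirror copies.
    With 3 copies, a single corrupted copy is outvoted; with 2, a tie is unresolvable."""
    value, count = Counter(copies).most_common(1)[0]
    return value if count > len(copies) / 2 else None

print(vote([b"good", b"good", b"bad!"]))  # b'good' — the corrupted copy loses
print(vote([b"good", b"bad!"]))           # None — two disagreeing copies can't vote
```

This is exactly the asymmetry being discussed: the third mirror turns "pick one and hope" into a decidable majority.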
"Jonah H. Harris" <[EMAIL PROTECTED]> writes:
> On 5/29/07, Luke Lonergan <[EMAIL PROTECTED]> wrote:
>> AFAIK you can't RAID1 more than two drives, so the above doesn't make sense
>> to me.
Sure you can. In fact it's a very common backup strategy. You build a
three-way mirror and then when it co
* Peter Childs ([EMAIL PROTECTED]) wrote:
> Good point, also if you had Raid 1 with 3 drives with some bit errors at
> least you can take a vote on whats right. Where as if you only have 2 and
> they disagree how do you know which is right other than pick one and hope...
> But whatever it will be s
On 5/29/07, Luke Lonergan <[EMAIL PROTECTED]> wrote:
AFAIK you can't RAID1 more than two drives, so the above doesn't make sense
to me.
Yeah, I've never seen a way to RAID-1 more than 2 drives either. It
would have to be his first one:
D1 + D2 = MD0 (RAID 1)
D3 + D4 = MD1 ...
D5 + D6 = MD2 ..
Stephen,
On 5/29/07 8:31 PM, "Stephen Frost" <[EMAIL PROTECTED]> wrote:
> It's just more copies of the same data if it's really a RAID1, for the
> extra, extra paranoid. Basically, in the example above, I'd read it as
> "D1, D2, D5 have identical data on them".
In that case, I'd say it's a waste
* Luke Lonergan ([EMAIL PROTECTED]) wrote:
> Hi Rajesh,
>
> On 5/29/07 7:18 PM, "Rajesh Kumar Mallah" <[EMAIL PROTECTED]> wrote:
>
> > D1 raid1 D2 raid1 D5 --> MD0
> > D3 raid1 D4 raid1 D6 --> MD1
> > MD0 raid0 MD1 --> MDF (final)
>
> AFAIK you can't RAID1 more than two drives, so the above doesn't make sense to me.
Hi Rajesh,
On 5/29/07 7:18 PM, "Rajesh Kumar Mallah" <[EMAIL PROTECTED]> wrote:
> D1 raid1 D2 raid1 D5 --> MD0
> D3 raid1 D4 raid1 D6 --> MD1
> MD0 raid0 MD1 --> MDF (final)
AFAIK you can't RAID1 more than two drives, so the above doesn't make sense
to me.
- Luke
On 5/30/07, Luke Lonergan <[EMAIL PROTECTED]> wrote:
Stripe of mirrors is preferred to mirror of stripes for the best balance of
protection and performance.
nooo! i am not asking raid10 vs raid01. I am considering stripe of
mirrors only.
the question is how a larger number of disks is supposed to
Stripe of mirrors is preferred to mirror of stripes for the best balance of
protection and performance.
In the stripe of mirrors you can lose up to half of the disks and still be
operational. In the mirror of stripes, the most you could lose is two
drives. The performance of the two should be similar.
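The fault-tolerance comparison can be checked by enumeration; a Python sketch for the six-disk stripe-of-mirrors layout quoted earlier in the thread (D1+D2, D3+D4, D5+D6):

```python
from itertools import combinations

def survives(failed, pairs):
    """A stripe of mirrors survives as long as no mirror pair loses both its disks."""
    return all(not set(pair) <= set(failed) for pair in pairs)

pairs = [(1, 2), (3, 4), (5, 6)]   # D1+D2=MD0, D3+D4=MD1, D5+D6=MD2
disks = range(1, 7)

# Of the 15 possible two-disk failures, only the 3 that take out a whole
# mirror pair are fatal; any single-disk failure is always survivable.
fatal = [f for f in combinations(disks, 2) if not survives(f, pairs)]
print(len(fatal))                  # 3

# Best case, half the disks can fail (one from each pair) and the array lives:
print(survives((1, 3, 5), pairs))  # True
```

For a mirror of stripes the arithmetic inverts: any second failure that lands on the surviving stripe is fatal, which is why stripe of mirrors is the usual recommendation.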