SC847 36-drive config, WD RE3 1TB drives, Areca and LSI HBAs. 3 or so drives
would completely hang under any kind of decent load.
Replaced with LSI 1068 and Chenbro SAS expanders and replaced the Supermicro
backplane with the -A version (direct port) and it's been ripping along for well
over a year.
On Aug 5, 2011, at 8:55 PM, Edward Ned Harvey wrote:
>
> In any event... You need to do something like this:
> installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
> (substitute whatever device & slice you have used for rpool)
That did the trick, thanks.
Out of curiosity, d
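As a rough sketch (the device names c1t0d0s0 and c1t1d0s0 are illustrative, not from this thread): on a mirrored rpool the same step is typically repeated for each half of the mirror so either disk can boot:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0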
- Forwarded message from Gordan Bobic -
From: Gordan Bobic
Date: Sat, 06 Aug 2011 21:37:30 +0100
To: vser...@list.linux-vserver.org
Subject: Re: [vserver] hybrid zfs pools as iSCSI targets for vserver
Reply-To: vser...@list.linux-vserver.org
- Forwarded message from "John A. Sullivan III" -
From: "John A. Sullivan III"
Date: Sat, 06 Aug 2011 16:30:04 -0400
To: vser...@list.linux-vserver.org
Subject: Re: [vserver] hybrid zfs pools as iSCSI targets for vserver
Reply-To: vser...@list.linux-vserver.org
> I'm using 4 x WD RE3 1TB drives with a Supermicro X7SB3 mobo with a
> builtin LSI 1068E controller and a CSE-SAS-833TQ SAS backplane.
>
> Have run ZFS with both Solaris and FreeBSD without a problem for a
> couple years now. Had one drive go bad, but it was caught early by
> running periodic scrubs.
On Sat, Aug 06, 2011 at 06:45:05PM +0200, Roy Sigurd Karlsbakk wrote:
> Have anyone here used WD drives with LSI controllers (3801/3081/9211)
> with Super Micro machines? Any success stories?
I'm using 4 x WD RE3 1TB drives with a Supermicro X7SB3 mobo with a
builtin LSI 1068E controller and a CSE-SAS-833TQ SAS backplane.
> If I'm not mistaken, a 3-way mirror is not
> implemented behind the scenes in
> the same way as a 3-disk raidz3. You should use a
> 3-way mirror instead of a
> 3-disk raidz3.
RAIDZ2 requires at least 4 drives, and RAIDZ3 requires at least 5 drives. But,
yes, a 3-way mirror is implemented totally differently from a raidz vdev.
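As a rough sketch of the two layouts being contrasted (the pool name "tank" and the cXtYd0 device names are made up):
# 3-way mirror: every block is stored in full on all three disks
zpool create tank mirror c0t0d0 c0t1d0 c0t2d0
# raidz2 across four disks: data plus two parity columns per block
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0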
On Sat, 6 Aug 2011, Alexander Lesle wrote:
Those using mirrors or raidz1 are best advised to perform periodic
scrubs. This helps avoid future media read errors and also helps
flush out failing hardware.
And what is your suggestion for scrubbing a mirror pool?
Once per month, every 2 weeks, every week?
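One hedged way to automate whatever interval you settle on (assuming root's crontab and a pool named tank; the monthly schedule is only an example, not a recommendation from this thread):
# scrub the pool "tank" at 03:00 on the 1st of every month
0 3 1 * * /usr/sbin/zpool scrub tank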
On Sat, 6 Aug 2011, Rob Cohen wrote:
Perhaps you are saying that they act like stripes for bandwidth purposes, but
not for read ops/sec?
Exactly.
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
> How much time needs the thread opener with his config?
> > Technical Specs:
> > 216x 3TB 7k3000 HDDs
> > 24x 9 drive RAIDZ3
>
> I suggest a resilver needs weeks, and the chance that a second or
> third HD crashes in that time is high. Murphy's Law.
With a full pool, perhaps a couple of weeks, but unl
> Might this be the SATA drives taking too long to reallocate bad
> sectors? This is a common problem "desktop" drives have, they will
> stop and basically focus on reallocating the bad sector as long as it
> takes, which causes the raid setup to time out the operation and flag
> the drive as failed.
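On drives that still expose SCT Error Recovery Control (the TLER-style knob), smartmontools can show and cap the recovery time. A rough sketch; /dev/sdX and the 7-second values are illustrative, and many desktop models ignore or reject the setting:
# show the current SCT ERC read/write recovery limits
smartctl -l scterc /dev/sdX
# cap error recovery at 7.0 seconds (values are in tenths of a second)
smartctl -l scterc,70,70 /dev/sdX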
Hello Rob Cohen and List,
On August 06, 2011 at 17:32 you wrote in [1]:
> In this case, RAIDZ is at least 8x slower to resilver (assuming CPU
> and writing happen in parallel). In the mean time, performance for
> the array is severely degraded for RAIDZ, but not for mirrors.
> Aside from resilvering
Upgrading to hacked N36L BIOS seems to have done the trick:
eugen@nexenta:~$ zpool status tank
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
Hello Bob Friesenhahn and List,
On August 06, 2011 at 18:34 you wrote in [1]:
> Those using mirrors or raidz1 are best advised to perform periodic
> scrubs. This helps avoid future media read errors and also helps
> flush out failing hardware.
And what is your suggestion for scrubbing a mirror pool?
This might be related to your issue:
http://blog.mpecsinc.ca/2010/09/western-digital-re3-series-sata-drives.html
On Saturday, August 6, 2011, Roy Sigurd Karlsbakk wrote:
>> In my experience, SATA drives behind SAS expanders just don't work.
>> They "fail" in the manner you describe, sooner or later. Use SAS and be happy.
On Aug 6, 2011, at 9:56 AM, Roy Sigurd Karlsbakk wrote:
>> In my experience, SATA drives behind SAS expanders just don't work.
>> They "fail" in the manner you
>> describe, sooner or later. Use SAS and be happy.
>
> Funny thing is Hitachi and Seagate drives work stably, whereas WD drives tend
> to fail rather quickly.
Thanks for clarifying.
If a block is spread across all drives in a RAIDZ group, and there are no
partial block reads, how can each drive in the group act like a stripe? Many
RAID5&6 implementations can do partial block reads, allowing for parallel
random reads across drives (as long as there a
WD's drives have gotten better the last few years but their quality is still
not very good. I doubt they test their drives extensively for heavy duty server
configs, particularly since you don't see them inside any of the major server
manufacturers' boxes.
Hitachi in particular does well in mas
> In my experience, SATA drives behind SAS expanders just don't work.
> They "fail" in the manner you
> describe, sooner or later. Use SAS and be happy.
Funny thing is Hitachi and Seagate drives work stably, whereas WD drives tend
to fail rather quickly
Vennlige hilsener / Best regards
roy
On Aug 6, 2011, at 9:45 AM, Roy Sigurd Karlsbakk wrote:
> Hi all
>
> We have a few servers with WD Black (and some green) drives on Super Micro
> systems. We've seen both drives work well with direct attach, but with LSI
> controllers and Super Micro's SAS expanders, well, that's another story.
Hi all
We have a few servers with WD Black (and some green) drives on Super Micro
systems. We've seen both drives work well with direct attach, but with LSI
controllers and Super Micro's SAS expanders, well, that's another story. With
those SAS expanders, we've seen numerous drives being kicked out of the pool.
On Sat, 6 Aug 2011, Rob Cohen wrote:
Can RAIDZ even do a partial block read? Perhaps it needs to read
the full block (from all drives) in order to verify the checksum.
If so, then RAIDZ groups would always act like one stripe, unlike
RAID5/6.
ZFS does not do partial block reads/writes. It always reads the full block so
that the checksum can be verified.
On Sat, 6 Aug 2011, Rob Cohen wrote:
I may have RAIDZ reading wrong here. Perhaps someone could clarify.
For a read-only workload, does each RAIDZ drive act like a stripe,
similar to RAID5/6? Do they have independent queues?
They act like a stripe like in RAID5/6.
It would seem that there is no escaping read/modify/write operations for
sub-block writes.
On Sat, 6 Aug 2011, Orvar Korvar wrote:
Ok, so mirrors resilver faster.
But, it is not uncommon that another disk shows problems during
resilver (for instance r/w errors); this scenario would mean your
entire raid is gone, right? If you are using mirrors, and one disk
crashes and you start resilvering, and then the other disk shows r/w errors?
> I may have RAIDZ reading wrong here. Perhaps someone
> could clarify.
>
> For a read-only workload, does each RAIDZ drive act
> like a stripe, similar to RAID5/6? Do they have
> independent queues?
>
> It would seem that there is no escaping
> read/modify/write operations for sub-block writes
RAIDZ has to rebuild data by reading all drives in the group, and
reconstructing from parity. Mirrors simply copy a drive.
Compare 3TB mirrors vs. 9x3TB RAIDZ2.
Mirrors:
  Read 3TB
  Write 3TB
RAIDZ2:
  Read 24TB
  Reconstruct data on CPU
  Write 3TB
In this case, RAIDZ is at least 8x slower to resilver (assuming CPU
and writing happen in parallel). In the meantime, performance for the
array is severely degraded for RAIDZ, but not for mirrors.
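For context, a minimal sketch of what kicks off either kind of resilver (the pool and device names are made up):
# swap the failed disk for a replacement; ZFS starts the resilver automatically
zpool replace tank c0t5d0 c0t6d0
# the "scan:" line shows resilver progress and an estimated completion time
zpool status tank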
I may have RAIDZ reading wrong here. Perhaps someone could clarify.
For a read-only workload, does each RAIDZ drive act like a stripe, similar to
RAID5/6? Do they have independent queues?
It would seem that there is no escaping read/modify/write operations for
sub-block writes, forcing the RAIDZ group to act like a single stripe.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Orvar Korvar
>
> Ok, so mirrors resilver faster.
>
> But, it is not uncommon that another disk shows problems during resilver (for
> instance r/w errors), this scenario would mean your entire raid is gone, right?
Shouldn't the choice of RAID type also
be based on the i/o requirements?
Anyway, with RAID-10, even a second
failed disk is not catastrophic, so long
as it is not the counterpart of the first
failed disk, no matter the no. of disks.
(With 2-way mirrors.)
But that's why we do backups, right?
Mark
Ok, so mirrors resilver faster.
But, it is not uncommon that another disk shows problems during resilver (for
instance r/w errors), this scenario would mean your entire raid is gone, right?
If you are using mirrors, and one disk crashes and you start resilvering, and
then the other disk shows r/w errors?