On Sat, Mar 20, 2010 at 1:35 PM, Richard Elling wrote:
> For those disinclined to click, data retention when mirroring wins over
> raidz when looking at the problem from the perspective of number of drives
> available. Why? Because 5+1 raidz survives the loss of any disk, but 3 sets
> of 2-wa
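To spell out the arithmetic behind the quoted claim, a rough sketch assuming
six equal drives arranged as either a 5+1 raidz1 or three 2-way mirrors:

    # of the 15 possible two-disk failures, only the 3 that take out both
    # halves of the same mirror lose data on the mirrored layout, while any
    # two-disk failure loses data on a 5+1 raidz1
    echo "two-disk failure combinations: $(( 6 * 5 / 2 ))"   # 15
    echo "fatal to 3 x 2-way mirrors:    3"
    echo "fatal to 5+1 raidz1:           15"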
Nah, the 8x2.5"-in-2 are $220, while the 5x3.5"-in-3 are $120. You can
get 4x3.5"-in-3 for $100, 3x3.5"-in-2 for $80, and even 4x2.5"-in-1 for
$65. ( http://www.addonics.com/products/raid_system/ae4rcs25nsa.asp )
The Cooler Master thing you linked to isn't a hot-swap module. It does
4-in-3, b
Whoops, Erik's links show I was wrong about my first point. Though those
5-in-3s are five times as expensive as the 4-in-3.
On Sat, Mar 20, 2010 at 22:46, Ethan wrote:
> I don't think you can fit five 3.5" drives in 3 x 5.25", but I have a
> number of coolermaster 4-in-3 modules, I recommend the
I don't think you can fit five 3.5" drives in 3 x 5.25", but I have a number
of coolermaster 4-in-3 modules, I recommend them:
http://www.amazon.com/-/dp/B00129CDGC/
On Sat, Mar 20, 2010 at 20:23, Geoff wrote:
> Thanks for your review! My SiI3114 isn't recognizing drives in Opensolaris
> so I'v
Geoff wrote:
Thanks for your review! My SiI3114 isn't recognizing drives in Opensolaris so
I've been looking for a replacement. This card seems perfect so I ordered one
last night. Can anyone recommend a cheap 3 x 5.25 ---> 5 3.5 enclosure I could
use with this card? The extra ports necess
On 03/19/10 19:07, zfs ml wrote:
What are peoples' experiences with multiple drive failures?
1985-1986. DEC RA81 disks. Bad glue that degraded at the disk's
operating temperature. Head crashes. No more need be said.
- Bill
To add my 0.2 cents...
I think starting/stopping scrub belongs to cron, smf, etc. and not to
zfs itself.
However, what would be nice to have is the ability to freeze/resume a
scrub and also to limit its rate of scrubbing.
One of the reasons is that when working in SAN environments one has to
tak
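For the cron side of that, a minimal sketch (pool name and schedule are
placeholders):

    # crontab entry: scrub pool "tank" every Sunday at 02:00
    0 2 * * 0 /usr/sbin/zpool scrub tank

Pausing/resuming and rate-limiting would still need support in ZFS itself;
cron can only start the scrub.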
Thanks for your review! My SiI3114 isn't recognizing drives in Opensolaris so
I've been looking for a replacement. This card seems perfect so I ordered one
last night. Can anyone recommend a cheap 3 x 5.25 ---> 5 3.5 enclosure I could
use with this card? The extra ports necessitate more driv
On Sat, Mar 20, 2010 at 5:36 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Sat, 20 Mar 2010, Tim Cook wrote:
>
>>
>> Funny (ironic?) you'd quote the UNIX philosophy when the Linux folks have
>> been running around since day one claiming the basic concept of ZFS flies
>> in the fa
On 21.03.2010 00:14, Erik Trimble wrote:
Richard Elling wrote:
I see this on occasion. However, the cause is rarely attributed to a bad
batch of drives. More common is power supplies, HBA firmware, cables,
Pepsi syndrome, or similar.
-- richard
Mmmm. Pepsi Syndrome. I take it this is similar to
Richard Elling wrote:
I see this on occasion. However, the cause is rarely attributed to a bad
batch of drives. More common is power supplies, HBA firmware, cables,
Pepsi syndrome, or similar.
-- richard
Mmmm. Pepsi Syndrome. I take it this is similar to the Coke addiction
many of my keyboa
On Sat, 20 Mar 2010, Eric Andersen wrote:
2. Taking into account the above, it's a great deal easier on the
pocketbook to expand two drives at a time instead of four at a
time. As bigger drives are always getting cheaper, I feel that I
have a lot more flexibility with mirrors when it comes
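For reference, growing a pool of mirrors two drives at a time is a single
zpool add of another mirror vdev; a minimal sketch with made-up device names:

    # start with two 2-way mirrors, later grow the pool by one more mirror
    zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
    zpool add tank mirror c0t4d0 c0t5d0
    # or grow an existing mirror in place by replacing both halves with
    # larger drives, one at a time
    zpool replace tank c0t0d0 c1t0d0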
On Sat, 20 Mar 2010, Tim Cook wrote:
Funny (ironic?) you'd quote the UNIX philosophy when the Linux folks have been
running around since day one claiming the basic concept of ZFS flies in the
face of that very concept. Rather than do one thing well, it's unifying two
things (file system and r
On 20.03.2010 23:00, Gary Gendel wrote:
I'm not sure I like this at all. Some of my pools take hours to scrub. I have
a cron job run scrubs in sequence... Start one pool's scrub and then poll
until it's finished, start the next and wait, and so on so I don't create too
much load and bring a
I went through this determination when setting up my pool. I decided to go
with mirrors instead of raidz2 after considering the following:
1. Drive capacity in my box. At most, I can realistically cram 10 drives in
my box and I am not interested in expanding outside of the box. I could go
w
On Sat, Mar 20, 2010 at 5:00 PM, Gary Gendel wrote:
> I'm not sure I like this at all. Some of my pools take hours to scrub. I
> have a cron job run scrubs in sequence... Start one pool's scrub and then
> poll until it's finished, start the next and wait, and so on so I don't
> create too much
On Sat, Mar 20, 2010 at 4:00 PM, Richard Elling wrote:
> On Mar 20, 2010, at 12:07 PM, Svein Skogen wrote:
> > We all know that data corruption may happen, even on the most reliable of
> hardware. That's why ZFS has pool scrubbing.
> >
> > Could we introduce a zpool option (as in zpool set )
> fo
I'm not sure I like this at all. Some of my pools take hours to scrub. I have
a cron job run scrubs in sequence... Start one pool's scrub and then poll
until it's finished, start the next and wait, and so on so I don't create too
much load and bring all I/O to a crawl.
The job is launched on
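A minimal sketch of that poll-and-wait loop (pool names are placeholders):

    #!/bin/sh
    # scrub pools one at a time; wait for each scrub to finish before
    # starting the next, so only one pool is under scrub load at a time
    for pool in tank backup media; do
        zpool scrub "$pool"
        while zpool status "$pool" | grep "scrub in progress" > /dev/null; do
            sleep 300
        done
    done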
On Mar 20, 2010, at 12:07 PM, Svein Skogen wrote:
> We all know that data corruption may happen, even on the most reliable of
> hardware. That's why ZFS has pool scrubbing.
>
> Could we introduce a zpool option (as in zpool set ) for
> "scrub period", in "number of hours" (with 0 being no autom
On Fri, 19 Mar 2010, zfs ml wrote:
same enclosure, same rack, etc for a given raid 5/6/z1/z2/z3 system, should
we be paying more attention to harmonics, vibration/isolation and
non-intuitive system level statistics that might be inducing close proximity
drive failures rather than just throwing
On Mar 18, 2010, at 6:28 AM, Darren J Moffat wrote:
> The only tool I'm aware of today that provides a copy of the data, and all of
> the ZPL metadata and all the ZFS dataset properties is 'zfs send'.
AFAIK, this is correct.
Further, the only type of tool that can back up a pool is a tool like
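For reference, the send variant that carries dataset properties along is the
replication stream; a minimal sketch with made-up pool names, assuming the
destination is itself a ZFS pool:

    # -R builds a replication stream including descendant datasets,
    # snapshots and dataset properties
    zfs snapshot -r tank@backup
    zfs send -R tank@backup | zfs receive -F -d backuppool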
On Mar 19, 2010, at 5:32 AM, Chris Dunbar - Earthside, LLC wrote:
> Hello,
>
> After being immersed in this list and other ZFS sites for the past few weeks
> I am having some doubts about the zpool layout on my new server. It's not too
> late to make a change so I thought I would ask for commen
On 20.03.2010 20:53, Giovanni Tirloni wrote:
On Sat, Mar 20, 2010 at 4:07 PM, Svein Skogen <sv...@stillbilde.net> wrote:
We all know that data corruption may happen, even on the most
reliable of hardware. That's why ZFS has pool scrubbing.
Could we introduce a zpool option (a
On Mar 19, 2010, at 7:07 PM, zfs ml wrote:
> Most discussions I have seen about RAID 5/6 and why it stops "working" seem
> to base their conclusions solely on single drive characteristics and
> statistics.
> It seems to me there is a missing component in the discussion of drive
> failures in the
On Sat, Mar 20, 2010 at 4:07 PM, Svein Skogen wrote:
> We all know that data corruption may happen, even on the most reliable of
> hardware. That's why ZFS has pool scrubbing.
>
> Could we introduce a zpool option (as in zpool set ) for
> "scrub period", in "number of hours" (with 0 being no aut
Thanks for the info.
I'll try the live CD method when I have access to the system next week.
On Mar 20, 2010, at 14:37, Remco Lengers wrote:
You seem to be concerned about the availability? Open HA seems to be
a package last updated in 2005 (version 0.3.6). (?) It seems to me
like a real fun toy project to build but I would be pretty reserved
about the actual availability and putti
We all know that data corruption may happen, even on the most reliable
of hardware. That's why ZFS has pool scrubbing.
Could we introduce a zpool option (as in zpool set )
for "scrub period", in "number of hours" (with 0 being no automatic
scrubbing).
I see several modern RAID controllers (s
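If such a knob existed, the interface being proposed would presumably look
something like this; note that "scrubperiod" is purely hypothetical, not a
property zpool accepts today:

    # hypothetical property: scrub every 168 hours (weekly), 0 disables
    zpool set scrubperiod=168 tank
    zpool set scrubperiod=0 tank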
On Sun, Mar 21, 2010 at 12:32 AM, Miles Nordin wrote:
>> "sn" == Sriram Narayanan writes:
>
> sn> http://docs.sun.com/app/docs/doc/817-2271/ghbxs?a=view
>
> yeah, but he has no slog, and he says 'zpool clear' makes the system
> panic and reboot, so even from way over here that link looks u
> "sn" == Sriram Narayanan writes:
sn> http://docs.sun.com/app/docs/doc/817-2271/ghbxs?a=view
yeah, but he has no slog, and he says 'zpool clear' makes the system
panic and reboot, so even from way over here that link looks useless.
Patrick, maybe try a newer livecd from genunix.org lik
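On a livecd recent enough to have the pool-recovery support, the usual
sequence would be something like the following sketch (it may discard the
last few transactions):

    # from the livecd: list importable pools, then try a recovery-mode import
    zpool import
    zpool import -F atomfs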
On 20.03.2010 17:39, Henk Langeveld wrote:
On 2010-03-15 16:50, Khyron:
Yeah, this threw me. A 3 disk RAID-Z2 doesn't make sense, because at a
redundancy level, RAID-Z2 looks like RAID 6. That is, there are 2 levels of
parity for the data. Out of 3 disks, the equivalent of 2 disks will be used
t
Vikkr,
You seem to be concerned about the availability? Open HA seems to be a
package last updated in 2005 (version 0.3.6). (?)
It seems to me like a real fun toy project to build, but I would be
pretty reserved about the actual availability and about using this
kind of setup for production
On Sat, 20 Mar 2010, Robin Axelsson wrote:
My idea is rather that the "hot spares" (or perhaps we should say
"cold spares" then) are off all the time until they are needed or
when a user initiated/scheduled system integrity check is being
conducted. They could go up for a "test spin" at each o
On Mar 20, 2010, at 11:48 AM, vikkr wrote:
THX Ross, I plan to export each drive individually over iSCSI.
In this case, writes, as well as reads, will go to all 6 discs
at once, right?
The only question - how to calculate fault tolerance of such a
system if the discs are all different
On 2010-03-15 16:50, Khyron:
Yeah, this threw me. A 3 disk RAID-Z2 doesn't make sense, because at a
redundancy level, RAID-Z2 looks like RAID 6. That is, there are 2 levels of
parity for the data. Out of 3 disks, the equivalent of 2 disks will be used
to store redundancy (parity) data and only
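Put concretely, with made-up device names: a 3-disk RAID-Z2 and a 3-way
mirror both leave roughly one disk of usable space and both survive any
two-disk failure, so the raidz2 layout buys nothing here:

    # option A: 3-disk raidz2 -- ~1 disk usable, survives any 2 failures
    zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0
    # option B: 3-way mirror -- same usable space and failure tolerance,
    # with simpler resilvering and better read spread
    zpool create tank mirror c0t0d0 c0t1d0 c0t2d0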
On Sat, Mar 20, 2010 at 9:19 PM, Patrick Tiquet wrote:
> Also, I tried to run zpool clear, but the system crashes and reboots.
Please see if this link helps
http://docs.sun.com/app/docs/doc/817-2271/ghbxs?a=view
-- Sriram
-
Belenix: www.belenix.org
Also, I tried to run zpool clear, but the system crashes and reboots.
On Mar 20, 2010, at 00:57, Edward Ned Harvey wrote:
I used NDMP up till November, when we replaced our NetApp with a Solaris Sun
box. In NDMP, to choose the source files, we had the ability to browse the
fileserver, select files, and specify file matching patterns. My point is:
NDMP is fi
THX Ross, I plan to export each drive individually over iSCSI.
In this case, writes, as well as reads, will go to all 6 discs at once,
right?
The only question is how to calculate the fault tolerance of such a system if
the discs are all different in size.
Maybe there is such a tool, or a check?
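As a rule of thumb (a sketch, not a tool): within a single raidz vdev each
member only contributes the capacity of the smallest one, so with six
iSCSI-exported drives in a raidz2 the pool tolerates any two drive failures
and usable space is roughly (6 - 2) times the smallest drive:

    # hypothetical sizes: smallest exported drive is 500 GB
    smallest=500
    echo "approx usable: $(( (6 - 2) * smallest )) GB"   # 2000 GB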
This system is running stock 111b on an Intel Atom D945GCLF2 motherboard.
The pool consists of two mirrored 1TB SATA disks. I noticed the system
was locked up, rebooted and the pool status shows as follows:
pool: atomfs
state: FAULTED
status: An intent log record could not be read.
On Mar 20, 2010, at 10:18 AM, vikkr wrote:
Hi, sorry for the bad English and picture :).
Would such a setup work?
Three OpenFiler servers each export their drives (2 x 1 TB) over iSCSI to an
OpenSolaris server.
On OpenSolaris, a RAID-Z with double parity is assembled from them.
The OpenSolaris server provides NFS access to this array, and du
> I know about those SoHo boxes and the whatnot, they
> keep spinning up and down all the time and the worst
> thing is that you cannot disable this sleep/powersave
> feature on most of these devices.
That judgment is in the eye of the beholder. We have a couple of Thecus NAS
boxes and some LVM R
That's a good idea, thanks. I get the feeling the remainder won't be zero,
which will back up the misalignment theory. After a bit more digging, it seems
the problem is just an NTFS issue and can be addressed irrespective of the
underlying storage system.
I think I'm going to try the process in the
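The "remainder" check is just the partition's starting byte offset modulo the
underlying block/stripe size; a sketch with made-up numbers:

    # classic misaligned case: NTFS partition starting at sector 63,
    # 512-byte sectors, 4 KB underlying blocks
    start_sector=63
    echo "remainder: $(( start_sector * 512 % 4096 ))"   # 3584 -> misaligned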
> 5+ years ago the variety of NDMP that was available with the
> combination of NetApp's OnTap and Veritas NetBackup did backups at the
> volume level. When I needed to go to tape to recover a file that was
> no longer in snapshots, we had to find space on a NetApp to restore
> the volume. It cou
> > I'll say it again: neither 'zfs send' nor (s)tar is an enterprise (or
> > even home) backup system on their own; one or both can be components of
> > the full solution.
> >
>
> Up to a point. zfs send | zfs receive does make a very good backup
> scheme for the home user with a moderate
I know about those SoHo boxes and the whatnot; they keep spinning up and down
all the time, and the worst thing is that you cannot disable this
sleep/powersave feature on most of these devices.
I believe I have seen "sleep mode" support when I skimmed through the feature
lists of the LSI contro
On Fri, Mar 19, 2010 at 11:57 PM, Edward Ned Harvey
wrote:
>> 1. NDMP for putting "zfs send" streams on tape over the network. So
>
> Tell me if I missed something here. I don't think I did. I think this
> sounds like crazy talk.
>
> I used NDMP up till November, when we replaced our NetApp wit
>
> I'll say it again: neither 'zfs send' nor (s)tar is an enterprise (or
> even home) backup system on their own; one or both can be components of
> the full solution.
>
Up to a point. zfs send | zfs receive does make a very good backup scheme for
the home user with a moderate amount of s
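A minimal sketch of the incremental send/receive cycle that makes this
workable at home (dataset names made up):

    # initial full copy, then periodic incrementals of only the changes
    zfs snapshot tank/home@2010-03-20
    zfs send tank/home@2010-03-20 | zfs receive backup/home
    zfs snapshot tank/home@2010-03-21
    zfs send -i tank/home@2010-03-20 tank/home@2010-03-21 | zfs receive -F backup/home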
> So, is there a
> sleep/hibernation/standby mode that the hot spares
> operate in or are they on all the time regardless of
> whether they are in use or not?
This depends on the power-save options of your hardware, not on ZFS. Arguably,
there is less wear on the heads for a hot spare. I guess th