Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-02 Thread JZ
Nice... More on sector checksum -- * anything prior to 2005 would be sort of out-of-date/fashion, because http://www.patentstorm.us/patents/6952797/description.html * the software RAID - NetApp view http://pages.cs.wisc.edu/~krioukov/ParityLostAndParityRegained-FAST08.ppt * the Linux view http

[zfs-discuss] Error 16: Inconsistent filesystem structure after a change in the system

2009-01-02 Thread Rafal Pratnicki
I've hit this bug on my home machine a couple of times and finally decided to log it, since I've spent 2 days configuring my "OpenSolaris 2008.11 snv_101b_rc2 X86" and after installing the SUNWsmbfskr package I ended up in the grub> menu. The package contains a necessary module for CIFS. After pkg ins
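A minimal sketch of the reported sequence, assuming a stock OpenSolaris 2008.11 install with IPS packaging; SUNWsmbfskr is the CIFS kernel-module package named in the report, and the failure shows up on the following reboot:

    $ pfexec pkg install SUNWsmbfskr   # install the CIFS server kernel module
    $ pfexec reboot                    # boot then stops at the grub> prompt with "Error 16: Inconsistent filesystem structure"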

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-02 Thread Ulrich Graef
Hi Carsten, Carsten Aulbert wrote: > Hi Marc, > > Marc Bevand wrote: >> Carsten Aulbert aei.mpg.de> writes: >>> In RAID6 you have redundant parity, thus the controller can find out >>> if the parity was correct or not. At least I think that to be true >>> for Areca controllers :) >> Are you sure

[zfs-discuss] ZFS iSCSI (For VirtualBox target) and SMB

2009-01-02 Thread Kevin Pattison
Hey all, I'm setting up a ZFS-based fileserver to use both as a shared network drive and separately to have an iSCSI target to be used as the "Hard disk" of a Windows-based VM running on another machine. I've built the machine, installed the OS, created the RAIDZ pool and now have a couple of
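A rough sketch of one way to get both roles out of the existing RAIDZ pool, assuming a pool named tank and the shareiscsi/sharesmb properties available in OpenSolaris 2008.11; the dataset names and zvol size are placeholders:

    $ zfs create -V 100G tank/vmdisk     # zvol to export as the VM's virtual disk
    $ zfs set shareiscsi=on tank/vmdisk  # publish the zvol as an iSCSI target
    $ zfs create tank/share              # regular filesystem for the shared network drive
    $ zfs set sharesmb=on tank/share     # share it over CIFS/SMB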

Re: [zfs-discuss] What will happen when write a block of 8k if the recordsize is 128k. Will 128k be written instead of 8k?

2009-01-02 Thread Roch Bourbonnais
Hi Qihua, there are many reasons why the recordsize does not govern the I/O size directly. Metadata I/O is one, ZFS I/O scheduler aggregation is another. The application behavior might be a third. Make sure to create the DB files after modifying the ZFS property. -r On 26 Dec 08 at 11:49, q
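A hedged illustration of that ordering, assuming an 8 KB database block size and a dataset named tank/db (both placeholders); recordsize only applies to files written after the property is changed, which is why the DB files must be created afterwards:

    $ zfs set recordsize=8k tank/db   # match the database block size
    $ zfs get recordsize tank/db      # verify, then create/copy the DB files into the dataset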

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-02 Thread Mika Borner
Ulrich Graef wrote: > You need not wade through your paper... > ECC theory says that you need a minimum distance of 3 > to correct one error in a codeword, ergo neither RAID-5 nor RAID-6 > is enough: you need RAID-2 (which nobody uses today). > > RAID controllers today take advantage of the fa
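For reference, the standard coding-theory bounds behind the "minimum distance of 3" claim; how they map onto RAID-5/RAID-6, where a failed disk is usually a known erasure rather than an error at an unknown position, is exactly what the rest of the thread argues about:

    \[ d_{\min} \ge 2t + 1 \quad \text{(correct $t$ errors at unknown positions)} \]
    \[ d_{\min} \ge e + 1  \quad \text{(detect $e$ errors)} \]
    \[ d_{\min} \ge s + 1  \quad \text{(reconstruct $s$ erasures at known positions)} \]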

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-02 Thread Tim
On Fri, Jan 2, 2009 at 10:47 AM, Mika Borner wrote: > Ulrich Graef wrote: > > You need not wade through your paper... > > ECC theory says that you need a minimum distance of 3 > > to correct one error in a codeword, ergo neither RAID-5 nor RAID-6 > > is enough: you need RAID-2 (which nobody

[zfs-discuss] Unable to add cache device

2009-01-02 Thread Scott Laird
I'm trying to add a pair of new cache devices to my zpool, but I'm getting the following error: # zpool add space cache c10t7d0 Assertion failed: nvlist_lookup_string(cnv, "path", &path) == 0, file zpool_vdev.c, line 650 Abort (core dumped) I replaced a failed disk a few minutes before trying thi

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-02 Thread Richard Elling
Tim wrote: > The NetApp paper mentioned by JZ (http://pages.cs.wisc.edu/~krioukov/ParityLostAndParityRegained-FAST08.ppt) > talks about write verify. > Would this feature make sense in a

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-02 Thread JZ
Folks' feedback on my spam communications was that I jump from point to point too fast, am too lazy to explain, and am often somewhat misleading. ;-) On the NetApp thing, please note they had their time talking about how SW RAID can be as good as or better than HW RAID. However, from a customer point

Re: [zfs-discuss] Unable to add cache device

2009-01-02 Thread Akhilesh Mritunjai
As for source, here you go :) http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/zpool/zpool_vdev.c#650

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-02 Thread Bob Friesenhahn
On Fri, 2 Jan 2009, JZ wrote: > > I have not done a cost study on ZFS towards the 9s, but I guess we can > do better with more system- and I/O-based assurance over just RAID checksum, > so customers can get to more 9s with less redundant hardware and > software feature enablement fees.

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-02 Thread JZ
Yes, agreed. However, for enterprises with risk management as a key factor built into their decision-making processes -- what if the integrity risk is reflected on Joe Tucci's personal network data? OMG, big impact to the SLA when the SLA is critical... [right, Tim?] ;-) -z - Original

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-02 Thread Bob Friesenhahn
On Fri, 2 Jan 2009, JZ wrote: > We are talking about 0.001% of defined downtime headroom for a 4-9 SLA (that > may be defined as "accessing the correct data"). It seems that some people spend a lot of time analyzing their own hairy navel and think that it must surely be the center of the uni
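For scale, the usual arithmetic behind counting "nines" of availability (note that 0.001% of downtime headroom corresponds to five nines; a four-nines SLA allows 0.01%):

    \[ \text{downtime}_{\max} = (1 - A) \times 365 \times 24 \times 60 \ \text{min/yr} \]
    \[ A = 99.99\% \Rightarrow \approx 52.6 \ \text{min/yr}, \qquad A = 99.999\% \Rightarrow \approx 5.3 \ \text{min/yr} \]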

Re: [zfs-discuss] Unable to add cache device

2009-01-02 Thread Scott Laird
On Fri, Jan 2, 2009 at 4:52 PM, Akhilesh Mritunjai wrote: > As for source, here you go :) > > http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/zpool/zpool_vdev.c#650 Thanks. It's in the middle of get_replication, so I suspect it's a bug--zpool tries to check on the replication s
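For context, a sketch of the kind of check the quoted assertion corresponds to -- a paraphrase, not the actual zpool_vdev.c source; cnv here stands for the per-child vdev nvlist taken from the pool configuration:

    #include <libnvpair.h>
    #include <assert.h>

    /* Sketch: get_replication() walks each child vdev nvlist in the pool
     * config and assumes every leaf carries a "path" string; an entry
     * without one (e.g. a cache device, or a disk in mid-replacement)
     * fails a check like this and the zpool command aborts. */
    static void check_child(nvlist_t *cnv)
    {
            char *path;
            assert(nvlist_lookup_string(cnv, "path", &path) == 0);
    }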

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-02 Thread JZ
haha, this makes a cheerful new year start -- this kind of humor is only available at open storage. BTW, I did not know the pyramids are crumbling now, since they were built with love. But the Great Wall was crumbling, since it was built with hate (until we fixed part of that for tourist $$$).

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-02 Thread JZ
On second thought, let me further explain why I had the Linux link in the same post. That was written a while ago, but I think the situation for the cheap RAID cards has not changed much, though the RAID ASICs in RAID enclosures are getting more and more robust, just not "open". If you take ri

Re: [zfs-discuss] Unable to add cache device

2009-01-02 Thread Richard Elling
Scott Laird wrote: > On Fri, Jan 2, 2009 at 4:52 PM, Akhilesh Mritunjai > wrote: > >> As for source, here you go :) >> >> http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/zpool/zpool_vdev.c#650 >> > > Thanks. It's in the middle of get_replication, so I suspect it's a > bu

Re: [zfs-discuss] Unable to add cache device

2009-01-02 Thread Scott Laird
On Fri, Jan 2, 2009 at 8:54 PM, Richard Elling wrote: > Scott Laird wrote: >> >> On Fri, Jan 2, 2009 at 4:52 PM, Akhilesh Mritunjai >> wrote: >> >>> >>> As for source, here you go :) >>> >>> >>> http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/zpool/zpool_vdev.c#650 >>> >> >> Tha