Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread przemolicc
On Wed, Jun 28, 2006 at 03:30:28PM +0200, Robert Milkowski wrote: > ppf> What I wanted to point out is Al's example: he wrote about damaged > data. Data > ppf> were damaged by firmware, _not_ the disk surface! In such a case ZFS doesn't > help. ZFS can > ppf> detect (and repair) errors on the disk surf

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread przemolicc
On Wed, Jun 28, 2006 at 09:30:25AM -0400, Jeff Victor wrote: > [EMAIL PROTECTED] wrote: > >On Wed, Jun 28, 2006 at 02:23:32PM +0200, Robert Milkowski wrote: > > > >What I wanted to point out is Al's example: he wrote about damaged > >data. Data > >were damaged by firmware, _not_ the disk surface!

Re: Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Jonathan Edwards
On Jun 28, 2006, at 18:25, Erik Trimble wrote: On Wed, 2006-06-28 at 14:55 -0700, Jeff Bonwick wrote: Which is better - zfs raidz on hardware mirrors, or zfs mirror on hardware raid-5? The latter. With a mirror of RAID-5 arrays, you get: (1) Self-healing data. (2) Tolerance of whole-array failure. (3)

Re: Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Joe Little
On 6/28/06, Nathan Kroenert <[EMAIL PROTECTED]> wrote: On Thu, 2006-06-29 at 03:40, Nicolas Williams wrote: > But Joe makes a good point about RAID-Z and iSCSI. > > It'd be nice if RAID HW could assist RAID-Z, and it wouldn't take much > to do that: parity computation on write, checksum verificat

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Torrey McMahon
Philip Brown wrote: raid5 IS useful in zfs+hwraid boxes, for "Mean Time To Recover" purposes. Or, and people haven't really mentioned this yet, if you're using R5 for the raid set and carving LUNs out of it to multiple hosts.

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Nicolas Williams
On Thu, Jun 29, 2006 at 09:25:21AM +1000, Nathan Kroenert wrote: > On Thu, 2006-06-29 at 03:40, Nicolas Williams wrote: > > But Joe makes a good point about RAID-Z and iSCSI. > > > > It'd be nice if RAID HW could assist RAID-Z, and it wouldn't take much > > to do that: parity computation on write,

Re: [zfs-discuss] ZFS components for a minimal Solaris 10 U2 install?

2006-06-28 Thread Jason Schroeder
Dale Ghent wrote: On Jun 28, 2006, at 4:27 PM, Jim Connors wrote: For an embedded application, I'm looking at creating a minimal Solaris 10 U2 image which would include ZFS functionality. In quickly taking a look at the opensolaris.org site under pkgdefs, I see three packages that appear

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Philip Brown
Erik Trimble wrote: Since the best way to get this is to use a Mirror or RAIDZ vdev, I'm assuming that the proper way to get benefits from both ZFS and HW RAID is the following: (1) ZFS mirror of HW stripes, i.e. "zpool create tank mirror hwStripe1 hwStripe2" (2) ZFS RAIDZ of HW mirrors

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Philip Brown
Roch wrote: Philip Brown writes: > but there may not be filesystem space for double the data. > Sounds like there is a need for a zfs-defragment-file utility perhaps? > > Or if you want to be politically cagey about the naming choice, perhaps, > > zfs-seq-read-optimize-file ? :-) > P

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Richard Elling
[hit send too soon...] Richard Elling wrote: Erik Trimble wrote: The main reason I don't see ZFS mirror / HW RAID5 as useful is this: ZFS mirror / RAID5: capacity = (N / 2) - 1; speed << N / 2 - 1; minimum # disks to los

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Nathan Kroenert
On Thu, 2006-06-29 at 03:40, Nicolas Williams wrote: > But Joe makes a good point about RAID-Z and iSCSI. > > It'd be nice if RAID HW could assist RAID-Z, and it wouldn't take much > to do that: parity computation on write, checksum verification on read > and, if the checksum verification fails, c

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Jonathan Edwards
On Jun 28, 2006, at 17:25, Erik Trimble wrote: On Wed, 2006-06-28 at 13:24 -0400, Jonathan Edwards wrote: On Jun 28, 2006, at 12:32, Erik Trimble wrote: The main reason I don't see ZFS mirror / HW RAID5 as useful is this: ZFS mirror / RAID5: capacity = (N / 2) - 1

Re: Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Erik Trimble
On Wed, 2006-06-28 at 14:55 -0700, Jeff Bonwick wrote: > > Which is better - > > zfs raidz on hardware mirrors, or zfs mirror on hardware raid-5? > > The latter. With a mirror of RAID-5 arrays, you get: > > (1) Self-healing data. > > (2) Tolerance of whole-array failure. > > (3) Tolerance of *

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Erik Trimble
On Wed, 2006-06-28 at 22:13 +0100, Peter Tribble wrote: > On Wed, 2006-06-28 at 17:32, Erik Trimble wrote: > > Given a reasonable number of hot-spares, I simply can't see the (very) > > marginal increase in safety give by using HW RAID5 as out balancing the > > considerable speed hit using RAID5

Re: [zfs-discuss] 15 minute fdsync problem and ZFS: Solved

2006-06-28 Thread Neil Perrin
Robert Milkowski wrote On 06/28/06 15:52,: Hello Neil, Wednesday, June 21, 2006, 8:15:54 PM, you wrote: NP> Robert Milkowski wrote On 06/21/06 11:09,: Hello Neil, Why is this option available then? (Yes, that's a loaded question.) NP> I wouldn't call it an option, but an internal debug

Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Robert Milkowski
Hello Erik, Wednesday, June 28, 2006, 6:32:38 PM, you wrote: ET> Robert - ET> I would definitely like to see the difference between read on HW RAID5 ET> vs read on RAIDZ. Naturally, one of the big concerns I would have is ET> how much RAM is needed to avoid any cache starvation on the ZFS ET

Re[4]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Robert Milkowski
Hello Peter, Wednesday, June 28, 2006, 11:24:32 PM, you wrote: PT> Robert, >> PT> You really need some level of redundancy if you're using HW raid. >> PT> Using plain stripes is downright dangerous. 0+1 vs 1+0 and all >> PT> that. Seems to me that the simplest way to go is to use zfs to mirror >

Re: Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Jeff Bonwick
> Which is better - > zfs raidz on hardware mirrors, or zfs mirror on hardware raid-5? The latter. With a mirror of RAID-5 arrays, you get: (1) Self-healing data. (2) Tolerance of whole-array failure. (3) Tolerance of *at least* three disk failures. (4) More IOPs than raidz of hardware mirror
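Bonwick's recommended layout can be sketched as a single pool command; the device names below are hypothetical, each assumed to be a LUN exported by a separate hardware RAID-5 array:

```shell
# Each device is assumed to be a RAID-5 LUN from a separate array
# (hypothetical names). ZFS mirrors the two LUNs, so checksum failures
# can self-heal from the surviving side and a whole array can fail.
zpool create tank mirror c1t0d0 c2t0d0

# Verify the resulting layout
zpool status tank
```

The same pattern extends to more LUN pairs (`zpool add tank mirror ...`) if capacity is needed.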

Re[2]: [zfs-discuss] 15 minute fdsync problem and ZFS: Solved

2006-06-28 Thread Robert Milkowski
Hello Neil, Wednesday, June 21, 2006, 8:15:54 PM, you wrote: NP> Robert Milkowski wrote On 06/21/06 11:09,: >> Hello Neil, Why is this option available then? (Yes, that's a loaded question.) >> >> NP> I wouldn't call it an option, but an internal debugging switch that I >> NP> originally ad

Re: [zfs-discuss] disk evacuate

2006-06-28 Thread Noel Dellofano
Hey Robert, Well, not yet. Right now our top two priorities are improving performance in multiple areas of zfs (soon there will be a performance page tracking progress on the zfs community page), and also getting zfs boot done. Hence, we're not currently working on heaps of brand new features

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Erik Trimble
On Wed, 2006-06-28 at 13:24 -0400, Jonathan Edwards wrote: > > On Jun 28, 2006, at 12:32, Erik Trimble wrote: > > > The main reason I don't see ZFS mirror / HW RAID5 as useful is this: > > > ZFS mirror / RAID5: capacity = (N / 2) - 1 > > > speed

Re: Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Peter Tribble
Robert, > PT> You really need some level of redundancy if you're using HW raid. > PT> Using plain stripes is downright dangerous. 0+1 vs 1+0 and all > PT> that. Seems to me that the simplest way to go is to use zfs to mirror > PT> HW raid5, preferably with the HW raid5 LUNs being completely > PT>

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Peter Tribble
On Wed, 2006-06-28 at 17:32, Erik Trimble wrote: > The main reason I don't see ZFS mirror / HW RAID5 as useful is this: > > ZFS mirror / RAID5: capacity = (N / 2) - 1 > speed << N / 2 - 1 > minimum # disks to lose before
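The capacity figures quoted in this thread follow from simple arithmetic; a quick sketch in whole-disk units (ignoring any array-specific overhead):

```shell
# Usable capacity for N disks under the two layouts being compared.
# ZFS mirror of two HW RAID-5 arrays: each half has N/2 disks and yields
# N/2 - 1 of data; mirroring keeps one copy, so total is N/2 - 1.
# Single-parity RAID-Z across all N disks yields N - 1.
N=12
half=$((N / 2))
mirror_of_raid5=$((half - 1))
raidz=$((N - 1))
echo "N=$N mirror-of-RAID5=$mirror_of_raid5 raidz=$raidz"
```

For N=12 that is 5 disks of capacity versus 11, which is the trade-off against redundancy under discussion.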

Re: [zfs-discuss] ZFS components for a minimal Solaris 10 U2 install?

2006-06-28 Thread Dale Ghent
On Jun 28, 2006, at 4:27 PM, Jim Connors wrote: For an embedded application, I'm looking at creating a minimal Solaris 10 U2 image which would include ZFS functionality. In quickly taking a look at the opensolaris.org site under pkgdefs, I see three packages that appear to be related to ZF

[zfs-discuss] ZFS components for a minimal Solaris 10 U2 install?

2006-06-28 Thread Jim Connors
For an embedded application, I'm looking at creating a minimal Solaris 10 U2 image which would include ZFS functionality. In quickly taking a look at the opensolaris.org site under pkgdefs, I see three packages that appear to be related to ZFS: SUNWzfskr, SUNWzfsr, and SUNWzfsu. Is it naive t
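A minimal sketch of adding just those three packages to an install image; the media path and alternate root are hypothetical, and the real dependency closure should be confirmed (e.g. via pkginfo on a full install) rather than assumed:

```shell
# Sketch: add the three ZFS packages to an installed image at /a.
# The Product directory path is a placeholder for the actual media
# location; additional dependency packages may well be required.
cd /path/to/Solaris_10/Product
pkgadd -R /a -d . SUNWzfskr SUNWzfsr SUNWzfsu
```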

Re: [zfs-discuss] ZFS RAM recommendations?

2006-06-28 Thread eric kustarz
Rich Teer wrote: Hi all, ISTR reading somewhere that ZFS likes a generous supply of RAM. For an X4200 with a pair of 73 GB disks (for now), what would constitute a "generous" amount of RAM? The server currently has a total of 2 GB of RAM, and my gut is telling me that that isn't enough... TIA,

Re: [zfs-discuss] This may be a somewhat silly question ...

2006-06-28 Thread Dennis Clarke
> > Dennis, > > You are absolutely correct that the doc needs a step to verify > that the backup occurred. > > I'll work on getting this step added to the admin guide ASAP. > Hey, I'm sorry that I triggered more work for you. Never meant to do that. I was just a little lost as to how to get a g

[zfs-discuss] ZFS RAM recommendations?

2006-06-28 Thread Rich Teer
Hi all, ISTR reading somewhere that ZFS likes a generous supply of RAM. For an X4200 with a pair of 73 GB disks (for now), what would constitute a "generous" amount of RAM? The server currently has a total of 2 GB of RAM, and my gut is telling me that that isn't enough... TIA, -- Rich Teer, SCN

Re: [zfs-discuss] This may be a somewhat silly question ...

2006-06-28 Thread Cindy Swearingen
Dennis, You are absolutely correct that the doc needs a step to verify that the backup occurred. I'll work on getting this step added to the admin guide ASAP. Thanks for feedback... Cindy Dennis Clarke wrote: Am I missing something here? [1] Dennis [1] I am fully prepared for RTFM and

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Nicolas Williams
On Wed, Jun 28, 2006 at 11:15:34AM +0200, Robert Milkowski wrote: > DV> If ZFS is providing better data integrity than the current storage > DV> arrays, that sounds to me like an opportunity for the next generation > DV> of intelligent arrays to become better. > RM> Actually they can't. RM> If yo

Re: [zfs-discuss] ZFS root install

2006-06-28 Thread Tabriz Leman
Doug, Very nice setup! As you mention, more notes would be very helpful, but very neat stuff! Thanks, Tabriz Doug Scott wrote: I have posted a blog http://solaristhings.blogspot.com/ on how I have configured a zfs root partition on my laptop. It is a slightly modified version of Tabriz's

Re: [Security-discuss] Re: AW: AW: [zfs-discuss] Proposal for new basic privileges related with filesystem access checks

2006-06-28 Thread Darren J Moffat
Mark Shellenbaum wrote: Can you give us an example of a 'file' the ssh-agent wishes to open and what the permission are on the file and also what privileges the ssh-agent has, and what the expected results are. The whole point is that ssh-agent should NEVER be opening any files that the user

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Jonathan Edwards
On Jun 28, 2006, at 12:32, Erik Trimble wrote: The main reason I don't see ZFS mirror / HW RAID5 as useful is this: ZFS mirror / RAID5: capacity = (N / 2) - 1; speed << N / 2 - 1; minimum # disks to lose before loss of data:

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Richard Elling
Erik Trimble wrote: The main reason I don't see ZFS mirror / HW RAID5 as useful is this: ZFS mirror / RAID5: capacity = (N / 2) - 1; speed << N / 2 - 1; minimum # disks to lose before loss

Re: [Security-discuss] Re: AW: AW: [zfs-discuss] Proposal for new basic privileges related with filesystem access checks

2006-06-28 Thread Nicolas Williams
On Wed, Jun 21, 2006 at 04:34:59PM -0600, Mark Shellenbaum wrote: > Can you give us an example of a 'file' the ssh-agent wishes to open and > what the permission are on the file and also what privileges the > ssh-agent has, and what the expected results are. ssh-agent(1) should need to open no f

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Erik Trimble
Robert Milkowski wrote: Hello Peter, Wednesday, June 28, 2006, 1:11:29 AM, you wrote: PT> On Tue, 2006-06-27 at 17:50, Erik Trimble wrote: PT> You really need some level of redundancy if you're using HW raid. PT> Using plain stripes is downright dangerous. 0+1 vs 1+0 and all PT> that. Seems to

Re[2]: [zfs-discuss] disk evacuate

2006-06-28 Thread Robert Milkowski
Hello Noel, Wednesday, June 28, 2006, 5:59:18 AM, you wrote: ND> a zpool remove/shrink type function is on our list of features we want ND> to add. ND> We have RFE ND> 4852783 reduce pool capacity ND> open to track this. Is there someone actually working on this right now? -- Best regards, R

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Nagakiran
Depends on your definition of firmware. In higher-end arrays the data is checksummed when it comes in and a hash is written when it gets to disk. Of course this is nowhere near end to end, but it is better than nothing. ... and code is code. Easier to debug is a context-sensitive term.

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Casper . Dik
>Depends on your definition of firmware. In higher-end arrays the data is >checksummed when it comes in and a hash is written when it gets to disk. >Of course this is nowhere near end to end, but it is better than nothing. The checksum is often stored with the data (so if the data is not writ
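Casper's point — a checksum stored alongside a block cannot catch a lost write, while a checksum kept in the parent block (as ZFS does) can — can be illustrated with sha256sum. This is a toy model of the failure mode, not actual array behavior:

```shell
# A "disk block" that stores its checksum next to its data.
old="old contents"
block_data="$old"
block_csum=$(printf '%s' "$old" | sha256sum | cut -d' ' -f1)

# The array accepts a write of new data, but the write is silently lost:
# the block still holds the old data AND the old checksum, so a
# self-check of the block passes even though the data is stale.
[ "$block_csum" = "$(printf '%s' "$block_data" | sha256sum | cut -d' ' -f1)" ] \
  && echo "lost write goes undetected by a self-stored checksum"

# ZFS-style: the checksum of the intended new data lives in the parent
# block, so reading the stale child is caught as a mismatch.
parent_csum=$(printf '%s' "new contents" | sha256sum | cut -d' ' -f1)
[ "$parent_csum" != "$(printf '%s' "$block_data" | sha256sum | cut -d' ' -f1)" ] \
  && echo "stale data detected by the parent-stored checksum"
```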

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Bill Sommerfeld
On Wed, 2006-06-28 at 09:05, [EMAIL PROTECTED] wrote: > > But the point is that ZFS should detect also such errors and take > > proper actions. Other filesystems can't. > > Does it mean that ZFS can detect errors in ZFS's code itself ? ;-) In many cases, yes. As a hypothetical: Consider a bug i

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Torrey McMahon
Jeremy Teo wrote: Hello, What I wanted to point out is Al's example: he wrote about damaged data. Data were damaged by firmware, _not_ the disk surface! In such a case ZFS doesn't help. ZFS can detect (and repair) errors on the disk surface, bad cables, etc. But cannot detect and repair errors in

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Jeff Victor
[EMAIL PROTECTED] wrote: On Wed, Jun 28, 2006 at 02:23:32PM +0200, Robert Milkowski wrote: What I wanted to point out is Al's example: he wrote about damaged data. Data were damaged by firmware, _not_ the disk surface! In such a case ZFS doesn't help. ZFS can detect (and repair) errors on disk s

Re[2]: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Robert Milkowski
Hello przemolicc, Wednesday, June 28, 2006, 3:05:42 PM, you wrote: ppf> On Wed, Jun 28, 2006 at 02:23:32PM +0200, Robert Milkowski wrote: >> Hello przemolicc, >> >> Wednesday, June 28, 2006, 10:57:17 AM, you wrote: >> >> ppf> On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote: >> >> Case

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Darren J Moffat
Robert Milkowski wrote: Hello David, Wednesday, June 28, 2006, 12:30:54 AM, you wrote: DV> If ZFS is providing better data integrity than the current storage DV> arrays, that sounds to me like an opportunity for the next generation DV> of intelligent arrays to become better. Actually they can

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Jeremy Teo
Hello, What I wanted to point out is Al's example: he wrote about damaged data. Data were damaged by firmware, _not_ the disk surface! In such a case ZFS doesn't help. ZFS can detect (and repair) errors on the disk surface, bad cables, etc. But cannot detect and repair errors in its (ZFS) code. I

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread przemolicc
On Wed, Jun 28, 2006 at 02:23:32PM +0200, Robert Milkowski wrote: > Hello przemolicc, > > Wednesday, June 28, 2006, 10:57:17 AM, you wrote: > > ppf> On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote: > >> Case in point, there was a gentleman who posted on the Yahoo Groups solx86 > >> list

Re[2]: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Robert Milkowski
Hello przemolicc, Wednesday, June 28, 2006, 10:57:17 AM, you wrote: ppf> On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote: >> Case in point, there was a gentleman who posted on the Yahoo Groups solx86 >> list and described how faulty firmware on a Hitachi HDS system damaged a >> bunch of

Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Robert Milkowski
Hello Peter, Wednesday, June 28, 2006, 1:11:29 AM, you wrote: PT> On Tue, 2006-06-27 at 17:50, Erik Trimble wrote: PT> You really need some level of redundancy if you're using HW raid. PT> Using plain stripes is downright dangerous. 0+1 vs 1+0 and all PT> that. Seems to me that the simplest way

Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Robert Milkowski
Hello Erik, Tuesday, June 27, 2006, 6:50:52 PM, you wrote: ET> Personally, I can't think of a good reason to use ZFS with HW RAID5; ET> case (3) above seems to me to provide better performance with roughly ET> the same amount of redundancy (not quite true, but close). I can see a reason. In o

Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Robert Milkowski
Hello David, Wednesday, June 28, 2006, 12:30:54 AM, you wrote: DV> If ZFS is providing better data integrity than the current storage DV> arrays, that sounds to me like an opportunity for the next generation DV> of intelligent arrays to become better. Actually they can't. If you want end-to-end

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Al Hopper
On Wed, 28 Jun 2006 [EMAIL PROTECTED] wrote: > On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote: > > Case in point, there was a gentleman who posted on the Yahoo Groups solx86 > > list and described how faulty firmware on a Hitachi HDS system damaged a > > bunch of data. The HDS system mo

Re: [zfs-discuss] This may be a somewhat silly question ...

2006-06-28 Thread Darren J Moffat
eric kustarz wrote: What's needed after that is a way (such as a script) to 'zfs send' all the snapshot to the appropiate place. And very importantly you need a way to preserve all of the options set on the ZFS data set, otherwise IMO zfs send is no better than using an archiver that uses POS
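The snapshot-and-send backup being discussed could be scripted roughly as below; the dataset, host, and paths are hypothetical, and since (as Darren notes) the send stream did not carry dataset properties at the time, they are recorded separately with `zfs get`:

```shell
# Sketch: snapshot a dataset, save its properties, and send the stream
# to a remote pool. Names are placeholders, not a recommended layout.
FS=tank/home
SNAP="$FS@backup-$(date +%Y%m%d)"

zfs snapshot "$SNAP"
# Record properties the send stream does not preserve (options may vary
# by release; check the zfs(1M) man page for -H/-o support).
zfs get -H -o property,value,source all "$FS" > /backup/home.props
zfs send "$SNAP" | ssh backuphost zfs receive backup/home
```

Restoring would then mean receiving the stream and re-applying the recorded properties with `zfs set`.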

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread przemolicc
On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote: > Case in point, there was a gentleman who posted on the Yahoo Groups solx86 > list and described how faulty firmware on a Hitachi HDS system damaged a > bunch of data. The HDS system moves disk blocks around, between one disk > and another