Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-29 Thread Robert Milkowski
Hello Philip, Thursday, June 29, 2006, 2:58:41 AM, you wrote: PB> Erik Trimble wrote: >> >> Since the best way to get this is to use a Mirror or RAIDZ vdev, I'm >> assuming that the proper way to get benefits from both ZFS and HW RAID >> is the following: >> >> (1) ZFS mirror of HW stripes,

Re: Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Jonathan Edwards
On Jun 28, 2006, at 18:25, Erik Trimble wrote: On Wed, 2006-06-28 at 14:55 -0700, Jeff Bonwick wrote: Which is better - zfs raidz on hardware mirrors, or zfs mirror on hardware raid-5? The latter. With a mirror of RAID-5 arrays, you get: (1) Self-healing data. (2) Tolerance of whole-array failure. (3)

Re: Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Joe Little
On 6/28/06, Nathan Kroenert <[EMAIL PROTECTED]> wrote: On Thu, 2006-06-29 at 03:40, Nicolas Williams wrote: > But Joe makes a good point about RAID-Z and iSCSI. > > It'd be nice if RAID HW could assist RAID-Z, and it wouldn't take much > to do that: parity computation on write, checksum verificat

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Torrey McMahon
Philip Brown wrote: raid5 IS useful in zfs+hwraid boxes, for "Mean Time To Recover" purposes. Or, and people haven't really mentioned this yet, if you're using R5 for the raid set and carving LUNs out of it to multiple hosts.

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Nicolas Williams
On Thu, Jun 29, 2006 at 09:25:21AM +1000, Nathan Kroenert wrote: > On Thu, 2006-06-29 at 03:40, Nicolas Williams wrote: > > But Joe makes a good point about RAID-Z and iSCSI. > > > > It'd be nice if RAID HW could assist RAID-Z, and it wouldn't take much > > to do that: parity computation on write,

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Philip Brown
Erik Trimble wrote: Since the best way to get this is to use a Mirror or RAIDZ vdev, I'm assuming that the proper way to get benefits from both ZFS and HW RAID is the following: (1) ZFS mirror of HW stripes, i.e. "zpool create tank mirror hwStripe1 hwStripe2" (2) ZFS RAIDZ of HW mirrors
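A minimal sketch of the two layouts quoted above, assuming hypothetical array-exported LUN names (hwStripe*, hwMirror*); the actual device names depend on the array:

    # (1) ZFS mirror of two HW stripes
    zpool create tank mirror hwStripe1 hwStripe2

    # (2) ZFS RAID-Z across three or more HW mirrors
    zpool create tank raidz hwMirror1 hwMirror2 hwMirror3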

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Richard Elling
[hit send too soon...] Richard Elling wrote: Erik Trimble wrote: The main reason I don't see ZFS mirror / HW RAID5 as useful is this: ZFS mirror / RAID5: capacity = (N/2) - 1; speed << (N/2) - 1; minimum # disks to los

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Nathan Kroenert
On Thu, 2006-06-29 at 03:40, Nicolas Williams wrote: > But Joe makes a good point about RAID-Z and iSCSI. > > It'd be nice if RAID HW could assist RAID-Z, and it wouldn't take much > to do that: parity computation on write, checksum verification on read > and, if the checksum verification fails, c

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Jonathan Edwards
On Jun 28, 2006, at 17:25, Erik Trimble wrote: On Wed, 2006-06-28 at 13:24 -0400, Jonathan Edwards wrote: On Jun 28, 2006, at 12:32, Erik Trimble wrote: The main reason I don't see ZFS mirror / HW RAID5 as useful is this: ZFS mirror / RAID5: capacity = (N/2) - 1

Re: Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Erik Trimble
On Wed, 2006-06-28 at 14:55 -0700, Jeff Bonwick wrote: > > Which is better - > > zfs raidz on hardware mirrors, or zfs mirror on hardware raid-5? > > The latter. With a mirror of RAID-5 arrays, you get: > > (1) Self-healing data. > > (2) Tolerance of whole-array failure. > > (3) Tolerance of *

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Erik Trimble
On Wed, 2006-06-28 at 22:13 +0100, Peter Tribble wrote: > On Wed, 2006-06-28 at 17:32, Erik Trimble wrote: > > Given a reasonable number of hot-spares, I simply can't see the (very) > > marginal increase in safety given by using HW RAID5 as outweighing the > > considerable speed hit using RAID5

Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Robert Milkowski
Hello Erik, Wednesday, June 28, 2006, 6:32:38 PM, you wrote: ET> Robert - ET> I would definitely like to see the difference between read on HW RAID5 ET> vs read on RAIDZ. Naturally, one of the big concerns I would have is ET> how much RAM is needed to avoid any cache starvation on the ZFS ET

Re[4]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Robert Milkowski
Hello Peter, Wednesday, June 28, 2006, 11:24:32 PM, you wrote: PT> Robert, >> PT> You really need some level of redundancy if you're using HW raid. >> PT> Using plain stripes is downright dangerous. 0+1 vs 1+0 and all >> PT> that. Seems to me that the simplest way to go is to use zfs to mirror >

Re: Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Jeff Bonwick
> Which is better - > zfs raidz on hardware mirrors, or zfs mirror on hardware raid-5? The latter. With a mirror of RAID-5 arrays, you get: (1) Self-healing data. (2) Tolerance of whole-array failure. (3) Tolerance of *at least* three disk failures. (4) More IOPs than raidz of hardware mirror
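A minimal sketch of the layout Jeff recommends, assuming two hypothetical RAID-5 LUNs (r5lun0, r5lun1) exported by the array:

    # ZFS mirror over two hardware RAID-5 LUNs: ZFS checksums detect bad
    # blocks and repair them from the other side of the mirror, while each
    # array absorbs single-disk failures internally with its own parity.
    zpool create tank mirror r5lun0 r5lun1

The three-disk tolerance follows from the geometry: each array's parity absorbs one disk failure, and even losing one whole array leaves a surviving RAID-5 that can itself lose one more disk.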

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Erik Trimble
On Wed, 2006-06-28 at 13:24 -0400, Jonathan Edwards wrote: > > On Jun 28, 2006, at 12:32, Erik Trimble wrote: > > > The main reason I don't see ZFS mirror / HW RAID5 as useful is this: > > > ZFS mirror / RAID5: capacity = (N/2) - 1 > > > speed

Re: Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Peter Tribble
Robert, > PT> You really need some level of redundancy if you're using HW raid. > PT> Using plain stripes is downright dangerous. 0+1 vs 1+0 and all > PT> that. Seems to me that the simplest way to go is to use zfs to mirror > PT> HW raid5, preferably with the HW raid5 LUNs being completely > PT>

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Peter Tribble
On Wed, 2006-06-28 at 17:32, Erik Trimble wrote: > The main reason I don't see ZFS mirror / HW RAID5 as useful is this: > ZFS mirror / RAID5: capacity = (N/2) - 1 > speed << (N/2) - 1 > minimum # disks to lose before
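To make the capacity comparison concrete, a worked example (my arithmetic, not from the thread), assuming N = 12 identical disks and capacity measured in whole disks:

    ZFS mirror of two 6-disk HW RAID-5 LUNs:  usable = (12/2) - 1 = 5
    ZFS RAID-Z over six 2-disk HW mirrors:    usable = (12/2) - 1 = 5
    Plain HW RAID-5 over all 12 disks:        usable = 12 - 1 = 11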

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Nicolas Williams
On Wed, Jun 28, 2006 at 11:15:34AM +0200, Robert Milkowski wrote: > DV> If ZFS is providing better data integrity than the current storage > DV> arrays, that sounds to me like an opportunity for the next generation > DV> of intelligent arrays to become better. > RM> Actually they can't. RM> If yo

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Jonathan Edwards
On Jun 28, 2006, at 12:32, Erik Trimble wrote: The main reason I don't see ZFS mirror / HW RAID5 as useful is this: ZFS mirror / RAID5: capacity = (N/2) - 1; speed << (N/2) - 1; minimum # disks to lose before loss of data:

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Richard Elling
Erik Trimble wrote: The main reason I don't see ZFS mirror / HW RAID5 as useful is this: ZFS mirror / RAID5: capacity = (N/2) - 1; speed << (N/2) - 1; minimum # disks to lose before loss

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Erik Trimble
Robert Milkowski wrote: Hello Peter, Wednesday, June 28, 2006, 1:11:29 AM, you wrote: PT> On Tue, 2006-06-27 at 17:50, Erik Trimble wrote: PT> You really need some level of redundancy if you're using HW raid. PT> Using plain stripes is downright dangerous. 0+1 vs 1+0 and all PT> that. Seems to

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Darren J Moffat
Robert Milkowski wrote: Hello David, Wednesday, June 28, 2006, 12:30:54 AM, you wrote: DV> If ZFS is providing better data integrity than the current storage DV> arrays, that sounds to me like an opportunity for the next generation DV> of intelligent arrays to become better. Actually they can

Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Robert Milkowski
Hello Peter, Wednesday, June 28, 2006, 1:11:29 AM, you wrote: PT> On Tue, 2006-06-27 at 17:50, Erik Trimble wrote: PT> You really need some level of redundancy if you're using HW raid. PT> Using plain stripes is downright dangerous. 0+1 vs 1+0 and all PT> that. Seems to me that the simplest way

Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Robert Milkowski
Hello Erik, Tuesday, June 27, 2006, 6:50:52 PM, you wrote: ET> Personally, I can't think of a good reason to use ZFS with HW RAID5; ET> case (3) above seems to me to provide better performance with roughly ET> the same amount of redundancy (not quite true, but close). I can see a reason. In o

Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Robert Milkowski
Hello David, Wednesday, June 28, 2006, 12:30:54 AM, you wrote: DV> If ZFS is providing better data integrity than the current storage DV> arrays, that sounds to me like an opportunity for the next generation DV> of intelligent arrays to become better. Actually they can't. If you want end-to-end
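The host-side verification being described can be exercised directly; a minimal sketch, assuming a pool named tank that has ZFS-level redundancy (mirror or raidz):

    # Walk every allocated block, verify it against its checksum, and
    # repair any bad copies from the pool's redundancy:
    zpool scrub tank
    # Show the result, including per-device checksum error counts:
    zpool status -v tank

An array-resident checksum can only vouch for the bits after they reach the array; it cannot detect damage introduced anywhere on the path from the application down, which is the end-to-end gap the poster is describing.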

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Gregory Shaw
On Jun 27, 2006, at 3:30 PM, Al Hopper wrote: On Tue, 27 Jun 2006, Gregory Shaw wrote: Yes, but the idea of using software raid on a large server doesn't make sense in modern systems. If you've got a large database server that runs a large oracle instance, using CPU cycles for RAID is counter producti

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Peter Tribble
On Tue, 2006-06-27 at 17:50, Erik Trimble wrote: > Since the best way to get this is to use a Mirror or RAIDZ vdev, I'm > assuming that the proper way to get benefits from both ZFS and HW RAID > is the following: > > (1) ZFS mirror of HW stripes, i.e. "zpool create tank mirror > hwStripe1 hw

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread David Valin
Al Hopper wrote: > On Tue, 27 Jun 2006, Gregory Shaw wrote: > > >>Yes, but the idea of using software raid on a large server doesn't >>make sense in modern systems. If you've got a large database server >>that runs a large oracle instance, using CPU cycles for RAID is >>counter productive. Add

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Al Hopper
On Tue, 27 Jun 2006, Gregory Shaw wrote: > Yes, but the idea of using software raid on a large server doesn't > make sense in modern systems. If you've got a large database server > that runs a large oracle instance, using CPU cycles for RAID is > counter productive. Add to that the need to mana

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Torrey McMahon
Your example would prove more effective if you added, "I've got ten databases. Five on AIX, Five on Solaris 8" Peter Rival wrote: I don't like to top-post, but there's no better way right now. This issue has recurred several times and there have been no answers to it that cover the bases.

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Torrey McMahon
[EMAIL PROTECTED] wrote: That's the dilemma, the array provides nice features like RAID1 and RAID5, but those are of no real use when using ZFS. RAID5 is not a "nice" feature when it breaks. A RAID controller cannot guarantee that all bits of a RAID5 stripe are written when power fai

Re: Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Joe Little
On 6/27/06, Erik Trimble <[EMAIL PROTECTED]> wrote: Darren J Moffat wrote: > Peter Rival wrote: > >> storage arrays with the same arguments over and over without >> providing an answer to the customer problem doesn't do anyone any >> good. So. I'll restate the question. I have a 10TB database

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Dale Ghent
Gregory Shaw wrote: Yes, but the idea of using software raid on a large server doesn't make sense in modern systems. If you've got a large database server that runs a large oracle instance, using CPU cycles for RAID is counter productive. Add to that the need to manage the hardware directly (

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Erik Trimble
Darren J Moffat wrote: Peter Rival wrote: storage arrays with the same arguments over and over without providing an answer to the customer problem doesn't do anyone any good. So. I'll restate the question. I have a 10TB database that's spread over 20 storage arrays that I'd like to migrat

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Richard Elling
Peter Rival wrote: I don't like to top-post, but there's no better way right now. This issue has recurred several times and there have been no answers to it that cover the bases. The question is, say I as a customer have a database, let's say it's around 8 TB, all built on a series of high en

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Jeff Victor
Peter Rival wrote: See, telling folks "you should just use JBOD" when they don't have JBOD and have invested millions to get to the state they're in, where they're efficiently utilizing their storage via a SAN infrastructure, is just plain one big waste of everyone's time. Shouting down the advant

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Darren J Moffat
Peter Rival wrote: storage arrays with the same arguments over and over without providing an answer to the customer problem doesn't do anyone any good. So. I'll restate the question. I have a 10TB database that's spread over 20 storage arrays that I'd like to migrate to ZFS. How should I co

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Peter Rival
I don't like to top-post, but there's no better way right now. This issue has recurred several times and there have been no answers to it that cover the bases. The question is, say I as a customer have a database, let's say it's around 8 TB, all built on a series of high end storage arrays th

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Gregory Shaw
Yes, but the idea of using software raid on a large server doesn't make sense in modern systems. If you've got a large database server that runs a large oracle instance, using CPU cycles for RAID is counter productive. Add to that the need to manage the hardware directly (drive microcode,

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Jeff Victor
Does it make sense to solve these problems piecemeal: * Performance: ZFS algorithms and NVRAM * Error detection: ZFS checksums * Error correction: ZFS RAID1 or RAIDZ Nathanael Burton wrote: If you've got hardware raid-5, why not just run regular (non-raid) pools on top of the raid-5? I wouldn

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Robert Milkowski
Hello Nathanael, NB> I'm a little confused by the first poster's message as well, but NB> you lose some benefits of ZFS if you don't create your pools with NB> either RAID1 or RAIDZ, such as data corruption detection. The NB> array isn't going to detect that because all it knows about are blocks

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Roch
Mika Borner writes: > >RAID5 is not a "nice" feature when it breaks. > Let me correct myself... RAID5 is a "nice" feature for systems without ZFS... > >Are huge write caches really an advantage? Or are you talking about huge > >write caches with non-volatile storage? > Yes,

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Mika Borner
>RAID5 is not a "nice" feature when it breaks. Let me correct myself... RAID5 is a "nice" feature for systems without ZFS... >Are huge write caches really an advantage? Or are you talking about huge >write caches with non-volatile storage? Yes, you are right. The huge cache is needed mostly beca

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Casper . Dik
>That's the dilemma, the array provides nice features like RAID1 and >RAID5, but those are of no real use when using ZFS. RAID5 is not a "nice" feature when it breaks. A RAID controller cannot guarantee that all bits of a RAID5 stripe are written when power fails; then you have data corruptio
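A compact illustration of the write hole being described (notation mine, not from the thread): with parity P = D1 xor D2 xor D3 over three data blocks in a stripe,

    Before crash:  P = D1 xor D2 xor D3        stripe consistent
    Power fails:   D2' written, P not updated  stripe silently inconsistent
    Later:         disk holding D1 fails
    Rebuild:       D2' xor D3 xor P != D1      reconstructed data is garbage

ZFS avoids this at the pool level because RAID-Z writes are full-stripe and copy-on-write, so there is never a partially updated stripe to reconstruct from.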

[zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Mika Borner
>I'm a little confused by the first poster's message as well, but you lose some benefits of ZFS if you don't create your pools with either RAID1 or RAIDZ, such as data corruption detection. The array isn't going to detect that because all it knows about are blocks. That's the dilemma, the arra

[zfs-discuss] Re: ZFS and Storage

2006-06-26 Thread Nathanael Burton
> If you've got hardware raid-5, why not just run regular (non-raid) pools on top of the raid-5? I wouldn't go back to JBOD. Hardware arrays offer a number of advantages over JBOD: - disk microcode management - optimized access to storage - large write cache
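For reference, the configuration this original poster asks about is a plain striped pool over array LUNs; a minimal sketch, assuming two hypothetical RAID-5 LUNs named r5lun0 and r5lun1:

    # Non-redundant pool over HW RAID-5 LUNs. ZFS checksums still DETECT
    # corruption on read, but with no mirror or raidz vdev there is no
    # second copy to repair from; disk failures are left to the array.
    zpool create tank r5lun0 r5lun1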