Hello Philip,
Thursday, June 29, 2006, 2:58:41 AM, you wrote:
PB> Erik Trimble wrote:
>>
>> Since the best way to get this is to use a Mirror or RAIDZ vdev, I'm
>> assuming that the proper way to get benefits from both ZFS and HW RAID
>> is the following:
>>
>> (1) ZFS mirror of HW stripes, i.e. "zpool create tank mirror hwStripe1 hwStripe2"
On Jun 28, 2006, at 18:25, Erik Trimble wrote:
On Wed, 2006-06-28 at 14:55 -0700, Jeff Bonwick wrote:
Which is better - zfs raidz on hardware mirrors, or zfs mirror on hardware raid-5?
The latter. With a mirror of RAID-5 arrays, you get:
(1) Self-healing data.
(2) Tolerance of whole-array failure.
(3) Tolerance of *at least* three disk failures.
On 6/28/06, Nathan Kroenert <[EMAIL PROTECTED]> wrote:
On Thu, 2006-06-29 at 03:40, Nicolas Williams wrote:
> But Joe makes a good point about RAID-Z and iSCSI.
>
> It'd be nice if RAID HW could assist RAID-Z, and it wouldn't take much
> to do that: parity computation on write, checksum verification on read
Philip Brown wrote:
raid5 IS useful in zfs+hwraid boxes, for "Mean Time To Recover" purposes.
Or, and people haven't really mentioned this yet, if you're using R5 for
the raid set and carving LUNs out of it to multiple hosts.
On Thu, Jun 29, 2006 at 09:25:21AM +1000, Nathan Kroenert wrote:
> On Thu, 2006-06-29 at 03:40, Nicolas Williams wrote:
> > But Joe makes a good point about RAID-Z and iSCSI.
> >
> > It'd be nice if RAID HW could assist RAID-Z, and it wouldn't take much
> > to do that: parity computation on write, checksum verification on read
Erik Trimble wrote:
Since the best way to get this is to use a Mirror or RAIDZ vdev, I'm
assuming that the proper way to get benefits from both ZFS and HW RAID
is the following:
(1) ZFS mirror of HW stripes, i.e. "zpool create tank mirror
hwStripe1 hwStripe2"
(2) ZFS RAIDZ of HW mirrors
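For concreteness, the two layouts might look like the following, assuming the array exports each HW stripe or HW mirror as a single LUN (the hwStripe*/hwMirror* device names are illustrative, not real):

  # (1) ZFS mirror across two hardware stripes
  zpool create tank mirror hwStripe1 hwStripe2

  # (2) ZFS RAIDZ across three hardware mirrors
  zpool create tank raidz hwMirror1 hwMirror2 hwMirror3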
[hit send too soon...]
Richard Elling wrote:
Erik Trimble wrote:
The main reason I don't see ZFS mirror / HW RAID5 as useful is this:
ZFS mirror/ RAID5: capacity = (N / 2) -1
speed << N / 2 -1
minimum # disks to lose before loss of data:
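To make the capacity arithmetic above concrete (the numbers are chosen for illustration, not taken from the thread): take N = 12 disks, split into two 6-disk HW RAID5 arrays, mirrored by ZFS.

  each 6-disk RAID5 array: 6 - 1 = 5 disks of usable capacity
  ZFS mirror of the two arrays: min(5, 5) = 5 disks usable
  => capacity = (N / 2) - 1 = (12 / 2) - 1 = 5 disks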
On Thu, 2006-06-29 at 03:40, Nicolas Williams wrote:
> But Joe makes a good point about RAID-Z and iSCSI.
>
> It'd be nice if RAID HW could assist RAID-Z, and it wouldn't take much
> to do that: parity computation on write, checksum verification on read
> and, if the checksum verification fails, c
On Jun 28, 2006, at 17:25, Erik Trimble wrote:
On Wed, 2006-06-28 at 13:24 -0400, Jonathan Edwards wrote:
On Jun 28, 2006, at 12:32, Erik Trimble wrote:
The main reason I don't see ZFS mirror / HW RAID5 as useful is this:
ZFS mirror/ RAID5: capacity = (N / 2) -1
On Wed, 2006-06-28 at 14:55 -0700, Jeff Bonwick wrote:
> > Which is better -
> > zfs raidz on hardware mirrors, or zfs mirror on hardware raid-5?
>
> The latter. With a mirror of RAID-5 arrays, you get:
>
> (1) Self-healing data.
>
> (2) Tolerance of whole-array failure.
>
> (3) Tolerance of *at least* three disk failures.
On Wed, 2006-06-28 at 22:13 +0100, Peter Tribble wrote:
> On Wed, 2006-06-28 at 17:32, Erik Trimble wrote:
> Given a reasonable number of hot-spares, I simply can't see the (very)
> marginal increase in safety given by using HW RAID5 as outbalancing the
> > considerable speed hit using RAID5
Hello Erik,
Wednesday, June 28, 2006, 6:32:38 PM, you wrote:
ET> Robert -
ET> I would definitely like to see the difference between read on HW RAID5
ET> vs read on RAIDZ. Naturally, one of the big concerns I would have is
ET> how much RAM is needed to avoid any cache starvation on the ZFS
ET
Hello Peter,
Wednesday, June 28, 2006, 11:24:32 PM, you wrote:
PT> Robert,
>> PT> You really need some level of redundancy if you're using HW raid.
>> PT> Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
>> PT> that. Seems to me that the simplest way to go is to use zfs to mirror
>
> Which is better -
> zfs raidz on hardware mirrors, or zfs mirror on hardware raid-5?
The latter. With a mirror of RAID-5 arrays, you get:
(1) Self-healing data.
(2) Tolerance of whole-array failure.
(3) Tolerance of *at least* three disk failures.
(4) More IOPs than raidz of hardware mirror
On Wed, 2006-06-28 at 13:24 -0400, Jonathan Edwards wrote:
>
> On Jun 28, 2006, at 12:32, Erik Trimble wrote:
>
> > The main reason I don't see ZFS mirror / HW RAID5 as useful is this:
> >
> >
> > ZFS mirror/ RAID5: capacity = (N / 2) -1
> >
> > speed << N / 2 -1
Robert,
> PT> You really need some level of redundancy if you're using HW raid.
> PT> Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
> PT> that. Seems to me that the simplest way to go is to use zfs to mirror
> PT> HW raid5, preferably with the HW raid5 LUNs being completely
> PT>
On Wed, 2006-06-28 at 17:32, Erik Trimble wrote:
> The main reason I don't see ZFS mirror / HW RAID5 as useful is this:
>
> ZFS mirror/ RAID5: capacity = (N / 2) -1
> speed << N / 2 -1
> minimum # disks to lose before loss of data:
On Wed, Jun 28, 2006 at 11:15:34AM +0200, Robert Milkowski wrote:
> DV> If ZFS is providing better data integrity than the current storage
> DV> arrays, that sounds like to me an opportunity for the next generation
> DV> of intelligent arrays to become better.
>
RM> Actually they can't.
RM> If you want end-to-end
On Jun 28, 2006, at 12:32, Erik Trimble wrote:
The main reason I don't see ZFS mirror / HW RAID5 as useful is this:
ZFS mirror/ RAID5: capacity = (N / 2) -1
speed << N / 2 -1
minimum # disks to lose before loss of data:
Erik Trimble wrote:
The main reason I don't see ZFS mirror / HW RAID5 as useful is this:
ZFS mirror/ RAID5: capacity = (N / 2) -1
speed << N / 2 -1
minimum # disks to lose before loss of data:
Robert Milkowski wrote:
Hello Peter,
Wednesday, June 28, 2006, 1:11:29 AM, you wrote:
PT> On Tue, 2006-06-27 at 17:50, Erik Trimble wrote:
PT> You really need some level of redundancy if you're using HW raid.
PT> Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
PT> that. Seems to
Robert Milkowski wrote:
Hello David,
Wednesday, June 28, 2006, 12:30:54 AM, you wrote:
DV> If ZFS is providing better data integrity than the current storage
DV> arrays, that sounds like to me an opportunity for the next generation
DV> of intelligent arrays to become better.
Actually they can't.
Hello Peter,
Wednesday, June 28, 2006, 1:11:29 AM, you wrote:
PT> On Tue, 2006-06-27 at 17:50, Erik Trimble wrote:
PT> You really need some level of redundancy if you're using HW raid.
PT> Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
PT> that. Seems to me that the simplest way
Hello Erik,
Tuesday, June 27, 2006, 6:50:52 PM, you wrote:
ET> Personally, I can't think of a good reason to use ZFS with HW RAID5;
ET> case (3) above seems to me to provide better performance with roughly
ET> the same amount of redundancy (not quite true, but close).
I can see a reason.
In o
Hello David,
Wednesday, June 28, 2006, 12:30:54 AM, you wrote:
DV> If ZFS is providing better data integrity than the current storage
DV> arrays, that sounds like to me an opportunity for the next generation
DV> of intelligent arrays to become better.
Actually they can't.
If you want end-to-end
On Jun 27, 2006, at 3:30 PM, Al Hopper wrote:
On Tue, 27 Jun 2006, Gregory Shaw wrote:
Yes, but the idea of using software raid on a large server doesn't make sense in modern systems. If you've got a large database server that runs a large oracle instance, using CPU cycles for RAID is counter productive.
On Tue, 2006-06-27 at 17:50, Erik Trimble wrote:
> Since the best way to get this is to use a Mirror or RAIDZ vdev, I'm
> assuming that the proper way to get benefits from both ZFS and HW RAID
> is the following:
>
> (1) ZFS mirror of HW stripes, i.e. "zpool create tank mirror
> hwStripe1 hwStripe2"
Al Hopper wrote:
> On Tue, 27 Jun 2006, Gregory Shaw wrote:
>
>
>>Yes, but the idea of using software raid on a large server doesn't
>>make sense in modern systems. If you've got a large database server
>>that runs a large oracle instance, using CPU cycles for RAID is
>>counter productive. Add to that the need to manage the hardware directly
On Tue, 27 Jun 2006, Gregory Shaw wrote:
> Yes, but the idea of using software raid on a large server doesn't
> make sense in modern systems. If you've got a large database server
> that runs a large oracle instance, using CPU cycles for RAID is
> counter productive. Add to that the need to manage the hardware directly
Your example would prove more effective if you added, "I've got ten
databases. Five on AIX, Five on Solaris 8"
Peter Rival wrote:
I don't like to top-post, but there's no better way right now. This
issue has recurred several times and there have been no answers to it
that cover the bases.
[EMAIL PROTECTED] wrote:
That's the dilemma, the array provides nice features like RAID1 and
RAID5, but those are of no real use when using ZFS.
RAID5 is not a "nice" feature when it breaks.
A RAID controller cannot guarantee that all bits of a RAID5 stripe
> are written when power fails; then you have data corruption.
On 6/27/06, Erik Trimble <[EMAIL PROTECTED]> wrote:
Darren J Moffat wrote:
> Peter Rival wrote:
>
>> storage arrays with the same arguments over and over without
>> providing an answer to the customer problem doesn't do anyone any
>> good. So. I'll restate the question. I have a 10TB database that's
>> spread over 20 storage arrays that I'd like to migrate to ZFS.
Gregory Shaw wrote:
Yes, but the idea of using software raid on a large server doesn't make
sense in modern systems. If you've got a large database server that
runs a large oracle instance, using CPU cycles for RAID is counter
productive. Add to that the need to manage the hardware directly (
Darren J Moffat wrote:
Peter Rival wrote:
storage arrays with the same arguments over and over without
providing an answer to the customer problem doesn't do anyone any
good. So. I'll restate the question. I have a 10TB database that's
spread over 20 storage arrays that I'd like to migrate to ZFS.
Peter Rival wrote:
I don't like to top-post, but there's no better way right now. This
issue has recurred several times and there have been no answers to it
that cover the bases. The question is, say I as a customer have a
database, let's say it's around 8 TB, all built on a series of high end storage arrays
Peter Rival wrote:
See, telling folks "you should just use JBOD" when they don't have JBOD
and have invested millions to get to state they're in where they're
efficiently utilizing their storage via a SAN infrastructure is just
plain one big waste of everyone's time. Shouting down the advantages
Peter Rival wrote:
storage arrays with the same arguments over and over without providing
an answer to the customer problem doesn't do anyone any good. So. I'll
restate the question. I have a 10TB database that's spread over 20
storage arrays that I'd like to migrate to ZFS. How should I co
I don't like to top-post, but there's no better way right now. This issue has
recurred several times and there have been no answers to it that cover the
bases. The question is, say I as a customer have a database, let's say it's
around 8 TB, all built on a series of high end storage arrays th
Yes, but the idea of using software raid on a large server doesn't
make sense in modern systems. If you've got a large database server
that runs a large oracle instance, using CPU cycles for RAID is
counter productive. Add to that the need to manage the hardware
directly (drive microcode,
Does it make sense to solve these problems piece-meal:
* Performance: ZFS algorithms and NVRAM
* Error detection: ZFS checksums
* Error correction: ZFS RAID1 or RAIDZ
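Detection and correction can be exercised directly from the command line; a quick sketch, assuming a pool named tank:

  # walk every block, verify its checksum, and repair from redundancy where possible
  zpool scrub tank
  # the CKSUM column reports how many checksum errors were found (and healed)
  zpool status -v tank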
Nathanael Burton wrote:
If you've got hardware raid-5, why not just run regular (non-raid) pools on
top of the raid-5?
I wouldn't go back to JBOD.
Hello Nathanael,
NB> I'm a little confused by the first poster's message as well, but
NB> you lose some benefits of ZFS if you don't create your pools with
NB> either RAID1 or RAIDZ, such as data corruption detection. The
NB> array isn't going to detect that because all it knows about are blocks
Mika Borner writes:
> >RAID5 is not a "nice" feature when it breaks.
>
> Let me correct myself... RAID5 is a "nice" feature for systems without
> ZFS...
>
> >Are huge write caches really an advantage? Or are you talking about
> >huge write caches with non-volatile storage?
>
> Yes, you are right.
>RAID5 is not a "nice" feature when it breaks.
Let me correct myself... RAID5 is a "nice" feature for systems without
ZFS...
>Are huge write caches really an advantage? Or are you talking about
>huge write caches with non-volatile storage?
Yes, you are right. The huge cache is needed mostly because
>That's the dilemma, the array provides nice features like RAID1 and
>RAID5, but those are of no real use when using ZFS.
RAID5 is not a "nice" feature when it breaks.
A RAID controller cannot guarantee that all bits of a RAID5 stripe
are written when power fails; then you have data corruption.
>I'm a little confused by the first poster's message as well, but you
>lose some benefits of ZFS if you don't create your pools with either
>RAID1 or RAIDZ, such as data corruption detection. The array isn't
>going to detect that because all it knows about are blocks.
That's the dilemma, the array provides nice features like RAID1 and
RAID5, but those are of no real use when using ZFS.
> If you've got hardware raid-5, why not just run
> regular (non-raid)
> pools on top of the raid-5?
>
> I wouldn't go back to JBOD. Hardware arrays offer a
> number of
> advantages to JBOD:
> - disk microcode management
> - optimized access to storage
> - large write cache
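A sketch of the non-redundant layout being proposed here, assuming each HW RAID5 set is exported as one LUN (hwRaid5a/hwRaid5b are hypothetical names). ZFS can still detect corruption via checksums in this setup, but with no ZFS-level redundancy it has no second copy to heal from:

  # plain striped pool across two hardware RAID-5 LUNs (no ZFS redundancy)
  zpool create tank hwRaid5a hwRaid5b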