Re: [zfs-discuss] # devices in raidz.

2007-04-11 Thread Cindy Swearingen
Mike, This RFE is still being worked on and I have no ETA on completion... cs Mike Seda wrote: I noticed that there is still an open bug regarding removing devices from a zpool: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4852783 Does anyone know if or when this feature will be implemented?

Re: [zfs-discuss] # devices in raidz.

2007-04-10 Thread Mike Seda
I noticed that there is still an open bug regarding removing devices from a zpool: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4852783 Does anyone know if or when this feature will be implemented? Cindy Swearingen wrote: Hi Mike, Yes, outside of the hot-spares feature, you can detach, offline, and replace existing devices in a pool, but you can't remove devices, yet.

Re: [zfs-discuss] # devices in raidz.

2006-11-15 Thread Richard Elling - PAE
Torrey McMahon wrote: Richard Elling - PAE wrote: Torrey McMahon wrote: Robert Milkowski wrote: Hello Torrey, Friday, November 10, 2006, 11:31:31 PM, you wrote: [SNIP] A tunable in the form of a pool property, with a default of 100%. On the other hand, maybe the simple algorithm Veritas has used is good enough

Re: [zfs-discuss] # devices in raidz.

2006-11-14 Thread Torrey McMahon
Richard Elling - PAE wrote: Torrey McMahon wrote: Robert Milkowski wrote: Hello Torrey, Friday, November 10, 2006, 11:31:31 PM, you wrote: [SNIP] A tunable in the form of a pool property, with a default of 100%. On the other hand, maybe the simple algorithm Veritas has used is good enough - a simple delay be

Re: [zfs-discuss] # devices in raidz.

2006-11-13 Thread Richard Elling - PAE
Torrey McMahon wrote: Robert Milkowski wrote: Hello Torrey, Friday, November 10, 2006, 11:31:31 PM, you wrote: TM> Robert Milkowski wrote: Also scrub can consume all CPU power on smaller and older machines and that's not always what I would like. REP> The big question, though, is "10% of

Re: [zfs-discuss] # devices in raidz.

2006-11-13 Thread Cindy Swearingen
Hi Mike, Yes, outside of the hot-spares feature, you can detach, offline, and replace existing devices in a pool, but you can't remove devices, yet. This feature work is being tracked under this RFE: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4852783 Cindy Mike Seda wrote:
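For reference, a minimal sketch of the operations Cindy describes; the pool and device names (tank, c1t2d0, c1t3d0) are invented for illustration:

    # replace a failed or suspect device in place; ZFS resilvers onto the new disk
    zpool replace tank c1t2d0 c1t3d0
    # temporarily offline a device, then bring it back
    zpool offline tank c1t2d0
    zpool online tank c1t2d0
    # detach one side of a mirror (mirror members only)
    zpool detach tank c1t2d0
    # removing a top-level vdev to shrink the pool is the operation tracked by RFE 4852783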

Re: [zfs-discuss] # devices in raidz.

2006-11-13 Thread Mike Seda
Hi All, From reading the docs, it seems that you can add devices (non-spares) to a zpool, but you cannot take them away, right? Best, Mike Victor Latushkin wrote: Maybe something like the "slow" parameter of VxVM? slow[=iodelay] Reduces the system performan

Re: [zfs-discuss] # devices in raidz.

2006-11-13 Thread Victor Latushkin
Maybe something like the "slow" parameter of VxVM? slow[=iodelay] Reduces the system performance impact of copy operations. Such operations are usually performed on small regions of the volume (nor-
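If memory serves, this option is passed to the VxVM recovery utilities such as vxrecover as -o slow[=iodelay]; the sketch below is recalled rather than verified, and the disk group name is invented:

    # slow down recovery copy I/O; the 250 ms delay value is an assumed example
    vxrecover -g mydg -o slow=250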

Re: [zfs-discuss] # devices in raidz.

2006-11-13 Thread Torrey McMahon
Howdy Robert. Robert Milkowski wrote: You've got the same behavior with any LVM when you replace a disk. So it's not something unexpected for admins. Also, most of the time they expect the LVM to resilver ASAP. With a default setting that isn't 100%, you'll definitely see people complaining that ZFS is slooo

Re: [zfs-discuss] # devices in raidz.

2006-11-12 Thread Torrey McMahon
Robert Milkowski wrote: Hello Torrey, Friday, November 10, 2006, 11:31:31 PM, you wrote: TM> Robert Milkowski wrote: Also scrub can consume all CPU power on smaller and older machines and that's not always what I would like. REP> The big question, though, is "10% of what?" User CPU? iop

Re: [zfs-discuss] # devices in raidz.

2006-11-10 Thread Torrey McMahon
Robert Milkowski wrote: Also scrub can consume all CPU power on smaller and older machines and that's not always what I would like. REP> The big question, though, is "10% of what?" User CPU? iops? AH> Probably N% of I/O Ops/Second would work well. Or if 100% means full speed, then 1

Re: [zfs-discuss] # devices in raidz.

2006-11-07 Thread Daniel Rock
Richard Elling - PAE schrieb: For modern machines, which *should* be the design point, the channel bandwidth is underutilized, so why not use it? And what about encrypted disks? Simply create a zpool with checksum=sha256, fill it up, then scrub. I'd be happy if I could use my machine during s
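A rough sketch of the test Daniel describes; the device, pool, and file names plus the dd sizes are arbitrary placeholders:

    zpool create tank c1t1d0
    zfs set checksum=sha256 tank
    # fill the pool with data so the scrub has plenty to verify
    dd if=/dev/urandom of=/tank/fill bs=1024k count=10000
    # start the scrub and watch interactive responsiveness while it runs
    zpool scrub tank
    zpool status tank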

Re: [zfs-discuss] # devices in raidz.

2006-11-07 Thread Torrey McMahon
Richard Elling - PAE wrote: The better approach is for the file system to do what it needs to do as efficiently as possible, which is the current state of ZFS. This implies that the filesystem has exclusive use of the channel - SAN or otherwise - as well as the storage array front end control

Re: [zfs-discuss] # devices in raidz.

2006-11-07 Thread Richard Elling - PAE
Daniel Rock wrote: Richard Elling - PAE schrieb: The big question, though, is "10% of what?" User CPU? iops? Maybe something like the "slow" parameter of VxVM? slow[=iodelay] Reduces the system performance impact of copy operations. Su

Re: [zfs-discuss] # devices in raidz.

2006-11-07 Thread Daniel Rock
Richard Elling - PAE schrieb: The big question, though, is "10% of what?" User CPU? iops? Maybe something like the "slow" parameter of VxVM? slow[=iodelay] Reduces the system performance impact of copy operations. Such operations are us

Re: [zfs-discuss] # devices in raidz.

2006-11-07 Thread Richard Elling - PAE
Robert Milkowski wrote: Saturday, November 4, 2006, 12:46:05 AM, you wrote: REP> Incidentally, since ZFS schedules the resync iops itself, then it can REP> really move along on a mostly idle system. You should be able to resync REP> at near the media speed for an idle system. By contrast, a har

Re: [zfs-discuss] # devices in raidz.

2006-11-06 Thread Torrey McMahon
Richard Elling - PAE wrote: Incidentally, since ZFS schedules the resync iops itself, then it can really move along on a mostly idle system. You should be able to resync at near the media speed for an idle system. By contrast, a hardware RAID array has no knowledge of the context of the data

Re: [zfs-discuss] # devices in raidz.

2006-11-03 Thread Richard Elling - PAE
Al Hopper wrote: [1] Using MTTDL = MTBF^2 / (N * (N-1) * MTTR) But ... I'm not sure I buy into your numbers given the probability that more than one disk will fail inside the service window - given that the disks are identical? Or ... a disk failure occurs at 5:01 PM (quitting time) on a Frida
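Plugging illustrative (assumed, not measured) figures into that formula, say MTBF = 10^6 hours, N = 5 disks, MTTR = 24 hours:

    \mathrm{MTTDL} = \frac{\mathrm{MTBF}^2}{N(N-1)\,\mathrm{MTTR}}
                   = \frac{(10^6\ \mathrm{h})^2}{5 \cdot 4 \cdot 24\ \mathrm{h}}
                   \approx 2.1 \times 10^9\ \mathrm{h}
                   \approx 2.4 \times 10^5\ \mathrm{years}

Shrinking MTTR (i.e. a faster resilver) raises MTTDL proportionally, which is the thread's argument for resilvering quickly on an idle system.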

Re: [zfs-discuss] # devices in raidz.

2006-11-03 Thread Al Hopper
On Fri, 3 Nov 2006, Richard Elling - PAE wrote: > ozan s. yigit wrote: > > for s10u2, documentation recommends 3 to 9 devices in raidz. what is the > > basis for this recommendation? i assume it is performance and not failure > > resilience, but i am just guessing... [i know, recommendation was in

Re: [zfs-discuss] # devices in raidz.

2006-11-03 Thread Richard Elling - PAE
ozan s. yigit wrote: for s10u2, documentation recommends 3 to 9 devices in raidz. what is the basis for this recommendation? i assume it is performance and not failure resilience, but i am just guessing... [i know, recommendation was intended for people who know their raid cold, so it needed no f
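For illustration, with invented device names, a layout that keeps each raidz group inside the recommended 3 to 9 device range:

    # one 5-disk raidz group (4 data + 1 parity)
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
    # with more disks, add another raidz group rather than widening the first
    zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0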

Re: [zfs-discuss] # devices in raidz.

2006-11-03 Thread Robert Milkowski
Hello ozan, Friday, November 3, 2006, 3:57:00 PM, you wrote: osy> for s10u2, documentation recommends 3 to 9 devices in raidz. what is the osy> basis for this recommendation? i assume it is performance and not failure osy> resilience, but i am just guessing... [i know, recommendation was intended