Re: [zfs-discuss] raidz DEGRADED state

2011-05-10 Thread Krzys
Ah, did not see your follow-up. Thanks. Chris On Thu, 30 Nov 2006, Cindy Swearingen wrote: > Sorry, Bart is correct: > > If new_device is not specified, it defaults to > old_device. This form of replacement is useful after an > existing disk has failed
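
The in-place form the man page describes looks roughly like this (a minimal sketch; pool and device names are hypothetical):

    # after physically swapping the failed disk in the same bay
    zpool replace tank c1t2d0    # no new_device given, so it defaults to old_device
    zpool status tank            # watch the resilver complete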

Re: [zfs-discuss] raidz DEGRADED state

2011-05-10 Thread Thomas Garner
So there is no current way to specify the creation of a 3 disk raid-z array with a known missing disk? On 12/5/06, David Bustos wrote: > Quoth Thomas Garner on Thu, Nov 30, 2006 at 06:41:15PM -0500: > > I currently have a 400GB disk that is full of data on a linux system. > > If I buy 2 more disk
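
A workaround often suggested for this (a community trick, not an official feature; names and sizes hypothetical) is to stand a sparse file in for the missing disk and offline it immediately, running the pool degraded until the real disk arrives:

    mkfile -n 400g /var/tmp/fake            # sparse file, consumes almost no space
    zpool create tank raidz c0t1d0 c0t2d0 /var/tmp/fake
    zpool offline tank /var/tmp/fake        # run degraded on 2 of 3 devices
    # later, when the third disk exists:
    zpool replace tank /var/tmp/fake c0t3d0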

Re: [zfs-discuss] raidz recovery

2010-12-21 Thread Gareth de Vaux
Hi, I'm copying the list - assume you meant to send it there. On Sun 2010-12-19 (15:52), Miles Nordin wrote: > If 'zpool replace /dev/ad6' will not accept that the disk is a > replacement, then you can unplug the disk, erase the label in a > different machine using > > dd if=/dev/zero of=/dev/the
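
The truncated dd command above presumably zeroes the old labels. ZFS keeps two labels at the front and two at the back of a device, so zeroing only the front is not always enough; a sketch (FreeBSD-style device name, sizes assumed):

    dd if=/dev/zero of=/dev/ad6 bs=1m count=4   # front labels
    # the back labels sit in the last 512KB of the device; on ZFS
    # versions that provide it, one command handles both ends:
    zpool labelclear -f /dev/ad6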

Re: [zfs-discuss] raidz recovery

2010-12-18 Thread Gareth de Vaux
On Sat 2010-12-18 (14:55), Tuomas Leikola wrote: > have you tried zpool replace? like remove ad6, fill with zeroes, > replace, command "zpool replace tank ad6". That should simulate drive > failure and replacement with a new disk. 'replace' requires a different disk to replace with. How do you "r

Re: [zfs-discuss] raidz recovery

2010-12-18 Thread Tuomas Leikola
On Wed, Dec 15, 2010 at 3:29 PM, Gareth de Vaux wrote: > On Mon 2010-12-13 (16:41), Marion Hakanson wrote: >> After you "clear" the errors, do another "scrub" before trying anything >> else.  Once you get a complete scrub with no new errors (and no checksum >> errors), you should be confident that

Re: [zfs-discuss] raidz recovery

2010-12-15 Thread Gareth de Vaux
On Mon 2010-12-13 (16:41), Marion Hakanson wrote: > After you "clear" the errors, do another "scrub" before trying anything > else. Once you get a complete scrub with no new errors (and no checksum > errors), you should be confident that the damaged drive has been fully > re-integrated into the po
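
Marion's verification loop reduces to the following (pool and device names taken from this thread; treat as a sketch):

    zpool clear pool ad6      # reset the error counters
    zpool scrub pool          # re-read and verify every block
    zpool status -v pool      # repeat the scrub until no new errors appear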

Re: [zfs-discuss] raidz recovery

2010-12-13 Thread Marion Hakanson
z...@lordcow.org said: > For example when I 'dd if=/dev/zero of=/dev/ad6', or physically remove the > drive for a while, then 'online' the disk, after it resilvers I'm typically > left with the following after scrubbing: > > r...@file:~# zpool status > pool: pool > state: ONLINE status: One or m

[zfs-discuss] raidz recovery

2010-12-11 Thread Gareth de Vaux
Hi all, I'm trying to simulate a drive failure and recovery on a raidz array. I'm able to do so using 'replace', but this requires an extra disk that was not part of the array. How do you manage when you don't have (or don't yet need) an extra disk? For example when I 'dd if=/dev/zero of=/dev/ad6', or phy
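
One way to stage this without a spare, sketched with the device name from the post (whether the resilver fully repairs a zeroed disk depends on how much of the label survives; an in-place 'zpool replace pool ad6' may still be needed afterwards):

    zpool offline pool ad6
    dd if=/dev/zero of=/dev/ad6 bs=1m count=100   # simulate damage
    zpool online pool ad6                         # triggers a resilver
    zpool scrub pool                              # verify the repair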

Re: [zfs-discuss] raidz faulted with only one unavailable disk

2010-10-08 Thread Hans-Christian Otto
Hi Cindy, > I don't think the force import of a degraded pool would cause the pool > to be faulted. In general, the I/O error is caused when ZFS can't access the > underlying devices. In this case, your non-standard device names > might have caused that message. as I wrote in my first mail, zpo

Re: [zfs-discuss] raidz faulted with only one unavailable disk

2010-10-08 Thread Cindy Swearingen
Hi Christian, Yes, with non-standard disks you will need to provide the path to zpool import. I don't think the force import of a degraded pool would cause the pool to be faulted. In general, the I/O error is caused when ZFS can't access the underlying devices. In this case, your non-standard d
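
A sketch of the import with an explicit search path (the directory is hypothetical; by default zpool import only scans /dev/dsk):

    zpool import -d /dev/dsk tank-1          # the default search location
    zpool import -d /var/tmp/pools tank-1    # file-backed or non-standard vdevs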

Re: [zfs-discuss] raidz faulted with only one unavailable disk

2010-10-08 Thread Hans-Christian Otto
Hi Cindy, > Can you provide the commands you used to create this pool? I don't have them anymore, no. But they were pretty much like what you wrote below. > Are the pool devices actually files? If so, I don't see how you > have a pool device that starts without a leading slash. I tried > to creat

Re: [zfs-discuss] raidz faulted with only one unavailable disk

2010-10-08 Thread Cindy Swearingen
Hi Hans-Christian, Can you provide the commands you used to create this pool? Are the pool devices actually files? If so, I don't see how you have a pool device that starts without a leading slash. I tried to create one and it failed. See the example below. By default, zpool import looks in the
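
The kind of file-backed test pool Cindy refers to is created with absolute paths, roughly like this (sizes and paths hypothetical):

    mkfile 100m /var/tmp/f1 /var/tmp/f2 /var/tmp/f3 /var/tmp/f4
    zpool create tank-1 raidz /var/tmp/f1 /var/tmp/f2 /var/tmp/f3 /var/tmp/f4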

[zfs-discuss] raidz faulted with only one unavailable disk

2010-10-07 Thread Hans-Christian Otto
Hi, I've been playing around with zfs for a few days and have now ended up with a faulted raidz (4 disks) with 3 disks still marked as online. Let's start with the output of zpool import: pool: tank-1 id: 15108774693087697468 state: FAULTED status: One or more devices contains corrupted da

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-12 Thread Adam Leventhal
> In my case, it gives an error that I need at least 11 disks (which I don't) > but the point is that raidz parity does not seem to be limited to 3. Is this > not true? RAID-Z is limited to 3 parity disks. The error message is giving you false hope and that's a bug. If you had plugged in 11 dis
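
The three supported parity levels, as a sketch (pick one; device names hypothetical):

    zpool create tank raidz1 c0t0d0 c0t1d0 c0t2d0                  # single parity
    zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0           # double parity
    zpool create tank raidz3 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0    # triple parity, the maximum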

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-12 Thread Arne Schwabe
On 11.08.10 00:40, Peter Taps wrote: > Hi, > > I am going through understanding the fundamentals of raidz. From the man > pages, a raidz configuration of P disks and N parity provides (P-N)*X storage > space where X is the size of the disk. For example, if I have 3 disks of 10G > each and I c

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Peter Taps
Thank you, Eric. Your explanation is easy to understand. Regards, Peter

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Peter Taps
I am running ZFS file system version 5 on Nexenta. Peter

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Marty Scholes
Peter wrote: > One question though. Marty mentioned that raidz > parity is limited to 3. But in my experiment, it > seems I can get parity to any level. > > You create a raidz zpool as: > > # zpool create mypool raidzx disk1 disk2 > > Here, x in raidzx is a numeric value indicating the > d

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Peter Taps
Thank you all for your help. It appears my understanding of parity was rather limited. I kept on thinking about parity in memory, where the extra bit would be used to ensure that the total of all 9 bits is always even. In the case of zfs, the above type of checking is actually moved into checksum.

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Eric D. Mudama
On Tue, Aug 10 at 21:57, Peter Taps wrote: Hi Eric, Thank you for your help. At least one part is clear now. I still am confused about how the system is still functional after one disk fails. The data for any given sector striped across all drives can be thought of as: A+B+C = P where A..C
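
Eric's "A+B+C = P" is XOR in practice. A worked example with arbitrary 4-bit values: if A = 1010, B = 0110, and C = 0011, then P = A xor B xor C = 1111. Lose disk B and its contents are recomputed from the survivors: A xor C xor P = 1010 xor 0011 xor 1111 = 0110 = B. The pool stays functional after one failure because every missing sector is recomputable this way.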

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Marty Scholes
Erik Trimble wrote: > On 8/10/2010 9:57 PM, Peter Taps wrote: > > Hi Eric, > > > > Thank you for your help. At least one part is clear > now. > > > > I still am confused about how the system is still > functional after one disk fails. > > > > Consider my earlier example of 3 disks zpool > configure

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Thomas Burgess
On Wed, Aug 11, 2010 at 12:57 AM, Peter Taps wrote: > Hi Eric, > > Thank you for your help. At least one part is clear now. > > I still am confused about how the system is still functional after one disk > fails. > > Consider my earlier example of 3 disks zpool configured for raidz-1. To > keep i

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-10 Thread Haudy Kazemi
Peter Taps wrote: Hi Eric, Thank you for your help. At least one part is clear now. I still am confused about how the system is still functional after one disk fails. Consider my earlier example of 3 disks zpool configured for raidz-1. To keep it simple let's not consider block sizes. Let's

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-10 Thread Erik Trimble
On 8/10/2010 9:57 PM, Peter Taps wrote: Hi Eric, Thank you for your help. At least one part is clear now. I still am confused about how the system is still functional after one disk fails. Consider my earlier example of 3 disks zpool configured for raidz-1. To keep it simple let's not consid

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-10 Thread Peter Taps
Hi Eric, Thank you for your help. At least one part is clear now. I still am confused about how the system is still functional after one disk fails. Consider my earlier example of 3 disks zpool configured for raidz-1. To keep it simple let's not consider block sizes. Let's say I send a write

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-10 Thread Eric D. Mudama
On Tue, Aug 10 at 15:40, Peter Taps wrote: Hi, First, I don't understand why parity takes so much space. From what I know about parity, there is typically one parity bit per byte. Therefore, the parity should be taking 1/8 of storage, not 1/3 of storage. What am I missing? Think of it as 1 bit
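
The resolution of the 1/8-vs-1/3 puzzle is that raidz parity is allocated per stripe, not per byte: in a 3-disk raidz1, each stripe holds 2 sectors of data plus 1 sector of parity, so parity consumes 1 of every 3 sectors written, i.e. 1/3 of raw capacity, and in general parity/(parity+data) of the pool.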

[zfs-discuss] Raidz - what is stored in parity?

2010-08-10 Thread Peter Taps
Hi, I am going through understanding the fundamentals of raidz. From the man pages, a raidz configuration of P disks and N parity provides (P-N)*X storage space where X is the size of the disk. For example, if I have 3 disks of 10G each and I configure it with raidz1, I will have 20G of usable

Re: [zfs-discuss] raidz capacity osol vs freebsd

2010-07-18 Thread Craig Cory
When viewing a raidz|raidz1|raidz2 pool, 'zpool list|status' will report the total "device" space; i.e., 3 1TB drives in a raidz will show approx. 3TB of space. 'zfs list' will show available FILESYSTEM space; i.e., 3 1TB raidz disks, approx. 2TB of space. Logic wrote: > Ian Collins (i...@ianshome.com) wrot
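
A side-by-side sketch of the two commands for a hypothetical 3 x 1TB raidz1 (numbers approximate):

    zpool list tank    # SIZE ~2.7T  -- raw device space, parity included
    zfs list tank      # AVAIL ~1.8T -- usable filesystem space, parity excluded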

Re: [zfs-discuss] raidz capacity osol vs freebsd

2010-07-17 Thread Logic
Ian Collins (i...@ianshome.com) wrote: > On 07/18/10 11:19 AM, marco wrote: >> *snip* >> >> > Yes, that is correct. zfs list reports usable space, which is 2 out of > the three drives (parity isn't confined to one device). > >> *snip* >> >> > Are you sure? That result looks odd. It is w

Re: [zfs-discuss] raidz capacity osol vs freebsd

2010-07-17 Thread Ian Collins
On 07/18/10 11:19 AM, marco wrote: I'm seeing weird differences between 2 raidz pools, 1 created on a recent freebsd 9.0-CURRENT amd64 box containing the zfs v15 bits, the other on an old osol build. The raidz pool on the fbsd box is created from 3 2TB SATA drives. The raidz pool on the osol box

[zfs-discuss] raidz capacity osol vs freebsd

2010-07-17 Thread marco
I'm seeing weird differences between 2 raidz pools, 1 created on a recent freebsd 9.0-CURRENT amd64 box containing the zfs v15 bits, the other on an old osol build. The raidz pool on the fbsd box is created from 3 2TB SATA drives. The raidz pool on the osol box was created in the past from 3 smalle

Re: [zfs-discuss] raidz using partitions

2010-01-28 Thread Lutz Schumann
Also write performance may drop because of write cache disable: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Storage_Pools Just a hint, have not tested this. Robert

Re: [zfs-discuss] raidz using partitions

2010-01-28 Thread Sanjeev
Albert, On Wed, Jan 27, 2010 at 10:55:21AM -0800, Albert Frenz wrote: > hi there, > > maybe this is a stupid question, yet I haven't found an answer anywhere ;) > let's say I've got 3x 1.5TB HDDs, can I create equal partitions out of each and > make a RAID5 out of it? Sure, the safety would drop, but

Re: [zfs-discuss] raidz using partitions

2010-01-27 Thread A Darren Dunham
On Wed, Jan 27, 2010 at 10:55:21AM -0800, Albert Frenz wrote: > hi there, > > maybe this is a stupid question, yet I haven't found an answer anywhere ;) > let's say I've got 3x 1.5TB HDDs, can I create equal partitions out of each and > make a RAID5 out of it? Sure, the safety would drop, but that is n

Re: [zfs-discuss] raidz using partitions

2010-01-27 Thread Albert Frenz
ok nice to know :) thank you very much for your quick answer

Re: [zfs-discuss] raidz using partitions

2010-01-27 Thread Thomas Burgess
On Wed, Jan 27, 2010 at 1:55 PM, Albert Frenz wrote: > hi there, > > maybe this is a stupid question, yet I haven't found an answer anywhere ;) > let's say I've got 3x 1.5TB HDDs, can I create equal partitions out of each and > make a RAID5 out of it? Sure, the safety would drop, but that is not that >

[zfs-discuss] raidz using partitions

2010-01-27 Thread Albert Frenz
Hi there, maybe this is a stupid question, yet I haven't found an answer anywhere ;) let's say I've got 3x 1.5TB HDDs, can I create equal partitions out of each and make a RAID5 out of it? Sure, the safety would drop, but that is not that important to me. With roughly 500GB partitions and the RAID5 fo
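
What the answers describe looks like this as a sketch (slice names hypothetical; note the write-cache caveat raised elsewhere in the thread, since ZFS only enables the disk write cache when it is given whole disks):

    zpool create tank raidz c0t0d0s3 c0t1d0s3 c0t2d0s3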

Re: [zfs-discuss] raidz stripe size (not stripe width)

2010-01-05 Thread Richard Elling
On Jan 4, 2010, at 7:08 PM, Brad wrote: Hi Adam, From your picture, it looks like the data is distributed evenly (with the exception of parity) across each spindle then wrapping around again (final 4K) - is this one single write operation or two? | P | D00 | D01 | D02 | D03 | D04 | D05 | D06 | D

Re: [zfs-discuss] raidz stripe size (not stripe width)

2010-01-05 Thread Kjetil Torgrim Homme
Brad writes: > Hi Adam, I'm not Adam, but I'll take a stab at it anyway. BTW, your crossposting is a bit confusing to follow, at least when using gmane.org. I think it is better to stick to one mailing list anyway? > From your picture, it looks like the data is distributed evenly > (with

Re: [zfs-discuss] raidz stripe size (not stripe width)

2010-01-04 Thread Brad
Hi Adam, From your picture, it looks like the data is distributed evenly (with the exception of parity) across each spindle then wrapping around again (final 4K) - is this one single write operation or two? | P | D00 | D01 | D02 | D03 | D04 | D05 | D06 | D07 | <-one write op

Re: [zfs-discuss] raidz stripe size (not stripe width)

2010-01-04 Thread Adam Leventhal
Hi Brad, RAID-Z will carve up the 8K blocks into chunks at the granularity of the sector size -- today 512 bytes but soon going to 4K. In this case a 9-disk RAID-Z vdev will look like this: | P | D00 | D01 | D02 | D03 | D04 | D05 | D06 | D07 | | P | D08 | D09 | D10 | D11 | D12 | D13 | D14 |
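
Worked through for Brad's case: an 8K block is 16 sectors of 512 bytes; a 9-disk raidz1 row holds 8 data sectors plus 1 parity sector, so the block occupies two rows, 16 data sectors plus 2 parity sectors in total, issued to the disks in parallel as part of a single logical write.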

[zfs-discuss] raidz stripe size (not stripe width)

2010-01-04 Thread Brad
If an 8K file system block is written on a 9 disk raidz vdev, how is the data distributed (written) between all devices in the vdev, since a zfs write is one continuous IO operation? Is it distributed evenly (1.125KB) per device?

Re: [zfs-discuss] raidz vs raid5 clarity needed

2009-12-29 Thread Brad
@ross "If the write doesn't span the whole stripe width then there is a read of the parity chunk, write of the block and a write of the parity chunk which is the write hole penalty/vulnerability, and is 3 operations (if the data spans more then 1 chunk then it is written in parallel so you can thi

Re: [zfs-discuss] raidz vs raid5 clarity needed

2009-12-29 Thread Ross Walker
On Dec 29, 2009, at 5:37 PM, Brad wrote: Hi! I'm attempting to understand the pros/cons between raid5 and raidz after running into a performance issue with Oracle on zfs (http://opensolaris.org/jive/thread.jspa?threadID=120703&tstart=0). I would appreciate some feedback on what I've und

Re: [zfs-discuss] raidz vs raid5 clarity needed

2009-12-29 Thread A Darren Dunham
On Tue, Dec 29, 2009 at 02:37:20PM -0800, Brad wrote: > I would appreciate some feedback on what I've understood so far: > > WRITES > > raid5 - A FS block is written on a single disk (or multiple disks depending on size data???) There is no direct relationship between a filesystem and the RAID s

[zfs-discuss] raidz vs raid5 clarity needed

2009-12-29 Thread Brad
Hi! I'm attempting to understand the pros/cons between raid5 and raidz after running into a performance issue with Oracle on zfs (http://opensolaris.org/jive/thread.jspa?threadID=120703&tstart=0). I would appreciate some feedback on what I've understood so far: WRITES raid5 - A FS block is

Re: [zfs-discuss] raidz data loss stories?

2009-12-27 Thread Al Hopper
I know I'm a bit late to contribute to this thread, but I'd still like to add my $0.02. My "gut feel" is that we (generally) don't yet understand the subtleties of disk drive failure modes as they relate to 1.5 or 2TB+ drives. Why? Because those large drives have not been widely available until

Re: [zfs-discuss] raidz data loss stories?

2009-12-25 Thread Adam Leventhal
>> Applying classic RAID terms to zfs is just plain >> wrong and misleading >> since zfs does not directly implement these classic >> RAID approaches >> even though it re-uses some of the algorithms for >> data recovery. >> Details do matter. > > That's not entirely true, is it? > * RAIDZ is RA

Re: [zfs-discuss] raidz data loss stories?

2009-12-23 Thread Eric D. Mudama
On Tue, Dec 22 at 12:33, James Risner wrote: As for whether or not to do raidz, for me the issue is performance. I can't handle the raidz write penalty. If I needed triple drive protection, a 3way mirror setup would be the only way I would go. I don't yet quite understand why a 4+ drive raidz3

Re: [zfs-discuss] raidz data loss stories?

2009-12-23 Thread Bob Friesenhahn
On Tue, 22 Dec 2009, Marty Scholes wrote: If there is a RAIDZ write penalty over mirroring, I am unaware of it. In fact, sequential writes are faster under RAIDZ. There is always an IOPS penalty for raidz when writing or reading, given a particular zfs block size. There may be a write pena

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Marc Bevand
Ross Walker gmail.com> writes: > > Scrubbing on a routine basis is good for detecting problems early, but > it doesn't solve the problem of a double failure during resilver. Scrubbing doesn't solve double failures, but it significantly decreases their likelihood. The assumption here is that t

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Frank Cusack
On December 21, 2009 10:45:29 PM -0500 Ross Walker wrote: Scrubbing on a routine basis is good for detecting problems early, but it doesn't solve the problem of a double failure during resilver. As the size of disks become huge the chance of a double failure during resilvering increases to the p

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Ross Walker
On Dec 22, 2009, at 11:46 AM, Bob Friesenhahn wrote: On Tue, 22 Dec 2009, Ross Walker wrote: Raid10 provides excellent performance and if performance is a priority then I recommend it, but I was under the impression that resiliency was the priority, as raidz2/raidz3 provide greater res

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Toby Thain
On 22-Dec-09, at 3:33 PM, James Risner wrote: ... Joerg Moellenkamp: I do "consider RAID5 as 'Stripeset with an interleaved Parity'", so I don't agree with the strong objection in this thread by many about the use of RAID5 to describe what raidz does. I don't think many particularly

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Travis Tabbal
> Everything I've seen says you should stay around 6-9 > drives for raidz, so don't do a raidz3 with 12 > drives. Instead make two raidz3 with 6 drives each > (which is (6-3)*1.5 * 2 = 9 TB array.) So the question becomes, why? If it's performance, I can live with lower IOPS and max throughput. If i

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Marty Scholes
risner wrote: > If I understand correctly, raidz{1} is 1 drive > protection and space is (drives - 1) available. > Raidz2 is 2 drive protection and space is (drives - > 2) etc. Same for raidz3 being 3 drive protection. Yes. > Everything I've seen you should stay around 6-9 > drives for raidz, so

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Bob Friesenhahn
On Tue, 22 Dec 2009, James Risner wrote: I do "consider RAID5 as 'Stripeset with an interleaved Parity'", so I don't agree with the strong objection in this thread by many about the use of RAID5 to describe what raidz does. I don't think many particularly care about the nuanced difference

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread James Risner
ttabbal: If I understand correctly, raidz{1} is 1 drive protection and space is (drives - 1) available. Raidz2 is 2 drive protection and space is (drives - 2) etc. Same for raidz3 being 3 drive protection. Everything I've seen says you should stay around 6-9 drives for raidz, so don't do

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Richard Elling
On Dec 22, 2009, at 11:49 AM, Toby Thain wrote: On 22-Dec-09, at 12:42 PM, Roman Naumenko wrote: On Tue, 22 Dec 2009, Ross Walker wrote: Applying classic RAID terms to zfs is just plain wrong and misleading since zfs does not directly implement these classic RAID approaches even though it

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Toby Thain
On 22-Dec-09, at 12:42 PM, Roman Naumenko wrote: On Tue, 22 Dec 2009, Ross Walker wrote: Applying classic RAID terms to zfs is just plain wrong and misleading since zfs does not directly implement these classic RAID approaches even though it re-uses some of the algorithms for data recovery.

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Marty Scholes
Bob Friesenhahn wrote: > On Tue, 22 Dec 2009, Marty Scholes wrote: > > > > That's not entirely true, is it? > > * RAIDZ is RAID5 + checksum + COW > > * RAIDZ2 is RAID6 + checksum + COW > > * A stack of mirror vdevs is RAID10 + checksum + > COW > > These are layman's simplifications that no one her

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Travis Tabbal
Interesting discussion. I know the bias here is generally toward enterprise users. I was wondering if the same recommendations hold for home users that are generally more price sensitive. I'm currently running OpenSolaris on a system with 12 drives. I had split them into 3 sets of 4 raidz1 array

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Bob Friesenhahn
On Tue, 22 Dec 2009, Roman Naumenko wrote: raid6 is raid6, no matter how you name it: raidz2, raid-dp, raid-ADG or somehow else. Sounds nice, but it's just buzzwords. It is true that many vendors like to make their storage array seem special, but references to RAID6 when describing raidz

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Joerg Moellenkamp
On 22.12.09 18:42, Roman Naumenko wrote: On Tue, 22 Dec 2009, Ross Walker wrote: Applying classic RAID terms to zfs is just plain wrong and misleading since zfs does not directly implement these classic RAID approaches even though it re-uses some of the algorithms for data recovery. Details do

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Bob Friesenhahn
On Tue, 22 Dec 2009, Marty Scholes wrote: That's not entirely true, is it? * RAIDZ is RAID5 + checksum + COW * RAIDZ2 is RAID6 + checksum + COW * A stack of mirror vdevs is RAID10 + checksum + COW These are layman's simplifications that no one here should be comfortable with. Zfs borrows pr

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Roman Naumenko
> On Tue, 22 Dec 2009, Ross Walker wrote: > Applying classic RAID terms to zfs is just plain > wrong and misleading since zfs does not directly implement these classic > RAID approaches > even though it re-uses some of the algorithms for data recovery. > Details do matter. > > Bob > -- > Bob F

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Marty Scholes
Bob Friesenhahn wrote: > Why are people talking about "RAID-5", RAID-6", and > "RAID-10" on this > list? This is the zfs-discuss list and zfs does not > do "RAID-5", > "RAID-6", or "RAID-10". > > Applying classic RAID terms to zfs is just plain > wrong and misleading > since zfs does not direc

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Bob Friesenhahn
On Tue, 22 Dec 2009, Ross Walker wrote: Raid10 provides excellent performance and if performance is a priority then I recommend it, but I was under the impression that resiliency was the priority, as raidz2/raidz3 provide greater resiliency for a sacrifice in performance. Why are people tal

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Marty Scholes
> > Hi Ross, > > > > What about old good raid10? It's a pretty > reasonable choice for > > heavy loaded storages, isn't it? > > > > I remember when I migrated raidz2 to 8xdrives > raid10 the application > > administrators were just really happy with the new > access speed. (we > > didn't use

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Ross Walker
On Dec 21, 2009, at 11:56 PM, Roman Naumenko wrote: On Dec 21, 2009, at 4:09 PM, Michael Herf wrote: Anyone who's lost data this way: were you doing weekly scrubs, or did you find out about the simultaneous failures after not touching the bits for months? Scrubbing on a routine basis i

Re: [zfs-discuss] raidz data loss stories?

2009-12-21 Thread Roman Naumenko
> On Dec 21, 2009, at 4:09 PM, Michael Herf > wrote: > > > Anyone who's lost data this way: were you doing > weekly scrubs, or > > did you find out about the simultaneous failures > after not touching > > the bits for months? > > Scrubbing on a routine basis is good for detecting > problems

Re: [zfs-discuss] raidz data loss stories?

2009-12-21 Thread Ross Walker
On Dec 21, 2009, at 4:09 PM, Michael Herf wrote: Anyone who's lost data this way: were you doing weekly scrubs, or did you find out about the simultaneous failures after not touching the bits for months? Scrubbing on a routine basis is good for detecting problems early, but it doesn't so

Re: [zfs-discuss] raidz data loss stories?

2009-12-21 Thread Adam Leventhal
Hey James, > Personally, I think mirroring is safer (and 3 way mirroring) than raidz/z2/5. > All my "boot from zfs" systems have 3 way mirrors root/usr/var disks (using > 9 disks) but all my data partitions are 2 way mirrors (usually 8 disks or > more and a spare.) Double-parity (or triple-pa

Re: [zfs-discuss] raidz data loss stories?

2009-12-21 Thread Michael Herf
Anyone who's lost data this way: were you doing weekly scrubs, or did you find out about the simultaneous failures after not touching the bits for months? mike
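
For reference, the weekly scrub Michael asks about is typically just a cron entry; a minimal sketch with a hypothetical pool name:

    # root crontab: scrub every Sunday at 03:00
    0 3 * * 0 /usr/sbin/zpool scrub tank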

Re: [zfs-discuss] raidz data loss stories?

2009-12-21 Thread Scott Meilicke
Yes, a coworker lost a second disk during a rebuild of a raid5 and lost all data. I have not had a failure, however when migrating EqualLogic arrays in and out of pools, I lost a disk on an array. No data loss, but it concerns me because during the moves, you are essentially reading and writing

Re: [zfs-discuss] raidz data loss stories?

2009-12-21 Thread James Risner
If you are asking if anyone has experienced two drive failures simultaneously? The answer is yes. It has happened to me (at home) and to one client, at least that I can remember. In both cases, I was able to dd off one of the failed disks (with just bad sectors or less bad sectors) and recons

[zfs-discuss] raidz data loss stories?

2009-12-20 Thread Frank Cusack
The zfs best practices page (and all the experts in general) talk about MTTDL and raidz2 is better than raidz and so on. Has anyone here ever actually experienced data loss in a raidz that has a hot spare? Of course, I mean from disk failure, not from bugs or admin error, etc. -frank

Re: [zfs-discuss] raidz-1 vs mirror

2009-11-11 Thread Richard Elling
On Nov 11, 2009, at 4:30 PM, Rob Logan wrote: from a two disk (10krpm) mirror layout to a three disk raidz-1. writes will be unnoticeably slower for raidz1 because of parity calculation and latency of a third spindle. but reads will be 1/2 the speed of the mirror because it can split the

Re: [zfs-discuss] raidz-1 vs mirror

2009-11-11 Thread Bob Friesenhahn
On Wed, 11 Nov 2009, Rob Logan wrote: from a two disk (10krpm) mirror layout to a three disk raidz-1. writes will be unnoticeably slower for raidz1 because of parity calculation and latency of a third spindle. but reads will be 1/2 the speed of the mirror because it can split the reads betw

Re: [zfs-discuss] raidz-1 vs mirror

2009-11-11 Thread Rob Logan
> from a two disk (10krpm) mirror layout to a three disk raidz-1. writes will be unnoticeably slower for raidz1 because of parity calculation and latency of a third spindle. but reads will be 1/2 the speed of the mirror because it can split the reads between two disks. another way to say the s

[zfs-discuss] raidz-1 vs mirror

2009-11-11 Thread Thomas Maier-Komor
Hi everybody, I am considering moving my data pool from a two disk (10krpm) mirror layout to a three disk raidz-1. This is just a single user workstation environment, where I mostly perform compile jobs. From past experiences with raid5 I am a little bit reluctant to do so, as software raid5 has a

Re: [zfs-discuss] raidz "ZFS Best Practices" wiki inconsistency

2009-10-22 Thread Cindy Swearingen
Thanks for your comments, Frank. I will take a look at the inconsistencies. Cindy On 10/22/09 08:29, Frank Cusack wrote: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide says that the number of disks in a RAIDZ sh

[zfs-discuss] raidz "ZFS Best Practices" wiki inconsistency

2009-10-22 Thread Frank Cusack
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide says that the number of disks in a RAIDZ should be (N+P) with N = {2,4,8} and P = {1,2}. But if you go down the page just a little further to the thumper configuration e

Re: [zfs-discuss] raidz failure, trying to recover

2009-10-05 Thread Victor Latushkin
Victor Latushkin wrote: Liam Slusser wrote: Long story short, my cat jumped on my server at my house crashing two drives at the same time. It was a 7 drive raidz (next time ill do raidz2). Long story short - we've been able to get access to data in the pool. This involved finding better old

Re: [zfs-discuss] RAIDZ v. RAIDZ1

2009-10-02 Thread David Stewart
Cindy: I believe I may have been mistaken. When I recreated the zpools, you are correct you receive different numbers for "zpool list" and "zfs list" for the sizes. I must have typed one command and then the other when creating the different pools. Thanks for the assist. Sheepish grin. Dav

Re: [zfs-discuss] RAIDZ v. RAIDZ1

2009-10-01 Thread Cindy Swearingen
David, When you get back to the original system, it would be helpful if you could provide a side-by-side comparison of the zpool create syntax and the zfs list output of both pools. Thanks, Cindy On 10/01/09 13:48, David Stewart wrote: Cindy: I am not at the machine right now, but I installe

Re: [zfs-discuss] RAIDZ v. RAIDZ1

2009-10-01 Thread David Stewart
Cindy: I am not at the machine right now, but I installed from the OpenSolaris 2009.06 LiveCD and have all of the updates installed. I have solely been using "zfs list" to look at the size of the pools. from a saved file on my laptop: me...@opensolarisnas:~$ zfs list NAME

Re: [zfs-discuss] RAIDZ v. RAIDZ1

2009-10-01 Thread Cindy Swearingen
Hi David, Which Solaris release is this? Are you sure you are using the same ZFS command to review the sizes of the raidz1 and raidz pools? The zpool list and zfs list commands will display different values. See the output below of my tank pool created with raidz or raidz1 redundancy. The pool

[zfs-discuss] RAIDZ v. RAIDZ1

2009-10-01 Thread David Stewart
So, I took four 1.5TB drives and made RAIDZ, RAIDZ1 and RAIDZ2 pools. The sizes for the pools were 5.3TB, 4.0TB, and 2.67TB respectively. The man page for RAIDZ states that "The raidz vdev type is an alias for raidz1." So why was there a difference between the sizes for RAIDZ and RAIDZ1? Sho

Re: [zfs-discuss] [ZFS-discuss] RAIDZ drive "removed" status

2009-09-29 Thread Volker A. Brandt
> > How do I identify which drive it is? I hear each drive spinning (I listened > > to them individually) so I can't simply select the one that is not spinning. > > You can try reading from each raw device, and looking for a blinky-light > to identify which one is active. If you don't have indivi

Re: [zfs-discuss] [ZFS-discuss] RAIDZ drive "removed" status

2009-09-29 Thread David Stewart
> You can try reading from each raw device, and looking > for a blinky-light > to identify which one is active. If you don't have > individual lights, > you may be able to hear which one is active. The > "dd" command should do. I received an email from another member (Ross) recommending the sa

Re: [zfs-discuss] [ZFS-discuss] RAIDZ drive "removed" status

2009-09-29 Thread Ross Walker
On Tue, Sep 29, 2009 at 5:30 PM, David Stewart wrote: > Before I try these options you outlined I do have a question.  I went in to > VMWare Fusion and removed one of the drives from the virtual machine that was > used to create a RAIDZ pool (there were five drives, one for the OS, and four > f

Re: [zfs-discuss] [ZFS-discuss] RAIDZ drive "removed" status

2009-09-29 Thread David Stewart
Before I try these options you outlined I do have a question. I went in to VMWare Fusion and removed one of the drives from the virtual machine that was used to create a RAIDZ pool (there were five drives, one for the OS, and four for the RAIDZ.) Instead of receiving the "removed" status that

Re: [zfs-discuss] [ZFS-discuss] RAIDZ drive "removed" status

2009-09-29 Thread Marion Hakanson
David Stewart wrote: > How do I identify which drive it is? I hear each drive spinning (I listened > to them individually) so I can't simply select the one that is not spinning. You can try reading from each raw device, and looking for a blinky-light to identify which one is active. If you don't
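
A sketch of the read test Marion suggests (device name hypothetical; a sustained raw read keeps one disk busy, so its light blinks or its spindle is audibly active):

    dd if=/dev/rdsk/c1t2d0s0 of=/dev/null bs=1024k count=1024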

Re: [zfs-discuss] [ZFS-discuss] RAIDZ drive "removed" status

2009-09-29 Thread Trevor Pretty
David That depends on the hardware layout. If you don't know, and you say the data is still somewhere else, you could: Pull a disk out and see what happens to the pool; the one you pulled will be highlighted as the pool loses all its replicas (clear "should" fix when you plug it back in.)

Re: [zfs-discuss] [ZFS-discuss] RAIDZ drive "removed" status

2009-09-29 Thread David Stewart
How do I identify which drive it is? I hear each drive spinning (I listened to them individually) so I can't simply select the one that is not spinning. David

Re: [zfs-discuss] [ZFS-discuss] RAIDZ drive "removed" status

2009-09-29 Thread Trevor Pretty
David The disk is broken! Unlike other file systems, which would silently lose your data, ZFS has decided that this particular disk has "persistent errors" action: Replace the faulted device, or use 'zpool clear' to mark the device repaired. ^^ It's clear you are unsuccessful at repairing

[zfs-discuss] [ZFS-discuss] RAIDZ drive "removed" status

2009-09-29 Thread David Stewart
Having casually used IRIX in the past and used BeOS, Windows, and MacOS as primary OSes, last week I set up a RAIDZ NAS with four Western Digital 1.5TB drives and copied over data from my WinXP box. All of the hardware is fresh out of the box so I did not expect any hardware problems, but when

Re: [zfs-discuss] raidz failure, trying to recover

2009-09-28 Thread Victor Latushkin
Liam Slusser wrote: Long story short, my cat jumped on my server at my house crashing two drives at the same time. It was a 7 drive raidz (next time ill do raidz2). Long story short - we've been able to get access to data in the pool. This involved finding better old state with the help of '
