Hmmm. Alright, but supporting hot-swap isn't the issue, is it? I mean, like I 
said in my response to myxiplx, if I have to bring down the machine in order to 
replace a faulty drive, that's perfectly acceptable - I can do that whenever 
it's most convenient for me. 

What is _not_ perfectly acceptable (indeed, what is quite _unacceptable_) is if 
the machine hangs/freezes/locks up or is otherwise brought down by an isolated 
failure in a supposedly redundant array... Yanking the drive is just how I 
chose to simulate that failure. I could just as easily have decided to take a 
sledgehammer or power drill to it,

http://www.youtube.com/watch?v=CN6iDzesEs0 (fast-forward to the 2:30 mark)
http://www.youtube.com/watch?v=naKd9nARAes

and the machine shouldn't have skipped a beat. After all, that's the whole 
point behind the "redundant" part of RAID, no?

And besides, RAID's been around for almost 20 years now... It's nothing new. 
I've seen (countless times, mind you) plenty of regular old IDE drives fail in 
a simple software RAID5 array and not bring the machine down at all. Granted, 
you still had to power down to swap in a replacement (unless you were using some 
fancy controller card), but the point remains: the machine would still work 
perfectly with only 3 out of 4 drives present... So I know for a fact this type 
of stability can be achieved with IDE.
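
(For what it's worth, here's roughly what that recovery looks like with, 
say, Solaris Volume Manager - the metadevice and disk names below are 
made-up examples, not anything from an actual setup:

  # Check the state of a RAID5 metadevice; a dead component shows up as
  # "Needs maintenance" while the array keeps serving I/O.
  metastat d10

  # After physically replacing the failed disk, re-enable the component
  # so it resyncs.
  metareplace -e d10 c0t2d0s0

The array runs degraded in the meantime, but the box stays up.)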

What I'm getting at is this: I don't think the method by which the drives are 
connected - or whether or not that method supports hot-swap - should matter. A 
machine _should_not_ crash when a single drive (out of a 4-drive ZFS RAID-Z 
array) is ungracefully removed, regardless of how abruptly that drive is 
excised (be it by a slow failure of the drive motor's spindle, by yanking the 
drive's power cable, by yanking the drive's SATA connector, by smashing it to 
bits with a sledgehammer, or by drilling into it with a power drill).
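
(For the record, here's the graceful version of that test - what I'd 
expect the pool to survive either way; the pool and device names are 
hypothetical:

  # Take one disk of the 4-disk raidz out from under the pool.
  zpool offline tank c2t1d0

  # The pool should report DEGRADED and keep serving I/O.
  zpool status tank

  # After swapping in a new disk, resilver onto it.
  zpool replace tank c2t1d0

Yanking the cable or drilling through the platters is just the ungraceful 
version of that "offline" - the pool's reaction should be no worse.)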

So we've established that one potential workaround is to use the ahci 
driver instead of the pci-ide driver. Good! I like this kind of problem 
solving! But that's still side-stepping the problem... While this machine 
is entirely SATA II, what about those who have a mix of SATA and IDE? Or 
even much larger shops where the vast majority of the hardware is only a 
couple of years old, yet still entirely IDE?
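
(For anyone else following along, one way to check which driver a 
controller actually ended up with - output will obviously vary from box 
to box:

  # Show the device tree with bound drivers; look for ahci vs. pci-ide.
  prtconf -D | grep -i -e ahci -e ide

  # Under the ahci driver, SATA ports show up as manageable attachment
  # points; under pci-ide they don't appear at all.
  cfgadm -al

Switching between the two is usually a BIOS setting - "AHCI" vs. 
"IDE"/"compatibility" mode for the SATA controller.)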

I'm grateful for your help, but is there another way that you can think of to 
get this to work?
 
 