Matt:
What's your contact information so that I can send that information to you?
My apologies for taking so long to get back to this.
Sincerely,
Ewen
Well, the drives technically didn't "malfunction".
Like I said, I had to pull the drives out because 70 lbs is a little
TOO much for me to lift.
The drives aren't more than 3 weeks old, with a DOM of Jul 2006.
Is there anything I can do to find out how the system had the drives mapped?
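I'm imagining something along these lines, with c1t0d0 as a placeholder
device name; I don't know if this is the right approach:

    # List the disks the OS currently sees (interactive; quit at the prompt)
    format

    # Dump the ZFS label from one disk; the label records the pool name
    # and a per-device GUID, independent of which port the cable is on
    zdb -l /dev/dsk/c1t0d0s0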
Hi,
As Matt said, unless there is a bug in the code, ZFS should automatically
figure out the drive mappings. The real problem, as I see it, is using 16
drives in a single raidz... which means that if two drives malfunction,
you're out of luck.
(raidz2 would survive two failed drives... but still, I believe 16 drives
is too many for a single vdev.)
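For illustration, a sketch of the layout I'd lean toward instead, with
"tank" and c1t0d0 through c2t7d0 as placeholder names:

    # Two 8-drive raidz2 vdevs instead of one 16-drive raidz:
    # each vdev survives two failed drives, at the cost of four
    # drives' worth of parity across the pool
    zpool create tank \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0

Narrower vdevs also keep resilver times down when a drive does need
replacing.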
Ewen Chan wrote:
However, in order for me to lift the unit, I needed to pull the
drives out so that it would actually be movable, and in doing so, I
think that the drive<->cable<->port allocation/assignment has
changed.
If that is the case, then ZFS would automatically figure out the new
mapping, correct?
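It should; ZFS matches disks by the GUIDs in their labels rather than by
controller/target path. A sketch of the usual sequence, with "tank" as a
placeholder pool name:

    # Forget the cached device paths
    zpool export tank

    # Rescan the devices; disks are matched by label GUID, so the
    # new cabling order doesn't matter (-f may be needed if the
    # pool was never cleanly exported before the move)
    zpool import tank

    # Confirm all 16 drives show up ONLINE under their new names
    zpool status tank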
Well... let me give a little bit of background.
I built the system with a 4U, 16-drive rackmount enclosure, without a
backplane. Originally, I thought that I wouldn't really need one because I
was going to have 16 cables running around anyway.
Once everything was in place, and AFTER I had tran