I'm struggling to get a reliable OpenSolaris system on a file server. I'm running an Asus P5BV-C/4L server motherboard, 4GB of ECC RAM, an E3110 processor, and an Areca 1230 with twelve 1TB disks attached. In a previous posting, it looked like RAM or the power supply might be the problem, so I ended up upgrading everything except the raid card and the disks. I'm running OpenSolaris preview build 134.
I started off by setting up all the disks as pass-through disks and tried to make a raidz2 array using all of them. It would work for a while, then suddenly every disk in the array would report too many errors and the system would fail. I don't know what caused the sudden failures, but eventually I gave up on that approach. Instead, I used the Areca card to create a RAID-6 array with a hot spare, and created a pool directly on the 8TB volume the raid card exposes (rough creation commands are at the bottom of this message). The card handles the redundancy, and ZFS handles just the file system. Disk performance is noticeably faster this way, by the way, compared to software raid.

I have been testing the system, and it suddenly failed again:

# zpool status -v
  pool: bigraid
 state: DEGRADED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run
        'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-HC
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        bigraid     DEGRADED     0     0     7
          c4t0d0    DEGRADED     0     0    34  too many errors

errors: Permanent errors have been detected in the following files:

        <metadata>:<0x1>
        <metadata>:<0x18>
        bigraid:<0x3>

The raid card says the array is fine - no errors - so something is going on with ZFS. I'm out of ideas at this point, except that build 134 might be unstable and I should install an earlier, more stable build. Is there anything I'm missing that I should check?
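For reference, this is roughly how the pools were created in each case. The current device name matches the status output above; the pass-through device names are from memory and may not be exact:

    # First attempt: raidz2 across all twelve pass-through disks
    # (c4t0d0 through c4t11d0 are illustrative names, not exact)
    zpool create bigraid raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 \
        c4t6d0 c4t7d0 c4t8d0 c4t9d0 c4t10d0 c4t11d0

    # Current setup: plain pool on the single volume exported by the
    # Areca RAID-6 array (no ZFS-level redundancy)
    zpool create bigraid c4t0d0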