Pardon me, but I had to change the subject line just to get out of that other thread.
In that other thread you were saying:

>> dick hoogendijk uttered:
>> true. Furthermore, much so-called consumer hardware is very good these
>> days. My guess is ZFS should work quite reliably on that hardware.
>> (i.e. non ECC memory should work fine!) / mirroring is a -must- !
>
> Gavin correctly revealed:
> No, ECC memory is a must too. ZFS checksumming verifies and corrects
> data read back from a disk, but once it is read from disk it is stashed
> in memory for your application to use - without ECC you erode confidence
> that what you read from memory is correct.

Well, here I run into a small issue, and timing is everything in life: this small issue is happening right in front of me as I write this.

I have a Sun Blade 2500 with 4GB of genuine Sun ECC memory (370-6203 [1]), and internally there are dual Sun 72GB Ultra 320 disks (390-0106). I like to have mirrors everywhere and I also like safety, so I had the brilliant idea of pulling the secondary disk out of slot 1 and installing some more ethernet and SCSI paths. I popped in a 501-5727 (Dual FastEthernet / Dual SCSI Ultra-2 PCI Adapter) and then moved the internal disk out to an external disk pack. So now I still have a mirror, but with dual SCSI controllers involved.

When the machine boots I see this:

Rebooting with command: boot
Boot device: /p...@1d,700000/s...@4/d...@0,0:a  File and args:
SunOS Release 5.10 Version Generic_141414-02 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hostname: mercury
Loading smf(5) service descriptions: 1/1
Reading ZFS config: done.
Mounting ZFS filesystems: (5/5)

mercury console login: root
Password:
Jul 20 00:13:06 mercury login: ROOT LOGIN /dev/console
Last login: Sun Jul 19 23:41:22 on console
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
# zpool status
  pool: mercury_rpool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        mercury_rpool   DEGRADED     0     0     0
          mirror        DEGRADED     0     0     0
            c0t0d0s0    ONLINE       0     0     0
            c1t2d0s0    UNAVAIL      0     0     0  cannot open

So I have to manually intervene and do this:

# zpool online mercury_rpool c1t2d0s0
# zpool status
  pool: mercury_rpool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Mon Jul 20 00:13:28 2009
config:

        NAME            STATE     READ WRITE CKSUM
        mercury_rpool   ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            c0t0d0s0    ONLINE       0     0     0
            c1t2d0s0    ONLINE       0     0     0

errors: No known data errors

This means I do have a zpool with mirrored ZFS boot and root and all that goodness, but only if I *know* to look at the state of the mirror after boot. The system seems to be lazy in that it does not report the DEGRADED state on the console or via syslogd. I only caught this just now (see the date and kernel rev above) and wonder: is this not a bug? (I sketch a crude workaround below my signature.)

--
Dennis

[1] DDR266, PC2100, CL2, ECC Serial Presence Detect 1.0 1GB Registered DIMM
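P.S. Since the box will not tell me about the DEGRADED state on its own, the crude workaround I have in mind is a cron job that asks ZFS for its health and pushes anything abnormal into syslog. This is only a sketch of the idea; the script path, the cron schedule and the exact "all pools are healthy" wording I test against are my own assumptions, not anything Solaris ships:

# cat /root/check_zpool.sh
#!/bin/sh
# Push any non-healthy pool report into syslog so it lands somewhere
# I will actually look. Assumes 'zpool status -x' prints exactly
# "all pools are healthy" when nothing is wrong.
STATUS=`/usr/sbin/zpool status -x`
if [ "$STATUS" != "all pools are healthy" ]; then
        echo "$STATUS" | /usr/bin/logger -p daemon.err -t zpool-health
fi

# crontab -l | grep check_zpool
0,15,30,45 * * * * /root/check_zpool.sh

That is a band-aid, of course; what I would really like is for the boot-time DEGRADED state to show up on the console or in /var/adm/messages without anyone having to ask for it.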