> Trying to spare myself the expense as this is my home system so budget is
> a constraint.
> What I am trying to avoid is having multiple raidz's because every time I
> have another one I loose a lot of extra space to parity. Much like in raid 5.

There's a common perception, which I tend to share by now, that "consumer" drives have a somewhat higher rate of unreliability and failure. Some of it relates to design priorities (e.g. balancing price vs. size vs. duty cycle), some to conspiracy-theory stuff (forcing consumers to buy more drives more often). Hand-built computers tend to increase that rate further for any number of reasons (components, connections, thermal issues, power-supply issues). I learned that the hard way while building many home computers and cheap campus servers at my Uni, including 24-drive Linux filers with mdadm and hardware RAID cards :)

Another problem is that larger drives take a lot longer to rebuild (writing a 1.5TB drive in full at roughly 100MB/s takes about 4 hours on an otherwise idle system), or to resilver a filled-up pool like yours. This is especially a problem in classic RAID setups, where a whole drive is considered failed if anything at all goes wrong with it. Quite often some hidden problem turns up on another drive of the array during the rebuild, the whole array is then considered dead, and that chance grows with disk size. That's one of many good reasons why "enterprise" drives are smaller. Hopefully ZFS contains such failures down to the few blocks that actually have a checksum mismatch.

Anyway, I would not be comfortable with large sets of big, less-reliable drives even with some redundancy; hence my somewhat arbitrary recommendation of 4-drive raidz1 sets. The industry seems to agree that at most 7-9 drives are reasonable for a single RAID5/6 volume (a vdev in ZFS terms), though.

Since you already have 2 clean 1TB disks, you can buy just 2 more. In the end you'd have one 4*1TB raidz1 vdev and two 4*1.5TB raidz1 vdevs in one pool, summing up to 3 + (2*4.5) = 12TB of usable space in a redundant set (rough commands are sketched in the P.S. below). For me personally, that would be worth its salt. There may, however, be some discrepancy with the first set: its 3TB of usable space amounts to just two 1TB drives' worth being freed up, which can introduce more costly corrections into my calculations (e.g. a 5*1TB disk set)...

Concerning the SD/CF card for booting, I have no first-hand experience. From what I've seen, you can google for notes on card booting in the Eee PC and similar netbooks, and for comments on making livecd/liveusb-capable Solaris distros (see some at http://www.opensolaris.org/os/downloads/). You'd probably need to make sure that the BIOS presents the card as an IDE/SATA hard-disk device, and/or bundle the needed drivers into the Solaris miniroot.

> And last thx so very much for spending so much time and effort in
> transferring
> knowlege, I really do appreciate it.

You're very welcome. I do hope this helps, and that you don't lose data in the process, due to my possible mistakes or misconceptions, or otherwise ;)

//Jim
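
P.S.: In case it saves you some typing, here is a rough sketch of the commands for the layout above. The pool name ("tank") and the device names are just placeholders I made up for illustration, so substitute your own as reported by `format`. I'm also assuming any existing data has already been migrated off the disks in question, since zpool will take them over:

  # Create the pool with the first raidz1 vdev: 4 x 1TB, ~3TB usable
  # (device names are examples only)
  zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0

  # Later, extend the pool with the two 4 x 1.5TB raidz1 vdevs, ~4.5TB usable each
  zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0
  zpool add tank raidz1 c3t0d0 c3t1d0 c3t2d0 c3t3d0

  # Check the resulting layout, then the usable space
  # (should come out near the 12TB estimated above, minus some overhead)
  zpool status tank
  zfs list tank

Both `zpool create` and `zpool add` accept -n for a dry run, which is worth doing first: once a raidz1 vdev is in the pool you cannot remove it or change its width later.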