[zfs-discuss] Help with setting up ZFS
Hello, I recently purchased some hardware which I plan on turning into a data
server. I purchased the following:

- 4 GB of registered ECC RAM (667 MHz)
- SuperMicro X7DCA-3 motherboard (found it for really cheap and figured it
  couldn't be too bad)
  http://www.supermicro.com/products/motherboard/Xeon1333/5100/X7DCA-3.cfm
- An Intel Xeon E5420 2.5 GHz quad-core
- 4 WD 750 GB desktop hard drives

Does this setup seem OK for running OpenSolaris, and particularly ZFS? I am
aware of the Time-Limited Error Recovery issue on WD drives when you choose
desktop models instead of the RAID editions:
http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery
I plan on changing this setting on the desktop models, effectively turning
them into the RAID editions.

So, based on the motherboard and hard drives, would this configuration work
for ZFS? If so, how should I go about setting up ZFS? For instance, in a RAID
configuration I would set all the hard drives to master and hook them up to
my RAID controller. Do I set all the hard drives to master here for ZFS as
well?

Also, do you recommend getting a smaller hard drive to store the OS, and
merely using the ZFS drives for my data?

Thank you for your time
Re: [zfs-discuss] The importance of ECC RAM for ZFS
On Sat, 25 Jul 2009 21:58:48 +0000 (UTC) Marc Bevand wrote:
> dick hoogendijk <nagual.nl> writes:
> >
> > I live in Holland and it is not easy to find motherboards that (a)
> > truly support ECC ram and (b) are (Open)Solaris compatible.
>
> Virtually all motherboards for AMD processors support ECC RAM because
> the memory controller is in the CPU and all AMD CPUs support ECC RAM.

Then why is it that most AMD MoBo's in the shops clearly state that ECC RAM
is not supported on the MoBo?

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 05/09 | OpenSolaris 2010.02 B118
+ All that's really worth doing is what we do for others (Lewis Carrol)
Re: [zfs-discuss] Help with setting up ZFS
Brian wrote:
> Hello, I recently purchased some hardware which I plan on turning into a
> data server. [hardware list and setup questions snipped]

Overall, that MB looks fine. The 1068E is a well-supported SAS/SATA
controller in OpenSolaris, so you won't have any problems using it. Likewise
the ICH9R SATA controller. The NICs are supported as well, though I don't
know about the audio chipset (which is less of a concern). You will need to
get a video card, as there is no on-board video controller, and the add-on
IPMI card for this board is sub-par. The board supports console redirection
to COM1, but I've never tried it with these boards.

You haven't said what you plan to use the server for, which will drive how
you want to configure the drives (i.e. RAIDZ or mirror/striped).

A couple of notes:

(1) If you have space in your chassis, I'd get two smaller SATA drives and
use them as the (mirrored) boot drives. Attach them to the ICH9R controller
(via the black SATA connectors). You can use ZFS to mirror your boot drives,
too - which is good, since ZFS doesn't support using stripes or RAIDZ for
root volumes. (A sketch of the commands follows below.)

(2) I'd connect your data drives to the 1068E controller, via the two
multi-lane connectors. You'll need a break-out cable to use them. The
multi-lane connectors are in the lower left hand corner (the two silver
squares pointing forward, not up).

(3) Make sure all controllers are operating in non-RAID (i.e. JBOD) mode.

(4) If you can, spring for more RAM. 4GB is a bit skimpy; 8GB would likely
be much better. (Also, there are problems with memory allocation if you
install exactly 4GB - it's a chipset thing, and it reduces the amount of RAM
usable by almost 40%. This /only/ happens with 4GB installed, so don't
install 4GB. See section 2-3 of the MB manual for more info.)

(5) Depending on use, you might want to invest in an SSD (flash hard drive).
See a couple of the other threads on which SSD makes the most sense for you.

(6) If you are just doing file-serving, a quad-core CPU is likely overkill.
I suspect that even with compression turned on, the CPU will be only
modestly loaded.

(7) For SAS and SATA drives, there is no Master or Slave. They're all
Master. No setting required.

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
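A minimal sketch of what steps (1)-(3) can look like once OpenSolaris is
installed on the first boot disk. The device names (c1t0d0s0, c2t0d0, and so
on) are hypothetical and will differ on your system:

# (1) Mirror the root pool by attaching a second boot disk to rpool.
#     Root pools live on a labeled slice (s0), hence the slice suffix:
zpool attach rpool c1t0d0s0 c1t1d0s0

# Make the second disk bootable too (x86):
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

# (2) One possible layout for the four data drives on the 1068E -
#     whole disks, single RAIDZ vdev:
zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

# (3) Sanity-check the result:
zpool status

Note the root-pool devices are given with a slice suffix while the data
disks are whole disks; ZFS wants whole disks where it can get them, but root
pools require a labeled slice.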
Re: [zfs-discuss] The importance of ECC RAM for ZFS
dick hoogendijk wrote:
> On Sat, 25 Jul 2009 21:58:48 +0000 (UTC) Marc Bevand wrote:
> > Virtually all motherboards for AMD processors support ECC RAM because
> > the memory controller is in the CPU and all AMD CPUs support ECC RAM.
>
> Then why is it that most AMD MoBo's in the shops clearly state that ECC
> RAM is not supported on the MoBo?

All /OPTERON/ chips support ECC: unbuffered, non-registered in the case of
the 100/1000 series, and registered in the case of the 200/2000/800/8000
series.

I _believe_ all socket AM2, AM2+ and AM3 consumer chips (Phenom, Phenom II,
Athlon X2, Athlon X3 and Athlon X4) also support unbuffered, non-registered
ECC. The AMD Specs page for the above processors indicates I'm right about
those CPUs.

I think what they're (the retail shops, that is) stating is that consumer
AMD CPUs won't take the "server" (i.e. registered) ECC DIMMs.

A quick glance at ASUS's website shows that all current consumer (i.e.
socket AM2/2+/3) AMD motherboards from them support unregistered, unbuffered
ECC. I suspect it's the same for the other board makers, too.

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Re: [zfs-discuss] The importance of ECC RAM for ZFS
Erik Trimble wrote:
> I _believe_ all socket AM2, AM2+ and AM3 consumer chips (Phenom, Phenom
> II, Athlon X2, Athlon X3 and Athlon X4) also support unbuffered,
> non-registered ECC. The AMD Specs page for the above processors indicates
> I'm right about those CPUs.

Quick correction: the current AMD CPUs are Phenom X3, Phenom X4, Phenom II,
Athlon X2, Athlon, and Sempron. According to the Processor Data Sheets for
all AMD CPUs, they /all/ support ECC RAM (in some form), all the way back to
the Socket 754 chips.

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work
On 07/25/09 04:30 PM, Carson Gaspar wrote:
> No. You'll lose unwritten data, but won't corrupt the pool, because the
> on-disk state will be sane, as long as your iSCSI stack doesn't lie about
> data commits or ignore cache flush commands. Why is this so difficult for
> people to understand? Let me create a simple example for you.

Are you sure about this example? AFAIK metadata refers to things like the
file's name, atime, ACLs, etc. Your example seems to be more about how a
journal works, which has little to do with metadata other than to manage it.

> Now if you were too lazy to bother to follow the instructions properly, we
> could end up with bizarre things. This is what happens when storage lies
> and re-orders writes across boundaries.

On 07/25/09 07:34 PM, Toby Thain wrote:
> The problem is assumed *ordering*. In this respect VB ignoring flushes and
> real hardware are not going to behave the same.

Why? An ignored flush is ignored. It may be more likely in VB, but it can
always happen. It mystifies me that VB would in some way alter the ordering.

I wonder if the OP could tell us what actual disks and controller he used,
to see if the hardware might actually have done out-of-order writes despite
the fact that ZFS already does write optimization. Maybe the disk didn't
like the physical location of the log relative to the data, so it wrote the
data first? Even then, it isn't obvious why this would cause the pool to be
lost.

A traditional journalling file system should survive the loss of a flush.
Either the log entry was written or it wasn't. Even if the disk, for some
bizarre reason, writes some of the actual data before writing the log, the
repair process should undo that. If written properly, it will use the
information in the most recent complete journal entry to repair the file
system. Doing syncs is devastating to performance, so usually there's an
option to disable them, at the known risk of losing a lot more data.

I've been using SPARCs and Solaris from the beginning. Ever since UFS
supported journalling, I've never lost a file unless the disk went totally
bad, and none since mirroring. Didn't miss fsck either :-)

Doesn't the ZIL effectively make ZFS into a journalled file system? (In
another thread, Bob Friesenhahn says it isn't, but I would submit that the
general opinion is correct that it is; "log" and "journal" have similar
semantics.) The evil tuning guide is pretty emphatic about not disabling it!

My intuition (and this is entirely speculative) is that the ZFS ZIL either
doesn't contain everything needed to restore the superstructure, or, if it
does, that the recovery process is ignoring it. I think I read that the ZIL
is per-file-system, but one hopes it doesn't rely on the superstructure
recursively, or this would be impossible to fix (maybe there's a ZIL for the
ZILs :) ).

On 07/21/09 11:53 AM, George Wilson wrote:
> We are working on the pool rollback mechanism and hope to have that soon.
> The ZFS team recognizes that not all hardware is created equal and thus
> the need for this mechanism. We are using the following CR as the tracker
> for this work:
> 6667683 need a way to rollback to an uberblock from a previous txg

so maybe this discussion is moot :-)

-- Frank
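As a footnote to the CR George mentions: the uberblock that such a rollback
would target can at least be inspected today with zdb. A minimal sketch,
assuming a pool named tank:

# Print the active uberblock of the pool, including its txg number
# (read-only; zdb does not modify the pool):
zdb -u tank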
Re: [zfs-discuss] ZFS Root Pool Recovery (from the FAQ)
dick hoogendijk wrote:
> root@westmark:/# share
> -...@store/snaps  /store/snaps  sec=sys,rw=arwen,root=arwen  ""
>
> arwen# zfs send -Rv rpool@0906 > /net/westmark/store/snaps/rpool.0906
> zsh: permission denied: /net/westmark/store/snaps/rpool.0906

Try sharing with the @network syntax. See "man share_nfs":

  rw=@192.168.xx.xx/32,root=@192.168.xx.xx/32
Re: [zfs-discuss] Another user loses his pool (10TB in this case) and 40 days of work
On Sun, 26 Jul 2009, David Magda wrote:
> That's the whole point of this thread: what should happen, or what should
> the file system do, when the drive (real or virtual) lies about the
> syncing? It's just as much a problem with any other POSIX file system
> (which have to deal with fsync(2)) - ZFS isn't that special in that
> regard. The Linux folks went through a protracted debate on a similar
> issue not too long ago.

ZFS is pretty darn special. RAIDed disk setups under Linux or *BSD work
differently than ZFS in a rather big way. Consider that with a normal
software-based RAID setup, you use OS tools to create a virtual RAIDed
device (LUN), which appears as a large device that you can then create (e.g.
mkfs) a traditional filesystem on top of. ZFS works quite differently, in
that it uses a pooled design which incorporates several RAID strategies
directly. Instead of sending the data to a virtual device which then
arranges the underlying data according to a policy (striping, mirror,
RAID5), ZFS incorporates knowledge of the vdev RAID strategy and
intelligently issues data to the disks in an ideal order, executing the disk
drive commit requests directly. ZFS removes the RAID obfuscation which
exists in traditional RAID systems.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
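To make the contrast concrete, here is a rough side-by-side sketch; the
Linux device names and mdadm options are purely illustrative:

# Traditional layered approach (Linux): build a RAID5 LUN first, then
# put an unrelated filesystem on top of it.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext3 /dev/md0          # the filesystem knows nothing about the RAID

# Pooled approach (ZFS): the filesystem and RAID layers are one system,
# so ZFS knows the vdev layout when scheduling and committing writes.
zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
zfs create tank/data        # filesystems come out of the shared pool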
Re: [zfs-discuss] ZFS Root Pool Recovery (from the FAQ)
On Sun, 26 Jul 2009 12:14:03 -0400 Oscar del Rio wrote:
> dick hoogendijk wrote:
> > root@westmark:/# share
> > -...@store/snaps  /store/snaps  sec=sys,rw=arwen,root=arwen  ""
> >
> > arwen# zfs send -Rv rpool@0906 > /net/westmark/store/snaps/rpool.0906
> > zsh: permission denied: /net/westmark/store/snaps/rpool.0906
>
> Try sharing with the @network syntax. See "man share_nfs":
>
>   rw=@192.168.xx.xx/32,root=@192.168.xx.xx/32

Does not work! The root= part is to blame for that. This rule does work:

  rw=@192.168.xx.xx/32,root=arwen

I have no idea why the root= entry has to be specified as a hostname, while
the rw= entry can use the @network form.

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 05/09 | OpenSolaris 2010.02 B118
+ All that's really worth doing is what we do for others (Lewis Carrol)
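For reference, a complete share command in this working style would look
something like the following - a minimal sketch, with the subnet
192.168.1.0/24 and the path as placeholders:

# Share /store/snaps read-write to the local subnet, granting root
# access to the single host "arwen" (hostname form, since root=@network
# reportedly fails here):
share -F nfs -o sec=sys,rw=@192.168.1.0/24,root=arwen /store/snaps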
Re: [zfs-discuss] When writing to SLOG at full speed all disk IO is blocked
byleal, Can you share how to recreate or test this?
[zfs-discuss] Subscribing broken?
What's the deal with the mailing list? I've unsubscribed an old email
address, and attempted to sign up the new one 4 times now over the last
month, and have yet to receive any updates/have it approved. Are the admins
asleep at the helm for zfs-discuss or what?
Re: [zfs-discuss] Help with setting up ZFS
Yea, I have a cheap nvidia video card I found that should work with this. I
found this MB at Fry's for under 100 dollars, so I figured I'd try it out.
It's probably a discontinued line of server motherboards by SuperMicro, so I
figured it would probably be an OK board.

1.) Why would I put the boot volumes into a mirrored configuration? I figure
if the OS blows up I'll just format it and load it on again. Is it really
worth it to have the OS mirrored?

2.) What is the benefit of hooking the SATA hard drives up to the SAS port?
Is it not wise to put the OS hard drives and the data hard drives on the
same controller?

3.) I'll try to figure that out; shouldn't be too hard, as presumably it's
in the BIOS.

4.) Ha, that's pretty hilarious that it has trouble operating in the RAM
configuration I picked. Who would have thought? I guess I'll pick up two
1 gig sticks to make it 6 gigs, as I don't really want to spend another 100
dollars on RAM.

5.) Maybe in a few years.

6.) Overkill indeed, however who doesn't like power?

> You haven't said what you plan to use the server for, which will drive how
> you want to configure the drives (i.e. RAIDZ or mirror/striped)

This is going to be used for my parents' business (I'm merely setting it up
for them and then leaving it). So basically what I want is reliability and
redundancy. I want there to be very little chance of data loss, as the
business they are in requires them to keep all documents. Currently they
have them all on a precarious external hard drive, so I want this thing to
basically be equivalent to RAID 6. I also want to be able to leave it and
have it perform without touching it for decent periods of time. Usually I
would use Linux, as it's great for that, but I decided to try out ZFS.

Now, I read that it's advisable to scrub the system every week or month; is
it possible just to make a script that will do this so I don't have to be
there? Also, I know ZFS can use blank hard drives that will activate when a
disk fails - is this feature well made in ZFS? Meaning, is it trustworthy? I
guess I'm just used to trusting several-hundred-dollar RAID cards; seems odd
to be back to software.

Thank you for your help
Re: [zfs-discuss] Help with setting up ZFS
I'm sorry, I forgot to ask again whether it's worth setting the Time-Limited
Error Recovery drives to their RAID-counterpart mode. The reason I ask is
because all I can find to do this is a DOS utility, so I'm not sure how I
would go about doing it in OpenSolaris.
http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery#Western_Digital_Time_Limit_Error_Recovery_Utility_-_WDTLER.EXE
All it lists is a .exe file, so is changing these settings something that
must be done? I guess I am unclear on how important this is, though I have
read that someone lost their data 'possibly' due to this.
Re: [zfs-discuss] Help with setting up ZFS
On Sun, 26 Jul 2009, Brian wrote:
> Now, I read that it's advisable to scrub the system every week or month;
> is it possible just to make a script that will do this so I don't have to
> be there? Also, I know ZFS can use blank hard drives that will [...]

This is trivially easy via entries in crontab:

# crontab -l | grep scrub
20 4 * * 1 /usr/sbin/zpool scrub Sun_2540
15 2 * * 0 /usr/sbin/zpool scrub USB_Pool

It is useful to check for faults and send an email to someone in case there
is a problem. I use this script, which is also executed via crontab:

#!/bin/sh
# Mail any FMA faults to root, tagged with the system name given as $1.
REPORT=/tmp/faultreport.txt
SYSTEM=$1
rm -f $REPORT
# Capture fmadm output (including any errors) in the report file:
/usr/sbin/fmadm faulty > $REPORT 2>&1
# Only send mail if fmadm reported something:
if test -s $REPORT
then
  /usr/ucb/Mail -s "$SYSTEM Fault Alert" root < $REPORT
fi
rm -f $REPORT

Since I have multiple systems sending email to the same address, I supply
the identification of the system via a script argument. The name could be
obtained from `uname -n` or `hostname` instead.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
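And a possible crontab entry to drive the fault check - the script path
/root/bin/faultcheck.sh and the system name are placeholders:

# Run the fault-report script every morning at 08:00, passing the
# system name as its argument:
0 8 * * * /root/bin/faultcheck.sh fileserver1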
Re: [zfs-discuss] Another user loses his pool (10TB in this case) and 40 days of work
On 26-Jul-09, at 11:08 AM, Frank Middleton wrote:
> On 07/25/09 04:30 PM, Carson Gaspar wrote:
> > No. You'll lose unwritten data, but won't corrupt the pool, because the
> > on-disk state will be sane, as long as your iSCSI stack doesn't lie
> > about data commits or ignore cache flush commands.
>
> [...]
>
> On 07/25/09 07:34 PM, Toby Thain wrote:
> > The problem is assumed *ordering*. In this respect VB ignoring flushes
> > and real hardware are not going to behave the same.
>
> Why? An ignored flush is ignored. It may be more likely in VB, but it can
> always happen.

And whenever it does: guess what happens?

> It mystifies me that VB would in some way alter the ordering.

Carson already went through a more detailed explanation. Let me try a
different one. ZFS issues writes A, B, C, FLUSH, D, E, F.

Case 1) The semantics of the flush* allow ZFS to presume that A, B and C
are all 'committed' at the point that D is issued. You can understand that
A, B and C may be done in any order, and D, E and F may be done in any
order, due to the numerous abstraction layers involved - all the way down
to the disk's internal scheduling. ANY of these layers can affect the
ordering of durable, physical writes _in the absence of a flush/barrier_.

Case 2) But if the flush does NOT occur with the necessary semantics, the
ordering of ALL SIX operations is now indeterminate, and by the time ZFS
issues D, any of the first three (A, B, C) may well not have been committed
at all. There is a very good chance this will violate an integrity
assumption. (I haven't studied the source, so I can't point you to a
specific design detail or line; rather I am working from how I understand
transactional/journaled systems to work. Assuming my argument is valid, I
am sure a ZFS engineer can cite a specific violation.)

As has already been mentioned in this context, I think by David Magda,
ordinary hardware will show this problem _if flushes are not functioning_
(an unusual case on bare metal), while on VirtualBox this is the default!

> [...]
> Doesn't ZIL effectively make ZFS into a journalled file system

Of course ZFS is transactional, as are other filesystems and software
systems such as RDBMSes. But the integrity of such systems depends on a
hardware flush primitive that actually works. We are getting hoarse
repeating this.

--Toby

* Essentially 'commit' semantics: flush synchronously; the operation is
complete only when the data is durably stored.
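For anyone who wants to test ZFS inside VirtualBox without the ignored-flush
behaviour: the VirtualBox manual documents an extra-data key for honouring
guest flush requests. A sketch, assuming the virtual disk sits at IDE
primary master ("MyVM" and the device path are placeholders; the path varies
with controller type and port):

# Tell VirtualBox to stop ignoring FLUSH commands from the guest for
# the disk at piix3ide LUN#0 (0 = honour flushes, 1 = ignore them):
VBoxManage setextradata "MyVM" \
    "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0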
Re: [zfs-discuss] Help with setting up ZFS
Brian wrote:
> I'm sorry, I forgot to ask again whether it's worth setting the
> Time-Limited Error Recovery drives to their RAID-counterpart mode. The
> reason I ask is because all I can find to do this is a DOS utility [...]

You should set TLER. You'll have to boot to DOS (via a floppy or CDROM
image) - look at the FreeDOS.org website for details about getting a free
bootable image.

> Yea, I have a cheap nvidia video card I found that should work with this.
> [...]
>
> 1.) Why would I put the boot volumes into a mirrored configuration? I
> figure if the OS blows up I'll just format it and load it on again. Is it
> really worth it to have the OS mirrored?

You /should/ mirror your OS, especially if you're just leaving it at another
location and don't want to mess with it very often. You get lots of benefits
from the redundancy it offers (including all those nifty ZFS checksum-based
autorecovery ones). I see 100GB 2.5" SATA notebook drives for $50 at my
local store all the time.

> 2.) What is the benefit of hooking the SATA hard drives up to the SAS
> port? Is it not wise to put the OS hard drives and the data hard drives on
> the same controller?

The 1068E (and most other modern SAS controller chips) are really SAS/SATA
controllers. They autodetect the drive type attached to them and react
accordingly. I'd use the SAS ports, since the 1068E is really a better
controller than the ICH9R in terms of performance. I suggested putting the
OS drives on the SATA ports for simplicity's sake, since most motherboards
make it easy to boot from the SATA drives, while booting from the SAS ports
requires a BIOS reconfig. Not difficult to do, just another step.

> 3.) I'll try to figure that out; shouldn't be too hard, as presumably it's
> in the BIOS.

Yes, it should be in the BIOS. The SATA config will be in the motherboard
BIOS, while the SAS controller config is separate (push CTRL-L or something
similar during BIOS init).

> 4.) Ha, that's pretty hilarious that it has trouble operating in the RAM
> configuration I picked. [...]
> 5.) Maybe in a few years.
> 6.) Overkill indeed, however who doesn't like power?
>
> This is going to be used for my parents' business (I'm merely setting it
> up for them and then leaving it). So basically what I want is reliability
> and redundancy. I want there to be very little chance of data loss, as the
> business they are in requires them to keep all documents. Currently they
> have them all on a precarious external hard drive, so I want this thing to
> basically be equivalent to RAID 6. I also want to be able to leave it and
> have it perform without touching it for decent periods of time. Usually I
> would use Linux, as it's great for that, but I decided to try out ZFS.
> Now, I read that it's advisable to scrub the system every week or month;
> is it possible just to make a script that will do this so I don't have to
> be there? Also, I know ZFS can use blank hard drives that will activate
> when a disk fails - is this feature well made in ZFS? Meaning, is it
> trustworthy? I guess I'm just used to trusting several-hundred-dollar RAID
> cards; seems odd to be back to software. Thank you for your help

ZFS is great for what you describe. For maximum redundancy, you'll want to
use RAIDZ2 (the analogue of RAID-6). To set it up (assuming your drives are
on what the OS thinks is controller c2):

zpool create tank raidz2 c2d0 c2d1 c2d2 c2d3

This will give you two drives' worth of data space, and two redundant
drives. Bob already gave you the scrub and monitoring scripts.

Personally, I'd also look at turning on the Time Slider feature to enable
automatic snapshots (probably weekly or so in your case); a sketch of the
command-line equivalent follows below.

You also are going to need some form of backup strategy, since you indicated
that the data is important to your parents' business. RAID isn't enough -
that just helps against disk failure. You need something to protect against
SERVER failure, so look into a cheap tape drive, or consider the external
USB drive. In either case, your parents will need to back up the machine
nightly and take the tape/USB drive home with them at night (and bring it
back in the morning).

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
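A sketch of what enabling Time Slider amounts to from the command line - the
property and SMF service names below are from the OpenSolaris auto-snapshot
facility and may differ between builds:

# Mark the pool's datasets for automatic snapshots, then enable the
# weekly snapshot service:
zfs set com.sun:auto-snapshot=true tank
svcadm enable svc:/system/filesystem/zfs/auto-snapshot:weekly

# Later, confirm that snapshots are accumulating:
zfs list -t snapshot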
Re: [zfs-discuss] Help with setting up ZFS
> This is going to be used for my parents' business (I'm merely setting it
> up for them and then leaving it). So basically what I want is reliability
> and redundancy. I want there to be very little chance of data loss, as the
> business they are in requires them to keep all documents.

OK, ZFS is good, but what you really need here is a proper backup strategy.
If need be, skimp on the server so that you can create a good backup system.
Never, ever keep all your eggs in one basket. If their data is that
important, you need to get a copy off-site, and you need some kind of
automated process to do that - people don't realise how important backups
are, and if you leave it to a manual system it won't get done or checked.
I'd be very tempted to use zfs send/receive to send the data to another
machine, even if it's just a VirtualBox server you run at home.

PS. You're also going to need some kind of remote monitoring of that server.
Sure, raidz2 will keep your data going when a disk fails, but unless you
know that the disk needs replacing, what's going to happen? What's going to
happen to that server in a couple of years' time when you've forgotten all
about it and suddenly get a call from your parents to say it's stopped
working? If I were you, I'd write a script to run "zpool status -x" and
email you if there are any errors (a sketch follows below).

PPS. Yes, you can and should scrub regularly; running it once a week is as
easy as adding a line to crontab.
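A minimal sketch of such a monitoring script, assuming mail is configured on
the server; admin@example.com is a placeholder address:

#!/bin/sh
# Mail the admin if any pool is not healthy. "zpool status -x" prints
# "all pools are healthy" when there is nothing to report.
STATUS=`/usr/sbin/zpool status -x`
if [ "$STATUS" != "all pools are healthy" ]
then
  echo "$STATUS" | /usr/ucb/Mail -s "zpool problem on `hostname`" \
      admin@example.com
fi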