On Wed, Oct 1, 2008 at 8:52 AM, Brian Hechinger <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 01, 2008 at 01:03:28AM +0200, Ahmed Kamal wrote:
>>
>> Hmm ... well, there is a considerable price difference, so unless someone
>> says I'm horribly mistaken, I now want to go back to Barracuda ES 1TB 7200
>> drives. By the way, how many of those would saturate a single (non trunked)
>> Gig ethernet link ? Workload NFS sharing of software and homes. I think 4
>> disks should be about enough to saturate it ?
>
> You keep mentioning that you plan on using NFS, and everyone seems to keep
> ignoring the fact that in order to make NFS performance reasonable you're
> really going to want a couple very fast slog devices. Since I don't have
> the correct amount of money to afford a very fast slog device, I can't
> speak to which one is the best price/performance ratio, but there are tons
> of options out there.
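For reference, a dedicated slog is simply a "log" vdev added to an existing
pool. A rough sketch, assuming a pool named "tank" and two spare fast devices
c2t0d0 and c2t1d0 (the pool and device names are placeholders only):

  # add a mirrored slog, so losing a single log device doesn't take
  # in-flight synchronous writes (and possibly the pool) with it
  zpool add tank log mirror c2t0d0 c2t1d0

NFS clients issue a lot of synchronous writes, which is why a fast slog makes
such a difference for an NFS fileserver.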
+1 for the slog devices - make them 15k RPM SAS.

Also the OP has not stated how his Linux clients intend to use this
fileserver. In particular, we need to understand how many IOPS (I/O Ops/Sec)
are required, whether the typical workload is sequential (large or small
file) or random, and the percentage of read to write operations. Often a mix
of different ZFS configs is required to provide a complete and flexible
solution.

Here is a rough generalization:

- for large file sequential I/O with high reliability, go raidz2 with 6 disks
  minimum and use SATA disks.

- for workloads with random I/O patterns where you need lots of IOPS, use a
  ZFS multi-way mirror and 15k RPM SAS disks. For example, a 3-way mirror
  will distribute the reads across 3 drives - so you'll see 3 * (single disk)
  IOPS for reads and 1 * (single disk) IOPS for writes. Consider 4-or-more-way
  mirrors for heavy (random) read workloads.

Usually it makes sense to configure more than one ZFS pool and then use the
zpool that is appropriate for each specific workload. This config diversity
also future-proofs your fileserver - because it's very difficult to predict
how your usage patterns will change a year down the road[1].

Also bear in mind that, in the future, you may wish to replace disks with
SSDs (or add SSDs) to this fileserver when the pricing is more reasonable. So
only spend what you absolutely need to spend to meet today's requirements.
You can always "push in" newer/bigger/better/faster *devices* down the road,
and this will give you a more flexible fileserver as your needs evolve. This
is a huge strength of ZFS.

Feel free to email me off list if you want more specific recommendations.

[1] on a 10 disk system we have:
    a) a 5 disk RAIDZ pool
    b) a 3-way mirror (pool)
    c) a 2-way mirror (pool)
If I were to do it again, I'd make a) a 6-disk RAIDZ2 config to take
advantage of the higher reliability provided by this config.

Regards,

--
Al Hopper  Logical Approach Inc, Plano, TX  [EMAIL PROTECTED]
           Voice: 972.379.2133  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
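To make the configs described above concrete, here is a minimal sketch of how
the two pools might be created - the pool names ("bulk", "fast") and the
cXtYdZ device names are invented for illustration:

  # 6-disk raidz2 pool on SATA disks, for large sequential I/O
  zpool create bulk raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

  # 3-way mirror on 15k RPM SAS disks, for random-I/O / high-IOPS workloads
  zpool create fast mirror c2t0d0 c2t1d0 c2t2d0

  # later on, "push in" a newer/faster device (e.g. an SSD) in place of
  # one of the SAS disks
  zpool replace fast c2t0d0 c3t0d0

  # watch per-vdev IOPS to see how each pool is actually being used
  zpool iostat -v 5

Having the two pools side by side lets you place each NFS-exported filesystem
on whichever pool suits its workload, per the generalization above.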