Hi all,

I have an interesting project that I am working on: a large-volume file download service that is in need of a new box. The current systems cannot handle the load because, for various reasons, they have become very I/O limited. We currently run Debian Linux with 3ware hardware RAID5; I am not sure of the exact disk config, as it is a leased system that I have never seen.
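(For reference, a minimal sketch of how the I/O wait can be sampled on a box like this, assuming the standard Linux /proc/stat layout; this is just a quick-and-dirty check, not part of the proposal:)

#!/usr/bin/env python
# Rough iowait sampler: read the aggregate "cpu" line of /proc/stat twice
# and report the fraction of CPU time spent waiting on I/O in between.
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]   # user nice system idle iowait irq ...
    values = [int(v) for v in fields]
    return values[4], sum(values)           # (iowait jiffies, total jiffies)

if __name__ == "__main__":
    io1, total1 = cpu_times()
    time.sleep(5)
    io2, total2 = cpu_times()
    pct = 100.0 * (io2 - io1) / max(total2 - total1, 1)
    print("iowait over the last 5s: %.1f%%" % pct)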
The usage pattern is fairly consistent: out of 200-300 concurrent downloads you will typically find between 15 and 20 distinct files being fetched. As you can guess, when the machine is trying to push 200-300 Mbit/s in total, the disks go crazy, and because of the file sizes and only 4 GB of RAM in the box, caching is of little use. The systems regularly sit at 80-90% I/O wait. Disk write speed is of almost no concern, as only a few files are added each day.

The system we have come up with is pretty robust: two dual-core Opterons, 32 GB of RAM, and 8 x 750 GB SATA disks. The disks would be paired off into four 2-disk RAID0 sets, each holding a complete copy of the data: essentially a manually replicated four-way mirror. The download manager would then use the sets in round-robin fashion (a rough sketch of what I mean is in the P.S. below), which should substantially reduce the frantic disk head dancing. The second idea is to dedicate 50-75% of the RAM to a ramdisk that would be the fetch path for the top-10 and new, hot downloads; that should further reduce seeking on files that are being downloaded many times concurrently.

My question for this list: with ZFS, is there a way to address the individual mirror sets in a pool if I were to build a four-way mirrored stripe set out of the 8 disks? Even better would be if ZFS itself handled the "load balancing" by intelligently splitting the reads among the four sets. Either approach would be more elegant than replicating the sets by hand with cron/rsync.

Thanks,
Jason
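P.S. To make the round-robin idea concrete, here is a minimal sketch of the kind of copy selector the download manager would call. The mountpoints (/data0 through /data3 for the four RAID0 pairs, /ramcache for the tmpfs holding the hot files) and the function name are placeholders, not the real layout:

#!/usr/bin/env python
# Sketch only: pick which copy of the archive a new download should read from.
import itertools
import os

COPIES = ["/data0", "/data1", "/data2", "/data3"]   # four 2-disk RAID0 sets, full copy on each
RAMCACHE = "/ramcache"                              # tmpfs staging area for hot/new files

_next_copy = itertools.cycle(COPIES)                # simple round-robin over the copies

def pick_path(relpath):
    """Return the filesystem path to serve this download from (relpath is relative)."""
    hot = os.path.join(RAMCACHE, relpath)
    if os.path.exists(hot):                         # hot file already staged in RAM
        return hot
    return os.path.join(next(_next_copy), relpath)  # otherwise rotate through the copies

Plain round-robin is the dumb version; weighting each set by its outstanding I/O would be smarter, which is roughly what I am hoping ZFS could do for me automatically.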