On Fri, Apr 20, 2007 at 11:04:34PM -0700, Clint Pachl wrote:
>
> > > What do you consider a sane number of front ends, 10, less, more?
>
> Well, I think that depends on too many variables. I have a movie
> server (OBSD) that exports NFS to two home theatre computers (FBSD).
> The movie server is a dual P3 1GHz with 4 U320 SCSI disks in RAID0.
> When simultaneously playing different DVDs on the two theatre
> computers, the movie server is >90% idle; that's with TCP mounts.
> When using UDP mounts it's >96% idle. Although movie files are large,
> sequential data, the bottleneck in my network is my 100Mb/s LAN.
>
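For anyone wanting to repeat Clint's TCP vs. UDP comparison, the client
side looks something like this. The hostname and paths are my own
invention for illustration, not taken from his setup:

    # one-off mounts from a FreeBSD client: TCP, then the UDP variant
    mount -t nfs -o tcp movieserver:/export/movies /movies
    mount -t nfs -o udp movieserver:/export/movies /movies

    # or persistently, in the client's /etc/fstab:
    movieserver:/export/movies  /movies  nfs  rw,tcp  0  0

On an OpenBSD client the equivalent knob is mount_nfs's -T flag, UDP
being the default transport there.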
I don't have the experience that others here have, but a small ISP I
worked for used NFS to serve HTTP. It was a Linux shop: they had a
NetApp exporting 5000 users' /home dirs over NFS to a dozen 1U cheapo
i386 whiteboxes that ran apache/tomcat/cgi etc. Disk and CPU (for CGI,
HTTPS, Tomcat, PHP, etc.) were separated. The only problem they had
with NFS was flock when mbox was used for mail storage on the mail
farm (same NetApp). Once Courier maildir was used, this was no longer
an issue. The web farm was mainly read-only, while the mail farm was a
mix of reads and writes, all to the same NetApp. All eggs were in the
one NetApp basket... Maybe not on the same scale as the OP has in
mind.

> > Maybe it's time for me to revisit this yet again, but I've never
> > been very successful with high traffic.
>
> All I can say is that I love NFS. You're missing out. Plus it is so
> simple. I have wanted to check out AFS for fail-over reasons, but
> there are too many docs for me to read.
>
> One last note. Holland's disk structuring is very cool (read his
> earlier post for details). If I were to serve NFS to dozens or
> hundreds of clients, I would use his scheme, but apply his
> partitioning at the host level. If an NFS server is saturated, spread
> the load by adding another server. The drawback is that each client
> has multiple NFS mounts. However, if you have this many machines
> uniformly accessing an NFS array, the entire mounting process should
> be automated. This is where clever planning takes place.

Now I work for Sun, and they have something like 30,000 employees.
Nearly all staff use Sun Ray workstations, and home directories are
NFS mounts over a global WAN. There is not one massive /home box,
obviously; there are many home NFS servers in each of many cities.
From here in Scotland, I can work with an engineer elsewhere by cd'ing
to /somewhere/holland, /nowhere/japan or /elsewhere/colorado. It only
takes a couple of seconds for the automounter to kick in. The output
of "mount" shows the layout of /home as something like:

  /home/user1 box1.uk:/export/home5/28/user1
  /home/user2 box9.au:/export/home17/2/user2

So many average-sized boxes are used, which in turn have many average
disk packs, which are split. As you would expect, LDAP and NIS glue it
all together; a sketch of the kind of automounter maps behind such a
layout follows below.
--
Craig Skinner | http://www.kepax.co.uk | [EMAIL PROTECTED]
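For the curious, an indirect automounter map producing a /home layout
like the one above looks roughly like this. This is a sketch from
memory, not a copy of Sun's actual configuration, and in practice the
auto_home map would be served out of NIS or LDAP rather than a flat
file:

    # /etc/auto_master: hand the /home namespace over to the auto_home map
    /home   auto_home   -nosuid

    # auto_home entries; '&' expands to the key being looked up (the username)
    user1   box1.uk:/export/home5/28/&
    user2   box9.au:/export/home17/2/&

With that in place, the first "cd /home/user2" makes the automounter
mount box9.au:/export/home17/2/user2 on the fly, which is where the
couple-of-seconds delay comes from.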