On Fri, 20 Jun 1997, Rick Hawkins wrote:

> > > They will go on a machine with 3 200MB IDE drives, which will be a
> > > poor-man's server. My current thinking is to mount / on the first
> > > controller, and use the other pair as /usr on the second interface.
> > > /usr will be NFS exported. Or would I be better off putting the two
> > > /usr drives on separate controllers?
> > I'd think it was better to mount them across separate controllers. With
> > separate control and data lines, the kernel can issue two simultaneous
> > requests and get data from both at the same time. My understanding with
> > IDE (and EIDE) is that a single controller can only access a single
> > drive at a time and must wait for that request to finish before issuing
> > another.
>
> The reason I'm hesitating to put them on separate controllers is that /
> is also on the first controller. Everything that gets NFS exported will
> come off /usr, and my concern is that massive hits to the portion that
> was slaved could leave / inaccessible to the host.

Assuming you don't plan to add an ISA IDE/EIDE controller (I've seen them,
and I think someone is running Debian with one, giving them three or even
four IDE controllers), I would suggest running both disks on the second
controller and using md's linear mode, although that's merely a guess, not
a result of actual testing.

I had a 2x3.1GB RAID0 md array, with both disks as slaves, and 2x120MB swap
areas, one on each of the masters. Whenever mirror was running, it was
trying to access the (slave) array while swapping to the (master) swap
area(s). Horrible performance!

Don't think about putting / on md: without a non-md partition you can't
read the kernel, and without the kernel you can't load the md stuff. _DO_
compile md into the kernel; it'll be much easier to use than if it is
modularized.

One bad note to put forth: concurrent with a local storm and power failure
(though I don't think it was related), my Linux host choked. Upon reboot,
fsck on the md array failed (some sort of internal error). With my limited
knowledge of filesystems, I couldn't fix it and was forced to rebuild my
data. As a result, I chose to remove the md array and downgrade my disk
usage (I had a mirror on it, so I just had to give up breathing room and
Debian 1.1, only weeks before the release of 1.3, so I wasn't too bummed).

In doing so, I rearranged partitions so that / and /usr were on different
controllers and swap was on the same disk as /, and got much better
performance than before. Well worth it, but there were too many changes
(and too many other space commitments) to restore the md array and see how
performance was after the repartitioning. Sorry I can't verify that speed.

And make sure your drives DON'T ever spin down (I used hdparm -S 0, I
think, to stop that behavior).

HTH,
Pete

--
Peter J. Templin, Jr.                     Client Services Analyst
Computer & Communication Services         tel: (717) 524-1590
Bucknell University                       [EMAIL PROTECTED]
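P.S. For what it's worth, here's roughly what that setup would look like.
I'm typing this from memory rather than copying it off a working system, so
treat the exact md commands as a guess and check the md tools docs; the
device names and client name below are only examples, and the kernel option
names are as best I remember them.

    # Kernel: compile md and the linear personality in, not as modules
    #   CONFIG_BLK_DEV_MD=y
    #   CONFIG_MD_LINEAR=y

    # Glue the two /usr disks (both on the second controller, as master
    # and slave) end-to-end into one linear md device and mount it:
    mdadd /dev/md0 /dev/hdc1 /dev/hdd1
    mdrun -pl /dev/md0          # -pl = linear personality
    mount /dev/md0 /usr         # list it in /etc/mdtab and fstab for boot

    # Export /usr read-only to the client ("client1" is just a placeholder)
    #   /etc/exports:
    /usr    client1(ro)

    # And keep each drive from ever spinning down:
    hdparm -S 0 /dev/hdc
    hdparm -S 0 /dev/hdd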