Nikolas Britton writes:
Don't get me wrong.. I can get approval to go SCSI since our machines need at least 1TB+ (the storage machines)
Err.. I should have said "can't get approval" to go SCSI.. We are using SATA.
Why? 1TB and up is a SATA niche.
Correct.. that is what we use.
You can buy 3 SATA arrays for the price of 1 SCSI array.
Yup. SCSI drives are 3 to 5 times more expensive than SATA.
Also... gigabit Ethernet is only 125MB/s (max), and a single SATA drive can easily transfer at 50MB/s*.
But RAID can possibly do more than 125MB/sec when doing large sequential transfers..
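Rough back-of-the-envelope numbers, assuming ~50MB/s sustained per spindle (your drives and controller may do better or worse):

    GigE wire speed:         1000Mb/s / 8    = 125MB/s (less after TCP/IP overhead)
    4-way stripe of drives:  4 x ~50MB/s     = ~200MB/s sequential

So even a modest array can outrun the network on large sequential transfers.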
When I last tested on a 100Mb switch vs. a 1000Mb switch, the performance difference in our case (rsyncing data from Maildir) was around 25% to 30%, as measured over a week. And this is mostly lots and lots of small files. That tells me that even with SATA we are able to go over the 100Mb limit.
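Something along these lines, if anyone wants to run a similar comparison (paths and hostname here are just placeholders, not our real setup):

    rsync -a --stats /var/mail/ backuphost:/backup/mail/

--stats prints per-run totals, so you can compare transfer rates between the two switches.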
8 Disks in RAID 10, with 2 hot spares.
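If that were done with GEOM software RAID instead of a hardware controller (device names below are placeholders), the rough equivalent is a stripe over mirrors, with the spares sitting idle until you gmirror insert one after a failure:

    kldload geom_mirror geom_stripe
    gmirror label -v gm0 /dev/ad4  /dev/ad6
    gmirror label -v gm1 /dev/ad8  /dev/ad10
    gmirror label -v gm2 /dev/ad12 /dev/ad14
    gmirror label -v gm3 /dev/ad16 /dev/ad18
    gstripe label -v gs0 /dev/mirror/gm0 /dev/mirror/gm1 /dev/mirror/gm2 /dev/mirror/gm3
    newfs /dev/stripe/gs0
    # after a drive dies, drop the dead component and add a spare:
    gmirror forget gm0
    gmirror insert gm0 /dev/ad20

That gives you the capacity of 4 drives, with sequential reads striped across all 4 mirrors.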
The limiting factor is probably going to be your bus with arrays/GigE, so SCSI is pointless unless you can take advantage of SCSI's TCQ under high random-access I/O loads.
If we could afford it I still think SCSI would be useful. It is not only about raw throughput, but how quickly you can get the data to the apps or to disk, especially in a database or Maildir environment where there is lots of I/O going on.
*I just tested this with two Maxtor SATA drives the other day: dd if=/dev/disk1 of=/dev/disk2 bs=4m. It dropped off to about 30MB/s at the end but my average read/write was just over 50MB/s.
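To isolate read speed from the write side, a read-only pass is easy (same placeholder device name as in the test above):

    dd if=/dev/disk1 of=/dev/null bs=4m count=2000

That reads about 8GB and dd prints the average rate at the end.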
But that is mostly sequential work.. I think for sequential work SATA is definitely the way to go.. it is when you get into random I/O that SCSI supposedly outshines SATA.
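A very crude way to get a feel for the random I/O side (single-threaded, read-only; device name and offsets are placeholders) is to time a bunch of small reads at random offsets:

    time sh -c 'for i in $(jot 200); do
        dd if=/dev/disk1 of=/dev/null bs=4k count=1 skip=$(jot -r 1 0 250000) 2>/dev/null
    done'

If 200 random 4k reads take roughly 2 seconds, that is ~100 seeks/sec, which is where TCQ and faster spindles start to matter.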