First things first: I have no experience at all with SCSI, so I'll only talk about IDE.
Hardware RAID is 'easier' in that you don't have to load any driver during the boot process. However, there are few pure IDE hardware RAID controllers (most need a driver to work properly), and they are a bit more expensive than the others. With a 'not pure' hardware RAID, configuration is less straightforward than with software RAID. On the other hand, software RAID is in general more stable and configurable, since it is open source and has been proven by many thousands of users. Moreover, the CPU cost of software RAID is negligible with today's processors, at least whenever the number of striped drives is a power of 2 (in RAID 5, that would be a power of 2 plus one drive for redundancy). This makes sense, because in that case the kernel only has to split/join a chunk of bytes, which is not CPU intensive at all.

I have a HighPoint hardware RAID controller (not a 'pure' hardware RAID), so I prefer software RAID. I have tested both in RAID 0 (striping) and noticed no difference at all in performance (CPU or memory use, or drive speed) between them. If you do IDE RAID with, for example, 2 drives, hdparm will show no increase in buffer-cache read speed, since that is limited by memory and bus throughput rather than the disks, but almost double the speed in buffered disk reads.

Another point is chunk size. I use RAID to speed up access to programs and multimedia files, so most operations are on big chunks of data, and they take half the time they did before. When the chunks read/written to the drive are small, striping still improves things, but not by as much. That is the key point: you'll have to check whether index processing makes small or large accesses to the drive (I'm not sure about it).

On Sun, 2004-04-04 at 03:48, Christopher L. Everett wrote:
> Kent West wrote:
>
> > Christopher L. Everett wrote:
> >
> >> Hi,
> >>
> >> I do a lot of database work.
> >> Sometimes I must do massive batch jobs on my box, such as:
> >>
> >> -- multi-gigabyte database dumps and restores
> >> -- tests over millions of records, searching for overlooked cases
> >> -- one-off queries for sales & marketing types that join 8 or 9 tables
> >>
> >> The problem is that these things often take 10 to 30 minutes to run
> >> on my box. When I use the GNU time utility, I see a low PCPU number,
> >> typically between 15 and 25%. CPU utilization viewed through top
> >> remains at 35% or so, and I never go deeper than a few tens of
> >> kilobytes into swap, even though the 1 minute load average climbs
> >> to 2 and higher (I've seen peak numbers around 6).
> >>
> >> I'm using a single Seagate 40GB ATA-133 as my sole hard drive, and
> >> my system has an Athlon 2600 processor and 1 GB of RAM. Am I correct
> >> in thinking that the bottleneck lies in the HD subsystem?
> >>
> > That would be my guess, but I'm no expert. I'd try using "hdparm" to
> > turn on 32-bit addressing and DMA, to see if that makes a difference.
>
> Gosh, I should have mentioned my hdparm output.
> 32-bit addressing and DMA are already on:
>
> # hdparm -tT /dev/hda
> /dev/hda:
>  Timing buffer-cache reads:   1400 MB in  2.00 seconds = 698.71 MB/sec
>  Timing buffered disk reads:   120 MB in  3.01 seconds =  39.93 MB/sec
> # hdparm -tT /dev/hda
> /dev/hda:
>  Timing buffer-cache reads:   1396 MB in  2.00 seconds = 697.06 MB/sec
>  Timing buffered disk reads:   122 MB in  3.04 seconds =  40.12 MB/sec
> # hdparm -tT /dev/hda
> /dev/hda:
>  Timing buffer-cache reads:   1404 MB in  2.01 seconds = 700.01 MB/sec
>  Timing buffered disk reads:   120 MB in  3.01 seconds =  39.89 MB/sec
> # hdparm -I /dev/hda
>
> ATA device, with non-removable media
>         Model Number:       ST340016A
>         Serial Number:      3HS29P4S
>         Firmware Revision:  3.10
> Standards:
>         Supported: 5 4 3 2
>         Likely used: 6
> Configuration:
>         Logical         max     current
>         cylinders       16383   16383
>         heads           16      16
>         sectors/track   63      63
>         --
>         CHS current addressable sectors:   16514064
>         LBA    user addressable sectors:   78165360
>         device size with M = 1024*1024:       38166 MBytes
>         device size with M = 1000*1000:       40020 MBytes (40 GB)
> Capabilities:
>         LBA, IORDY(can be disabled)
>         bytes avail on r/w long: 4      Queue depth: 1
>         Standby timer values: spec'd by Standard
>         R/W multiple sector transfer: Max = 16  Current = ?
>         Recommended acoustic management value: 128, current value: 0
>         DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 *udma5
>              Cycle time: min=120ns recommended=120ns
>         PIO: pio0 pio1 pio2 pio3 pio4
>              Cycle time: no flow control=240ns  IORDY flow control=120ns
> Commands/features:
>         Enabled Supported:
>            *    READ BUFFER cmd
>            *    WRITE BUFFER cmd
>            *    Host Protected Area feature set
>            *    Look-ahead
>            *    Write cache
>            *    Power Management feature set
>                 Security Mode feature set
>                 SMART feature set
>                 Device Configuration Overlay feature set
>                 Automatic Acoustic Management feature set
>                 SET MAX security extension
>            *    DOWNLOAD MICROCODE cmd
> Security:
>         Master password revision code = 65534
>                 supported
>         not     enabled
>         not     locked
>         not     frozen
>         not     expired: security count
>         not     supported: enhanced erase
> HW reset results:
>         CBLID- above Vih
>         Device num = 1
> Checksum: correct
>
> As far as I can tell, I need buffered disk read numbers about 2.5 to 3
> times higher to take full advantage of my processor. If I were to upgrade
> my processor to an Athlon 3400, I would need to add another 35% of
> disk throughput.
>
> What kind of rig would people suggest that wouldn't break the bank?
>
> --
> Christopher L. Everett
>
> Chief Technology Officer      www.medbanner.com
> MedBanner, Inc.               www.physemp.com

-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
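P.S. The "power of 2" remark about striping can be sketched in a few lines. This is a simplified model of how RAID-0 maps a logical chunk number onto a (disk, offset) pair, not the kernel's actual code; the chunk numbers and disk counts are made-up examples. With a power-of-2 disk count, the modulo and divide collapse into a bit-mask and a shift, which is essentially free on any CPU:

```python
# Simplified RAID-0 (striping) address mapping: translate a logical chunk
# number into (disk index, chunk offset on that disk). With a power-of-2
# number of disks the modulo/divide reduce to a mask and a shift.

def map_chunk(logical_chunk, n_disks):
    """Map a logical chunk number onto (disk, chunk offset on that disk)."""
    if n_disks & (n_disks - 1) == 0:
        # Power of 2: cheap bit operations.
        shift = n_disks.bit_length() - 1          # log2(n_disks)
        return logical_chunk & (n_disks - 1), logical_chunk >> shift
    # General case: a real divide/modulo per chunk.
    return logical_chunk % n_disks, logical_chunk // n_disks

# With 2 disks, consecutive chunks simply alternate between drives:
print([map_chunk(c, 2) for c in range(4)])
# [(0, 0), (1, 0), (0, 1), (1, 1)]
```

Either branch gives the same mapping; the point is only that the power-of-2 case costs almost nothing, which is why the CPU overhead of striping is negligible there.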
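P.P.S. For reference, here is roughly what setting up a two-drive software RAID-0 with a chosen chunk size looks like using mdadm. The device names /dev/hda and /dev/hdc are placeholders for your own drives (put them on separate IDE channels, or the shared cable becomes the bottleneck), and 64 KB is just an example chunk size, not a recommendation:

```shell
# Create a two-drive RAID-0 (striped) array; --chunk is in kilobytes.
# /dev/hda and /dev/hdc are example devices -- adjust to your system.
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/hda /dev/hdc

# Put a filesystem on it and benchmark the array rather than a member disk.
mke2fs -j /dev/md0
hdparm -tT /dev/md0
```

As noted above, expect the buffered disk read number to roughly double relative to a single drive, while the buffer-cache number stays about the same.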