On Tue, 2006-06-06 at 19:56 +0300, Gilboa Davara wrote:
> On Mon, 2006-06-05 at 08:00 +0300, Marc A. Volovic wrote:
> > Quoth Shachar Shemesh:
> > 
> > > Marc A. Volovic wrote:
> > > > 2. in case of drive failure, recovery process is a pain
> > > >   
> > > Well, doing "sfdisk -d /dev/sda > partitions" in advance, and then doing
> > > "sfdisk /dev/sdb < partitions" isn't all that hard, really.
> > 
> > Which - especially in the case of complex raid volumes and doubly
> > especially when also running LVM - makes for managing a whole pile of small
> > and very important files which tend to *pooof* when the need is greatest.
> 
> Create a single partition, mark it as FD (Linux RAID autodetect), turn
> on GUID in mkraid, and you're done.
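> A minimal sketch, assuming mdadm (the successor to raidtools/mkraid)
> and example device names:
> 
>   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
>   mdadm --detail --scan >> /etc/mdadm.conf   # records the array's GUID/UUID
> 
> After that, "mdadm --assemble --scan" matches the members by UUID,
> whatever controller or port they end up on.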
> 
> I've switched controllers, mixed drive types (SCSI 68/80-pin, IDE/SATA),
> disconnected IDE drives by mistake (IDE does not support hot-plug),
> killed the kernel, killed the MD driver... I've managed to screw up
> everything imaginable, and never saw an MD array die.
> 
> On the other hand, I've had -very- bad experiences with older 3ware
> (SATA) and Adaptec (SCSI) RAID controllers.
> 
> > 
> > > > 3. in case of device move, reintegration of volume is also a pain
> > > >    and may lead to data loss
> > > >   
> > > Not if you configure MD properly. Properly configured, it finds the
> > > partitions based on GUID, which means that a move is a no-brainer.
> > 
> > I have yet to see a case where GUID-based mounting helps rather than
> > hinders. Especially - again, post-failure - when re-integrating two
> > devices with two.... errr... I mixed this up with disk-labels. Doh. Ok,
> > GUID is possible. But same counter as above - managing GUID labels in a
> > crisis is error-prone, and each mistake deepens the crisis.
> 
> Why should -I- manage the GUID by hand?
> 
> > 
> > > > 4. lower performance than any hardware raid
> > > >   
> > > ANY hardware raid?
> > > You obviously have not seen some of the shitty stuff that floats around.
> > 
> > Yes - ANY hardware raid. And I do NOT mean those crap BIOS-based raids; I
> > mean proper RAID controllers - Mylex, Vortex, Raidcore, LSI, etc.
> 
> A year ago I compared a 3ware 9500 with six 250GB drives to an MD RAID5
> array using two el-cheapo SIL3114 (?) SATA controllers.
> In most cases, non-static benchmarking (using the in-house data
> streaming application) showed the 3ware to be ~5-15% faster. In several
> tests it was actually slower.
> Upgrading the machine to a faster dual Opteron (instead of the original
> dual Xeon machine) seemed to indicate that the software RAID scales
> better than the hardware one.
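> (For a quick static sanity check, a plain sequential read - device and
> sizes here are just examples - already tells part of the story:
> 
>   dd if=/dev/md0 of=/dev/null bs=1M count=4096
> 
> though it says nothing about the streaming workload above.)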
> 
> > 
> > > A RAID controller that has almost no buffers.
> > > A controller that will restart a resync in the middle if the drive is
> > > being accessed too much.
> > 
> > That is flaky or borken (sic) hardware.
> 
> You call the 3ware 8500/9500 flaky?
> 
> ...
> 
> I do... but that's me ;)
> 
> > 
> > > better performer than MD. If you take a GOOD raid controller, MD will
> > > have poorer performance, but then it's really a question of budget,
> > > isn't it?
> > 
> > A raid controller (4 ports, SATA-I) will cost some US$350. Hardly a budget
> > breaker and well worth it.
> > 
> > > http://oss.metaparadigm.com/safte-monitor/
> > 
> > Ooooh - never saw that. Nice.
> > 
> > > I do believe you are either biased or always buying from someone else's
> > > pocket. Either way, all the above do not relate to my situation.
> > 
> > I am indeed biased, and I am not buying from someone else's pocket in all
> > cases (in many, but not all)... For my personal and professional use, I buy
> > from my own pocket. It is too expensive to buy poor shit or rely on
> > labour-intensive stuff - my labour (and, in fact, almost everyone's, too)
> > is too expensive to expend on coddling weird stuff. I do not do system
> > admin as a hobby, only as a necessary evil.
> 
> In FC/RHEL you can build the MD array (including the required partition
> label) from within Anaconda.
> I doubt that the LSI MegaRAID BIOS is easier to operate.
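> In kickstart terms it is a handful of lines (a sketch; sizes and disks
> are examples):
> 
>   part raid.01 --size=8000 --ondisk=sda
>   part raid.02 --size=8000 --ondisk=sdb
>   raid / --level=1 --device=md0 raid.01 raid.02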
> 
> > 
> > > On the other hand, it also has some nice things about it, the most
> > > obvious being that you can use different partitions on the disk at
> > > different RAID levels.
> > 
> > Reminds you of my point above on managing little critical files, no?
> > Surely, you cavil...
> > 
> > > As I stated above, it's all a question of budget and trade offs.
> > 
> > Assume a SATA RAID controller costs US$400. Assume you cost US$80/h (I am
> > being a cheap bugger). Assume a life-cycle of 36 months. The math is NOT
> > complex.
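> > (US$400 at US$80/h buys five hours. If babysitting MD costs more than
> > five hours over those 36 months, the controller has paid for itself.)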
> > 
> > 
> 
> My old workplace ate bundles of crap from 3ware... I wouldn't touch a
> 3ware controller if it was free.
> Last time I tested LSI's SATA MegaRAID it was dog slow, and don't get me
> started about Adaptec's SATA RAID controllers.
> 
> Gilboa

Just a small addition:
What is the going price of a hot-spare, hot-plug, resize-supporting,
RAID6-capable SCSI RAID controller?
I doubt that you'll be able to find one for US$400; eBay included.

Gilboa

