This is true; however, it's only a temporary measure, and I have backups.
Once the prices drop again, I'll buy another 1.5TB disk and convert back
to RAID5.
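
(Roughly what I have in mind for that conversion, untested and with a
placeholder name for the new disk's partition (/dev/sdd1), and assuming
the mdadm on this box supports growing a RAID0 into a RAID5:

    # add the new disk as a spare, then reshape the array in place
    mdadm /dev/md0 --add /dev/sdd1
    mdadm --grow /dev/md0 --level=5 --raid-devices=3 \
          --backup-file=/root/md0-grow.backup

The reshape should run with the array online, but I'll refresh the
backups before touching anything.)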

On Tue, 2012-01-10 at 13:14 +0700, Pandu Poluan wrote:
> 
> On Jan 10, 2012 8:48 AM, "Jeff Cranmer" <j...@lotussevencars.com>
> wrote:
> >
> >
> > > >
> > > > Me too.
> > > >
> > > > mdadm --detail /dev/md0 thinks that /dev/sdc1 is faulty.
> > > > I'm not sure whether it's really faulty, or just that my setup for
> > > > RAID is screwed up.
> > > >
> > > > How do I get rid of an existing /dev/md0?
> > >
> > > You stop it, overwrite the superblock with dd, and lose all data on
> > > the disks.
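> > >
> > > (A minimal sketch, with placeholder partition names and assuming the
> > > array is /dev/md0; mdadm --zero-superblock is the tidier alternative
> > > to raw dd, and either way the data is gone:
> > >
> > >     # stop the array, then wipe the md metadata on each member
> > >     mdadm --stop /dev/md0
> > >     mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sdc1
> > > )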
> > >
> > >
> > > >
> > > > I'm thinking that I can try creating a RAID1 array using the two
> > > > allegedly good disks and see if I can make that work.
> > >
> > > yeah
> > >
> > > >
> > > > If that works, I'll get rid of it and try recreating the RAID1 with
> > > > one good disk and the one that mdadm thinks is faulty.
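> > > >
> > > > (For reference, the sort of create command I have in mind, untested
> > > > and with placeholder partition names:
> > > >
> > > >     # build a 2-disk mirror from the two allegedly good partitions
> > > >     mdadm --create /dev/md0 --level=1 --raid-devices=2 \
> > > >           /dev/sda1 /dev/sdb1
> > > > )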
> > > >
> > >
> > > You don't have to. You can migrate a 2-disk RAID1 to a 3-disk RAID5.
> > > Howtos are available via Google.
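> > >
> > > (They generally boil down to something like the following, with
> > > placeholder device names; depending on the mdadm version the level
> > > change and the device-count change may need to be done as separate
> > > steps:
> > >
> > >     # add the third partition, then reshape the mirror into a RAID5
> > >     mdadm /dev/md0 --add /dev/sdd1
> > >     mdadm --grow /dev/md0 --level=5 --raid-devices=3
> > > )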
> > >
> > >
> > > Just saying: box in suspend-to-RAM. I changed the cable (and the
> > > connector on the mobo) on a disk with two RAID1 partitions on it.
> > >
> > > One came back after starting the box.
> > >
> > > The other? Nothing I tried worked. In the end I dd'ed the partition
> > > and did a complete 'faulty disk/replacement' resync.
> > >
> > > argl.
> > >
> > >
> > OK, so lesson learned.  Just because it builds correctly in a RAID1
> > array doesn't mean the drive isn't toast.
> >
> > I ran badblocks on the three drive components and, surprise,
> > surprise, /dev/sdc came up faulty.  I think I'll just build the two
> > non-faulty drives as a RAID0 array until hard drive prices come back
> > down to pre-Thailand-flood levels, and back up regularly.
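> >
> > (For anyone digging this out of the archives later, a non-destructive
> > scan is along the lines of
> >
> >     # read-only surface scan with progress output
> >     badblocks -sv /dev/sdc
> >
> > while the destructive -w write-mode test would wipe the disk.)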
> >
> > Thanks for all the help.
> >
> > Jeff
> >
> >
> >
> 
> RAID 0?!?! 
> 
> Please reconsider. 
> 
> With RAID 0, *any* single drive failure will result in *total* data
> loss.
> 
> Rgds, 
> 
> 


