On Thu, 17 Apr 2003 18:48, I. Forbes wrote:
> Am I correct in assuming that every time a "bad block" is discovered
> and remapped on a software raid1 system:
>
> - there is some data loss

I believe that if drive-0 in the array returns a read error then the
data is read from drive-1 and there is no data loss. Of course if the
drive returns bad data and claims it to be good data then you are
stuffed.

> - one of the drives is failed out of the array

Yes.

> I assume there are repeated attempts at reading the bad block, before
> the above actions are triggered.

Yes, this unfortunately causes things to block for a while...

> Hopefully these will trigger remapping at the firmware level before
> the above happens.

My experience is that IBM drives don't do this. It could be done, but
it would require more advanced drive firmware.

> Do you think there would be any benefit gained from "burning in" a
> new drive, perhaps by running "fsck -c -c", in order to find marginal
> blocks and get them mapped out before the drive is put onto an array?

Maybe.

> What about doing this on an array drive that has "failed" before
> attempting to remount it with "raidhotadd"?

Generally such a "burn-in" won't achieve any more benefit than just
doing a new raidhotadd, although it has worked once for me and is
something to keep in mind.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page
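PS: for the "burn-in" idea above, something like the following sketch is
what I have in mind. The device names are placeholders for illustration,
and note that "badblocks -w" destroys all data on the partition, so it
is only suitable for a drive that is out of the array:

```shell
#!/bin/sh
# Sketch of a burn-in before (re-)adding a drive to a RAID-1 array.
# /dev/hdc1 and /dev/md0 are placeholder device names -- substitute
# your own spare partition and md device.

burn_in() {
    disk=$1
    md=$2
    run=${3:-echo}   # defaults to a dry run; pass "" to really execute

    # Destructive write-mode scan: writing and reading back every
    # sector gives the drive firmware a chance to remap marginal
    # blocks before the drive carries real data.
    $run badblocks -s -w "$disk"

    # Put the drive into the array; the resync then rewrites it all.
    $run raidhotadd "$md" "$disk"
}

# Dry run: just print the commands that would be executed.
burn_in /dev/hdc1 /dev/md0
```

On a partition whose data you want to keep, "fsck -c -c" as suggested
above is the safer choice, as it runs badblocks in its non-destructive
read-write mode.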