LÉVAI Dániel @ 2017-06-20T10:22:27 +0200:
> Joel Sing @ 2017-06-19T18:14:30 +0200:
[...]
Hit reply too fast.

> > > You in fact gave the advice at a so lucky time, that I was about to
> > > return the disk for a warranty replacement -- had I done that, I could
> > > not have been able to repair the array. So thanks again, and I guess
> > > you'll have a beer on me when you're around Budapest ;)
> >
> > Just to clarify, you're saying that when you plugged all of the original
> > disks back in the array came up again correctly? And if this is correct,
> > was this at boot time?
>
> Yes, when I plugged back the 'broken' disk, the array came up in
> degraded state during boot.
>
> The order of events was the following:
> First, one of the disks went offline, then the array became degraded.
> After numerous reboots it always came back degraded with the failing
> disk being Offline, but after the very first reboot (after the fail)
> softraid couldn't read e.g. the size of the failed disk anymore; when I
> ran `bioctl softraid0` it showed something like this:
> (sorry, this is not the actual output, I'm just trying to remember it)
>
> softraid0 1 Degraded 9001777889280 sd8     RAID5
>           0 Online   3000592678912 1:0.0   noencl <sd2a>
>           1 Online   3000592678912 1:1.0   noencl <sd3a>
>           2 Online   3000592678912 1:2.0   noencl <sd4a>
>           3 Offline              0 1:3.0   noencl <sd5a>
>
> Softraid could however still read e.g. the serial number of the failed
> disk.

What I actually did then was boot into bsd.rd from a USB drive with all
five 3TB disks connected (the three original ones, the failing one, and
the new/clean disk), kick the failed drive out of the array by starting
the rebuild (bioctl -R ...), then shut down the machine (mid-rebuild),
remove the failed drive (so the three original ones and the new one
remained), and boot into the system from the system disk(s). The
rebuild then resumed and continued at boot time.


Daniel

--
LÉVAI Dániel
PGP key ID = 0x83B63A8F
Key fingerprint = DBEC C66B A47A DFA2 792D 650C C69B BE4C 83B6 3A8F
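
P.S.: In case it helps anyone else, the command sequence was roughly
the following. The volume was sd8 as in the output above, but the
device name of the new disk's RAID partition is only a placeholder
here -- I don't remember exactly what it attached as:

  # check the volume's status
  bioctl softraid0

  # kick off the rebuild onto the new disk's RAID 'a' partition
  # (sd6a is just a guess at what the new disk showed up as)
  bioctl -R /dev/sd6a sd8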