I just experienced this bug with Ubuntu Lucid Alpha 2.

Version: mdadm 2.6.7.1-1ubuntu15

I have three RAID 6 arrays: /dev/md0, /dev/md1, and /dev/md2.

/dev/md0 was created by the installer and was fine.  md1 and md2 were
created after installation.

At boot, my /proc/mdstat looked as follows:

md_d2 : inactive sdi2[3](S)
      871895680 blocks
       
md_d1 : inactive sdf1[4](S)
      1465047552 blocks
       
md0 : active raid6 sdb1[0] sdh1[2] sdi1[3] sdg1[1] sdj1[4]
      314592576 blocks level 6, 64k chunk, algorithm 2 [5/5] [UUUUU]

I then tried to assemble the missing arrays with the following
command:

mdadm --assemble --scan

After that, md1 and md2 were visible, but degraded:

Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] 
[raid10] 
md2 : active raid6 sdb2[0] sdj2[4] sdh2[2] sdg2[1]
      2615687040 blocks level 6, 64k chunk, algorithm 2 [5/4] [UUU_U]
      
md1 : active raid6 sda1[0] sde1[3] sdd1[2] sdc1[1]
      4395142656 blocks level 6, 64k chunk, algorithm 2 [5/4] [UUUU_]
      
md_d2 : inactive sdi2[3](S)
      871895680 blocks
       
md_d1 : inactive sdf1[4](S)
      1465047552 blocks
       
md0 : active raid6 sdb1[0] sdh1[2] sdi1[3] sdg1[1] sdj1[4]
      314592576 blocks level 6, 64k chunk, algorithm 2 [5/5] [UUUUU]
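In hindsight, the cause is visible right there: the missing member of
each degraded array (sdf1 for md1, sdi2 for md2) is exactly the device
that got grabbed as a spare by the bogus md_d* arrays.  If you want to
confirm the same thing on your own system, examining the stray
device's superblock shows the UUID of the array it really belongs to
(device names here are from my setup):

mdadm --examine /dev/sdf1
mdadm --examine /dev/sdi2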


Following some of the earlier comments, I recovered without rebuilding
the arrays, using the following steps:

1.  Stop the degraded arrays and the strange md_d* arrays:
mdadm -S /dev/md1
mdadm -S /dev/md2
mdadm -S /dev/md_d1
mdadm -S /dev/md_d2
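Note: if anything is still mounted from the degraded arrays, it has to
be unmounted first or mdadm -S will refuse to stop them; something
like:

mount | grep /dev/md
umount /dev/md1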

2.  Check that mdstat looks clean:

cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] 
[raid10] 
md0 : active raid6 sdb1[0] sdh1[2] sdi1[3] sdg1[1] sdj1[4]
      314592576 blocks level 6, 64k chunk, algorithm 2 [5/5] [UUUUU]
      
unused devices: <none>

Yep, just md0, so that's all good.

3.  Restart md1 and md2:

mdadm --assemble --scan
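If the scan had not picked them up again, explicitly naming the member
partitions (taken from the mdstat output above) should also work:

mdadm --assemble /dev/md1 /dev/sda1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm --assemble /dev/md2 /dev/sdb2 /dev/sdg2 /dev/sdh2 /dev/sdi2 /dev/sdj2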

4.  Check mdstat again:

cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] 
[raid10] 
md1 : active raid6 sda1[0] sdf1[4] sde1[3] sdd1[2] sdc1[1]
      4395142656 blocks level 6, 64k chunk, algorithm 2 [5/5] [UUUUU]
      
md2 : active raid6 sdb2[0] sdj2[4] sdi2[3] sdh2[2] sdg2[1]
      2615687040 blocks level 6, 64k chunk, algorithm 2 [5/5] [UUUUU]
      
md0 : active raid6 sdb1[0] sdh1[2] sdi1[3] sdg1[1] sdj1[4]
      314592576 blocks level 6, 64k chunk, algorithm 2 [5/5] [UUUUU]
      
unused devices: <none>

5.  Make sure mdadm.conf is correct.

I previously had only an entry for md0 in mdadm.conf, so I needed to
add definitions for md1 and md2:

mdadm -Es | grep md1 >> /etc/mdadm/mdadm.conf
mdadm -Es | grep md2 >> /etc/mdadm/mdadm.conf
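For reference, the appended lines should look roughly like this (the
UUIDs below are placeholders, not my real ones):

ARRAY /dev/md1 level=raid6 num-devices=5 UUID=00000000:00000000:00000000:00000000
ARRAY /dev/md2 level=raid6 num-devices=5 UUID=00000000:00000000:00000000:00000000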

6.  For good measure, update the initrd, since mdadm.conf is copied into it:
update-initramfs -u -k all
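To double-check that the conf really made it into the initrd, you can
list its contents (the initrd is a gzipped cpio archive):

zcat /boot/initrd.img-$(uname -r) | cpio -t | grep mdadm.conf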

All seems good :)  Thanks for all the comments; they helped me fix a
7TB array quickly (after having a heart attack).

-- 
mdadm cannot assemble array as cannot open drive with O_EXCL
https://bugs.launchpad.net/bugs/27037