Like many others, I have the same problem, and honestly this bug is
starting to make me wonder whether Ubuntu can make a stable production
server. [EDIT: I found a solution to my problem at the bottom]

I have "good" news for those who have been hoping to pinpoint where/how
this bug occurs. I've been able to successfully reproduce this error
many times using the same technique. Read below;

Test System
Ubuntu 9.04 server (Linux 2.6.28-11-server #42-Ubuntu SMP Fri Apr 17
02:45:36 UTC 2009 x86_64 GNU/Linux)
1x 500 GB SATA drive (boot only) - sda
4x Western Digital WD400BB (sdb, sdc, sdd, sde)

First I created a mirrored RAID md0 with disks sdb1 and sdc1, which is
working fantastically even now. Here is how I did it:
1. Booted a Live CD (Ubuntu 8.10 amd64)
2. Used gparted to partition/format each drive as ext3 and set the raid flag
3. Rebooted into Ubuntu server
4. Created the array: sudo mdadm --create /dev/md0 --level=mirror
--raid-devices=2 /dev/sdb1 /dev/sdc1
5. Allowed the array to assemble on boot: sudo mdadm -Es | grep md0 >>
/etc/mdadm/mdadm.conf (see the sketch just after this list for what that
appends)
6. Added the following line to /etc/fstab so the array mounts on boot:
/dev/md0 /media/raida ext3 auto,user,rw,exec 0 0
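
For reference, the line that "mdadm -Es" appends to /etc/mdadm/mdadm.conf
should look roughly like this; the UUID below is only a placeholder, yours
will be the real one read from the array:

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=00000000:00000000:00000000:00000000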


Then I made another mirrored RAID md1 with disks sdd1 and sde1, which is
the one I am having problems with.
1. Created the array: sudo mdadm --create /dev/md1 --level=mirror
--raid-devices=2 /dev/sdd1 /dev/sde1
2. It created fine, so I let it rebuild, which took 20 minutes. After the
rebuild, it reported that disk sdd had failed.
3. Powered off the machine and swapped the failed hard drive for a
replacement [which was previously formatted ext3 with the raid flag set]
(notice I did not add anything to mdadm.conf or fstab)
4. Powered the machine back on.
5. Tried to recreate the array, and it failed with:

administra...@testserver:~$ sudo mdadm --create /dev/md1 --level=mirror 
--raid-devices=2 /dev/sdd1 /dev/sde1
mdadm: /dev/sdd1 appears to contain an ext2fs file system
    size=39078080K  mtime=Wed Dec 31 18:00:00 1969
mdadm: Cannot open /dev/sde1: Device or resource busy
mdadm: create aborted


sdd1 is the replacement drive, and sde1 is from the original raid. 
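
In hindsight, "Device or resource busy" means something already had sde1
open. Two quick checks that would have pointed at the culprit (standard
mdadm/procfs commands; the device name matches my setup, adjust for yours):

cat /proc/mdstat                 # look for a stray md device holding the partition
sudo mdadm --examine /dev/sde1   # prints the old superblock, including which array it belongs to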

It is a good thing this is only a test server with nothing important on
it. A RAID1 is virtually pointless if you cannot rebuild it when the only
disk that survives is the one that can't be used! It is really pathetic
that this bug has been alive for 3+ years and no FIX has been found for
it. I do not want to try any dirty workarounds, for the simple reason that
eventually my test server will be put into production. I am just testing
out the system, and I'm not liking what I'm seeing.


Here are some more outputs.
administra...@testserver:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] 
[raid10] 
md_d1 : active raid1 sde1[1]
      39078016 blocks [2/1] [_U]
      
md0 : active raid1 sdc1[1] sdb1[0]
      39078016 blocks [2/2] [UU]
      
unused devices: <none>
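
(Note the md_d1 entry above: the [_U] means only the second mirror slot is
populated, so this looks like a degraded array the system assembled on its
own, and it is what was holding sde1 busy. "sudo mdadm --detail /dev/md_d1"
would confirm what it contains.)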


HEY! I GOT IT TO WORK!!

I want to apologize now for writing this post as a "stream of thought,"
but I just got it working. I wasn't going to post this entire comment,
but I figured since I got here using Google, maybe someone else will
benefit from it.

I checked cat /proc/mdstat just to paste the output here, and as you can
see md_d1 showed up as the old part of the array! So I simply typed in
"sudo mdadm --stop /dev/md_d1" and it said it stopped it. Then I typed in
sudo mdadm --create /dev/md1 --level=mirror --raid-devices=2 /dev/sdd1
/dev/sde1 and it worked. md1 is back up and running and resyncing now.
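
So, to summarize for anyone who lands here from a search, the whole
recovery came down to three commands (device names are from my setup,
substitute your own):

sudo mdadm --stop /dev/md_d1     # release the stale, auto-assembled degraded array
sudo mdadm --create /dev/md1 --level=mirror --raid-devices=2 /dev/sdd1 /dev/sde1
cat /proc/mdstat                 # watch the resync progress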

I am wondering whether my problem has anything to do with this bug. I
hope so, but if not, I hope this sheds some light on a few people's
problems.

-- 
mdadm cannot assemble array as cannot open drive with O_EXCL
https://bugs.launchpad.net/bugs/27037