This "ubuntu feature" has even earned a special note at linux-raid wiki:
https://raid.wiki.kernel.org/index.php/RAID_setup#Saving_your_RAID_configuration
I just ran into this and found a solution that actually makes some sense
at least for full volumes:
In some kernel release ... presumably around when this cropped up, the
default device naming changed from /dev/sdXX to /dev/xvdXXX. It
seems both are still supported, but mdadm is looking fo
Ok, I just had the same issue - or at least a similar one. Unfortunately
the box has some other problems and I don't currently have network access on it.
It affected a RAID0 as well, and I'm running 2.6.36. I was rebooting into
2.6.38, and after that reboot my RAID array was gone. I have to admit
that array has a
It seems that still didn't fix it permanently.
At a certain point things got really messed up.
Normally I have sda1+sdb1=md0, sda2+sdb2=md1, and sda3+sdb3=md2 on said system.
At one point md2 somehow got created from sda and sdb, and md0 and md1 were then
created from md2p1 and md2p2 (!!).
And this th
I'm experiencing the same problem on a host running Lucid. I even tried
reinstalling Hardy (thinking it was a problem with the partition
creation - a gparted bug), but the same thing happened.
What I'm experiencing is described pretty well in
http://serverfault.com/questions/209379/what-tells-initramfs-or-the-
u
I just rebooted my server and ran into this, so I filed bug 599135
--
mdadm cannot assemble array as cannot open drive with O_EXCL
https://bugs.launchpad.net/bugs/27037
I suggest that we try to identify the bug before discussing its
status.
I have not addressed the bug for a long time, but I saw that we can nowadays
read how things are supposed to work in
./linux-2.6.33.4/Documentation/md.txt
I would open a new bug, but I already reassembled the array (RAID0,
without data loss), so I can no longer give relevant output. Of course,
I could use more or less the same description as this one.
The array was broken after the second last reboot. I have no intention
of trying to reproduce this
Hey, this would make a good Dilbert.
Dilbert: Because Wally couldn't fix that critical bug, the customer gave up
after a few years and switched to a competitor's product.
PHB: Mark the bug as fixed; we've got targets to meet.
Ian
HAHAHA fix released? Two people just reproduced the bug in the most
recent and second most recent distro within the last 24 hours!
Alvin,
Would you mind opening a new bug for me? Our policy is to close bugs when the
original reporter is unable to replicate the conditions of the bug, whether
through resolution due to updates or through changes in their local environment.
Ian,
Thanks for following up on this. I'll mark it as Fixe
There is no comment about this bug affecting Lucid, so I'll confirm that
now. I encountered it on a RAID-0 array and used the workaround
mentioned above (stopping and reassembling the array).
I'm the original reporter but I haven't personally experienced this bug
since completely switching hardware and moving to Hardy, so I can't
provide any more information. Sorry about that, but after six years
things were bound to have changed.
I guess people need to open their own bugs, and I'm not
Jeffrey,
As it is clear you do not understand the way the Ubuntu Kernel Team works,
I'd like to explain a little bit. The Ubuntu Kernel Team regularly pulls stable
updates from the upstream kernel. As such, quite a number of bugs do not get
active work in Launchpad but do get resolved due to t
There is really no reason to believe this bug does _not_ exist in the
current release, since it has existed for years in past releases and has
not been specifically addressed. Repeatedly setting this bug to "Incomplete"
is how this longstanding bug has managed to linger for so long.
It's a form of in
Hi Ian,
Please be sure to confirm this issue exists with the latest development release
of Ubuntu. ISO CD images are available from
http://cdimage.ubuntu.com/daily-live/current/ . If the issue remains, please
run the following command from a Terminal
(Applications->Accessories->Terminal). I
@Jeremy Foshee: I ran my apport-collect on a Karmic server above. I used
the same workaround as one of the commenters above. I had a useless,
incomplete md_d0 listed in mdstat which had claimed three of the four
devices in my RAID. I stopped md_d0 and assembled the RAID again with
success.
$ cat
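For anyone skimming these truncated reports, the stop-and-reassemble workaround
being described boils down to something like the following (device and array
names here are only examples; check your own /proc/mdstat for the real ones):
$ cat /proc/mdstat                      # look for an inactive/bogus md_dN that has claimed your member disks
$ sudo mdadm --stop /dev/md_d0          # stop it so it releases the devices
$ sudo mdadm --assemble --scan          # reassemble the real array from the superblocks / mdadm.conf
$ cat /proc/mdstat                      # the intended array should now show up as active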
My problem was slightly different in that the "busy device" would change
every time I rebooted.
Removing the fake MD device allowed me to assemble correctly, and I am up
and running now. Thanks for the help.
I have seen posts elsewhere regarding having a fake RAID controller in
the machine also causi
The last couple of comments sound like it's bug #252345
The following will recreate a static mdadm.conf (a possible workaround),
but it is not a fix for the underlying issue (dysfunctional hotplugging):
# /usr/share/mdadm/mkconf force-generate /etc/mdadm/mdadm.conf
# update-initramfs -k all -u
Boyd, do you have an entry for all of your arrays in
/etc/mdadm/mdadm.conf?
It should look like the output of:
sudo mdadm -Es
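For reference, the entries in question are ARRAY lines; a sketch of what they
look like follows (the UUIDs below are invented placeholders, so on a real
system copy the lines verbatim from the mdadm -Es output rather than typing
them by hand):
# hypothetical /etc/mdadm/mdadm.conf entries
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=00000000:11111111:22222222:33333333
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=44444444:55555555:66666666:77777777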
Cannot access RAID5 array upon reboot.
# uname -a
Linux bwaters-desktop 2.6.32-16-generic #25-Ubuntu SMP Tue Mar 9 16:33:12 UTC
2010 x86_64 GNU/Linux
1. manually assemble the array (named vla) as root
# mdadm --assemble --scan --auto=md
2. mount the array (it contains a large ext4 filesystem)
#
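The truncated second step is presumably just an ordinary mount; a guess at what
it might look like, with an invented mount point (the array device could also
appear as /dev/md/vla depending on how it was assembled):
# mount /dev/md0 /mnt/vla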
Since I cannot edit my recent post, here I go again.
Have you seen what strange RAID-related things grub2 contains?
The kernel doesn't access any of /etc/mdadm/mdadm.conf, /etc/mdadm.conf,
or /etc/default/mdadm nowadays.
What worries me is that nobody seems to have cared about this package for
the last two years.
I have to agree. I set up the two failing RAID devices *after* I
installed the system and did not define the devices in mdadm.conf. I
would have thought that the kernel would find the arrays regardless of
the mdadm.conf though.
Either way, the problem is fixed after fixing mdadm.conf and the pro
Thanks, Mark, for relaying your experience.
I'd venture that Mark's experience confirms my theory that the problem
stems from the additional configuration lines not making it into
/etc/mdadm/mdadm.conf.
When are those lines supposed to be automatically entered into that
file?
I just experienced this bug with Ubuntu Lucid Alpha 2.
Version: mdadm 2.6.7.1-1ubuntu15
I have 3 RAID 6 arrays: /dev/md0, /dev/md1 and /dev/md2
/dev/md0 was created by the installer and was ok. md1 and md2 were
created after installation.
At boot, my /proc/mdstat looked as follows:
md_d2 : in
Unassigned from Ben Collins.
If any of the people on this bug are still experiencing the same issue as the
original reporter on Karmic or (preferably) Lucid, could you please run
apport-collect -p linux 27037?
This will enable the team to narrow down the issue using the logs.
Thanks in advance.
-JFo
I'd like to report another bug concerning the formatting of what I just
wrote.
Hi Andrew,
I have had no trouble with my last two arrays on a basically Debian /etc/rcS.d
setup, even with the CONFIG_BLK_DEV_DM flag turned on while compiling the latest
"stable" kernel.
(But I now have the flag turned off, for paranoid safety reasons; I do not
need it on.)
Why don't you
Hmm... OK, follow-up. It seems that after I ran:
sudo mdadm -Es | grep md1 >> /etc/mdadm/mdadm.conf
and rebooted, the RAID came back on its own. md0 was already in
mdadm.conf. Is it that simple?
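It may indeed be that simple, with one caveat: in that one-liner the >>
redirection is performed by your own shell, not by sudo, so it only works from
a root shell or with the redirection wrapped as below. Combining it with the
initramfs rebuild mentioned in an earlier comment, a sketch would be:
$ sudo sh -c 'mdadm -Es | grep md1 >> /etc/mdadm/mdadm.conf'
$ sudo update-initramfs -u -k all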
OK, I know this issue is really old, but I just wanted to say thanks to
everyone for coming up with some solutions.
I built my second (key, I think) mirrored array last night and moved a
bunch of stuff over. After rebooting, the second (new) RAID was gone and
I thought I had lost it all.
The loop worked
Thanks, Dusty, that helps.
- Vartan
@vk: cat /proc/mdstat indicated a different array - one that I did not
build. It must have been built automatically (md_d1). Upon deleting this
"other" array (md_d1), I was able to build a new array. It seems as if
mdadm tried to build a broken array from one of the disks that still had
the previous
I was also testing raid5, using Ubuntu 9.04 x64 Alternate, and hit this
problem after rebooting. After reading post #24, I checked /proc/mdstat
and it indicated that there was an inactive array md_d0 which had one of
my drives listed. After stopping this array, the problem went away and
the raid
@Dustyn Marks: What do you mean by "you got it to work"? Did cat
/proc/mdstat all of a sudden indicate an active RAID, or did you do
something to make that happen?
In any case, I was hit with this bug as well and cannot afford to use
software raid on Ubuntu as long as this bug exists. Is there any
I, like many others, have the same problem. And honestly, this bug is
starting to make me wonder whether or not Ubuntu can make a
stable production server. [EDIT: I found a solution to my problem at the
bottom]
I have "good" news for those who have been hoping to pinpoint where/how
this bug o
I have the same problem, and the loop device workaround doesn't work for
me (same error):
[root@xen ~]# cat /etc/redhat-release
Fedora Core release 6 (Zod)
[root@xen ~]# uname -a
Linux xen.galpin.net 2.6.20-1.3002.fc6xen #1 SMP Mon Aug 13 14:21:21 EDT 2007
x86_64 x86_64 x86_64 GNU/Linux
Yes thi
Hi Pelle.
You can find the contents of my "/boot/config-2.6.28-11-server" file at
http://pastebin.com/m4ed33d4a
2009/5/17 Pelle :
> As I understand it, the RAID support and the Device Mapper support do not
> work well together.
> My problems are gone since disabling CONFIG_BLK_DEV_DM in the kerne
As I understand it, the RAID support and the Device Mapper support do not work
well together.
My problems have been gone since disabling CONFIG_BLK_DEV_DM in the kernel config.
What does your /boot/config-xxx say?
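If you just want to check how your current kernel was built, without
recompiling anything, the flag can be inspected from the packaged config file
(standard Debian/Ubuntu path; the line will read =y, =m, or
"# CONFIG_BLK_DEV_DM is not set"):
$ grep CONFIG_BLK_DEV_DM /boot/config-$(uname -r)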
No idea, I'm afraid. It went away for me some time ago when I upgraded
Ubuntu, but clearly it's still deep down in there and keeps leaping up
and biting someone.
Sadly, when it does occur, no-one is really able to keep trying things
to localise the issue - they just want their raid array back and
I agree. Do we know when this bug first emerged?
I created this array on another machine running Ubuntu 8.10, and
didn't have any problems like this then.
2009/5/16 Ian Oliver :
> This bug is now over three years old so has been ignored for many
> versions of Ubuntu. Yes, there's a work-around, but
This bug is now over three years old, so it has been ignored for many
versions of Ubuntu. Yes, there's a workaround, but it's a pretty nasty
thing to have to do on a production server.
Just ran into the same problem on Ubuntu 9.04 x64 Server. I had to adapt
the workaround a little so that it worked for me:
kaef...@blechserver:~$ uname -a
Linux Blechserver 2.6.28-11-server #42-Ubuntu SMP Fri Apr 17 02:45:36 UTC 2009
x86_64 GNU/Linux
kaef...@blechserver:~$ sudo losetup /dev/loop0
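Since most mentions of the "loop workaround" in this thread are cut off, here
is my rough reconstruction of the idea, with made-up device names: wrap each
member partition in a loop device (which mdadm can then open exclusively) and
assemble the array from the loops.
$ sudo losetup /dev/loop0 /dev/sdb1
$ sudo losetup /dev/loop1 /dev/sdc1
$ sudo mdadm --assemble /dev/md0 /dev/loop0 /dev/loop1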
Running 2.6.27-7-server x86_64 on Intrepid and had the same problem.
Loop workaround works for me.
I spent a couple of hours on this as well. I'm running with the loop workaround
now.
I am running 2.6.27-9-server x86_64.
Feel free to ask me to test stuff.
I got rid of my problems after compiling the kernel without LVM support.
That is:
# CONFIG_BLK_DEV_DM is not set
I can confirm I have encountered this bug on two separate machines. One
running Ubuntu 6.06 (kernel 2.6.16) and one running Debian 4.0 (kernel
2.6.18).
The loopback workaround works for me as well.
I have tried several things in /etc/mdadm/mdadm.conf without success.
And I think that the only way out of this is to rewrite the superblocks.
But if that cannot be done with the mdadm tool, I am not willing to try,
since I have no backup at the moment!
One strange thing is that no fs type is set
What have you got in /etc/mdadm/mdadm.conf? Is the ARRAY definition for
the old array in there? If so, comment it out and reboot (or unmount
everything and restart mdadm, but reboot is maybe easier!)
It might also be worth removing the entry that creates the array on your
loop devices and assembl
I have a similar problem, now running the 2.6.27 kernel.
Ian Oliver's nice workaround solves it, but there is a spooky md0 array
that I cannot get rid of.
>mdadm --examine --brief --scan --config=partitions
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=7d97e292: .../* /dev/sd[c-d]
- won't g
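For what it's worth, stale superblocks like the one behind that spooky md0 can
usually be cleared with mdadm itself rather than by hand. This is destructive
to the RAID metadata on the named devices, so only run it against members you
are certain should not belong to any array (the device names below are only
examples):
$ sudo mdadm --stop /dev/md0
$ sudo mdadm --zero-superblock /dev/sdc /dev/sdd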
The Ubuntu Kernel Team is planning to move to the 2.6.27 kernel for the
upcoming Intrepid Ibex 8.10 release. As a result, the kernel team would
appreciate it if you could please test this newer 2.6.27 Ubuntu kernel.
There are two ways you should be able to test:
1) If you are comfortable