Moving to dmraid given "nodmraid" comment
** Package changed: udev (Ubuntu) => dmraid (Ubuntu)
** Summary changed:
- udev causes raid to degrade after update to Karmic beta
+ raid degraded after update to Karmic beta
--
raid degraded after update to Karmic beta
https://bugs.launchpad.net/bugs/449876
Ah, problem solved: the reason is that mdadm thinks the partitions are
controlled by the motherboard's fake RAID. Adding nodmraid to the boot
options solves the problem.
However, this is still an issue: the motherboard RAID is disabled in the
BIOS, so mdadm should not be falling back to dmraid in that case.
--
I don't think it's related to the Intel controller, because I'm on an
nvidia controller (disabled in the BIOS) and I see the same bug.
--
Oh, and the motherboard chipset is Intel P35 (Asus P5K Pro)
--
Final is out, still no change.
However, could this be related to the Intel fake RAID controller I have
onboard? It's disabled in the BIOS (the SATA mode is AHCI, not RAID), but
it's still worth noting.
--
Just a quick update: after the RC came out, the partitions are still missing.
--
** Attachment added: "Output of blkid"
http://launchpadlibrarian.net/33843775/blkid.txt
--
>robegue: your collected data does not support your assertion, all your drives
>are detected and your RAID arrays are assembled and active. If you
>have a problem, it's with mdadm
Of course my RAID arrays are assembled and active: as stated above, I'm
using the 2.6.27 kernel with a working initrd image!
From your collected data:
UDEV  [1255421703.174825] add /devices/pci0000:00/0000:00:05.0/host0/target0:0:0/0:0:0:0/block/sda/sda1 (block)
UDEV_LOG=3
ACTION=add
DEVPATH=/devices/pci0000:00/0000:00:05.0/host0/target0:0:0/0:0:0:0/block/sda/sda1
SUBSYSTEM=block
DEVTYPE=partition
SEQNUM=2041
Also
UDEV [1255421699.902552] add /devices/virtual/block/md0 (block)
UDEV_LOG=3
ACTION=add
DEVPATH=/devices/virtual/block/md0
SUBSYSTEM=block
DEVTYPE=disk
SEQNUM=2781
MD_LEVEL=raid1
MD_DEVICES=2
MD_METADATA=00.90
MD_UUID=4f0243d1:01835e33:a49e0bc1:f24d2d5a
ID_FS_UUID=3e651827-866a-44b5-922a-
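(Aside: the md-related keys in an event like the one above can be filtered out quickly; the sample below is an embedded copy of a few lines from that md0 event, so the command is self-contained:)

```shell
# Show only the MD_* properties of a udev add event for /dev/md0.
# The event text is a sample copied from the listing above.
event='ACTION=add
DEVPATH=/devices/virtual/block/md0
SUBSYSTEM=block
MD_LEVEL=raid1
MD_DEVICES=2
MD_METADATA=00.90'
printf '%s\n' "$event" | grep '^MD_'
# → MD_LEVEL=raid1
# → MD_DEVICES=2
# → MD_METADATA=00.90
```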
--
robegue: your collected data does not support your assertion; all your
drives are detected and your RAID arrays are assembled and active. If
you have a problem, it's with mdadm.
--
blkid output:
/dev/sda1: UUID="7ce8485c-8898-4850-5671-009ebb5df50a" TYPE="linux_raid_member"
/dev/sda2: UUID="4f0243d1-0183-5e33-a49e-0bc1f24d2d5a" TYPE="linux_raid_member"
/dev/sda5: UUID="a3d5f583-4764-d3b5-fadd-18e4d3a667fa" TYPE="linux_raid_member"
/dev/sda6: UUID="c478e150-9856-9965-1d58-0bf
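To pick the raid members out of blkid output like that mechanically, a sketch (the sample lines are copied from above, plus a hypothetical non-raid /dev/sda3 line for contrast):

```shell
# List devices whose filesystem type is linux_raid_member.
# Sample blkid lines; the /dev/sda3 entry is hypothetical, for contrast.
blkid_out='/dev/sda1: UUID="7ce8485c-8898-4850-5671-009ebb5df50a" TYPE="linux_raid_member"
/dev/sda2: UUID="4f0243d1-0183-5e33-a49e-0bc1f24d2d5a" TYPE="linux_raid_member"
/dev/sda3: UUID="0f0e0d0c-0b0a-0908-0706-050403020100" TYPE="ext3"'
printf '%s\n' "$blkid_out" | awk -F: '/linux_raid_member/ {print $1}'
# → /dev/sda1
# → /dev/sda2
```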
# cat /etc/fstab
proc            /proc           proc    defaults                    0 0
usbfs           /proc/bus/usb   usbfs   defaults,devmode=666        0 0
/dev/md0        /               ext3    noatime,errors=remount-ro   0 1
/dev/md1
--
Could you run "apport-collect 449876" and also provide the output of
"blkid"? Thanks.
** Changed in: udev (Ubuntu)
Status: New => Incomplete
** Changed in: udev (Ubuntu)
Importance: Undecided => Medium
** Project changed: udev => null
--
I'm also encountering this bug after upgrading to karmic (2.6.31-13).
I can still boot using an older kernel (2.6.27-14).
/proc/mdstat says that the arrays are inactive, and 'mdadm -A -s' in
the busybox shell doesn't work. If I stop md0 with 'mdadm -S /dev/md0' and
then reassemble it, it is assembled.
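The inactive state described there shows up directly in /proc/mdstat; a sketch that flags such arrays, run over a hypothetical excerpt (device names are illustrative, not taken from this report):

```shell
# Print the names of arrays that /proc/mdstat lists as inactive.
# The excerpt is hypothetical, shaped like the state described above;
# on a live system you would feed in /proc/mdstat itself.
mdstat='md0 : inactive sda2[0](S)
md1 : active raid1 sda5[0] sdb5[1]'
printf '%s\n' "$mdstat" | awk '$3 == "inactive" {print $1}'
# → md0
```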
--
Change to correct Project
** Project changed: ubuntu-on-ec2 => udev
** Also affects: udev (Ubuntu)
Importance: Undecided
Status: New
--
Version of udev is 147~-5