I see, and I'll try to file the new report as soon as possible, even though I think it has the same cause as this one.
I've had a look at the kernel changelog and see that only two patches touched the md component between 2.6.32-46 (the working kernel) and 2.6.32-47 (the broken kernel reported in this bug):

https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?h=linux-2.6.32.y&id=c28f366a6ef9b6e14e069e7d750c32d73544444e
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?h=linux-2.6.32.y&id=372994e9fd5cadbacdfcc8724b590193d136c947

Notifying the authors of those two patches might help get this solved even more quickly. (A rough sketch of how that search can be repeated is included at the end of this message.)

Title:
  2.6.32-47 kernel update on 10.04 breaks software RAID (+ LVM)

Status in “linux” package in Ubuntu:
  Incomplete

Bug description:
  We have been running 10.04 LTS on 8 similar AMD Opteron x86_64 servers for several years. The servers have been kept up to date with patches as they come out and have been running 2.6.x kernels. Each server has some form of Linux software RAID running on it, as well as a 3Ware hardware RAID card using SATA disks. Software RAID is configured as RAID1 on all but one server, which runs software RAID10. All servers have software RAID configured to use a single partition of type 0xFD (Linux Software RAID Autodetect) on each disk. All servers were configured with LVM on top of /dev/md0.

  In the past year, mysterious problems have been occurring with the software RAID after applying system patches. Upon reboot, the server is unable to mount the LVM partitions on the Linux software RAID, and the boot is interrupted with "Continue to wait; or Press S to skip mounting or M for manual recovery", requiring intervention from an operator. Upon pressing 'M' and logging in as root, the LVM slices on the software RAID partition are not mounted and sometimes appear to be missing from LVM altogether. Oftentimes pvs, vgs and lvs will complain about "leaking memory". Germane to the issue, LVM will sometimes show the problem partitions as "Active", while at other times during the same login they will simply be gone. With LVM and /dev/md0 unstable, there is no way to discern the true state of the partitions in question.

  Starting the system from alternate boot media such as a CD-ROM or USB drive sometimes shows the software RAID and LVM in a proper state, which points suspicion at a kernel update on the afflicted system. Historically and subjectively, best practice in this instance seems to be booting from live media, starting the array in degraded mode, and backing up the array.
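For reference, a minimal sketch of that live-media recovery path might look like the following. The device, volume group and path names here (/dev/md0, /dev/sda1, vg0-data, /mnt, /backup) are assumptions for illustration only and will differ per machine:

  # Assemble the RAID1 array from a single member and force it to start degraded
  mdadm --assemble --run /dev/md0 /dev/sda1

  # Activate any LVM volume groups sitting on top of the array
  vgchange -ay

  # Mount the logical volume read-only and copy the data off before further repair
  mount -o ro /dev/mapper/vg0-data /mnt
  rsync -a /mnt/ /backup/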
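Separately, for anyone who wants to repeat the search for md changes mentioned at the top of this message, a rough sketch follows. The tag names are placeholders; the exact upstream 2.6.32.y tags that correspond to the Ubuntu -46 and -47 ABIs would need to be looked up first:

  # Clone the stable tree that the two commits above come from
  git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  cd linux-stable

  # List only the commits that touched the md driver between the two releases
  git log --oneline OLD_TAG..NEW_TAG -- drivers/md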