Hello,

I upgraded one of my servers to debian/lenny recently, and unfortunately I forgot to remove the apt pinning for mdadm from /etc/apt/preferences, so an old mdadm from backports.org stayed installed while the rest of the system was updated to debian/lenny. This led to a broken initramfs, and the server didn't boot any more.
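For reference, the pin entry I should have removed was something along these lines (I don't have the exact entry any more, so the release name shown here is just a guess at what a backports.org pin typically looked like):

```
Package: mdadm
Pin: release a=etch-backports
Pin-Priority: 999
```

With that in place, apt keeps preferring the backports version even after the dist-upgrade, which is exactly what bit me here.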
The server has two 500 GB disks in software RAID (md0 = swap, md1 = root). After some (helpful) conversation with waldi from the debian-kernel team I found out what the reason was (see bug #498029), and I upgraded mdadm to the latest version after booting with /dev/sda2 as root instead of /dev/md1. After recreating the initramfs the system indeed booted again with software RAID enabled, but now the filesystem on /dev/md1 seemed corrupted. fsck failed during the boot process and I had to run it manually, but that didn't fix all issues either; instead, fsck kept restarting from the beginning in an endless loop. So I stopped that, configured the system to again use only /dev/sda2 as the rootfs, and booted.

But somehow things got mixed up: /var/lib/dpkg/status is missing, and some parts of it are found in /var/lib/dpkg/info/molly-guard.conffiles instead, etc. In short, the filesystem seems to be scrambled. Currently I'm running 'fsck -y /dev/sdb2', and hopefully that filesystem isn't mixed up as badly as the one on /dev/sda2 is.

Anyway, once I've managed to restore one of the two filesystems, how can I start the RAID again? How do I tell mdadm which one is the correct and up-to-date device, and which one needs to be synced? Or is it even possible to automatically restore the full filesystem from the two RAID devices?

Greetings,
Jonas
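In case it helps to see what I have in mind: this is roughly the procedure I'd expect to use, assuming /dev/sda2 turns out to be the good copy (untested on my side, and the device names are just my situation, so please correct me if this is wrong):

```shell
# Make sure the out-of-date member can't be chosen as a sync source
# by wiping its md superblock first (destructive for /dev/sdb2!):
mdadm --zero-superblock /dev/sdb2

# Start the array degraded, with only the known-good member:
mdadm --assemble --run /dev/md1 /dev/sda2

# Add the wiped disk back; mdadm then resyncs it from /dev/sda2:
mdadm /dev/md1 --add /dev/sdb2

# Watch the resync progress:
cat /proc/mdstat
```

Is that the right approach, or is there a safer way to tell mdadm explicitly which member is authoritative?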