On Mon, Jan 08, 2007 at 03:04:28AM +0100, "Jørgen P. Tjernø" wrote:
> I recently reinstalled my (outdated) Debian Unstable to Debian Testing.
> When this was completed, I attempted mounting the data partitions from
> my old installation, and I had a few problems.
>
> Basically, I have a software raid (raid1, mirror) /dev/md0; on top of
> it I have LVM, and that LVM is set up with two different logical
> volumes: "encrypted" and "plain".
> "encrypted" is a block device for cryptsetup, and after I reinstalled it
> seemed as though I had the wrong passphrase (which could well be
> true, because I hadn't entered that particular password for 330 days) -
> this in itself wasn't that alarming. The real issue was with "plain",
> which is a simple ext3 partition. When I mounted it, I first got a bit
> of garbage in my ls -l output, i.e. uid and gid were shown as "?"
> (iirc), and the permissions were nonsensical. Still, it gave me a
> listing of the dirs and files on it. I just left it for later, it being
> the middle of Christmas. Now it shows up as empty (only "lost+found"
> still on it), dmesg has some peculiar output, and fsck gives a *lot* of
> errors regarding i_blocks (see below). This could be some kind of
> hardware malfunction, but I *am* running raid1, which should prevent
> that, and it seems peculiar that it occurred at the same time as I
> reinstalled my system (no physical activity that could cause any direct
> damage). And to mention it, I went from kernel 2.6.8 to 2.6.18-3. :-)
>
> Any suggestions would *really* be appreciated. :-)
>
Preface: I've never tried what you tried and I've never run unstable. So I'll guess, and we'll see if someone else who knows for sure has better ideas.

If it were me, I'd just make new partitions with the tools you now have installed and restore from the backups you made prior to the reinstall.

If I understand correctly, you were running unstable (Sid), you reinstalled testing (Etch), and then you tried to mount on Etch the partitions that were made by Sid. I wonder if you've run into the downgrading-not-supported (or at least pain-in-the-butt) issue. Keep in mind all the layers that may have incompatibilities between Sid and Etch: md raid, LVM, ext3.

dmesg shows kernel messages, so start there. What kind of "peculiar output"?

Raid doesn't prevent hardware malfunction. In fact, with two drives it doubles the chance that one of them will fail; what it gives you is redundancy for when one does. Also keep in mind that block errors from fsck won't refer to physical blocks on the drive, but to extents on the LV, which in turn are blocks on the md device, which in turn are blocks on the drive. A failure in any of those layers will give you fsck errors.

If the drives have S.M.A.R.T., then install smartmontools and check out the drives themselves.

Good luck.

Doug.
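P.S. If it helps, here's roughly how I'd walk the layers from the bottom up. The device and volume group names below are guesses based on your description (two drives mirrored as /dev/md0, LVM on top, LVs "encrypted" and "plain"); adjust them to match your system. All of these are read-only checks except the SMART self-test, which only exercises the drive.

```shell
# 1. Physical drives: SMART health summary and a short self-test
#    (from the smartmontools package). Drive names are assumptions.
smartctl -H /dev/sda
smartctl -H /dev/sdb
smartctl -t short /dev/sda   # later, read results with: smartctl -l selftest /dev/sda

# 2. md layer: is the mirror clean, degraded, or resyncing?
cat /proc/mdstat
mdadm --detail /dev/md0

# 3. LVM layer: are the PV, VG, and both LVs visible and active?
pvs
vgs
lvs

# 4. Filesystem layer: read-only check of "plain", no repairs attempted.
#    "yourvg" is a placeholder for your actual volume group name.
fsck.ext3 -n /dev/yourvg/plain

# 5. Watch kernel messages while doing the above
dmesg | tail -n 50
```

If the drives and the md array come back clean but fsck still complains, that points at the LVM or filesystem layers rather than hardware.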