Miles Fidelman wrote:
> Hello again, Folks,
> So... I'm getting closer to fixing this messed-up machine.
> Where things stand:
> I have root defined as an LVM2 LV, which should use /dev/md2 as its PV.
> /dev/md2, in turn, is a RAID1 array built from /dev/sda3, /dev/sdb3, and
> /dev/sdc3.
> Instead, LVM is reporting: "Found duplicate PV
> 2ppSS2q0kO3t0tuf8t6S19qY3ypWBOxF: using /dev/sdb3 not /dev/sda3"
> and /dev/md2 is reporting itself as inactive (cat /proc/mdstat)
> and active, degraded (mdadm --detail).
I get this all the time with servers connected to a SAN (multipath). You
need to look at the filter line in /etc/lvm/lvm.conf.
The default (on my Lenny box) is to accept all devices:
filter = [ "a/.*/" ]
You need to either replace that with a filter that accepts md devices
only, or exclude your sda/b/c partitions explicitly.
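Something like one of these should work (untested here, and the exact
regexes are mine, so check them against your device names before
rebooting):

    filter = [ "a|^/dev/md.*|", "r/.*/" ]      # accept md devices, reject everything else

or, the other way around, reject just the component partitions:

    filter = [ "r|^/dev/sd[abc]3$|", "a/.*/" ]

Entries are tried in order and the first match wins, so put the more
specific pattern first.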
In your case this won't help you fix things, but it will make sure LVM
doesn't grab the wrong device.
> 1. stop changes to /dev/sdb3 (actually, to /, which complicates things)
> 2. rebuild the RAID1 array, making sure to use /dev/sdb3 as the
> starting point for current data
I'm guessing that if you fail and remove sda3 and sdc3, it won't try to
rebuild anything, and you should be able to boot cleanly with a degraded
array. Then add each partition back and let it resync; a rough sketch
follows. I've never done this myself, though.
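Something along these lines (untested, and worth checking against the
mdadm man page before touching a live array, since sdb3 is the only copy
with current data):

    # mark the stale mirrors as failed and pull them out of md2
    mdadm /dev/md2 --fail /dev/sda3 --remove /dev/sda3
    mdadm /dev/md2 --fail /dev/sdc3 --remove /dev/sdc3

    # once booted on the degraded array, add them back one at a time
    mdadm /dev/md2 --add /dev/sda3
    cat /proc/mdstat                 # wait for the resync to finish
    mdadm /dev/md2 --add /dev/sdc3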
> 3. restart in such a way that LVM finds /dev/md2 as the right PV
> instead of one of its components
This is where the LVM filter comes in.
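With the filter in place, something like this should confirm that LVM
now sees the PV on /dev/md2 rather than on the raw partitions; and since
your root is on LVM, the updated lvm.conf also has to make it into the
initramfs (update-initramfs is the Debian way, as far as I know):

    pvscan                    # should now list /dev/md2, not /dev/sdb3
    pvs -o pv_name,vg_name    # double-check which device backs the VG
    update-initramfs -u       # rebuild the initramfs so the filter applies at boot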
--kj