I'm not sure if I had the same problem you saw, but I wasn't able to get FAI to reinstall over an LVM install either. The disk_config I tried was:
***********************************************
disk_config disk1 preserve_always:1,2
primary /boot 500- - -
primary -     0-   - -

disk_config lvm preserve_lazy:all
vg vg01 sdb2
vg01-wheezy_root / ...
***********************************************

...with various values for preserve_lazy. vg01 already exists and holds
all my squeeze LVs too. Unfortunately I didn't save the exact error
message, but it was related to setup-storage failing to create the LVs,
or possibly not knowing how to preserve the existing vg01.

I ended up manually creating, formatting and mounting the LVs,
installing via FAI to a VM, rsyncing the bits over, and running
update-grub while chroot'd into the manually partitioned LVs (roughly
the sequence sketched after the quoted message below).

--
Adam Carheden

On 07/10/2013 02:05 PM, Ken Hahn wrote:
> Hello,
>
> I'm trying to get FAI working for an install of several labs using
> Debian Wheezy. I'm using the latest wheezy install of FAI (which is
> version 4.0.6).
>
> My install process has worked fine when I empty out the disk (dd
> if=/dev/zero of=/dev/sda bs=1024 count=512 is my friend) of the client
> machine. However, when I try to reinstall on top of a previous system
> that used LVM, I continually hit failures. This led me to a few
> questions specifically about setup-storage:
>
> 1. Is there any documentation on all the names for the pre- and
> post-dependencies of a command? I'm having a very hard time deciding
> whether there's a bug or my config has problems, because it's hard for
> me to decode these strings. Specifically, what is self_cleared_*, and
> why does it sometimes have a dev node suffix and other times a logical
> volume name?
>
> 2. Has anybody had luck installing where an LVM setup previously
> existed? I see that the wipefs command always depends on a vgchange -a
> n command, and I don't understand how that could work, as the vgchange
> removes the device node. With no device node, there's no device to
> wipe. (Also, I see that for LVM, wipefs refers to a path like
> vg/fscache instead of /dev/vg/fscache. I'm not sure how that would
> ever work, either.)
>
> One of the few things I can think of is that the kernel causes
> different behavior in how the dev nodes appear and disappear. I am
> using a stock Debian kernel instead of the grml one, because the grml
> one was crashing randomly on my test machine (which is similar to my
> lab machines).
>
> I appreciate any relevant feedback.
>
> -Ken Hahn
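
P.S. For reference, the manual workaround went roughly like this. It's
a from-memory sketch, not a tested script: the LV size, the VM hostname
(faivm) and the /boot partition are placeholders for my setup, so
adjust to taste.

***********************************************
# reactivate the existing VG so its device nodes show up
vgchange -a y vg01

# create and format the new root LV (size is an example)
lvcreate -L 10G -n wheezy_root vg01
mkfs.ext4 /dev/vg01/wheezy_root

# mount the target, including the preserved /boot partition
mount /dev/vg01/wheezy_root /mnt
mount /dev/sdb1 /mnt/boot       # whatever your /boot partition is

# copy the bits over from the FAI-installed VM
# (-x keeps rsync from descending into /proc, /sys, /dev)
rsync -aHAXx root@faivm:/ /mnt/

# fix up the bootloader from inside the target
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt update-grub
***********************************************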
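
P.P.S. Re: Ken's question 2, the ordering problem is easy to
demonstrate by hand (using the vg/fscache names from his example):

***********************************************
vgchange -a n vg            # deactivates the VG: /dev/vg/* nodes disappear
wipefs -a /dev/vg/fscache   # now fails: no such file or directory

# the order one would expect instead:
wipefs -a /dev/vg/fscache   # wipe signatures while the node still exists
vgchange -a n vg            # then deactivate the VG
***********************************************

So if wipefs really is ordered after vgchange -a n, that would go a
long way toward explaining the failures on machines with a pre-existing
VG.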