Hello,

It looks like this fix doesn't resolve the problem when a preserved
partition is desired.  I'm still looking into it.

-Ken

On 07/11/2013 11:40 AM, Ken Hahn wrote:
> Hello,
> 
> Thanks, indeed that patch fixed the problem and made sense to me.  I am
> still curious, however, whether there is any documentation on the
> various pre and post states, just so I can understand what they are
> supposed to mean.  The new vg_enabled_for_destroy_* makes sense to me,
> but I'm not sure what the original vgchange_a_n_VG_* is supposed to
> mean exactly.
> 
> I think the confusion in diagnosing this kind of problem comes from the
> lack of this information, and also from the lack of a way to see the
> full dependency graph that gets generated. (We do see a topologically
> sorted dump in the debug output, which gets most of the way there.)
> 
> Anyway, thank you again for the pointer to the patch.
> 
> -Ken
> 
> On 07/11/2013 03:26 AM, Bjarne Bertilsson wrote:
>> Hi,
>>
>> I think the patch posted in this bug report will fix the problem with
>> LVM, though I haven't tested it yet.  Note that there are two bugs in
>> that report, but the fix you want is the one posted on GitHub.
>>
>> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=676882
>>
>> https://github.com/kumy/fai/commit/fbdde8f6707f35bed3a377d901389a2d67e7de37
>>
>> I'm not sure why this hasn't been addressed upstream yet.
>>
>> BR
>> / Bjarne
>>
>>
>> On Wed, Jul 10, 2013 at 10:05:10PM +0200, Ken Hahn wrote:
>>> Hello,
>>>
>>> I'm trying to get FAI working for an install of several labs using
>>> Debian Wheezy.  I'm using the latest wheezy install of FAI (which is
>>> version 4.0.6).
>>>
>>> My install process has worked fine when I first empty out the client
>>> machine's disk (dd if=/dev/zero of=/dev/sda bs=1024 count=512 is my
>>> friend).  However, when I try to reinstall on top of a previous system
>>> that used LVM, I continually run into failures.  This led me to a few
>>> questions specifically about setup-storage:
>>>
>>> 1. Is there any documentation on all the names for the pre and post
>>> dependencies for a command?  I'm having a very hard time deciding
>>> whether there's a bug or whether my config has problems, because it's
>>> hard for me to decode these strings.  Specifically, what is
>>> self_cleared_*, and why does it sometimes have a dev node suffix and
>>> other times a logical volume name?
>>>
>>> 2. Has anybody had luck installing where an LVM setup previously
>>> existed?  I see that the wipefs command always depends on a
>>> vgchange -a n command, and I don't understand how that could work, as
>>> the vgchange removes the device node.  With no device node, there's no
>>> device to wipe.  (Also, I see that for LVM, wipefs refers to a path
>>> like vg/fscache instead of /dev/vg/fscache.  I'm not sure how that
>>> would ever work, either.)
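>>>
>>> For concreteness, the ordering I mean looks roughly like this (vg and
>>> fscache are just example names taken from my config):
>>>
>>>   vgchange -a n vg        # deactivates the VG, so /dev/vg/fscache goes away
>>>   wipefs -a vg/fscache    # nothing left to wipe, and the path lacks /dev/
>>>
>>> whereas I would have expected wiping before deactivating to work:
>>>
>>>   wipefs -a /dev/vg/fscache
>>>   vgchange -a n vg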
>>>
>>> One of the few things I can think of is that the kernel causes
>>> different behavior with respect to when the dev nodes appear and
>>> disappear.  I am using a stock Debian kernel instead of the grml one,
>>> because the grml one was crashing randomly on my test machine (which
>>> is similar to my lab machines).
>>>
>>> I appreciate any relevant feedback.
>>>
>>> -Ken Hahn
>>
> 
