Re: this is not the fai nfsroot
On Tue, 2012-06-12 at 20:24 +0200, Thomas Neumann wrote:
> Hello dear list
>
> This is just a small word of caution. Please do not:
>
>   cd /srv/fai/nfsroot
>   tar czf /tmp/fai-nfsroot[...].tar.gz *

An all-inclusive alternative that works in most cases is

  tar czf /tmp/fai-nfsroot[...].tar.gz .

The "." instead of "*" adds an extra "./" to the beginning of all filenames, which may be undesirable in some cases, and the mode and ownership of "." may change upon unpacking; but for just moving a whole tree around as the same user (typically root), it works fine and includes all the dot-files.

Toomas
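A quick way to see the difference for yourself (a throwaway demo; the paths are just examples):

  # "*" is expanded by the shell and skips dot-files by default;
  # "." hands the whole directory to tar, dot-files included
  mkdir /tmp/demo && cd /tmp/demo
  touch .hidden visible
  tar czf /tmp/star.tar.gz *    # archives only "visible"
  tar czf /tmp/dot.tar.gz .     # archives "./.hidden" and "./visible"
  tar tzf /tmp/star.tar.gz      # list both archives and compare
  tar tzf /tmp/dot.tar.gz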
HOSTNAME variable
Hello,

I seem to have some problems with my DHCP/DNS settings. When trying a FAI installation, the HOSTNAME variable always resolves to "fai-live-host". I tried to create a workaround that gets the hostname from the IP address, assigns it to $HOSTNAME (and exports HOSTNAME), but that does not work: FAI ends up in a shell and $HOSTNAME is still "fai-live-host".

Where does FAI assign the HOSTNAME variable? Can't I change it? Or what else could have gone wrong? (This is on FAI 3.4.0ubuntu2.)

Thanks in advance!
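My workaround looks roughly like this (simplified; the interface name is hard-coded and the reverse-DNS lookup is specific to my setup):

  #!/bin/bash
  # look up our name from the IP address via reverse DNS and export it
  IP=$(ip -4 -o addr show dev eth0 | awk '{sub(/\/.*/, "", $4); print $4}')
  NAME=$(getent hosts "$IP" | awk '{print $2}')
  [ -n "$NAME" ] && export HOSTNAME=${NAME%%.*}

As far as I understand, an exported variable only survives if the script is sourced by the calling shell rather than run as a subprocess, so maybe that is where my attempt goes wrong.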
Re: Still having problems configuring FAI
Hello,

I have been seeing the same problem for a few days now, only on the amd64 arch: the name of the NFS server is missing in front of ":/srv/fai/nfsroot". It's not an FAI bug, as FAI has not started at that point; it looks more like a kernel bug to me. FAI 4 had been working fine for me before this bug showed up, and it is still working on the i386 arch with the same configuration (the server runs Squeeze and uses multiple nfsroots).

I haven't figured out a workaround yet; if anyone has an idea, I would be glad to try it.

--
Nicolas

Le 12/06/2012 19:14, Steve B. a écrit :
> I keep getting this error when the target boots:
>
>   Trying netboot from :/srv/fai/nfsroot ...
>   Begin: Trying nfsmount -o nolock -o ro :/srv/fai/nfsroot /live/image ...
>   nfsmount: can't parse IP address ' '
>
> followed by an endless loop of the error "can't parse IP address ' '".
>
> These issues started after the FAI upgrade to v4, and I'm not sure what
> to configure, since v4 changes a lot of the config files and the online
> doc is still for v3.
>
> Thanks
> Steve B.
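P.S. For reference, the string nfsmount is choking on comes from the nfsroot= argument on the kernel command line, which fai-chboot writes into the pxelinux.cfg/ entries; an entry normally looks roughly like this (kernel version, server address and flags here are illustrative, not copied from the failing setup):

  default fai-generated
  label fai-generated
  kernel vmlinuz-2.6.32-5-amd64
  append initrd=initrd.img-2.6.32-5-amd64 ip=dhcp root=/dev/nfs nfsroot=192.0.2.10:/srv/fai/nfsroot boot=live FAI_FLAGS=verbose,createvt FAI_ACTION=install

As far as I understand, when nfsroot= carries no explicit server, the initramfs is supposed to fall back to the server address supplied by DHCP, which would explain the blank IP if that field never gets filled in.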
Re: Still having problems configuring FAI
> On Wed, 13 Jun 2012 13:05:35 +0200, Nicolas Courtel said:
>
> I have been seeing the same problem for a few days now, only on the
> amd64 arch: the name of the NFS server is missing in front of
> ":/srv/fai/nfsroot". It's not an FAI bug, as FAI has not started at
> that point; it looks more like a kernel bug to me. FAI 4 had been
> working fine for me before this bug showed up, and it is still working
> on the i386 arch with the same configuration (the server runs Squeeze
> and uses multiple nfsroots).

Are you using dracut or live-boot inside the nfsroot?

--
regards Thomas
Re: Still having problems configuring FAI
Le 13/06/2012 13:56, Thomas Lange a écrit :
>> I have been seeing the same problem for a few days now, only on the
>> amd64 arch: the name of the NFS server is missing in front of
>> ":/srv/fai/nfsroot". It's not an FAI bug, as FAI has not started at
>> that point; it looks more like a kernel bug to me. FAI 4 had been
>> working fine for me before this bug showed up, and it is still working
>> on the i386 arch with the same configuration (the server runs Squeeze
>> and uses multiple nfsroots).
>
> Are you using dracut or live-boot inside the nfsroot?

AFAIK I use vanilla live-initramfs; I haven't added any extra features.

--
Nicolas
FAI 3.4.7 vs. 3.4.8 (was: LVM is "inactive" on first reboot after installation)
On Tue, June 12, 2012 23:20, n43w79 wrote:
>> What release of Debian, and what version of "lvm2" are you using?
>
> My FAI server: [...]
> Linux fai0 2.6.32-5-686 #1 SMP Thu Nov 3 04:23:54 UTC 2011 i686 GNU/Linux
> # dpkg --list | grep fai
> ii  fai-client         3.4.8   Fully Automatic Installation client package
> ii  fai-doc            3.4.8   Documentation for FAI
> ii  fai-quickstart     3.4.8   Fully Automatic Installation quickstart package
> ii  fai-server         3.4.8   Fully Automatic Installation server package
> ii  fai-setup-storage  3.4.8   automatically prepare storage devices
[...]

I am using FAI 3.4.7, which is the default version for Debian 6. Were there any fixes regarding LVM in .8? There were a lot of changes, so it's hard to tell whether this was one of them:

http://packages.debian.org/changelogs/pool/main/f/fai/fai_4.0.1/changelog
FAI 3.4.7 is broken (?)
Hello all,

I'm trying to use FAI 3.4.7, as included in Debian 6, and am having problems with it. As mentioned in another thread, I'm having issues with LVM not being activated on boot, while it works for "n43w79" using 3.4.8.

Another issue is that when /boot is on an MD mirror, GRUB does not get installed properly. For example, this configuration does not work:

disk_config disk1 fstabkey:uuid
primary  -  512  -  -
primary  -  32G  -  -
primary  -  8G   -  -

disk_config disk2 fstabkey:uuid
primary  -  512  -  -
primary  -  32G  -  -
primary  -  8G   -  -

disk_config raid
raid1  /boot  disk1.1,disk2.1  ext3  rw,noexec,nodev
raid1  /      disk1.2,disk2.2  ext3  rw,errors=remount-ro
raid1  swap   disk1.3,disk2.3  swap  rw

The installation goes through, but on reboot the BIOS 'hangs' trying to boot off the disks; GRUB never loads (no menu or anything). The result is the same if I get rid of /boot and have everything live in /.

scripts/GRUB_PC/10-setup returns with an exit code of 1:

[...]
++ SWAPLIST=/dev/md2
++ BOOT_DEVICE=/dev/md0
++ ROOT_PARTITION=/dev/md1
+ '[' -z /dev/md0 ']'
+ chroot /target grub-mkdevicemap --no-floppy
++ chroot /target grub-probe -tdrive -d /dev/md0
+ GROOT='(md/0)'
++ echo '(md/0)'
++ sed s:md/:md:g
+ GROOT='(md0)'
+ chroot /target grub-install --no-floppy '(md0)'
/usr/sbin/grub-setup: error: unknown device number: 104, 17.
++ error=1
[...]

What version of FAI should I be using? The one in Debian 6 seems to have all sorts of problems for the things we want to do.

Thanks for any info.
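I assume the standard GRUB-on-RAID1 workaround would be to install the boot loader onto each member disk rather than onto the array, along these lines (untested here; device names are just my guess for this box):

  # install GRUB into the MBR of both RAID members so either disk can boot
  chroot /target grub-install --no-floppy /dev/sda
  chroot /target grub-install --no-floppy /dev/sdb

Shouldn't 10-setup be doing something like that instead of passing '(md0)' to grub-install?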
[LVM+dracut] Damned if you do, damned if you don't (was: Any plans to get rid of nfs+overlayfs for netboot?)
[This is a repost of a message I sent earlier. It contained two screenshot attachments and was ignored by the ML software due to its size.]

> I filed Bug#676882 (LVM) [...]

I now have the option to decide between a n[e|o]tbooting dracut-faiclient.

If I do not provide rd_NO_LVM via tftp, then the fai-client does not boot [compare evidence a)]:

http://www.fluffbunny.de/fai-dracut-lvm-1.jpg

If I do provide rd_NO_LVM, then fai starts (finally), but setup-storage bugs out rather drastically [compare evidence b)]:

http://www.fluffbunny.de/fai-dracut-lvm-1.jpg

If I wipe the disc manually first, then there is no problem; if an LVM volume exists, dracut refuses to proceed. I tested this with 4.0.1 from the Debian repository / fai-project.org and with 4.0.1+0~1337780587.49~1.gbpbd6de4 from jenkins.grml.org, just to make sure there isn't already a workaround/fix in setup-storage.

disk_config is:

disk_config disk1 disklabel:msdos
# part.type  usage  size   filesystem  options
primary      /boot  100M   ext2        rw
primary      swap   8G     swap        sw
primary      -      4G-    -           -

disk_config lvm
vg vg_system          disk1.3
vg_system-root  /     4G-12G  xfs  rw

(I'm using this config for 3.4.7/3.4.8 too.)
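"Wiping manually" for me means roughly the following from a rescue shell (device and VG names match the config above; the exact commands are from memory):

  # deactivate and remove the old VG/PV so dracut has nothing to latch onto
  vgchange -a n vg_system
  vgremove -f vg_system
  pvremove -ff -y /dev/sda3
  # and for good measure, clear the partition table and early metadata
  dd if=/dev/zero of=/dev/sda bs=1M count=10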
Re: [LVM+dracut] Damned if you do, damned if you don't (was: Any plans to get rid of nfs+overlayfs for netboot?)
> If I do provide rd_NO_LVM, then fai starts (finally), but setup-storage
> bugs out rather drastically [compare evidence b)]:
>
> http://www.fluffbunny.de/fai-dracut-lvm-1.jpg

Sorry, copy&paste error. The correct link is

http://www.fluffbunny.de/fai-dracut-lvm-2.jpg
Working example of LVM + RAID?
Dear all,

With all sorts of problems popping up concerning setup-storage, can someone please share a solution that actually works for the following (or a similar) scenario?

The machine has two 1 TB SATA disks. The disks should contain typical partitions for /, /usr, etc., a swap area, and a large LVM volume, which will later be partitioned manually for use by virtual machines. I would not mind if /, /usr, etc. also resided on the LVM, should this make things easier. The data should be mirrored in a RAID1 fashion, and the system must be able to boot(!) and run if either of the disks fails.

What would be the best path to follow? Mirror the whole disks and put all the partitioning onto the md0 device? Or create partitions on both disks independently and then pair these up in RAID? At what level is it best to introduce LVM?

Since this is a one-off, unique server, I might even consider doing the whole disk partitioning manually, or in a custom hook, to work around any existing bugs in setup-storage.

Any suggestions and recommendations (including working examples of setup-storage config files) welcome!

Toomas
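To make the question concrete, here is roughly the kind of config I would hope could work, pieced together from the setup-storage documentation. I have not tested it, all sizes and names are placeholders, and I am not even sure setup-storage accepts an md device as an LVM physical volume, which is exactly the kind of thing I would like someone to confirm:

disk_config disk1
primary  -  512  -  -
primary  -  0-   -  -

disk_config disk2 sameas:disk1

disk_config raid fstabkey:uuid
raid1  /boot  disk1.1,disk2.1  ext3  rw
raid1  -      disk1.2,disk2.2  -     -

disk_config lvm
vg vg0          md1
vg0-root  /     20G    ext3  rw
vg0-usr   /usr  10G    ext3  rw
vg0-swap  swap  4G     swap  sw
vg0-vm    -     100G-  -     -

The intent: mirror small /boot and one big partition per disk, build the VG on the second mirror (md1), and leave vg0-vm unformatted for the virtual machines.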