Re: [lfs-support] prerequisites

2012-09-09 Thread Henrik /KaarPoSoft
On 09/10/12 00:05, Mikie wrote:
>
> >  "It replaces the current .config "   wow... now I'm really confused LOL
>
> >trying to understand
>
> >thanks tw3ak
>
> I am a confused newbie too tw3ak but I'm going to guess that each time 
> you run make menuconfig you get a new .config file.
>
> let's see what they say though.
>
Technically, "make menuconfig" replaces the current .config

However, it does so by loading the existing .config
then applying any changes you ask for in the menus
and then overwriting .config with the updated one.

So, copying an existing .config (eg. from your distro)
and then running "make menuconfig",
changing a few settings in the menus,
makes perfect sense.

You can always run a "diff" between the original .config
and the updated .config to see what has changed.
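For example (the two files below are tiny stand-ins for real kernel
configs, just to show what the diff tells you):

```shell
# Stand-ins for the .config before and after "make menuconfig"
printf 'CONFIG_EXT3_FS=y\nCONFIG_EXT4_FS=m\n' > config.orig
printf 'CONFIG_EXT3_FS=y\nCONFIG_EXT4_FS=y\n' > config.new
# diff exits with status 1 when the files differ, so mask that in scripts
diff config.orig config.new || true
```

In real life it would be:
cp .config config.orig; make menuconfig; diff config.orig .config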

/Henrik
-- 
http://linuxfromscratch.org/mailman/listinfo/lfs-support
FAQ: http://www.linuxfromscratch.org/lfs/faq.html
Unsubscribe: See the above information page


Re: [lfs-support] Can't boot into LFS kernel.

2012-09-10 Thread Henrik /KaarPoSoft
You may try root=/dev/sda1
Sometimes device naming is not consistent between distros.

On 09/10/12 14:01, Khoa Nguyen wrote:
> Hi all,
>
> I've just finished LFS. However, I'm facing with the problem that I 
> can't boot into the LFS kernel.
>
> It displays :
>
> VFS: Cannot open root device "sdb1" or unkonwn-block(2,0)
> Please append a correct "root=" boot option;
>
> 
>
> Kernel panic -  not syncing : VFS: Unable to mount root fs on 
> unknown-block (2-0).
>
> ...
> title LFS7.1 (Linux 3.2.6-lfs-7.1)
> root (hd1,0)
> insmod ext3
> kernel /boot/vmlinuz-3.2.6-lfs-7.1 root=/dev/sdb1 ro
>
>
> Thanks and Regards,
> Khoa Nguyen.
>


Re: [lfs-support] Can't boot into LFS kernel.

2012-09-10 Thread Henrik /KaarPoSoft
On 09/10/12 14:01, Khoa Nguyen wrote:
> Hi all,
>
> I've just finished LFS. However, I'm facing with the problem that I 
> can't boot into the LFS kernel.
>
> It displays :
>
> VFS: Cannot open root device "sdb1" or unkonwn-block(2,0)
> Please append a correct "root=" boot option;
>
> 
>
> Kernel panic -  not syncing : VFS: Unable to mount root fs on 
> unknown-block (2-0).
>
>
BTW: I always install an initramfs with busybox inside.

This way you can boot linux without any physical filesystems mounted
and have a simple system (busybox) available for debugging
(if grub can load the kernel and initramfs, of course).

And you can tell grub to boot with any root= option you like,
since the init script in the initramfs decides how to interpret it.

See
http://kaarpux.kaarposoft.dk/packages/l/linux.html
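With grub (legacy, as used in LFS 7.1) the menu entry might then look
something like this; the initramfs file name and device are just
examples, not my actual setup:

```shell
title LFS with initramfs
root (hd0,0)
kernel /boot/vmlinuz-3.2.6-lfs-7.1 root=/dev/sda1 ro
initrd /boot/initramfs.cpio.gz
```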

/Henrik


Re: [lfs-support] Can't boot into LFS kernel.

2012-09-11 Thread Henrik /KaarPoSoft
On 09/11/12 04:16, Khoa Nguyen wrote:
> I think there was any problem when i built the kernel for LFS.
> You can send me .config file of LFS ?
>
> When I type "make modules_install", It displays :
>
>  root:/sources/linux-3.2.6# make modules_install
>   INSTALL arch/x86/kernel/test_nx.ko
>   INSTALL drivers/hid/hid-logitech-dj.ko
>   INSTALL drivers/scsi/scsi_wait_scan.ko
>   INSTALL fs/ext4/ext4.ko
>   INSTALL fs/jbd2/jbd2.ko
>   INSTALL lib/crc16.ko
>   INSTALL net/netfilter/xt_mark.ko
>   DEPMOD  3.2.6
> Warning: you may need to install module-init-tools
> See http://www.codemonkey.org.uk/docs/post-halloween-2.6.txt
>
> I don't know it is problem, isn't it ?
>
> I've read some information on the internet . They told the 
> module-init-tools is replaced by kmod package.
> So It is not problem .
>
You must build all drivers related to the root filesystem into the 
kernel, not as modules.
If you build something as a module, the kernel can load it from disk, 
but obviously only AFTER the file system has been mounted.

If you build anything as modules, you should install kmod, as the module 
install uses the depmod command from there.
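A quick way to check is to grep the .config in the kernel source tree
(CONFIG_EXT3_FS etc. are just examples; check whatever your root
filesystem and disk controller actually need):

```shell
# In the kernel source tree:
grep -E '^CONFIG_(EXT3_FS|EXT4_FS|SCSI)=' .config
# =y means built into the kernel; =m means module, which is too late
# for mounting the root filesystem (unless you use an initramfs)
```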

My config file is here:
http://sourceforge.net/p/kaarpux/code/ci/HEAD/tree/master/packages/l/linux.files/config?force=True
but it is NOT for a basic LFS system.

/Henrik


Re: [lfs-support] Can't boot into LFS kernel.

2012-09-11 Thread Henrik /KaarPoSoft
On 09/11/12 11:27, Khoa Nguyen wrote:
> I've already select all SCSI driver, all fs driver but I don't find 
> out CONFIG_VMWARE .
> CONFIG_VMWARE_PVSCSI=y CONFIG_VMWARE_BALLOON=y"
>
Run "make menuconfig" and type /VMWARE to search for those symbols.
> I built the kernel again. But problem is same.
You could try to install another linux distro into vmware, and see how 
the devices are named and what kernel config options are used.
Or, as mentioned before: use busybox in an initramfs
/Henrik



Re: [lfs-support] Can't boot into LFS kernel.

2012-09-12 Thread Henrik /KaarPoSoft
On 09/12/12 05:03, Khoa Nguyen wrote:
> Can you tell me more about busybox ?
> What do you mean "busybox in an initramfs" ?
>
> I understand that due to failed boot process because the kernel can't 
> mount the root file system to read /etc/inittab.
> So we use busybox to mount manually . Right ?
>
When the kernel boots, it mounts the root file system and executes 
/sbin/init from there.
/sbin/init becomes process number one, and is responsible for system 
initialization (eg. running rc.d scripts and more).

But the kernel itself does not understand root=UUID=..., only
root=/dev/..., and if for some reason the kernel can not mount
the root file system, you are stuck.

Initramfs is a compressed archive of files. The initramfs file is read
from disk by grub and passed to the kernel. The kernel then mounts it
(as a ramfs) as the root filesystem and executes /sbin/init.
See
http://www.linuxfordevices.com/c/a/Linux-For-Devices-Articles/Introducing-initramfs-a-new-model-for-initial-RAM-disks/
http://lugatgt.org/content/booting.inittools/downloads/presentation.pdf
http://kaarpux.kaarposoft.dk/packages/l/linux.html

The /sbin/init in the initramfs can then do whatever is necessary to 
mount the real root filesystem from disk, and finally run /sbin/init on 
the real root file system. Or it can run a whole system directly from 
the initramfs, never touching your disk.

My init file in the initramfs looks like this:
http://sourceforge.net/p/kaarpux/code/ci/HEAD/tree/master/packages/l/linux.files/init?force=True

It basically mounts the /proc /sys and /dev special directories, finds 
the real root file system and mounts it, and finally executes /sbin/init 
on the real root file system.

However, if something goes wrong (or if you specify busybox on the 
command line) it will drop into a /bin/sh shell from busybox.
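A minimal init along those lines could be sketched like this. This is
a simplified sketch, not my actual script; the device name and mount
point are assumptions, and busybox is assumed to be in the image:

```shell
#!/bin/sh
# Make kernel information and device nodes available
mount -t proc     none /proc
mount -t sysfs    none /sys
mount -t devtmpfs none /dev
# Mount the real root filesystem (read-only; /dev/sda1 is an assumption),
# dropping to a busybox shell for debugging if that fails
mount -o ro /dev/sda1 /mnt/root || exec /bin/sh
# Hand over to the real init on the real root filesystem
exec switch_root /mnt/root /sbin/init
```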

BusyBox combines tiny versions of many common UNIX utilities into a 
single small executable. It provides replacements for most of the 
utilities you usually find in GNU fileutils, shellutils, etc.
See
http://busybox.net/
http://kaarpux.kaarposoft.dk/packages/b/busybox.html

My way of building the initramfs:
http://kaarpux.kaarposoft.dk/packages/l/linux.html
http://sourceforge.net/p/kaarpux/code/ci/HEAD/tree/master/packages/l/linux.yaml 
(line 128-145)

Good luck...

/Henrik


Re: [lfs-support] Can't boot into LFS kernel.

2012-09-12 Thread Henrik /KaarPoSoft
On 09/12/12 14:27, Khoa Nguyen wrote:
> Hi Henrik,
>
> Thank you very much. Your explanation is clear. I get a lot 
> of knowledge from you.
> I've already made a initramfs as following:
>
> http://www.linuxfromscratch.org/blfs/view/svn/postlfs/initramfs.html
>
Great!
(Not using busybox as I suggested but copying a lot of files instead - 
perfectly fine)
> And The computer boot into LFS kernel by UUID.
>
> However, when it is booting, it don't know what is UUID or device to boot.
>
> And go to sh shell ( busybox ) like you said.
>
> I try to cat /etc/partitions but I don't see /dev/sdb1.
> So I think the VMWARE has problem.
>
When you are in the shell in the initramfs, try
cat /proc/filesystems
cat /proc/partitions
ls -l /dev
blkid

If that does not help you to figure out what happened, you may post the 
result here on the list.

You may also have a look at the kernel log, and see if it gives any hints...

/Henrik


Re: [lfs-support] Can't boot into LFS kernel.

2012-09-13 Thread Henrik /KaarPoSoft

On 09/13/12 04:03, Khoa Nguyen wrote:
> Hi Henrik,
>
>> When you are in the shell in the initramfs, try
>> cat /proc/filesystems
>> cat /proc/partitions
>> ls -l /dev
>> blkid
>
> I got your point. But I don't see anything which related to /dev/sdb1.
>
> Maybe the kernel don't recognize about new HDD ( dev/sdb1 ) which
> i added by vmware.
>
> That's why i told you the problem maybe related with vmware.

Which is why I suggested the above commands.
I do not know much about vmware, but under xen the harddisks you would
normally see as /dev/sd* are named /dev/xvd*

> I will setup LFS again. I will not add new HDD anymore , i
> will divide a new partition base on /dev/sda.

Good luck


Re: [lfs-support] What Is "The" LFS Partition?

2012-11-06 Thread Henrik /KaarPoSoft
On 11/05/12 14:24, Alan Feuerbacher wrote:
> Howdy,
>
> I've done a major reset by giving up on installing an LFS system on my
> old 32-bit computer, and am now installing it on a new 64-bit system.
> The new system now has Fedora as the host system. It's installed on
> /dev/sdb and I want to put LFS on a blank 256G SSD -- /dev/sda.
>
> In trying to format /dev/sda I'm running into a conceptual problem. I
> partition the disk into:
>
> /dev/sda1 for /boot
> /dev/sda2 Extended partition
> /dev/sda5 swap
> /dev/sda6 for /
> /dev/sda7 for /usr
>
> and so forth. This is following the suggestions in the LFS book, section
> 2.2.1.3.
>
> When I go to section 2.3 to create a file system "on the partition", the
> book says:
>
> 
> To create an ext3 file system on the LFS partition, run the following:
>
> mke2fs -jv /dev/
>
> Replace  with the name of the LFS partition (hda5 in our previous
> example).
> 
>
> What should "" be in the above example?
>
> I'm confused because the book speaks of "THE" LFS partition, as if there
> were just one, but there are obviously a number of partitions.
>
> Alan
Hi Alan,

For your grandmother (-:

1) DISKS

A disk is a piece of hardware. When you boot the kernel, it shows
up as a block device such as /dev/sda.
This block device represents the whole disk, basically seen as an
array of bytes. Each byte can be read and written by the kernel.


2) PARTITIONS

Since you may want to use different parts of the disk for different
things, you can partition it.
The kernel shows each partition as e.g. /dev/sda1, /dev/sda2, etc.
Each of those partitions is again a block device, basically seen as an
array of bytes. Each byte can be read and written by the kernel.

There are different systems which can be used to partition disks,
such as the MBR scheme (fdisk) and GPT.

Suggestion: Stick to MBR for now!
When you have learned how it works, you may try out GPT.

[ I guess I might receive some flaming on this, so just a disclaimer:
I am dualbooting with Windows, and have always used MBR.
Maybe I should look into GPT... ]


2a) TRADITIONAL PARTITIONS

When you partition a disk, e.g. using fdisk, a "partition table" is
written to the first few sectors of the disk.

When the kernel boots up, it reads the first few sectors of the disk,
and if it looks like a (MBR or GPT) partition table, it creates the
block devices corresponding to each partition.

Basically, a partition table is very simple:
It just states that partition X starts at position N on the disk,
and continues for M bytes.

For historical reasons, the MBR scheme only allowed four partitions.
And (using e.g. fdisk) you can create exactly four PRIMARY partitions.
So, if you need four partitions or less, that's fine. But if you need
more, one of the four partitions (usually the last) can be an EXTENDED
partition. This extended partition (usually) covers the rest of the
disk, and at its beginning you will have an extended partition table.

So, in the end, you will have the one physical disk partitioned into
a number of partitions, each of them acting like an array of bytes,
plus a few sectors of management stuff:

+---
|  /dev/sda
|  +
|  |  MBR + partition table
|  +
|  |  /dev/sda1
|  +
|  |  /dev/sda2
|  +
|  |  /dev/sda3
|  +
|  |  /dev/sda4
|  |  +-
|  |  |  extended partition table
|  |  +-
|  |  |  /dev/sda5
|  |  +-
|  |  |  /dev/sda6
|  |  +-
|  |  |  ...
|  |  +-
|  +
+---

Now the kernel / you (root) / programs can access several block
devices, i.e. arrays of bytes.
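On a running system you can list the block devices the kernel has
created:

```shell
# One line per disk and per partition the kernel knows about
cat /proc/partitions
```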


3) SWAP

Now, for some uses, accessing a disk, or a part(ition) of a disk
as an array of bytes is just what you need:

The kernel uses swap, which is just images of memory written to disk,
and for this an array of bytes is just what it needs.


4) FILESYSTEMS

However, accessing storage as just an array of bytes is usually not
terribly convenient. So filesystems were invented.

When you format a block device as a filesystem (e.g. with mke2fs),
the first few bytes of the (block device seen as an) array of bytes
get initialized with some "magic" values.

When the "mount" command is used on a block device
(e.g. "mount /dev/sda7 /mnt"), it looks at the first few bytes
for those "magic" values, to figure out which type of filesystem
is there. It then instructs the kernel to interpret the block device
as a filesystem of a certain kind.

A filesystem is basically an organisation of the block device's array
of bytes in such a way that you can access e.g. "mydir/file.txt",
and the kernel will ask the file system drivers to give you some
of the bytes in the block device.

Re: [lfs-support] What Is "The" LFS Partition?

2012-11-06 Thread Henrik /KaarPoSoft
On 11/05/12 17:12, Bruce Dubbs wrote:
> For an SSD drive, I suggest getting gptdisk (fdisk syntax) or gparted 
> (challenging syntax) and partitioning the drive as a gpt drive. 

I have been building on an old 32bit box with rotating disks.
I am considering buying a new 64bit box with SSD.

So I would like to know why you suggest GPT for SSD?
I know - in general - the differences between legacy MBR and GPT, but
what specifics may be related to SSD vs rotating disks?

> For an ssd drive, you will want to disable atime *after* completing LFS. 

Why after?
How about relatime?

--- oOo ---

If I ever get a new 64bit box with SSD, I was thinking about:
Using an insane amount of swap space and creating a tmpfs and building 
there...
But maybe Linux' disk buffering is good enough to just build off an SSD 
directory, hoping for Linux to keep most of it in memory?

Any insights on this approach?

/Henrik


Re: [lfs-support] What Is "The" LFS Partition?

2012-11-06 Thread Henrik /KaarPoSoft
On 11/06/12 16:35, Feuerbacher, Alan wrote:
> Bruce wrote:
>
>>> sda2 is not really a partition. It contains the extended partitions.
>>> In your case, sda5 and sda6.
>> Actually an extended partition is a partition, but it has sub-
>> partitions.  All this stuff is avoided with a GUID Partition Table
>> (GPT) which is a lot more sane in the world of large disk drives.
> Ok, Bruce, I liked what you said about a GPT, did some research on it, and 
> set up a new hard disk with it. Following are some results, and I'll 
> appreciate any comments.
>
> I found a very useful article here on installing a GPT:
>
> http://www.ibm.com/developerworks/linux/library/l-gpt/index.html?ca=dgr-lnxw07GPT-Storagedth-lx&S_TACT=105AGY83&S_CMP=grlnxw07
>
> The article gives several choices of programs to partition a disk; I chose 
> gdisk. That's fine for me because I have Fedora 17 as a host system, and it 
> uses both GPT and LVM to format the filesystem.
>

My suggestion - and really not more than just a suggestion trying to be 
helpful:

Do *not* try to play around with LVM before you understand the basics!

FIRST:
Partition the disk with fdisk (or use GPT if you so prefer), and set 
aside ONE partition for all of LFS (and BLFS).
Then build LFS (and maybe some of BLFS).

THEN:
When you are 100% confident you understand what goes on, then play with 
RAID, LVM, and what have you...

/Henrik


Re: [lfs-support] What Is "The" LFS Partition?

2012-11-06 Thread Henrik /KaarPoSoft
On 11/06/12 19:59, Feuerbacher, Alan wrote:
> Seriously, this is among the most useful articles -- and I do mean 
> "article" -- that I've read about this topic. Thank you! 
Thank you for the nice words, I just hope it can help you...
>> Suggestion: Stick to MBR for now!
>> When you have learned how it works, you may try out GPT.
> Now I'm waffling about what to do, given that I've gone to the trouble of 
> learning about and installing both GPT and LVM systems. I feel a lot more 
> comfortable with the terminology and concepts now, given all the help from 
> this list and other reading.
If you really understand LVM, fine, go ahead!
As for GPT, maybe I am just a bit old-fashioned.
I use the legacy MBR partitioning because "everyone" understands it.
If you and your systems understand GPT, go ahead!
[ I guess I will use GPT for my next system... ]
>> [...]

>> When you format a block device as a filesystem (e.g. with mke2fs), the
>> first few bytes of the (block device seen as an) array of bytes gets
>> initialized with some "magic" values.
>>
>> When the "mount" command is used on a block device (e.g. "mount
>> /dev/sda7 /mnt"), it looks at the first few bytes for those "magic"
>> values, to figure out which type of filesystem is there. It then
>> instructs the kernel to interpret the block device as a filesystem of a
>> certain kind.
> So "mount" essentially associates a physical location in a particular 
> partition (say, the first few magic bytes of /dev/sda7) with a directory name 
> ("/mnt"), no?
Yes.
> Why does one have to create a directory with that name before executing the 
> "mount" command? Just because that's the traditional way mounting is done? Or 
> are there more basic reasons? In other words, why could the "mount" command 
> not create an actual directory (in the sense that "/usr" is an actual 
> directory) if the directory requested in the mount command (say, mount 
> /dev/sda7 /mnt/lfs) does not exist? I'm asking because this has caused me no 
> end of trouble, because I've not seen in any documentation the requirement 
> that said directory exist before doing the mount command.
I cannot tell you the history behind this; but this is how it is:
mount DEV DIR
requires that DIR exists and is a directory.
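In practice (device name and mount point hypothetical):

```shell
# The mount point must exist as a directory before mounting:
mkdir -pv /mnt/lfs
mount /dev/sda7 /mnt/lfs
```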
>> Suggestion: Stick with ext4 until you understand more!
>>
>>
>> 5) LVM
>>
>> LVM is great in many situations, however:
>>
>> Suggestion: Do not play with LVM until you understand the basics!
> But LVM is touted as being easier to deal with than the older system. Is that 
> not the case?
>
> For example, last night I managed to make GPT partitions and an LVM 
> filesystem on my new hard disk (earlier today I emailed this list with a 
> blow-by-blow account of doing this), based only on material from Net 
> searches. Now, if I can do that, I would think that the process is 
> substantially easier than the old methods.
It might be that GPT is easier than MBR partitioning.
But whatever you have to do and understand after venturing into LVM, you 
also have to do and understand before.
That is why I suggested looking into a basic system before venturing 
into LVM.
>> 8)  DISCLAIMER AND MY 5 CENTS
>>
>> I have built LFS and BLFS a couple of times, and now created
>> http://kaarpux.kaarposoft.dk/
> Pretty cool!
Yes! Check it out...
It is somewhere between LFS/BLFS and ArchLinux.
>
>> On my disk I usually have
>> - possibly a Windows partition
>> - a swap partition
>> - one or two partitions with Fedora/Gentoo or other "stock" linux.
>> - several partitions for several LFS/BLFS/KaarPux systems
>> -- each of those partitions is usually 64G to fit a whole system
>>  on one partition/filesystem
> Something along those lines is what I'd like ultimately to do. But my wife 
> keeps telling me I should get a Mac. :-)
Good advice (-:
[ I mean, not the Mac as such, but doing what your wife tells you (((-; ]
>> Although using several partitions for one system (e.g. /, /boot, /usr,
>> /var, /opt) has it's merits - in particular if you have a power break
>> and the filesystems are broken -
> Can you elaborate?
This is very much based on my own experience, and very simplified.
Others may have different opinions / facts / etc...

Functionally, every filesystem is a map from path to content.

But, the "path" is different in each file system, in particular when it 
comes to character encoding.
Some systems use mostly ASCII path names (with DOS code pages thrown
in): FAT*
Some systems use utf-8 (e.g. ext*).
This is a big mess, and I shall try not to elaborate here, as I am still
confused.
In any case, read this:
http://www.joelonsoftware.com/articles/Unicode.html
(Not related to filesystems, but to encoding).

Also, content is different:
Every filesystem allows a sequence of bytes as content.
But some filesystems (like ext*) allow symbolic links as well; FAT*
does not.
And some systems (like NTFS) allow multiple "streams" inside a file.
Again, too much to cover here.

Then there are differences in how to store 

Re: [lfs-support] What Is "The" LFS Partition?

2012-11-06 Thread Henrik /KaarPoSoft
On 11/06/12 21:21, Feuerbacher, Alan wrote:

> Ok, so mounting IS a lot more than just a way of looking at things. It's 
> doing something *physically*. That clears up a lot.
>
>>> Why does one have to create a directory with that name before
>>> executing the "mount" command?
>> The system has to know where to attach the data structures in the file
>> tree.  You could create a script to do a 'mkdir -p ;
>> mount...', but that's overkill.
> Now I'm confused again. I thought that creating a directory actually writes 
> data into a place on a hard disk that the kernel allocates for the directory. 
> Something about inodes, if I remember right. But if that's so, and a 
> filesystem is not yet mounted, where does that data get written? It looks 
> like the cart is before the horse.
>
> Specifically, if you want to do "mount /dev/sda5 /mnt/lfs", but you have to 
> create the directory "/mnt/lfs" BEFORE you do the mount, then where does the 
> inode information about "/mnt/lfs" get written? I'm sure I'm missing some 
> details.
>
In the beginning, you just have the root filesystem.

As any filesystem, it is "just" a map from paths to content.

When you create the directory /mnt/lfs, this writes something to the disk.

[ as it is a directory it does not touch inodes, but that's besides the 
point ]

So now you have a root filesystem (/) saying that "/mnt/lfs" is a directory.
(You could create files in there if you wanted.)

When you say "mount /dev/whatever /mnt/lfs" it just creates some
structure in the kernel, telling the kernel to access some data on
your /dev/whatever when you access /mnt/lfs/foobar.

So, after the mount, the kernel has some new data,
but neither your root filesystem, nor /dev/whatever has any data written 
on it.

/H


[lfs-support] How to build a multilib LFS?

2012-12-12 Thread Henrik /KaarPoSoft
Dear all,

I have built LFS and much of BLFS on my ancient 32bit machine.
With a bit of tweaking, I have also got most of it to work on my new 
x86_64 machine.
However, I do have a problem:
I need xen virtualization. xen builds a BIOS using the dev86 package.
This worked fine on i?86, but dev86 relies on a 32bit glibc, which is 
not available on a native x86_64 build.
I have looked at xen package definitions for other distros, and they 
also seem to want dev86 at build-time.
So, it seems I need a multilib LFS.

If I try to build gcc pass1 with --enable-multilib, I end up with:
checking whether the gcc  -m32 linker (ld -m elf_x86_64 -m elf_i386) 
supports shared libraries... yes
checking dynamic linker characteristics... configure: error: Link tests 
are not allowed after GCC_NO_EXECUTABLES.
make: *** [configure-zlib] Error 1

I tried to google this, and got a few hits, but nothing that enlightened 
me...

I also tried to build binutils first with target x86_64 then i686, but 
gcc still fails.

I have also looked at CLFS.
They build a couple of packages before binutils and then gcc.
But I cannot see any substantial differences in the gcc build itself...

So: Any help / suggestions / info regarding LFS multilib would be most 
appreciated!

I am new to multilib, so I am wondering about stuff like:
- where in the toolchain building sequence should I target multilib? I 
tried pass 1, but maybe it would be enough to target multilib later?
- what options should be used to configure the different packages at 
each phase?

Thanks in advance!

/Henrik




Re: [lfs-support] How to build a multilib LFS?

2012-12-14 Thread Henrik /KaarPoSoft
On 12/13/12 04:55, Bruce Dubbs wrote:
> William Harrington wrote:
>> On Dec 12, 2012, at 6:38 PM, Henrik /KaarPoSoft wrote:
>>
>>> I am new to multilib, so I a wondering about stuff like:
>>> - where in the toolchain building sequence should I target multilib? I
>>> tried pass 1, but maybe it would be enough to target multilib later?
>>> - what options should be used to configure the different packages at
>>> each phase?
>>
>> You could also try qemu-kvm instead of xen.  It's in BLFS and does not
>> need any 32-bit libraries.
>>
>> -- Bruce
I want xen so my distro can run on amazon EC2.
I spent quite some time getting the kernel config right for this...
(Works now for the 32bit version).
I *do* realize that I can have a xen kernel working with amazon EC2, but 
use KVM locally, but I would (for now) like to stick to just *one* 
virtualization method.

Anyways: thanks for pointing out kvm (which I have been using before), 
it might be my fallback solution.

/Henrik




Re: [lfs-support] How to build a multilib LFS?

2012-12-14 Thread Henrik /KaarPoSoft
On 12/13/12 01:38, Henrik /KaarPoSoft wrote:
> Dear all,
>
> I have build LFS and much of BLFS on my ancient 32bit machine.
> With a bit of tweaking, I have also got most of it to work on my new
> x86_64 machine.
> However, I do have a problem:
> I need xen virtualization. xen builds a BIOS using the dev86 package.
> This worked fine on i?86, but dev86 relies on a 32bit glibc, which is
> not available on a native x86_64 build.
> I have looked at xen package definitions for other distros, and they
> also seem to want dev86 at build-time.
> So, it seems I need a multilib LFS.
Thank you very much Eric, Ken, William and Bruce for your comments!

I will give CLFS another go...

/Henrik



Re: [lfs-support] lfs-support Digest, Vol 2802, Issue 1

2013-02-26 Thread Henrik /KaarPoSoft
On 02/26/2013 04:58 PM, Tobias Gasser wrote:
> Am 26.02.2013 15:24, schrieb Rick Berube:
>
>>>
>>
>> Just as a guess, I moved gmp to the Real Machine and re-attempted the
>> process.  This time it was successful.  I would infer that LFS doesn't
>> play well on virtualized hosts.
>>
>> Thanks.
>>
>
> i use qemu since about 6 months. before i used virtualbox (first from
> ubuntu, later i installed it in a blfs environment). i never had any
> problems to build lfs inside qemu or virtualbox.
>
> can you give more details where you guess to have problems with
> virtualisation?
>
> tobias
>

FWIW:

I am building a system quite similar to LFS:
http://kaarpux.kaarposoft.dk/

When I started to use qemu-kvm I ran into a bootstrap problem.
Sorry, but I do not remember the exact circumstances or error messages.

But the solution was to run qemu-kvm with:
qemu-system-x86_64 -cpu SandyBridge [...]
(My host is also SandyBridge)

Since then, I have had no problems at all building/bootstrapping in the 
qemu-kvm VM.

/Henrik