I'm sure someone else may be able to give a more detailed explanation,
but here's the "simple" answer:
GCC is used to build software. Correct. So, any software you compile on
your own system will work just fine. However, you are, in essence,
compiling software on YOUR system (host) for ANOTHER system (
Anthony Price wrote:
> 07-01-2009
>
> Host System: Linux 2.6.24-21-generic #1 SMP Tue Oct 21 23:43:45 UTC 2008
> i686 GNU/Linux
> This is a 32 bit OS on 64 bit hardware.
> I have run the version-check.sh script and the system meets the
> requirement for a build.
>
> Host gcc is: gcc (GCC) 4.2.4
LANG=<ll>_<CC>.<charmap><@modifiers>
ll = language (e.g. en for English)
CC = country (e.g. CA for Canada)
charmap = character map (e.g. iso88591)
@modifiers = modifiers (e.g. @euro if you need it; omit if you don't)
So, as an example:
en_CA.iso88591
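A minimal shell sketch assembling LANG from those pieces (the values are just the examples above; use locales your system actually provides):

```shell
# Compose LANG from language, country, and character map:
export LANG=en_CA.iso88591

# With a modifier appended, when one is needed (illustrative):
export LANG=en_CA.iso88591@euro

echo "$LANG"
```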
[EMAIL PROTECTED]
Rob
Ralph Porter wrote:
> Folks,
>
> I read and re-read
This is my third time through LFS and I always receive the following
failure in the GCC tests in chapter 6.12 within the 'gcc Summary':
FAIL: gcc.target/i386/fastcall-sseregparm.c execution test
# of expected passes		38984
# of unexpected failures 1
# of expected failures 99
# of unt
Randy McMurchy wrote:
> Rob Thornton wrote:
> > There is a bug in GCC 4.3.2 which will cause tests such as
> > limits-structnest.c (gcc.c-torture/compile/compile.exp, for example)
> > to fail if the system has a stack size of 8 MB or less. The devs are
> > aware of the p
Rodolfo
No, the error I reported should only affect the test suite, and is a
regression in 4.3.2.
It is also recommended that you not set any environment variables
for optimizing the toolchain unless you know exactly what you're doing,
as the minimal speed gains are outweighed by the pot
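A sketch of clearing such optimization variables before a build (the variable names below are the common ones; adjust to whatever you actually have set):

```shell
# Make sure no custom optimization flags leak into the toolchain build:
unset CFLAGS CXXFLAGS LDFLAGS

# Confirm they are empty:
echo "CFLAGS='${CFLAGS}' CXXFLAGS='${CXXFLAGS}' LDFLAGS='${LDFLAGS}'"
```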
There is a bug in GCC 4.3.2 which will cause tests such as
limits-structnest.c (gcc.c-torture/compile/compile.exp, for example) to
fail if the system has a stack size of 8 MB or less. The devs are aware
of the problem and are working on a fix, but it may be worth mentioning
in the book (svn).
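One way to check whether this bites you, assuming the 8 MB figure above (the 32 MB value below is an arbitrary illustrative choice, not from the original mail):

```shell
# Show the current soft stack limit in kilobytes (8192 = 8 MB):
ulimit -s

# Raise it for the current shell before running the GCC test suite:
ulimit -s 32768
ulimit -s
```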
I know I ha
, recompiling the toolchain against the new headers, then
installing a new kernel?
3c) How do other distributions get around this? Like Debian or Ubuntu,
which install new kernel images without updating the toolchain? Are they
just foolish or have I missed something completely?
Rob Thornton