Re: Build fail

2007-02-09 Thread Kai Ruottu

Ferad Zyulkyarov wrote :

To build a GCC cross compiler it would be good to use a special tool
called "crosstool". You may look at the following links:

1. http://kegel.com/crosstool/
2. http://kegel.com/crosstool/current/doc/crosstool-howto.html

Is everyone always building the target Linux system from absolute
scratch?  Or should everyone always do this, never using any existing
Linux distro for anything?  Oh great guru, tell your bright idea to us
who have used only existing Linux distros and have made crosstoolchains
only for existing Linuces using their existing, original, thoroughly
tested components: what is the wisdom behind this "always from absolute
scratch" suggestion?

The late-1800s and early-1900s thinkers like Marx & Engels, V.I. Lenin
etc. seemingly had ideas about the necessity of starting everything
from absolute scratch similar to Mr. Kegel's nowadays...

Ok, the traditional "evolutionary" method is to not reinvent the wheel
with the already-tested target components but to let them be as they
are and produce only the stuff required for the new $host, the GNU
binutils and the GCC.  NOT the target C libraries, because they are
already there for the existing targets!  Kegel's idealism says that all
these too MUST be built with the new GCC: the glibc, the X11 libraries,
the Gnome libraries, the KDE libraries, the termcap, the ncurses...
Horrible "bolshevism/bullshitism", I would say.



Re: Build fail

2007-02-12 Thread Kai Ruottu

Ian Lance Taylor wrote :

Kai Ruottu <[EMAIL PROTECTED]> writes:

  

Ok, the traditional "evolutionary" method is to not reinvent the wheel
with the already tested target components but let  then be as they are
and produce only the stuff required for the new $host,  the GNU
binutils and the GCC sources. NOT the target C libraries because
they already are there for the existing targets!  The Kegel's idealism
says that also all these MUST be built with the new GCC.  The glibc,
the X11 libraries, the Gnome libraries, the KDE libraries, the termcap,
the ncurses,  Horrible "bolshevism/bullshitism" I would say



What in the world are you talking about?
  
Some stupid typos, like 'then' instead of 'them' and 'sources'
appearing after 'GCC', could have caused you to write this, but I don't
think that is the case...


crosstool is a one stop shop for people who don't want to figure out
how everything fits together.  Given that, it clearly must build the
target libraries.  It can't assume that they already exist.
  
This has no typos, I think, but despite that it sounds like asking "if
the sun is a star, why don't we see it among the other stars at
night?"  Quite a clear question, but very hard to answer...

So what in the world are you then talking about?  That everyone should
rebuild the target glibc when its runtimes are already installed on the
target system?  That those "installed runtimes" really don't exist
there?  One sees them there, but what one sees isn't really true?
Everyone can see the sun go around the earth, but that isn't the case?
When everyone knows what a star looks like, and that stars can be seen
only at night, the sun cannot be a star, because it is so big and it
really cannot be seen at night?


Maybe I'm stupid when, for instance, seeing a glibc for Linux/PPC in
many Linux distros, I really believe it works as the "target C library"
when producing a crosstoolchain for any Linux/PPC :-(

For which existing targets are the prebuilt C libraries missing?  Or
which targets don't have any "suitable", "compatible" or similar C
library which could serve as the temporary bootstrap "target C library"
during the GCC build?  In those "specially optimized for something"
toolchain cases...


I'm sorry crosstool doesn't fit your philosophical notions (which are,
moreover, incorrect as stated).  If you have something substantive to
say about crosstool--like, say, it doesn't work, or doesn't do what
people want--then say that.
  
Ok, crosstool doesn't work, and it doesn't do what the ignorant people
think it is doing!  The cases where it has no sanity are many, but
let's take two very common ones from the "embedded Linux" world for
which "crosstool" was made:

1. Remote debugging - the libs on the cross-host and the target must be
   identical.

2. Java - for some totally unclear reason, mixing a self-made
   'libgcj.so' with existing installed C and C++ runtimes doesn't work.
   Not even in cases where both the installed system and the
   crosstoolchain have used totally identical sources, the libraries
   for the cross-host only being prebuilt on some other system than the
   libraries on the installed system.

Generally the case is about producing a crosstoolchain for an existing
Linux system, where the goal is to produce apps for this system, not to
replace it with something totally self-built.  And in these cases I do
not recommend replacing the target libraries with self-built ones.
Others may disagree, but I have the right to state my own view.




Re: Building without bootstrapping

2007-03-18 Thread Kai Ruottu

Paul Brook wrote:

How can I get the build scripts to use the precompiled gcc throughout
the build process ?



Short answer is you can't. The newly built gcc is always used to build
the target libraries.
  

Nice statement, but what does this really mean?

Does this for instance mean: "The newly built gcc is always used to
build the target C libraries"?  A native GCC builder now expecting
those '/usr/lib/*crt*.o' startups, '/lib/libc.so.6',
'/lib/ld-linux.so.2', '/usr/lib/libc.a' etc. "target libraries" to be
rebuilt with the new "better" GCC, to be "smaller and quicker"?

Even some fanatic "people should always rebuild the target C library
from its pristine (FSF) sources" guys have publicly given statements
like "nobody should consider replacing the native runtime target
libraries with something built oneself!".  So maybe in a native GCC
build it is not expected that people would really rebuild "the target
libraries" with the new GCC...

Meanwhile, with a crosscompiler this thing seems to be totally the
contrary: even in cases where the target C library with those runtime
libraries exists, everyone should still replace it with a self-made C
library.  Nowadays there aren't even any instructions available for
producing a crosscompiler with an existing target C library!





Re: Toolchain for Maverick Crunch FPU

2007-04-12 Thread Kai Ruottu

Claudio Scordino wrote:

Kai Ruottu wrote:

Claudio Scordino wrote:
  I'm looking for a toolchain capable of compiling floating point 
operations for the Maverick Crunch Math engine of Cirrus EP93xx 
processors.


Cirrus provides some scripts to build a gcc-3.4 toolchain with such 
support, but unfortunately they don't work.


I heard that gcc 4 has the support for such FPU as well. Is it true ?
A quick look at the current gcc-4.1.2 sources showed that there is
support for Maverick as the FPU in ARM...


Good!


Does it work ?

Don't know how well it works...


That remains my main concern...


Things seem to be that the '-mcpu=ep9312 -mhard-float' combination will
crash the GCC build in both gcc-4.1.2 and the gcc-4.2.0-20070316
prerelease, like:


/data1/home/src/gcc-4.2.0-20070316/build/./gcc/xgcc 
-B/data1/home/src/gcc-4.2.0-20070316/build/./gcc/ 
-B/usr/local/arm-elf/bin/ -B/usr/local/arm-elf/lib/ -isystem 
/usr/local/arm-elf/include -isystem /usr/local/arm-elf/sys-include -O2  
-O2 -Os  -DIN_GCC -DCROSS_COMPILE   -W -Wall -Wwrite-strings 
-Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition  
-isystem ./include  -fno-inline -g  -DIN_LIBGCC2 
-D__GCC_FLOAT_NOT_NEEDED -Dinhibit_libc -I. -I. -I../../gcc 
-I../../gcc/. -I../../gcc/../include -I../../gcc/../libcpp/include  
-I../../gcc/../libdecnumber -I../libdecnumber -mcpu=ep9312 -mhard-float 
-DL_addsubdf3 -xassembler-with-cpp -c ../../gcc/config/arm/lib1funcs.asm 
-o libgcc/ep9312/fpu/_addsubdf3.o

../../gcc/config/arm/ieee754-df.S: Assembler messages:
../../gcc/config/arm/ieee754-df.S:454: Error: selected processor does 
not support `mvfeqd f0,#0.0'
../../gcc/config/arm/ieee754-df.S:476: Error: selected processor does 
not support `mvfeqd f0,#0.0'
../../gcc/config/arm/ieee754-df.S:530: Error: selected processor does 
not support `ldfd f0,[sp],#8'

make[3]: *** [libgcc/ep9312/fpu/_addsubdf3.o] Error 1
make[3]: Leaving directory `/data1/home/src/gcc-4.2.0-20070316/build/gcc'

when enabling the mhard-float/msoft-float multilibs additionally with
the 'mcpu=ep9312' ones in the 'gcc/config/arm/t-arm-elf'
Makefile-fragment.  Without the '-mhard-float', those fp-bit/dp-bit
based soft-float routines will be generated into the
'ep9312/libgcc.a', so the bare '-mcpu=ep9312' will mean "use this CPU
with soft-float as default", which would sound the same as always
generating soft-float for 'i586', 'i686' etc., which always have that
built-in 'i587', 'i687' etc.  Assuming that those 'ep93xx' series
members always have that "Maverick FPU"...

Producing GCC with only that '-mcpu=ep9312' addition, and then having
only soft-float routines for it in the used 'libgcc.a', may or may not
work with self-built objects produced with both '-mcpu=ep9312' and
'-mhard-float'.  But using '-mcpu=ep9312 -mfpu=maverick' DOESN'T
generate any "Maverick specific" FPU instructions :-(

So could someone tell whether the '-mcpu=ep9312' should work in gcc-4.1
and gcc-4.2, and how it should work so that using the Maverick FPU
could be possible?  Or is the FPU itself somehow broken, and people
shouldn't try to use it at all?





Re: some problem about cross-compile the gcc-2.95.3

2005-04-15 Thread Kai Ruottu
On 15 Apr 2005 at 14:56, zouq wrote:

> first i download the release the version of gcc-2.95.3, binutils 2.15,

 This message should go to the crossgcc list... But that is nowadays
filled with bolsheviks demanding everyone to start from absolute
scratch, so wacky advice is to be expected there :-(

 So maybe this list starts to be the one for those who cannot
understand the ideas about starting from scratch, avoiding
preinstalling anything from the (usually proprietary/custom) target
systems and replacing it with totally self-built (library) stuff...
So people who need crosstools for Solaris2, AIX, LynxOS, RedHat,
SuSE etc. are advised to build glibc as the C library, the C library
of course being an essential part of GCC just like the binutils, and
everyone being obliged to build all these GCC components from
scratch... "People are like chaff in the wind...", these people
really believe in their bullshitism/bolshevism and think they are
creating a better and perfect world this way, with these ideas...

 Ok, here we are talking about "recycling", "re-use" and other things
some others think "creating a better and perfect world" means.  Like
using existing C libraries for the target instead of being obliged to
rebuild them...

> 2. cp -r ../../lib /opt/gcc/mipsel-linux/
>cp -r ../../include /opt/gcc/mipsel-linux/

 There is a well-known bug in GCC: putting the target headers into
'$prefix/$target/include' is not enough, some of them -- or maybe all
of them -- must also be seen in '$prefix/$target/sys-include'.  At
least 'limits.h', 'stdio.h', 'stdlib.h', 'string.h', 'time.h' and
'unistd.h' are required with the current GCCs.  Putting them all there
may work, but sometimes the 'fixinc*' stuff makes garbage from the
original headers... I don't remember the case with gcc-2.95.3.  My
advice is to symlink/copy only those six headers into the
'.../sys-include'.
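The six-header advice above can be sketched as a small shell snippet.
The '$prefix' and '$target' values are illustrative placeholders only
(a scratch directory under the current directory is used here so the
commands are harmless to run as-is); substitute the --prefix and
--target you actually configured binutils and GCC with:

```shell
# Hypothetical prefix/target for illustration only.
prefix=$PWD/opt-gcc
target=mipsel-linux

mkdir -p "$prefix/$target/include" "$prefix/$target/sys-include"

# Symlink (or 'cp' to copy) only the six required headers into
# '$prefix/$target/sys-include', leaving the rest in '.../include'.
for h in limits.h stdio.h stdlib.h string.h time.h unistd.h; do
  ln -sf "$prefix/$target/include/$h" "$prefix/$target/sys-include/$h"
done

ls "$prefix/$target/sys-include"
```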

> 3. compile the gcc
> mkdir gcc-build;
> cd gcc-build;
> .../../gcc-2.95.3/configure --prefix=/opt/gcc --target=mipsel-linux

 Are you simply not telling the truth here or what?  Please see
later...

> --enable-languages=c -enable-shared -disable-checking -v;
> make;
>
>   /home/mytask/mywork/WHAT_I_HAVE_DONE/mycompile/gcc-2.95.3-build/gcc/gcc/xgcc
> -B/home/mytask/mywork/WHAT_I_HAVE_DONE/mycompile/gcc-2.95.3-build/gcc/gcc/
> -B=/opt/gcc-2.95//mipsel-linux/bin/

 This '-B' option tells that you used "--prefix=/opt/gcc-2.95", not
the "--prefix=/opt/gcc" you told earlier, when configuring GCC !!!
Using JUST THE SAME PREFIX in both configures is the expected thing,
otherwise you are asking for serious problems...
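As a minimal build-recipe sketch (versions and paths taken from this
thread, not run verbatim here), the point is only that --prefix and
--target are identical in BOTH configure commands:

```shell
# Sketch of the expected invocation pair; adjust versions/paths.
PREFIX=/opt/gcc
TARGET=mipsel-linux

cd binutils-build
../binutils-2.15/configure --prefix=$PREFIX --target=$TARGET
make && make install

cd ../gcc-build
../gcc-2.95.3/configure --prefix=$PREFIX --target=$TARGET \
    --enable-languages=c
make && make install
```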

> -I=/opt/gcc-2.95//mipsel-linux/include  -DCROSS_COMPILE -DIN_GCC 
> -I./include -DIN_LIBGCC2 -D__GCC_FLOAT_NOT_NEEDED
> -I/usr/include

 Trying to get (native) headers from '/usr/include' is a bug here...
Especially when the host isn't a Linux.
 
> _muldi3
> as: unrecognized option `-O2'

 An option understood by the MIPS target 'as' was given to the native
'as'... This is one of the "serious problems" I told you about...

> the as should be mipsel-linux-as

 When we are talking about "FBB" ("FSF's Bad Boy"), an AI being a
"step forward" from GCC (like IBM -> HAL), then this FBB would use
those '$target-' prefixed tools... The situation now is that only
humans, and scripts written by humans, use them... GCC uses those
shown by './xgcc -print-search-dirs' with the new GCC driver...
(Hmm, when GCC becomes FBB, then GCJ becomes FBI, maybe called like
this because cops like coffee :-)

 Just fix your $prefix and try again after writing 'make distclean'
in your build directory:

/home/mytask/mywork/WHAT_I_HAVE_DONE/mycompile/gcc-2.95.3-build/gcc

Then set up the 'sys-include' stuff and use the intended
'--prefix=/opt/gcc' in the GCC configure...

Cheers, Kai



Re: FW: GCC Cross Compiler for cygwin

2005-05-02 Thread Kai Ruottu
E. Weddington kirjoitti:
I don't know if the specific combination will work, but one could 
always try. At least it's sometimes a better starting point for 
building a lot of cross-toolchains.
If building more than 1000 cross-GCCs is already "a lot", then the
experience gained from that says it is not a very good starting
point...

Or then I have misunderstood a cross-GCC as being something made for
something already existing -- the native system being that for the
system targets.  Or, in the newlib case, for something which uses
newlib and has no native tools at all.  In both cases GCC is built in
only one phase...  Dan's crosstool doesn't accept anything existing for
the target platform; everything must be created from absolute scratch,
and it therefore requires all kinds of dirty tricks when trying to
avoid using anything existing.




Re: FW: GCC Cross Compiler for cygwin

2005-05-02 Thread Kai Ruottu
James E Wilson kirjoitti:
Amir Fuhrmann wrote:
checking whether byte ordering is bigendian... cross-compiling...
unknown
checking to probe for byte ordering... 
/usr/local/powerpc-eabi/bin/ld: warning: cannot find entry symbol 
_start; defaulting to 01800074

Looking at libiberty configure, I see it first tries to get the 
byte-endian info from sys/params.h, then it tries a link test.  The 
link test won't work for a cross compiler here, so you have to have 
sys/params.h before building, which means you need a usable C library 
before starting the target library builds.  But you need a compiler 
before you can build newlib.

You could try doing the build in stages, e.g. build gcc only without 
the target libraries, then build newlib, then build the target 
libraries. Dan Kegel's crosstool scripts do something like this with 
glibc for linux targets.
A "complete" (with libiberty and libstdc++) newlib-based GCC should be 
got when using the '--with-newlib' in the
GCC configure.  But there are long-standing bugs in the GCC sources and 
workarounds/fixes are required. But
only two :

1. For some totally wacky reason the GCC build tries to find the target
   headers in '$prefix/$target/sys-include' instead of the de-facto
   standard place (where the newlib install will also put them),
   '$prefix/$target/include'.  So either all the target headers should
   be seen in the 'sys-include' too -- but one cannot be sure about the
   newlib case -- or only the absolutely required ones: 'limits.h',
   'stdio.h', 'stdlib.h', 'string.h', 'time.h' and 'unistd.h', the
   rest being only in '$prefix/$target/include'.  Putting only these 6
   headers into the 'sys-include', by symlinking or copying them
   there, is my recommendation for current GCC sources (3.2 - 4.0).

2. The 'libiberty' config stuff claims that newlib lacks the functions
   'asprintf()', 'strdup()' and 'vasprintf()'.  Finding the place for
   the "list of missing functions in newlib" is easy: just use
   'grep newlib' in the 'libiberty' subdir, check the shown files and
   then fix them.  So '--with-newlib' now works as if some 5-or-so
   years old newlib would be used...  Not fixing this may work
   sometimes; sometimes the prototypes in the newlib headers will
   clash with the function (re)implementations in libiberty...
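The hunt for that hard-coded list can be sketched like this.  The
directory name is hypothetical and the exact files holding the list
vary by GCC release, so inspect the hits by hand:

```shell
# In an unpacked GCC source tree (path is an example only):
cd gcc-3.4.0/libiberty
# Show every place the configury special-cases newlib, with line numbers;
# old releases may use configure.in rather than configure.ac.
grep -n newlib configure configure.ac 2>/dev/null
# Then edit the shown "missing in newlib" function lists by hand.
```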

Otherwise the GCC build follows the instructions given in the GCC
manual (when there still was only one, with gcc-2.95).  Preinstalling
binutils, and whatever one has available for the target C library --
in the newlib case the generic newlib headers found in
'newlib-1.13.0/newlib/libc/include' in the current '1.13.0' sources --
are the prerequisites for the GCC build.  Of course the used '$prefix'
should replace the '/usr/local' (the default $prefix) used in the GCC
manual instructions.  After doing the previous two workarounds and
using '--with-newlib' in the configure command, a newlib-based GCC
build should succeed nicely...  At least in the 'powerpc-eabi' case,
plus all those tens of other targets I have tried "from scratch"...

Although one would already have a self-built newlib (with some older
GCC) when updating the GCC, using '--with-newlib' may still be
obligatory.  Targets like 'powerpc-eabi' are not real targets: one
cannot create executables for them, because there is no default target
board with a default glue library (low-level I/O, memory handling
etc.).  But believe me, quite many still expect all kinds of ELF
targets to be real targets; 'arm-elf', 'sh-elf', 'h8300-elf' etc.
maybe behave like 'real' targets, but 'm68k-elf', 'mips-elf',
'arc-elf' etc. are totally unreal...  The belief that 'eabi' is just
one kind of 'elf', and real, is also very common.  But one must use
options for '-mads', a Yellow Knife ('-myellowknife') or something in
order to get the right libraries taken into the linking...  Shortly
said, '--with-newlib' should remove all the checks for the target
libraries and describe the 'newlib' properties in the GCC config
scripts, so building a "complete" GCC with 'libiberty' and 'libstdc++'
should succeed when only having the generic newlib headers
preinstalled (plus the binutils) before starting the GCC build.

After getting GCC built and installed, one can build and install the C
library, 'newlib'.  And then try the almost obligatory "Hello World"
and see if one's "elf" is a real creature or not...  As told, the
"eabi" is not, and one must use a wacky command like:

powerpc-eabi-gcc -mads -O2 -o hello_ppc-eabi.x hello.c
or
powerpc-eabi-gcc -myellowknife -O2 -o hello_ppc-eabi.x hello.c

when trying to compile and link the "Hello World"...  The GCC manual
documents the supported boards for PPC EABI quite well; for other
targets one may need to know more about adding linker scripts,
libraries etc. on the command line, how to fix the 'specs' and add a
default target into it...  Or something.





Re: FW: GCC Cross Compiler for cygwin

2005-05-02 Thread Kai Ruottu
Amir Fuhrmann kirjoitti:
1. If I am ONLY interested in the compiler, and do NOT want to build
libraries, what would be the process ??
 

Be happy with what you already have?  Ok:

-  'make all-gcc' builds ONLY GCC
-  'make install-gcc' installs ONLY GCC

The "ONLY GCC" of course means the stuff built from the 'gcc'
subdirectory sources...  Before gcc-2.9 the GCC sources had only the
GCC sources; now they also include 'libiberty' and 'libstdc++' and
some other 'extra' packages...  GCC is only a compiler collection, but
the 'libgcc*' stuff is still produced using the GCC for the selected
target (usually the just-built one).  Projects like the Berkeley
Nachos have had instructions for "How to build a naked GCC, without
any libraries for the target system"; generally anyone who can use
'touch' or something for creating 'libgcc.a', 'libgcc_s.so*' etc.
"stubs" from scratch, and so get 'make' happy, can build a
"naked GCC"...

2. I looked at newlib, but wasn't sure of the process of including it as
a combined tree .. Which subdir should I move over to the gcc tree ??
 

The 'newlib' (the generic C library) and 'libgloss' (glue libraries
for target boards) directories.

This is another way to build the C library, but sometimes, like when
using an experimental and unstable GCC snapshot, building the C
library with the new GCC can be a little questionable...

Generally the binutils, GCC and C library builds should not have much
to do with each other: if one needs newer or older binutils, one
builds them; if a newer GCC, then one builds that; and if wanting to
rebuild the C library with a better GCC, then doing that is highly
motivated...  But companies like MS have got people to buy their
product again although one already has it; the same idea followed with
free software means that one must rebuild what one already has -- "the
goal isn't important, the continuous rebuilding is...".  I cannot
understand why a separately (or with a PC) bought Win2k couldn't be
moved into a new PC after replacing it with Linux, just like moving a
separately (or with a PC) bought hard disk...  People think that it is
not allowed, just as they think they must build everything again when
building for a new host...

I used to build everything for both Linux and Windoze hosts, Windoze
being the secondary host and therefore never requiring rebuilding
'libiberty' and 'libstdc++' or the C library...  Only new binutils and
GCC for the Windoze host (and also GDB/Insight, but this wasn't
seemingly required by Amir) after having everything already for the
Linux host.  So 'make all-gcc' was a very familiar command in those
builds for the secondary host.

"If At Last You Do Succeed, Never Try Again" (Robert A. Heinlein) is
the rule after the first "bootstrap" phase...




Re: libgcc_s.so.1 exception handling behaviour depending on glibc version

2005-05-23 Thread Kai Ruottu

Jonathan Wilson kirjoitti:


> Neither does Linux - by linking against a recent library you are
> *asking* for a binary that requires that library.  If you understand
> that you might understand why everyone is saying you should build on
> the lowest common denominator of the systems you're targetting.
>
> If you insist on shipping executables not just source then you have
> to be prepared to make a bit more effort to make them distributable.
> You're aware of the problems, but seem to be resisting everyone's
> advice on how to avoid them.

On windows, it is possible to build a binary using a compiler on
Windows XP that can then run on older versions of windows simply by
not using any features specific to the newest versions of windows XP
(or by using LoadLibrary and GetProcAddress to see if those features
are available or not).

Can you do the same thing on linux?

The only prerequisite is a GCC & glibc & other-required-libraries
combination which produces apps for Linux/XYZ instead of only for the
Linux version/distro that happened to be installed as the development
platform...  A simple solution, but not generally liked by the Linux
distributors.  Why?  Otherwise this kind of toolchain would always be
present in a Linux distro, just like the tools on Windoze are: they
are not only for WinXP or 2k or Win9x, but for Win32.

My habit is to produce everything for RHL7.3, or even for RHL6.2,
although working on newer Linuces.  Then I have some hope that the
binaries will run also on newer RHLs, and hopefully also on SuSEs,
Mandrakes and so on...

Surprisingly, things like OpenOffice, Mozilla, Acrobat Reader etc.
seem to work, so some companies really do have these kinds of
"generic tools"...
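One way to see what a prebuilt binary actually demands from glibc is
to list its versioned symbol references.  This is a generic binutils
technique rather than something from the thread, and '/bin/ls' is just
a handy example binary:

```shell
# The highest GLIBC_x.y version printed bounds the oldest glibc
# (and thus the oldest distro) the binary can run on.
objdump -T /bin/ls | grep -o 'GLIBC_[0-9.]*' | sort -u
```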









Re: some compile problem about gcc-2.95.3

2005-06-19 Thread Kai Ruottu

Steven J. Hill kirjoitti:


zouqiong wrote in 15.4.2005 10:16:


> i download the release version of gcc-2.95.3, and binutils 2.15,
> then i did the following things:
> 1. mkdir binutils-build;
> .../../binutils-2.15/configure --prefix=/opt/gcc
> --target=mipsel-linux -v;
> make;make install;
>
> 2.i copy the o32 lib, o32 include to the /opt/gcc/mipsel-linux/lib,
> /opt/gcc/mipsel-linux/include,
>
> 3. mkdir gcc-build;
> .../../gcc-2.95.3/configure --prefix=/opt/gcc --target=mipsel-linux
> --enable-languages=c --disable-checking -enable-shared -v;
>

i am surprised about it.


 I am surprised where this message lay since 15.4.2005 before
appearing now...


You seem surprised, and I am terrified you are using a compiler that
old. Please go look at:

   http://kegel.com/crosstool/

which automatically builds cross toolchains and even still has
scripts to build your ancient (IMHO) combination.


... but much more I am surprised that anyone here could advise people,
who would like to get petrol for their old custom ancient cars, to get
gasoline and a modern fully standard car (but self-built from standard
parts) in which to use that gasoline...

Re: some compile problem about gcc-2.95.3

2005-06-19 Thread Kai Ruottu

zouqiong kirjoitti:


.../../gcc-2.95.3/configure --prefix=/opt/gcc --target=mipsel-linux
--enable-languages=c --disable-checking -enable-shared -v;


 This is not true at all...


-B=/opt/gcc-2.95//mipsel-linux/bin/
-I=/opt/gcc-2.95//mipsel-linux/include


 Because these lines show that a '--prefix=/opt/gcc-2.95/' was used...


as: unrecognized option `-O2'


 The assembler, 'as', was looked for in the chosen
'$prefix/$target/bin', here '/opt/gcc-2.95//mipsel-linux/bin/', but
was not found there.


i am surprised about it. 


 I'm not... Using different $prefix values in the binutils configure
and the GCC configure is a quite common newbie mistake... Maybe
because some people really believe GCC to be a human being, and
therefore think that having '$target-as' and '$target-ld' in PATH is
enough... The opsys will find them when a human being uses them, so
GCC must also find them, or how?

 Those who have opened the GCC manual or tried the
'-print-search-dirs' option with GCC don't believe anything like
that...



Re: How to make an application look somewhere other than /lib for ld-linux.so.2

2005-07-15 Thread Kai Ruottu

Mark Cuss kirjoitti:

Hello all

I apologize if this is off topic for this list - I wasn't sure exactly 
where to ask but I thought this would be a good place to start:


 Something for newbies like gcc-help?

I'm trying to get myself a group of libraries that I can distribute with 
my program so that they'll run on any distro.


 On SuSEs, Mandrakes, Debians,...? Or on only Red Hats and Fedoras?

I run into problems all 
the time when different distros have different versions of system 
libraries like libstdc++, libgcc, libc, etc.


 Excluding the libstdc++*.so's, all these should be backwards
compatible with old apps produced on old systems... But usually only
within the same "company".  So Red Hat 8.0 should run apps made for
Red Hat 6.x and 7.x, and maybe even older apps made for "Red Hat".
This is what one could expect: an opsys made for company use must be
"stable" in the sense that, within some time frame, all apps made for
it should run...

 In the (bad?) Windoze, SCO, UnixWare, Solaris2 etc. worlds this
backwards-compatibility issue has always been taken seriously.  But I
remember in 1994 the (Finnish) Linux experts telling that Linux
WILL NEVER have any "backwards compatibility" or any binary
compatibility between its distros!  People are assumed to rebuild
all apps for the new Linux installations!  People like me, who thought
things like the then new Novell UnixWare to be "a good thing" because
it claimed compatibility with almost every source and binary made
that far, were doomed as "heretics"...

 But Linux entering the company world has caused pressure for
"binary compatibility" a la MS, SCO,...  So nowadays one can download
a new "Firefox 1.0.5" for Linux/x86 and expect it to work OK on one's
Red Hat, SuSE, Debian and so on !!!  That this is possible must be a
horrible disappointment to the old Linux users...


To make this application run on a Red Hat 7.3 machine,


 At least the backwards compatibility for apps made for Red Hat 7.3
has so far seemed perfect... All the apps made for RHL 7.3 on RH 8.0
by me have run OK both on RH 8.0 and on RH 7.x too...

 So my advice is to produce the apps for the "least common
denominator".  Maybe Red Hat 7.3 could be this for RH 7.x, 8.0 and 9,
and for the Fedora Cores.  But I have no experience with the SuSEs.
Maybe one must have a similar "least common denominator" toolchain for
those...

So - the question is:  How do I do this?  Even though LD_LIBRARY_PATH 
points to ./ as it's first entry, ldd still looks in /lib first for 
ld-linux.so.2. I've tried the rpath option to ld at link time, but that 
doesn't work either.  It seems that thing is somehow hardcoded to look 
in /lib for this library.


 Trying the backwards compatibility sounds like the natural thing to
me.  Generally, those "native GCCs" used as the production tools for
any apps (excluding the GCCs) are from the ass, IMHO... Instead one
should produce "portable GCCs", so that when one has produced several
GCCs for the then-installed host, one could just copy them to the
(backwards compatible) new Linux distro, without rebuilding them.
Heretics maybe, but struggling with trying to produce egcs-1.1.2 for
an RHL 6.2 target using gcc-3.4 or something on Fedora is really not
seen as any alternative by me if needing to produce apps for RH 6.2 on
Fedora... Sometime in the past I understood that "native GCCs are from
the ass", and after that I have produced only crosscompilers for
everything, even for the "native" target... On my RHL 8.0 all the GCCs
for the RHL 8.0 target are crosscompilers for RHL 8.0, made for the
current "least common denominator" runtime host, RHL 7.3...

Is there a way to somehow configure gcc build executables that look 
elsewhere for ld-linux.so.2, or is what I'm trying to do simply not 
possible?  I'd really like to have a set of libraries with my program so 
that it's binary compatible with other distros...  there must be a way.  
If anyone has any tips or advice I'd appreciate it.


 What about learning the "--help" option?  When you link, you can
force the resulting apps to search their shared libraries from
anywhere:

F:\usr\local\i486-linux-gnu\bin>ld --help
Usage: ld [options] file...
Options:
  -a KEYWORD                  Shared library control for HP/UX compatibility
  -A ARCH, --architecture ARCH
                              Set architecture
  -b TARGET, --format TARGET  Specify target for following input files
  -rpath PATH                 Set runtime shared library search path
  -rpath-link PATH            Set link time shared library search path

 Finding the option for "Set runtime shared library search path" needs
only newbie-level skills, i.e. the capability to write "--help" and to
read :-)  RTFM then tells more; again, the capability to read is
required... Producing nice manuals for the GNU tools from their
texinfo sources is also quite a newbie-level skill.  Some can even use
MS Word or the text processor in OpenOffice and invent their own
clauses, only converting existing things to some

Re: How to make an application look somewhere other than /lib for ld-linux.so.2

2005-07-15 Thread Kai Ruottu

Kai Ruottu kirjoitti:

Mark Cuss kirjoitti:
So - the question is:  How do I do this?  Even though LD_LIBRARY_PATH 
points to ./ as it's first entry, ldd still looks in /lib first for 
ld-linux.so.2. I've tried the rpath option to ld at link time, but 
that doesn't work either.  It seems that thing is somehow hardcoded to 
look in /lib for this library.


 Seems that I too must learn to read much better!!!  Not thinking that
"others cannot even read".  Not very good results obtained yet...  My
apologies for not reading what Mark wrote!!!

 So '-rpath' was tried, but not yet the '-dynamic-linker', which must
be used for the 'ld-linux.so.2', 'ld.so.1' or whatever, depending on
the Linux, SVR4 etc. variation...

 Finding the "dynamic linker", "program interpreter" or whatever the
official name then is, from the right place at runtime can be a problem,
but it is also a problem at linktime!!!  Here is my clue for those who
yet don't know how on earth the search place for those "NEEDED" shared
libraries (told inside other shared libraries being needed too when
linking) :

-- clip --
F:\usr\local\i486-linux-gnu\bin>ld -verbose

GNU ld version 2.10.1 (with BFD 2.10.1)
  Supported emulations:
   elf_i386
   i386linux
using internal linker script:
==
OUTPUT_FORMAT("elf32-i386", "elf32-i386",
  "elf32-i386")
OUTPUT_ARCH(i386)
ENTRY(_start)
 SEARCH_DIR(/usr/local/i486-linux-gnu/lib);
/* Do we need any of these for elf?
   __DYNAMIC = 0;*/
SECTIONS
-- clip --

 The "SEARCH_DIR" told in the internal linker script seems to be the
place where the stuff will be searched...
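A quick way to check this on any toolchain (a sketch; the host 'ld' is shown here, a cross 'ld' behaves the same way):

```shell
# Dump the linker's built-in script and keep only the library search
# directories compiled into it.
ld --verbose | grep SEARCH_DIR
```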

 Sometimes there was a bug in this and using '-rpath-link' was
obligatory.  Nowadays there can be a problem with the bi-arch
(64/32-bit) targets like 'x86_64-linux-gnu', using '-m elf_i386'
in the linker command gives something like :

-- clip --
GNU ld version 020322 20020322
  Supported emulations:
   elf_x86_64
   elf_i386
   i386linux
using internal linker script:
==
/* Default linker script, for normal executables */
OUTPUT_FORMAT("elf32-i386", "elf32-i386",
  "elf32-i386")
OUTPUT_ARCH(i386)
ENTRY(_start)
 SEARCH_DIR("/usr/local/i386-linux-gnu/lib");
-- clip --

 So the linker was made for 'x86_64-linux-gnu' but it searches from
'.../i386-linux-gnu/lib' for the 32-bit 'ld-linux.so.2' at linktime.
Who could have guessed this?

 I haven't, though, checked how the up-to-date linkers behave...


Re: Problem compiling libstdc++ is current 4.0.2 cvs (volatile strikes again)

2005-08-22 Thread Kai Ruottu

Haren Visavadia wrote:


You missed "The GCC team has been urged to drop
support for SCO Unix from GCC, as a protest against
SCO's irresponsible aggression against free software".


 When starting my Unix learnings with SCO Xenix/286,
SCO Xenix/386 and SCO Unix (all trademarks of some kind),
I have always wondered what on earth this really meant.

 Did it mean that the support for SCO Unix, the '3.2.[2-4]',
was dropped but the support for SCO OpenServer 5 and SCO
UnixWare continued, or what?  Can it be so hard to ask
someone who knows something about the SCO products to write
some sane sentences for the FSF documents?

 Maybe "SCO Unix" means something else for Haren and some
FSF people but my thought is that the majority understands
it to mean the SCO's SVR3.2 release from the late 80's and
early 90's... Ok, the heir for SCO Unix is the OSR5 but the
UnixWare got from Univel/Novell has always been called as
"UnixWare" so "SCO Unix" cannot be called so... If the "SCO
Unix" means all the SVR4 and SVR5 based Unices, then also
AIX, Solaris2, HP-UX, Irix and others were dropped from all
support




Re: Cross Compiler Unix - Windows

2005-08-26 Thread Kai Ruottu

Mike Stump wrote:

configure --with-headers=/cygwin/usr/include --with-libs=/cygwin/usr/ 
lib target=i386-pc-cygwin && make && make install


would be an example of how I used to build one up, see the gcc  
documentation for details.  --with-sysroot or some such might be  
another way to to do it now-a-days.


 That the 'native' Cygwin GCC mimics some nonexistent proprietary
native 'cc' in its headers and libraries directories, instead of just
being another "C/C++/Java/Fortran/..." compiler set on Windoze like
those GCCs for 'h8300-*', 'sh-*', 'arm-*' etc. targets (which have no
GCCs on their 'native' side), has always been very weird... The same
thing happens with the 'official' MinGW GCC; it too tries to mimic
some still unknown native 'cc'!  Not to mention Linux and its GCC
idea, "There can be only one!", seemingly borrowed from "Highlander":
that all the GCCs on a host system should use a common $prefix has
seemingly been totally unknown to the Linux people, and they really
expected the native GCC to be the only GCC ever on that host! Or that
if one needs more GCCs, they can only be other versions of the native
GCC...

 Platforms like Solarises, AIXes, HP-UXes, Irixes, SVR[3-5],... really
have their proprietary native 'cc's, which GCC has some sane reasons to
mimic, and so it tries to access their installed headers and libraries
in their original places. But the native GCC will be installed into
some 'local' or 'opt' place, '/usr/local', '/opt/gcc' or something,
where one can add as many other GCCs as one wishes. Not into '/usr',
as has been the rule on those "no native 'cc' ever seen here"
platforms. Are there any sane reasons for this on systems which never
have had that non-GNU native 'cc'?

 The '--with-sysroot' tries to keep the 'proprietary' layouts even on
the cross hosts, where people could always use the "standard install
layout for GCC", every GCC installed using just the same rules. So the
situation where all cross-compilers use their own proprietary layouts
has somehow been seen as better than trying to standardize the GCC
installation layout.

 The current cross-GCC install layout has its problems: there is only
one $target dependent place for the libraries, when a typical native GCC
has at least two, '/lib' and '/usr/lib'. Meanwhile a cross-GCC has two
places for the headers: the '$tooldir/include' for the standard (posix)
headers and the '$tooldir/sys-include' for the system-specific
(non-posix etc.) headers. And for maybe the last 10 years or so the GCC
developers have mixed these apples and oranges, standard and system
things, so the cross-GCC build has been a continuous mess, target
headers being searched for in the 'sys-include' when the de-facto place
is the 'include'. (For instance the newlib install puts the target
headers into the 'include', and they are there when one wants to try to
build a newer GCC.) And such... If most of the native GCCs have only
the '/usr/include', the STANDARD_INCLUDE_DIR, and there is no place for
the SYSTEM_INCLUDE_DIR (please search the GCC manuals for this), is it
so hard to leave the 'sys-include' unused?

 However, anyone who has built more than 10 GCCs for more than 10
targets and then installed them on the same development platform has
somehow got used to the current (but limiting) layout and has solved
the problems somehow. For instance, what to do with the Solaris2
'/usr/lib', '/usr/ccs/lib', '/usr/ucblib', '/usr/ccs/ucblib' and other
library places, which someone recently had some problems with. And that
was before the '--with-sysroot' appeared at all...

 Before trying to move the proprietary layouts into the peaceful(?)
land of cross, it could have been better to ask the cross-compiler
builders how they have solved these "copy the target headers and
libs from the native system and put them to work with the cross-GCCs
too" problems. Maybe then there would have been no reason for the
'--with-sysroot'. Does it even work as one would expect it to work,
solving those '/lib' and '/usr/lib' in the 'libc.so' script problems
and so on?

 Ok, as long as there are those stupid installs into '/usr' on the
native front, people must keep thinking how on earth the natively
installed C libraries can be copied to the cross host. Linux is a good
example of this stupidity from the very beginning. Instead of thinking
how one could produce apps for Linux easily on ANY host, it was thought
how one could produce apps for Linux ONLY on the Linux host, thereby
making cross-compiling to Linux as hard as possible.

 Not using '--with-sysroot=' at all, but simply putting the '/lib' and
'/usr' stuff below a '$sysroot' and then symlinking the
'$sysroot/usr/include' and '$sysroot/usr/lib' to be seen as 'include'
and 'lib' in the $gcc_tooldir, adding a couple more symlinks to the
'lib' and editing the absolute paths away from the 'libc.so', enables
one to get a Linux-targeted GCC to work. With 64-bit bi-arch targets
one of course uses the default 'lib64' as the place where the
$gcc_tooldir/li

Re: Cross Compiler Unix - Windows

2005-08-30 Thread Kai Ruottu

Dave Korn wrote:


 As for Cygwin and MinGW, the same attitude as followed with
Linux, that "producing any apps for Windoze should happen only on
Windoze, or when one does it on some other host, it still should
happen just like on Windoze!", is totally weird to me. 


  It seems weird to me too.  Especially considering that at least one of the
main cygwin developers builds everything on linux with a linux-x-windows
toolchain.  So perhaps you have misunderstood the situation with cygwin;
cross-development is certainly possible, and _intended_ to be possible.  It
certainly isn't any kind of policy to _deliberately_ make development only
possible on native hosts.


 Recommending Cygwin to 'ordinary users' as the preferred place for
building GNU apps for Windoze sounds weird. Just as doing the same
with MinGW/MSYS.  The developers can have Linuces and other better
platforms available, and may be required to produce everything for
Linux etc. first and for Windoze too... Only building can be enough;
no very hard testing or debugging to get the application to work is
expected...

 This is quite the same as recommending people to build their own
sports cars from Volkswagens in garages instead of doing this in car
factories, because only real Porsches will be built in factories.
People keep their self-built cars there, so of course these must be
built there. Or something...


 If one wants to produce tens of binutils, GCCs and other GNU stuff
for the Windoze host, the native Windoze shouldn't be the
recommendation. At least not when the recommendation comes from Red Hat
or from any other Linux company. If Red Hat delivers the Cygwin tools
only for the Windoze host, what else is this than a recommendation to
use Windoze instead of their own Linux for Windoze-targeted
development?



Re: Cross Compiler Unix - Windows

2005-08-30 Thread Kai Ruottu

Mike Stump wrote:


On Friday, August 26, 2005, at 12:59  AM, Kai Ruottu wrote:


Is there any sane reasons for this on systems which never have had that
non-GNU native 'cc' ?


Consistency.  This is only bad if one abhors consistency and 
predicability.  No?


 I understand people coming from all kinds of native cultures, and
so the unified 'one culture' world seen with cross-GCCs sounds
strange to them. For me the cross world is the familiar one and all
those native worlds are the strange ones...

 A better approach could have been to think, for every $target, how
on earth those proprietary headers & libraries could be installed
into the "standard GCC" install layout. Linux could have served as
the example for the standard GCC just as well as those GCCs which
produce stuff for Windoze.

 All the complaints about apps requiring a native build instead of
being cross-compiled could have been rare. If GCC had had its standard
search places, and the stuff in them were found automatically, there
would have been no need to put any

 -I/usr/X11R6/include -L/usr/X11R6/lib

like options into the GCC command line when compiling the sources with
GCC. The need for these seemingly comes from the need to compile the
same sources with those proprietary 'cc's.

 Ok, I don't even remember when I last needed the original native
GCCs for any builds. Maybe my repertoire is seriously limited
nowadays: only GCCs, binutils, GDB/Insights and glibcs/newlibs... If
required to build bash, tcsh etc. shells, or whole Linux distros,
maybe I would meet the problems. But were the problems there already,
or has the "native only" attitude caused them?  What if
cross-compiling had been the default everywhere?



Re: Question regarding compiling a toolchain for a Broadcom SB1

2005-09-07 Thread Kai Ruottu

David Daney wrote:


Ian Lance Taylor writes:
 > Jonathan Day <[EMAIL PROTECTED]> writes:
 >  > > My question is simple enough - has anyone built a
 > > toolchain for a MIPS64-Linux-GNU target?
 >  > Yes, I did, last year.
 >  > But I did it through a tedious iterative process--build the 
binutils,

 > build the compiler until it fails building libgcc, install parts of
 > it, build glibc enough to install the header files (with the kernel
 > header files there too), build the rest of the compiler, build the
 > rest of glibc.  Various things broke along the way and needed
 > patching.  I think it took me about a week, interspersed with other
 > things.


 My 'from scratch' method was quite the same a year or two ago. So
when trying to update last July I already had some 'mips64-linux-gnu'
headers and libraries and could use them when updating the GCCs...

Dan Kegel's crosstool does it for many different platform/tool version 
combinations (or so I have herd).  I think the problem is that it (and 
other solutions like it) have ad hoc hacks/patches for each combination 
to make it work and perhaps that mips64-linux-gnu is not well supported.


 How one gets the first toolchain made shouldn't have the importance
many people think it has... My opinion (clashing badly with Dan's) is
that the first build has no importance at all; if one knows the basics
of Linux, of compiling and of other newbie-level things, one easily
succeeds in getting the first toolchain. What Ian and I did is mostly
based on 'trivial' understanding like:

 - the headers for 'mips-linux-gnu' can be similar to or even identical
   with those for 'mips64-linux-gnu', so if one has the previous, they
   are at least a very good starting point for getting the right
   headers.

 - searching with 'mips64' in the glibc's 'sysdeps' tree quite easily
   reveals the few places where there could be some different headers
   for 'mips64'

when trying to collect the required "target headers" for the GCC
'libgcc' compile.  Using '--disable-shared-libgcc' etc. options in the
first GCC build enables one to succeed in the 'make all-gcc' with the
bare target headers. And lets one continue with the glibc build...
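As a sketch only (the version numbers, directory names and sysroot layout below are my assumptions, not from the thread), the staged sequence described above might look like:

```shell
# Stage 1: binutils for the new target.
# Stage 2: a bare C compiler ('make all-gcc') built against the
#          collected target headers, with shared libgcc disabled.
# Stage 3: glibc built with that compiler; then GCC can be rebuilt
#          fully against the now-complete sysroot.
TARGET=mips64-linux-gnu
PREFIX=$HOME/cross                 # hypothetical install prefix
SYSROOT=$PREFIX/$TARGET/sys-root   # hypothetical target root

(cd binutils-build && ../binutils-src/configure --target=$TARGET \
    --prefix=$PREFIX && make && make install)

(cd gcc-build && ../gcc-src/configure --target=$TARGET \
    --prefix=$PREFIX --with-sysroot=$SYSROOT \
    --disable-shared-libgcc --disable-threads --enable-languages=c \
    && make all-gcc && make install-gcc)

(cd glibc-build && ../glibc-src/configure --host=$TARGET --prefix=/usr \
    && make && make install_root=$SYSROOT install)
```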

 My estimate for a 'from scratch' build would be 4 hours if one already
has the required glibc version made for some other Linux, but of course
the 'mips' one in the 'mips64' case would be the best. And if one has
the required newbie-level know-how about GCC, compiling and the Linux
glibc components (crt*.o, libc.so.6, libc.so, ld.so.1 etc.). If one
hasn't, one can cause quite the same situation as those Greenpeace
people who put gasoline into the tank of a diesel car and were then
angry when they got no help in solving the mess they had caused with
their total ignorance... This happened in Finnish Lapland, and even the
children here know that using gasoline instead of diesel can be very
dangerous for the engine, or doing vice versa. My experience collected
from the crossgcc list tells me that many cross-GCC builders come to
the roads just as ignorant as the Greenpeacers. They hate cars, so why
would they want to learn anything about them?  But why the GCC builders
hate GCC, Linux, glibc etc. and therefore don't want to learn anything
about them has been an eternal mystery to me...

 Building from absolute scratch can be a challenge for many, but some
can think "When At Last You Do Succeed, Never Try Again" and never try
it again... And start to wonder whether there even was any reason for
it during the first toolchain build.  People who build native GCCs
never (or very seldom) start from scratch: the target C library is
already installed, and one only builds the new binutils and the new
GCC when wanting to produce a "self-made toolchain"...  The same idea
works also with cross-GCCs...

 If one hasn't got the target C library, one can always borrow it or
something... A minimal 'glibc-2.3.5' for 'mips64-linux-gnu' probably
takes 1 - 2 Mbytes as a '.tar.gz' package, so anyone who has a direct
net connection and has glibc-2.3.5 made for 'mips64-linux-gnu' can
pack the base stuff and email it... Including myself. One only needs
to have the right 'lazy' attitude and ask someone to send...

I did similar with mipsel-linux-gnu using headers lifted (and hacked) 
from glibc on i686-pc-linux-gnu as a starting point.


There is a definite chicken-and-egg problem here.  But once you have a 
working toolchain you never suffer from the problem again.  The result 
is that there is no motivation to solve it once you know enough to fix it.


 David seems to have the same "When At Last You Do Succeed, Never Try
Again" attitude which I have...

 Okeydokey, I haven't any clue what the 'mips64-linux-gnu' target SHOULD
be... But I know what will be the result when one builds the toolchain
using the current defaults in GNU binutils, GCC and glibc-2.3.5. Let's
start with binutils, the 'ld -V' may show :

GNU ld version 2.16.91.0.1 20050622
  Supported emulat

Re: Question regarding compiling a toolchain for a Broadcom SB1

2005-09-07 Thread Kai Ruottu

Kai Ruottu wrote:


 How one gets the first toolchain made shouldn't have the importance
many people think it has... My opinion (clashing badly with Dan's) is
that the first build has no importance at all, if one knows the basics
for Linux, for compiling and for other newbie-level things, one easily
succeeds to get the first toolchain. What Ian and I did, is mostly based
on 'trivial' understanding like :


and so on...

 Please believe me, I saw my thoughts once again flying wildly, and my
aim was to trim the message a little shorter. But somehow I
clicked "Send" instead of putting this into Drafts


Re: Howto Cross Compile GCC to run on PPC Platform

2005-11-03 Thread Kai Ruottu

Jeff Stevens wrote:


.../gcc-3.4.4/configure
--build=`../gcc-3.4.4/config.guess`
--target=powerpc-linux --host=powerpc-linux
--prefix=${PREFIX} --enable-languages=c

and then a make all.  The make went fine, and
completed without any errors.  However, when I ran
'make install' I got the following error:

powerpc-linux-gcc: installation problem, cannot exec
`/opt/recorder/tools/libexec/gcc/powerpc-linux/3.4.4/collect2':
Exec format error

How do I install the native compiler?


You shouldn't ask how but where !

You cannot install alien binaries into
the native places on your host !  This
is not sane at all...

Ok, one solution is to collect the components
from the produced stuff, pack them into a
'.tar.gz' or something, and then ftp the
stuff onto the native system.

If you really want to install the stuff into your
host, you should know the answer to the "where"
first, and should read from the "GCC Install"
manual the chapter 7, "Final installation" and
see what option to 'make' you should use in order
to get the stuff into your chosen "where"...


Re: Howto Cross Compile GCC to run on PPC Platform

2005-11-03 Thread Kai Ruottu

Jeff Stevens wrote:

I am creating the target tree on my host, so that I
can later transfer it to a USB storage device.  I was
going to manually move everything, but only saw one
binary, xgcc.  Is that all, or aren't there some other
utilities that go along with it?


 The 'cpp', 'cc1*', 'collect2', 'g++',... BUT your stuff
is not a 'normal' native GCC... The '--prefix=/usr' is
the normal setting for that. So probably you must run
'make distclean', reconfigure and rebuild the GCC for
the normal native system...

> I just didn't know exactly what to copy and where to copy it to.  When I
built glibc, those were built for the target system,
but installed to the target directory structure that I
am creating.

The 'make install' command that I ran for glibc was:

make install_root=${TARGET_PREFIX} prefix="" install

where TARGET_PREFIX is the target filesystem tree.  I
used the same make install command for the native gcc
that I compiled.


 The install command for GCC is :

   make DESTDIR=$SYSROOT install

and is found via the RTFM method... Building GCC should
mean producing binaries and documents, not only the
former. But maybe I'm alone with this opinion, and the
only one ever using 'pdftex' etc. document tools...

With the glibc configure you of course used the same
'--prefix=/usr' or how?  The native GCC and glibc normally
use that but the 'make install' has those 'install_root='
and 'DESTDIR=' options for installing into the $SYSROOT.
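A sketch of the intended pairing (the $SYSROOT path and build-directory names here are hypothetical): both glibc and GCC are configured with '--prefix=/usr' and then staged into the target tree with their respective install-redirect variables:

```shell
SYSROOT=$HOME/ppc-rootfs     # hypothetical target filesystem tree

# glibc uses 'install_root=' for staging into the target tree...
make -C glibc-build install_root=$SYSROOT install

# ...while GCC (like most autotools-style packages) uses 'DESTDIR='.
make -C gcc-build DESTDIR=$SYSROOT install
```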

Why you used '${TARGET_PREFIX}' as your '$SYSROOT' is
a little odd, but I don't know anything about those LFS
(Linux From Scratch) oddities; maybe the book writer
didn't know so well :-) Maybe it suggests producing glibc
twice, once for the cross tools, once for the target
system, which of course is some kind of overkill; doing
it once, for the target system, is of course enough...
The same glibc works also on the cross host, installed
into the target root directories... The '--with-sysroot='
or simple symlinks in the '$(tooldir)' let it be used
with a cross-compiler... An already built glibc taken
from an existing Linux target must of course be installed
somehow for a usual cross-compiler for some Linux distro.




Re: should _GNU_SOURCE be used in libiberty if glibc is present?

2005-11-24 Thread Kai Ruottu

Rafael Ávila de Espíndola wrote:
I noticed that the config.h generated for libiberty has
HAVE_DECL_ASPRINTF defined as 0. This happens because _GNU_SOURCE is
not used when running the test programs.


Another long-standing bug in the libiberty configure system is that it
claims newlib is missing 'asprintf()', 'strdup()' and 'vasprintf()',
and therefore tries to build these although one has used
'--with-newlib' in the GCC configure!  AFAIK, only those functions
which are missing from the C library should be built into libiberty.
The result of this has been the GCC build crashing on clashes between
the newlib header prototypes and the function reimplementations in
libiberty. Sometimes I even made diffs for the fixes in 'configure.ac':

*** configure.ac.orig   2004-01-10 04:17:41.0 +0200
--- configure.ac2005-05-02 18:36:18.0 +0300
***
*** 265,280 
# newlib provide and which ones we will be expected to provide.

if test "x${with_newlib}" = "xyes"; then
- AC_LIBOBJ([asprintf])
  AC_LIBOBJ([basename])
  AC_LIBOBJ([insque])
  AC_LIBOBJ([random])
- AC_LIBOBJ([strdup])
- AC_LIBOBJ([vasprintf])

  for f in $funcs; do
case "$f" in
!   asprintf | basename | insque | random | strdup | vasprintf)
  ;;
*)
  	  n=HAVE_`echo $f | tr 'abcdefghijklmnopqrstuvwxyz' 
'ABCDEFGHIJKLMNOPQRSTUVWXYZ'`

--- 265,277 
# newlib provide and which ones we will be expected to provide.

if test "x${with_newlib}" = "xyes"; then
  AC_LIBOBJ([basename])
  AC_LIBOBJ([insque])
  AC_LIBOBJ([random])

  for f in $funcs; do
case "$f" in
!   basename | insque | random)
  ;;
*)
  	  n=HAVE_`echo $f | tr 'abcdefghijklmnopqrstuvwxyz' 
'ABCDEFGHIJKLMNOPQRSTUVWXYZ'`


Re: Xscale big endian tool-chain (how to build it?)

2006-01-03 Thread Kai Ruottu

Richard Earnshaw wrote:


Next, I suggest you add --with-cpu=xscale when configuring GCC.  You can
then drop the -mcpu=xscale when compiling (this should also give you
better libraries for your system).  However, beware that your libraries
will now only run on ARMv5 or later processors.
 

IMHO, using the separate 'xscale-elf' target template in
'gcc/config.gcc', which defines its own things instead of using the
generic ARM files, could be the cleanest solution:


 xscale-*-elf)
 tm_file="arm/xscale-elf.h dbxelf.h elfos.h arm/unknown-elf.h 
arm/elf.h arm/aout.h arm/arm.h"

 tmake_file=arm/t-xscale-elf
 out_file=arm/arm.c
 md_file=arm/arm.md
 extra_modes=arm/arm-modes.def
  use_fixproto=yes
 ;;

This was taken from the gcc-3.4.5 sources, but I would assume gcc-4.x
keeps the separate 'xscale-*' targets.  Anyway, when the primary
target is XScale, those 'xscale-*' target names should be the first
to try, not the 'arm-*' ones.





Re: C++ xgcc for arm-none-eabi fails in 7.1, 7.2 due to undefined identifiers

2017-08-18 Thread Kai Ruottu

R0b0t1 wrote on 18.8.2017 at 19:17:

On Fri, Aug 18, 2017 at 1:09 AM, Freddie Chopin  wrote:

On Thu, 2017-08-17 at 22:27 -0500, R0b0t1 wrote:

On Thu, Aug 17, 2017 at 4:44 PM, R0b0t1  wrote:

When compiling libssp, ssp.c, function __guard_setup:
O_RDONLY is undeclared (ssp.c:93:34),
ssize_t is an unknown type name (ssp.c:96:7), and
size_t is an unknown type name (ssp.c:113:25).

../../src/gcc-7.2.0/configure --target=$TARGET --prefix=$PREFIX
--with-cpu=cortex-m4 --with-fpu=fpv4-sp-d16 --with-float=hard
--with-mode=thumb --enable-multilib --enable-interwork
--enable-languages=c,c++ --with-system-zlib --with-newlib
--disable-shared --disable-nls --with-gnu-as --with-gnu-ld

A bootstrap C compiler is generated properly when passing --
without-headers.


All this talk about a "bootstrap C compiler", "magic scripts" etc.
makes me think there is serious ignorance about how to simply produce
a newlib-based cross-toolchain using the traditional method: putting
the gmp, mpc and mpfr sources (for the host libraries) and the newlib
& libgloss subdirs (from the newlib sources) among the GCC sources,
and then building everything in only one stage...
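The one-tree layout described here can be sketched as follows (the version numbers and directory names are illustrative, not taken from the mail; the configure options echo the ones shown later in the message):

```shell
# Link the host math libraries and the newlib parts into the GCC tree,
# then configure and build everything in a single pass.
cd gcc-7.2.0
ln -s ../gmp-6.1.2  gmp
ln -s ../mpfr-3.1.6 mpfr
ln -s ../mpc-1.0.3  mpc
ln -s ../newlib-2.5.0/newlib   newlib
ln -s ../newlib-2.5.0/libgloss libgloss
mkdir build && cd build
../configure --target=arm-cortex-eabi --prefix=/opt/cross \
    --with-cpu=cortex-m4 --with-fpu=fpv4-sp-d16 --with-float=hard \
    --with-mode=thumb --enable-multilib --with-newlib \
    --disable-shared --disable-nls --enable-languages=c,c++
make && make install
```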

I tried the traditional method and there were no problems. No weird
"bootstrap stage", no weird scripts required, only building binutils
and then GCC with newlib after symlinking the extra sources into the
main GCC source directory...

Here are the last rows from the build, then an 'ls' of the $target
stuff, and a try of the new GCC driver:

make[5]: Entering directory "/home/src-old/gcc-7.2.0/build/arm-cortex-eabi/thumb/libquadmath"
make  all-am
make[6]: Entering directory "/home/src-old/gcc-7.2.0/build/arm-cortex-eabi/thumb/libquadmath"
true  DO=all multi-do # make
make[6]: Leaving directory "/home/src-old/gcc-7.2.0/build/arm-cortex-eabi/thumb/libquadmath"
make[5]: Leaving directory "/home/src-old/gcc-7.2.0/build/arm-cortex-eabi/thumb/libquadmath"
make[5]: Entering directory "/home/src-old/gcc-7.2.0/build/arm-cortex-eabi/fpu/libquadmath"
make  all-am
make[6]: Entering directory "/home/src-old/gcc-7.2.0/build/arm-cortex-eabi/fpu/libquadmath"
true  DO=all multi-do # make
make[6]: Leaving directory "/home/src-old/gcc-7.2.0/build/arm-cortex-eabi/fpu/libquadmath"
make[5]: Leaving directory "/home/src-old/gcc-7.2.0/build/arm-cortex-eabi/fpu/libquadmath"
make[4]: Leaving directory "/home/src-old/gcc-7.2.0/build/arm-cortex-eabi/libquadmath"
make[3]: Leaving directory "/home/src-old/gcc-7.2.0/build/arm-cortex-eabi/libquadmath"
make[2]: Leaving directory "/home/src-old/gcc-7.2.0/build/arm-cortex-eabi/libquadmath"

make[1]: Leaving directory "/home/src-old/gcc-7.2.0/build"
[root@localhost build]# ls arm-cortex-eabi
fpu  libgcc  libgloss  libquadmath  libssp  libstdc++-v3  newlib thumb
[root@localhost build]# gcc/xgcc -v
Using built-in specs.
COLLECT_GCC=gcc/xgcc
Target: arm-cortex-eabi
Configured with: ../configure --build=i686-linux-gnu 
--host=i686-linux-gnu --target=arm-cortex-eabi --prefix=/opt/cross 
--libdir=/opt/cross/lib --libexecdir=/opt/cross/lib --with-cpu=cortex-m4 
--with-fpu=fpv4-sp-d16 --with-float=hard --with-mode=thumb 
--enable-multilib --enable-interwork --enable-languages=c,c++ 
--with-newlib --disable-shared --disable-nls 
--with-gxx-include-dir=/opt/cross/include/c++/7.2.0 
--enable-version-specific-runtime-libs --program-prefix=arm-cortex-eabi- 
--program-suffix=-7.2

Thread model: single
gcc version 7.2.0 (GCC)




Re: Potential bug on Cortex-M due to used registers/interrupts.

2017-11-16 Thread Kai Ruottu

Vitalijus Jefišovas wrote on 16.11.2017 at 18:54:

On Cortex-M MCUs, when an interrupt happens, the NVIC copies r0-r3 and
a couple of other registers onto the psp stack, and then jumps to the
interrupt routine; when it finishes, the NVIC restores these registers
and jumps back to the user's function.
What is happening under the hood is that the NVIC only stacks 4
registers: r0, r1, r2, r3. The other ones, r4-r12, are the developer's
responsibility.
I was looking at the assembly code generated by GCC and there are
plenty of instructions using the r4-r12 registers.



How does GCC handle the scenario when execution is switched to an
unknown procedure which changes all of these registers?


Cortex-M is an ARM-architecture processor, right?  So one uses the
things related to ARM, right?

With ARM one uses the function attributes available for ARM when the
question is about a special function like an ISR ("Interrupt Service
Routine").  From the GCC 7.2.0 manual:


- clip 
6.31.4 ARM Function Attributes

These function attributes are supported for ARM targets:

interrupt
  Use this attribute to indicate that the specified function is an
  interrupt handler.  The compiler generates function entry and exit
  sequences suitable for use in an interrupt handler when this
  attribute is present.
  You can specify the kind of interrupt to be handled by adding an
  optional parameter to the interrupt attribute like this:
     void f () __attribute__ ((interrupt ("IRQ")));
  Permissible values for this parameter are: IRQ, FIQ, SWI, ABORT and
  UNDEF.

  On ARMv7-M the interrupt type is ignored, and the attribute means
  the function may be called with a word-aligned stack pointer.

isr
  Use this attribute on ARM to write Interrupt Service Routines. This
  is an alias to the interrupt attribute above.
- clip 

How this attribute works with the NVIC I don't know, but using the
intended attribute and seeing the result is one way to see what will
happen...



Re: gcc 7.2.0 error: no include path in which to search for stdc-predef.h

2017-12-01 Thread Kai Ruottu

Marek wrote on 1.12.2017 at 10:51:

It seems the last error preceding the "suffix" error is "no include
path in which to search for stdc-predef.h".
I wonder where to find stdc-predef.h or whether it's generated by gcc
at compile time.


This file comes with newer glibc versions. For instance the glibc-2.17
coming with CentOS 7 has it in its '/usr/include'. Older glibc
versions don't have it...


I'm also compiling against musl.


No experience with that libc (https://www.musl-libc.org). Should it
provide the 'stdc-predef.h' in its '/usr/include'?

configure: error: in 
`/run/media/void/minnow/build/gcc-7.2.0/x86_64-lfs-linux-gnu/libgcc':

configure: error: cannot compute suffix of object files: cannot compile


The '-lfs' hints that a "making a hammer without any existing hammer"
case is in question.  Not a normal case like making a native or cross
GCC for CentOS 7, where the target glibc (-2.17) is already made and
tested, and bug fixes will come from the CentOS 7 maintainers for its
glibc...

Ok, when your target is 'x86_64-*-linux-gnu', identical with the
example CentOS 7/x86_64, and the default libc for it is glibc, how has
using 'musl' instead been taken care of?  Have you installed some
special patches into your local GCC sources?  Maybe there is another
target name one should use, like 'x86_64-lfs-linux-musl', in your
case?


Re: gcc 7.2.0 error: no include path in which to search for stdc-predef.h

2017-12-01 Thread Kai Ruottu

Kai Ruottu wrote on 1.12.2017 at 12:02:

Marek wrote on 1.12.2017 at 10:51:

It seems the last error preceding the "suffix" error is "no include
path in which to search for stdc-predef.h".
I wonder where to find stdc-predef.h or whether it's generated by gcc
at compile time.


This file comes with newer glibc versions. For instance the glibc-2.17
coming with CentOS 7 has it in its '/usr/include'. Older glibc
versions don't have it...


I'm also compiling against musl.


No experience with that libc (https://www.musl-libc.org). Should it
provide the 'stdc-predef.h' in its '/usr/include'?


Answering my own question... Yes, it should include it:

https://git.musl-libc.org/cgit/musl/tree/include

Maybe there is another target name one should use, like
'x86_64-lfs-linux-musl', in your case?


The docs for musl say just this: one should use the '-linux-musl'
triplet!





Re: gcc 4.4.0 on ARM/WinCE float problem

2009-07-23 Thread Kai Ruottu

Dave Korn wrote:

Danny Backx wrote:

On Thu, 2009-07-23 at 10:07 +0100, Dave Korn wrote:



Kai Ruottu wrote :

Comparing the output from some earlier working GCC with the gcc-4.4.0
output would reveal if something was wrong in preparing inputs for
the soft-float routines... Or maybe something was changed in the
soft-float routines... What if you try a 'libgcc.a' taken from some
earlier working GCC ?

Did that, see below. I think this means that the stuff in libgcc.a causes
the issue.


  Could this be related to old-vs-new EABI?  Is the stack aligned to the same
multiple on entry to main in both old and new executables?  The assembler code
looked basically the same, except the stack frame size has changed and a lot
of things that were aligned to an (odd/even) multiple of 8 may now be aligned
to an (even/odd) multiple instead.


Also the message thread started by :

http://gcc.gnu.org/ml/gcc-help/2009-03/msg00107.html

could be checked... Although I took part in it, I don't remember what
was solved and what wasn't :( In any case Vincent R. could know
something more by now...



Re: How to run gcc test suite in pure mingw32 environment?

2009-11-09 Thread Kai Ruottu
å¾ęŒę’ wrote:
> These days, I’m trying to build gcc-4.4.2 + binutils-2.20 + gmp + mpfr in
> Msys+MinGW and Cygwin environment.
>
> The builds on both environments are OK, but I cannot run "make check", or
> "make check-gcc".
>
> Finally, I found, that, to run test, you must first install guile, autogen,
> tck/tk, expect, dejagnu.
>   

This "self-hosted" idea is quite the same as trying to produce cars in
garages or even on roads because they
will be used there...

I myself would be more interested in getting these tests for MinGW-hosted
tools to work on Linux, because that is the "preferred build platform for
MinGW-hosted tools" for me. Some years ago I produced more than 100
binutils+GCC+GDB/Insight toolchains for all kinds of targets to be run on
the MinGW runtime host. Just for fun... The "does it work" tests happened
on Windoze/MinGW by compiling apps there and then possibly running them
on the built-in simulator in GDB or using the standalone "$target-run"
simulator on the console.

When all the $target systems for the MinGW-hosted binutils, GCCs and
GDB/Insights are non-Windoze targets, the question is how well these
tools work on Windoze and whether the results from them are identical
with their equivalents on the primary Linux host. What might be usable
is some kind of "gdbserver" to run on Windoze, so the MinGW-hosted
toolchain tests could be run "natively" there.

What has been the "problem" is that those very limited tests on the
Windoze/MinGW host have so far shown the toolchains to work quite
identically with their earlier equivalents on the Linux host; for
instance a toolchain for 'arm-elf' hosted on MinGW works nicely on
Windoze too. So there is really no need to wonder how to get "make
check" to work with the Canadian-cross built toolchains...

> Isn't it necessary to port newlib to a pure MinGW environment?

I tried to understand what this clause means but didn't "grok" it...
Could you elaborate on what the point is? "Pure MinGW" means "running
apps using the native Windoze DLLs", whereas Cygwin (and MSYS?) try to
provide a "Unix layer" for apps like binutils, GCC and GDB. For
instance, the tcl/tk/itcl DLLs in the MinGW-hosted Insights are built
for the Win32 API...

> If we have test environment on Windows platform, we can greatly improve the
> development process in this platform ,and ensure the quality of gcc and
> companion tools on Windows. I noticed that there are also a MinGW-w64
> project, if we have that test environment, we can impove it, even accelerate
> it.
>   

When producing those 100+ toolchains for MinGW, my conclusion was: "In
the same time that one developer builds 100 toolchains for the MinGW
host on a productive build platform, it takes 100 developers to get
just one toolchain (for the MinGW target) built on the native Windoze
build platform." :(

Just name your $target(s) and I will report how many minutes it takes to
build gcc-4.4.2 + binutils-2.20 (and the gmp + mpfr for the MinGW host)
for it and for the MinGW $host on a Linux $build host. Producing Insight
6.8 for the MinGW host and for a couple of targets like 'mips-elf'
seemed to work nicely in July 2009, but some targets might still be
problematic with the MinGW $host, for instance making a remote debugger
for 'sparc-solaris2.11' or some embedded Linux target to run on the
MinGW host...


Re: Problem while configuring gcc3.2

2009-12-28 Thread Kai Ruottu

Jie Zhang wrote:

On 12/28/2009 12:59 PM, Pardis Beikzadeh wrote:

Hi,

Also 'make bootstrap' doesn't work without running configure, so I'm
not sure what the "recommended way" mentioned in the email below
means.

The bootstrap in Jim's reply means, I think, building a minimal (only 
C front-end) gcc-3.2 first using gcc-3.4. Then you can use the minimal 
gcc-3.2 to build a full gcc-3.2.



Maybe installing a gcc-3.3 based Cygwin-release like the '1.6.6' and 
then building gcc-3.2 with it would succeed immediately :


http://www.filewatcher.com/m/cygwin-1.6.6.zip.496537660.0.0.html

Or simply finding a gcc-3.2-based Cygwin-release from the net

It can also be possible that gcc-3.2 expects some old Cygwin runtime and
binutils as its companion components; trying to get gcc-3.2 to work
with up-to-date binutils and Cygwin runtime may simply be impossible or
very hard :(




Re: GCC 4.5.0 Released

2010-04-22 Thread Kai Ruottu

22.4.2010 1:35, Andreas Schwab kirjoitti:


Paolo Bonzini  writes:


I'm not sure if "nm -g" would work under Linux, since

$ nm -g /usr/lib64/libsqlite3.so
nm: /usr/lib64/libsqlite3.so: no symbols

$ objdump -T /usr/lib64/libsqlite3.so|head -5


The equivalent of "objdump -T" is "nm -D".


Whatever 'objdump -T' now tries to do during 'gcc/configure', it does
with the wrong 'objdump': the one for the $target, not the one for
the $host!

Maybe there was the usual one-eyedness in the implementation, assuming
a native GCC where $host == $target, and it was never thought that
someone could make a cross GCC?

This "feature" appeared when someone tried to build
gcc-4.5.0 for 'arm-elf' on a x86_64 machine, seemingly
the objdump made for arm-elf target and x86_64-linux-gnu
host doesn't grok 64-bit ELF binaries... Meanwhile on a
32-bit i686-linux-gnu host there is no problem :

[r...@dell gcc]# /usr/local/arm-elf/bin/objdump -T xgcc

xgcc: file format elf32-little

DYNAMIC SYMBOL TABLE:
  DF *UND*  0042  GLIBC_2.0   wait4
  DF *UND*  0059  GLIBC_2.0   ferror
  DF *UND*  0167  GLIBC_2.0   strchr
  DF *UND*  01b2  GLIBC_2.1   fdopen
08076300 gDO .bss   0004  GLIBC_2.0   __ctype_tolower
  DF *UND*  0035  GLIBC_2.1   mempcpy



Re: trouble building cross compiler: host x86_64-unknown-linux-gnu -> target hppa64-hp-hpux11.00

2008-12-02 Thread Kai Ruottu

Rainer Emrich wrote:


I try to build a cross compiler host x86_64-unknown-linux-gnu -> target
hppa64-hp-hpux11.00 using gcc trunk.


I run into the next issue.

/usr/lib/pa20_64/milli.a is linked directly instead of being searched for
in the sysroot.

- --with-sysroot=/opt/tec/setup/sys-root/HP-UX/hppa64-hp-hpux11.00/B.1100

#include <...> search starts here:
 
/SCRATCH/tmp.ZgrMRs8582/Linux/x86_64-unknown-linux-gnu/openSUSE_10.3/gcc-4.4.0/gcc-4.4.0/./gcc/include
 
/SCRATCH/tmp.ZgrMRs8582/Linux/x86_64-unknown-linux-gnu/openSUSE_10.3/gcc-4.4.0/gcc-4.4.0/./gcc/include-fixed
 /opt/tec/setup/sys-root/HP-UX/hppa64-hp-hpux11.00/B.11.00/usr/include
End of search list.
 
/SCRATCH/tmp.ZgrMRs8582/Linux/x86_64-unknown-linux-gnu/openSUSE_10.3/gcc-4.4.0/gcc-4.4.0/./gcc/collect2
- --sysroot=/opt/tec/setup/sys-root/HP-UX/hppa64-hp-hpux11.00/B.11.00 -E -u main
- -u __cxa_finalize
/opt/tec/setup/sys-root/HP-UX/hppa64-hp-hpux11.00/B.11.00/usr/lib/pa20_64/crt0.o
/SCRATCH/tmp.ZgrMRs8582/Linux/x86_64-unknown-linux-gnu/openSUSE_10.3/gcc-4.4.0/gcc-4.4.0/./gcc/crtbegin.o
-
-L/SCRATCH/tmp.ZgrMRs8582/Linux/x86_64-unknown-linux-gnu/openSUSE_10.3/gcc-4.4.0/gcc-4.4.0/./gcc
-
-L/opt/gnu/cross-gcc/Linux/x86_64-unknown-linux-gnu/openSUSE_10.3/cross/HP-UX/hppa64-hp-hpux11.00/B.11.00/gcc-4.4.0/hppa64-hp-hpux11.00/bin
-
-L/opt/gnu/cross-gcc/Linux/x86_64-unknown-linux-gnu/openSUSE_10.3/cross/HP-UX/hppa64-hp-hpux11.00/B.11.00/gcc-4.4.0/hppa64-hp-hpux11.00/lib
- -L/opt/tec/setup/sys-root/HP-UX/hppa64-hp-hpux11.00/B.11.00/lib/pa20_64
- -L/opt/tec/setup/sys-root/HP-UX/hppa64-hp-hpux11.00/B.11.00/usr/lib/pa20_64
/tmp/ccMHuJkh.o -lgcc -lgcc_eh -lc -lgcc -lgcc_eh -lgcc_stub
/usr/lib/pa20_64/milli.a
/SCRATCH/tmp.ZgrMRs8582/Linux/x86_64-unknown-linux-gnu/openSUSE_10.3/gcc-4.4.0/gcc-4.4.0/./gcc/crtend.o
/opt/gnu/cross-gcc/Linux/x86_64-unknown-linux-gnu/openSUSE_10.3/cross/HP-UX/hppa64-hp-hpux11.00/B.11.00/gcc-4.4.0/bin/hppa64-hp-hpux11.00-ld:
/usr/lib/pa20_64/milli.a: No such file: No such file or directory
collect2: ld returned 1 exit status


The target config header 'gcc/config/pa/pa64-hpux.h' (in gcc-4.3.2)
has :

/* The libgcc_stub.a and milli.a libraries need to come last.  */
#undef LINK_GCC_C_SEQUENCE_SPEC
#define LINK_GCC_C_SEQUENCE_SPEC "\
  %G %L %G %{!nostdlib:%{!nodefaultlibs:%{!shared:-lgcc_stub}\
  /usr/lib/pa20_64/milli.a}}"

so this absolute pathname will be given to the linker, and the linker
should interpret it as '$sysroot/usr/lib/pa20_64/milli.a', I think...
But should '--with-sysroot=$sysroot' do this?  Or only with the usual
things like the 'crt*.o' files, the libraries etc.?  Maybe putting only
the bare 'milli.a' there, with a '-L/usr/lib/pa20_64' in the native
case and a '-L$sysroot/usr/lib/pa20_64' in the cross case on the link
command line, could be the required 'fix'...




Re: trouble building cross compiler: host x86_64-unknown-linux-gnu -> target hppa64-hp-hpux11.00

2008-12-02 Thread Kai Ruottu

Kai Ruottu wrote:

Rainer Emrich wrote:


I try to build a cross compiler host x86_64-unknown-linux-gnu -> target
hppa64-hp-hpux11.00 using gcc trunk.


I run into the next issue.

/usr/lib/pa20_64/milli.a is linked directly instead of being searched 
for in the sysroot.


- --with-sysroot=/opt/tec/setup/sys-root/HP-UX/hppa64-hp-hpux11.00/B.1100



/usr/lib/pa20_64/milli.a: No such file: No such file or directory
collect2: ld returned 1 exit status


The target config header 'gcc/config/pa/pa64-hpux.h' (in gcc-4.3.2)
has :

/* The libgcc_stub.a and milli.a libraries need to come last.  */
#undef LINK_GCC_C_SEQUENCE_SPEC
#define LINK_GCC_C_SEQUENCE_SPEC "\
  %G %L %G %{!nostdlib:%{!nodefaultlibs:%{!shared:-lgcc_stub}\
  /usr/lib/pa20_64/milli.a}}"

so this absolute pathname will be given to the linker, and the linker
should interpret it as '$sysroot/usr/lib/pa20_64/milli.a', I think...
But should '--with-sysroot=$sysroot' do this?  Or only with the usual
things like the 'crt*.o' files, the libraries etc.?  Maybe putting only
the bare 'milli.a' there, with a '-L/usr/lib/pa20_64' in the native
case and a '-L$sysroot/usr/lib/pa20_64' in the cross case on the link
command line, could be the required 'fix'...


... or as a workaround, 'milli.a' could be renamed to 'libmilli.a' in
the libraries copied for a cross GCC, after which its handling would
be just like the usual 'lib*.a' stuff. The spec would then be:

 #define LINK_GCC_C_SEQUENCE_SPEC "\
   %G %L %G %{!nostdlib:%{!nodefaultlibs:%{!shared:-lgcc_stub}\
   -lmilli}}"

for a cross compiler. The separate case choices:

#ifndef CROSS_DIRECTORY_STRUCTURE
...
#else
...
#endif

would then be necessary. A weird name for a library, 'milli.a', in any case...
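
Putting the two variants together, the conditional spec could look like
the following sketch. This exact '#ifndef' arrangement is my assumption,
not something in the GCC sources; it merely combines the two fragments
quoted in this thread:

```c
/* Hypothetical pa64-hpux.h fragment: keep the absolute path for a
   native build, use -lmilli (a renamed copy of milli.a) for a cross
   build, where -L$sysroot/usr/lib/pa20_64 is already on the link line. */
#undef LINK_GCC_C_SEQUENCE_SPEC
#ifndef CROSS_DIRECTORY_STRUCTURE
#define LINK_GCC_C_SEQUENCE_SPEC "\
  %G %L %G %{!nostdlib:%{!nodefaultlibs:%{!shared:-lgcc_stub}\
  /usr/lib/pa20_64/milli.a}}"
#else
#define LINK_GCC_C_SEQUENCE_SPEC "\
  %G %L %G %{!nostdlib:%{!nodefaultlibs:%{!shared:-lgcc_stub}\
  -lmilli}}"
#endif
```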


Re: on how to compile gcc-4.6 correctly?

2010-09-06 Thread Kai Ruottu

6.9.2010 6:17, Dennis kirjoitti:


   I'm using the gentoo distribution (including gmp/mpfr/mpc), which could
compile gcc-4.5.0, 4.5.1, and many snapshots correctly, including the
recent gcc-4.5-20100902, but when I tried to compile gcc-4.6, any
snapshot version, even the recent gcc-4.6-20100904, it always failed;
the most recent compile failure is:

../../gcc-4.6-20100904/gcc/c-decl.c: In function 'grokdeclarator':
../../gcc-4.6-20100904/gcc/c-decl.c:5533: warning: format not a string literal 
and no format arguments
../../gcc-4.6-20100904/gcc/c-decl.c: In function 'grokparms':
../../gcc-4.6-20100904/gcc/c-decl.c:6194: warning: format not a string literal 
and no format arguments
../../gcc-4.6-20100904/gcc/c-decl.c:7025:64: error: macro 
"ggc_alloc_cleared_lang_type" passed 1 arguments, but takes just 0
../../gcc-4.6-20100904/gcc/c-decl.c: In function 'finish_struct':
../../gcc-4.6-20100904/gcc/c-decl.c:7025: error: 'ggc_alloc_cleared_lang_type' 
undeclared (first use in this function)
../../gcc-4.6-20100904/gcc/c-decl.c:7025: error: (Each undeclared identifier is 
reported only once
../../gcc-4.6-20100904/gcc/c-decl.c:7025: error: for each function it appears 
in.)
../../gcc-4.6-20100904/gcc/c-decl.c:7308:62: error: macro 
"ggc_alloc_cleared_lang_type" passed 1 arguments, but takes just 0
../../gcc-4.6-20100904/gcc/c-decl.c: In function 'finish_enum':
../../gcc-4.6-20100904/gcc/c-decl.c:7308: error: 'ggc_alloc_cleared_lang_type' 
undeclared (first use in this function)
make: *** [c-decl.o] Error 1

I don't know what happened there. When I search for the 
'ggc_alloc_cleared_lang_type' macro, it really doesn't exist in the 
gcc-4.6-20100904 source, nor under /usr/include, so what is that 
macro's real dependency?

>
> Whoever has successfully built gcc-4.6, please help me, or share any clue;
> I have searched for that ggc_alloc_cleared_lang_type

> through google, but didn't find any meaningful results,

This seems to be defined in a header generated during the build
into the $BUILD/gcc :

[r...@localhost gcc]# grep ggc_alloc_cleared_lang_type *.h
gtype-desc.h:#define ggc_alloc_cleared_lang_type_u() ((union lang_type_u 
*)(ggc_internal_cleared_alloc_stat (sizeof (union lang_type_u) 
MEM_STAT_INFO)))


On CentOS 5.5/ia32 the build seemed to succeed for the
'x86_64-linux-gnu' target, using gcc-4.1.2 as the host
and build compiler.  Must check the Fedora 13/x86_64
host with its gcc-4.4.4 too but I wouldn't expect any
change with it...

So maybe the Gentoo distro has some problem...


Re: on how to compile gcc-4.6 correctly?

2010-09-13 Thread Kai Ruottu

13.9.2010 10:39, Dennis, CHENG Renquan kirjoitti:


So maybe the Gentoo distro has some problem...


No, I've tried compiling gcc-4.6-20100911 on ubuntu 10.04, the same
problem also happened;

and I also found that macro was defined in a generated header file, in
the gcc build directory;

renq...@flyer-1-1:~/src/gcc-4.6-build$ grep -RsInw
ggc_alloc_cleared_lang_type gcc/
gcc/gtype-desc.h:2451:#define ggc_alloc_cleared_lang_type() ((struct
lang_type *)(ggc_internal_cleared_alloc_stat (sizeof (struct
lang_type) MEM_STAT_INFO)))


In all stages during the native bootstrap this define was :

gtype-desc.h:#define ggc_alloc_cleared_lang_type(SIZE) ((struct 
lang_type *)(ggc_internal_cleared_alloc_stat (SIZE MEM_STAT_INFO)))


in Fedora 13 with the earlier gcc-4.6-20100904 snapshot...


this definition just doesn't accept any arguments, but inside
gcc/c-decl.c:7028 and 7311:

 space = ggc_alloc_cleared_lang_type (sizeof (struct lang_type));

   lt = ggc_alloc_cleared_lang_type (sizeof (struct lang_type));

both pass 1 argument, so the compiler reports an error; the problem
is how you can succeed in compiling that at all?


My RedHat-based systems (CentOS, Fedora) generate a different
'#define' for this. Why Gentoo and Ubuntu work differently is hard
to say... Ok, I can also try Ubuntu 10.04, so let's see what it
does...


Re: on how to compile gcc-4.6 correctly?

2010-09-14 Thread Kai Ruottu

14.9.2010 11:29, Dennis, CHENG Renquan kirjoitti:


For anyone could succeed compiling gcc-4.6, could you paste a correct
ggc_alloc_cleared_lang_type macro ?

just run this grep command under your build directory,

gcc-4.6-build$ grep -RsInw ggc_alloc_cleared_lang_type gcc/
gcc/gtype-desc.h:2451:#define ggc_alloc_cleared_lang_type() ((struct
lang_type *)(ggc_internal_cleared_alloc_stat (sizeof (struct
lang_type) MEM_STAT_INFO)))


[r...@hp-pavilion build]# grep -RsInw ggc_alloc_cleared_lang_type gcc/
gcc/gtype-desc.h:3450:#define ggc_alloc_cleared_lang_type(SIZE) ((struct 
lang_type *)(ggc_internal_cleared_alloc_stat (SIZE MEM_STAT_INFO)))






Re: -isysroot option ignored for crosscompiler?

2010-10-18 Thread Kai Ruottu

16.10.2010 14:58, Uros Bizjak kirjoitti:


Trying to use --with-build-sysroot configure option, the build failed
with "error: no include path in which to search for ..." error.

This problem come down to the fact, that -isysroot is ignored when
crosscompiling:

Crosscompiler does not handle -isysroot correctly:

$ ~/gcc-build-alpha/gcc/xgcc -B ~/gcc-build-alpha/gcc -isysroot
/home/uros/sys-root -quiet -v in.c

Target: alphaev68-pc-linux-gnu
Configured with: ../gcc-svn/trunk/configure
--target=alphaev68-pc-linux-gnu --enable-languages=c


The '--with-sysroot=' option is missing. This makes a cross GCC
behave like a native GCC...


The difference happens in gcc/cppdefault.c:

#if defined (CROSS_DIRECTORY_STRUCTURE)&&  !defined (TARGET_SYSTEM_ROOT)
# undef LOCAL_INCLUDE_DIR
# undef SYSTEM_INCLUDE_DIR
# undef STANDARD_INCLUDE_DIR
#else
# undef CROSS_INCLUDE_DIR
#endif

Commenting out #undef STANDARD_INCLUDE_DIR fixes the problem.


The CROSS_INCLUDE_DIR ('$tooldir/sys-include') is not used in a
native GCC or in a cross GCC configured using '--with-sysroot='.
Conversely, '/usr/local/include', a possibly defined
SYSTEM_INCLUDE_DIR (the native equivalent of '$tooldir/sys-include'),
and '/usr/include' are not used in a traditional cross GCC at all...
A traditional cross GCC searches the CROSS_INCLUDE_DIR
('$tooldir/sys-include') and then the TOOL_INCLUDE_DIR
('$tooldir/include'), in that order, so it uses two directories for
headers where a native GCC usually uses only one, '/usr/include'
(if forgetting '/usr/local/include').

The real BUG with these is: generally people would expect some
equivalency between a native GCC and a traditional cross GCC (used
with the embedded targets which don't have a native GCC possibility),
here:

   /usr/include  ---> $tooldir/include
   /usr/lib  ---> $tooldir/lib

But during the 15+ years I have built and used cross GCCs, this
very simple equivalency has never worked :(  For some very odd
reason the GCC build has expected the target headers to be in the
CROSS_INCLUDE_DIR, '$tooldir/sys-include', both for fixinc and when
checking for the existence of certain standard target headers.
These, of course, live in the equivalent of the always-existing
STANDARD_INCLUDE_DIR, '/usr/include'.

The SYSTEM_INCLUDE_DIR and its equivalent '$tooldir/sys-include'
are for some unspecified "system specific headers"; for instance
the Win32 API headers in a Windoze system could be these, as they
aren't "standard (posix) headers"...

Ok, my conclusion is that if you use '--with-sysroot=' in the cross
GCC configure, then '--with-build-sysroot=' (using something else
instead of the defined $sysroot) should also work; otherwise it
shouldn't have any influence at all, since the cross GCC is then a
traditional cross compiler which uses the $tooldir for the target
headers and libraries!  And with a traditional cross GCC the
'include' and 'lib' in $tooldir should be searched; the
'sys-include' there shouldn't be used at all!
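
The search order described above (sys-include before include) can be
demonstrated with a throwaway $tooldir; the directory names mirror
the text, the header contents are invented:

```shell
# Sketch: the first directory in the search list that has the header wins.
tooldir=$(mktemp -d)
mkdir -p "$tooldir/sys-include" "$tooldir/include"
echo sys > "$tooldir/sys-include/stdio.h"
echo std > "$tooldir/include/stdio.h"
for d in "$tooldir/sys-include" "$tooldir/include"; do
  [ -f "$d/stdio.h" ] && { winner=$(cat "$d/stdio.h"); break; }
done
echo "picked: $winner"    # sys-include is searched first, so "sys" wins
rm -rf "$tooldir"
```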




Re: libstdc++ gets configure error in cross builds

2010-12-03 Thread Kai Ruottu

Paul Koning  writes:


I'm trying to do a cross-build of gcc 4.5.1.

It's configured --target=mips64el-netbsdelf --enable-languages=c,c++, on an 
i686-pc-linux-gnu host.


Can you try sysroot with full mips64el-netbsdelf C library and header
files?


The NetBSD archive maybe doesn't have them... The stuff for embedded boards:
http://ftp.netbsd.org/pub/NetBSD/NetBSD-5.1/evbmips-mipsel/binary/sets/
has only a 32-bit sysroot, but maybe some MIPS system port has the
64-bit ones.


I have newlib and a full set of headers.


Why do you think these are suitable for NetBSD/mips64el?


Please DO use sysroot.


That doesn't help.


So if you unpack the suitable 'base.tgz' and 'comp.tgz' into
a sysroot and point to them using '--with-sysroot=', the build
won't succeed?

I haven't time to check this just now... BUT what I saw in
'gcc-4.5.1/gcc/config.gcc' was that the template was :

mips*-*-netbsd*)# NetBSD/mips, either endian.
target_cpu_default="MASK_ABICALLS"
tm_file="elfos.h ${tm_file} mips/elf.h netbsd.h netbsd-elf.h 
mips/netbsd.h"

;;

so it doesn't care about your 'mips64' at all :(  The GCC being
produced is one for a 32-bit target, and so the evbmips-mipsel stuff
would be suitable for the default...



Re: mips-elf target

2006-04-06 Thread Kai Ruottu

Niklaus kirjoitti:

Hi,
 Until now i have only build cross toolchains for linux systems.
  
I guess "for totally self-built Linux-from-scratch" systems, not cross 
toolchains for usual Linuces like

RedHats, SuSEs, Fedoras, Ubuntus etc.

  Usually i build crossgcc in 2 parts, one is before glibc is built ,
the other is after glibc is built.
  
For usual Linuces the glibc is already built and included with the
Linux distro, just as are all those X11, Gnome, KDE etc. libraries.
So these are normally NOT built, only copied from the target system
or from its install stuff (RPM packages for glibc etc.). And the GCC
build happens in only one stage after building the target binutils.

 Is there any way where i can skip the step glibc and build the whole
gcc compiler.

  
As told, this is the expected way to build a crosscompiler for Linux - 
to build it in only one stage!



If yes how do i build the whole gcc without glibc. I have binutils for
the target already installed.
I only need c and c++ languages. I doing it for vr4131 (mips-elf) target.
  
The mips-elf target (usually) uses newlib as its target C library. And
there is that '--with-newlib' option, which should enable anyone to
build a GCC (there is no 'native' choice, so it is vain to add 'cross-'
to tell it is not a native GCC) without the complete target C library
being available during the GCC build. The 'libiberty' and
'libstdc++-v3' configure scripts SHOULD NOT use any link tests to work
out the target C library properties; '--with-newlib' tells them what
the C library is and what it has. In this respect the new gcc-4.x
releases seem to have restored the old bug: doing link tests against
the target C library during the libstdc++-v3 configure!


So with gcc-3.4.x and older sources, using '--with-newlib' should
enable you to build the complete GCC with libiberty and libstdc++-v3
while having only the generic newlib headers preinstalled before the
GCC build. The place, for instance 'newlib-1.13.0/newlib/libc/include',
is well known, and copying it into the final '$prefix/$target/include'
(the newlib install puts the final headers, with possible
additions/fixes for the target, there) should be enough. But this
"should" is optimism; that "sys-include bug" mess has been around for
over 10 years without any of the GCC developers ever caring to fix it!
And the libiberty configure stuff has claimed that the functions
'asprintf()', 'strdup()' and 'vasprintf()' are missing from newlib
although they appeared in it years ago!


So one must take care that the target headers can also be seen in
'$prefix/$target/sys-include' during the GCC build!  And take care
that they cannot be seen there AFTER the build if you copied them
there; using a symlink is not dangerous. When using the compiler,
the stuff in 'sys-include' will be searched before 'include'. The
'sys-include' is the equivalent of the SYSTEM_INCLUDE_DIR in a
native GCC, and 'include' is the equivalent of the
STANDARD_INCLUDE_DIR, usually '/usr/include', in a native GCC.
Somehow the GCC developers have mixed apples and oranges and still
think '$prefix/$target/sys-include' is the equivalent of
'/usr/include'.

If you don't require the target C library at all, and don't use C++,
then just don't build newlib at all! It is fully possible that you
don't even need the newlib headers for producing 'libgcc.a'!

But producing it shouldn't hurt...  So your steps would be:

1. Copy the generic newlib headers into '$prefix/mips-elf/include'.
   Then symlink this directory so that it is also seen as
   '$prefix/mips-elf/sys-include'.

2. Configure the GCC sources using '--target=mips-elf --with-newlib
   --enable-languages=c,c++'.

3. Build GCC; the libiberty and libstdc++-v3 subdirs should also
   succeed. The protos in the newlib headers can clash with the
   reimplementations in libiberty. If so, your homework is to search
   for 'newlib' in 'libiberty/configure*' and edit the mentioned
   three functions out of the lists where they are claimed to be
   missing from newlib. Install GCC.

4. Configure and build newlib using the same '--prefix=$prefix' and
   '--target=mips-elf' as you already used with binutils and GCC.
   Then install newlib.

5. Try compiling and linking your "Hello World"... This is the
   hardest step, because only the true Legolas, Tinuviel etc. fans
   believe all elves are real creatures. The 'mips-elf' is not a
   real target like 'mips-linux-gnu' and therefore requires some
   special linker script to describe the 'real target' (a MIPS-based
   board with monitor firmware or something). Newlib comes with
   linker scripts for IDT, PMON-based (a free monitor) etc. MIPS
   boards. Some homework is expected before the "Hello World"
   succeeds...
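
Step 1 above can be sketched with throwaway directories; the paths and
the dummy header are invented, and a real build would use the actual
$prefix and the newlib source tree's 'libc/include':

```shell
# Sketch of step 1: copy the generic newlib headers, then make the same
# directory visible as sys-include via a symlink.
prefix=$(mktemp -d)
newlib_inc=$(mktemp -d)                  # stands in for newlib/libc/include
echo "/* dummy */" > "$newlib_inc/stdio.h"

mkdir -p "$prefix/mips-elf/include"
cp "$newlib_inc"/*.h "$prefix/mips-elf/include/"
ln -s "$prefix/mips-elf/include" "$prefix/mips-elf/sys-include"

# The same header is now visible through both search paths:
test -f "$prefix/mips-elf/include/stdio.h" &&
  test -f "$prefix/mips-elf/sys-include/stdio.h" && both=yes
echo "both visible: $both"
rm -rf "$prefix" "$newlib_inc"
```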



Re: Crossed-Native Builds, Toolchain Relocation and MinGW

2006-04-23 Thread Kai Ruottu

Ranjit Mathew wrote :

  It seems that toolchain relocation, especially for
crossed-native builds, seems to be broken in mainline
while it used to work for earlier releases. The situation
seems particularly bad for Windows (MinGW).
  


In this issue there was something I didn't understand... Let's assume
one does this thing as assumed:

1. One builds a Linux-hosted (or whatever the build system is) GCC for
   the MinGW target. During this step one produces all the extra
   target libraries: libiberty, libstdc++, libgcj... whatever will be
   produced during the GCC build for the MinGW target after getting
   the "GCC" binaries and the "GCC helper library", 'libgcc'... The
   new cross-GCC for the MinGW target produces these libraries.

2. One tries to produce another MinGW-target toolchain for some other
   host, for which one also has a cross-toolchain. The MinGW host
   maybe sounds somehow special when the target too is MinGW, but
   should it really be? In any case all the MinGW target libs would
   be produced again with the step 1 cross-GCC...


Because all the MinGW target libs were already produced during step 1,
it sounds like "reinventing the wheel" to try to reproduce them during
step 2... So one uses 'make all-gcc' and gets only the "GCC" binaries
for the new host. That there would be any problems in reproducing the
extra libraries remains totally unnoticed...

Maybe there really is a difference between a 'native' build of these
extra libraries and a 'cross' build of them... Maybe there is a buggy
"$host == $target" compare and then a "native" build is assumed...
But why should the builder even try this rebuild?


I have quite the same situation: all my MinGW-hosted stuff has been
made on Linux for years. BUT I haven't ever tried to rebuild the stuff
going into '$build/$target' after getting the '$build/gcc' stuff
ready... Why do you think the rebuild is somehow necessary and that
you cannot just copy these extra libraries from your $target-targeted
cross-compiler?

The MinGW target wouldn't be any exception here; if I had to produce a
MinGW-hosted cross-GCC for Linux/PPC on Linux/x86, surely I wouldn't
try to reproduce 'libiberty', 'libstdc++-v3', 'libgcj' etc. for
Linux/PPC if I already had them... That others think "reinventing the
wheel" is something one must always do is the weirdest thing for me,
not that new bugs would appear... I really cannot say whether there
could be any motive to try to fix this; before doing that there should
be some reason why using 'make all-gcc' isn't enough in a "crossed
native" or in a "crossed cross" build...






Re: Crossed-Native Builds, Toolchain Relocation and MinGW

2006-04-24 Thread Kai Ruottu

Ranjit Mathew kirjoitti:

If I understand you correctly, you're saying that the
target runtime libraries are already created by the
cross-compiler in Phase 1, so I don't need to rebuild
them again in Phase 2 along with the crossed-native
compiler - I can get by by just building the compiler.
  

Yes, once made and thoroughly tested, any library shouldn't be
rebuilt. Doing that again is only a test of the compiler producing
the library: by all sanity the result should be identical in size
and bytes with the existing one...

Definitely the cross-GCC for the $target on the $build host is the
expected compiler to produce the target libraries, not the new GCC
being built for the new $host and the $target. In your case it could
be possible to have Wine installed and then try to run the new
MinGW-hosted native GCC on the $build host, but this isn't the
assumption; the $build-X-$target GCC is the one producing the $target
libraries, in your case 'i686-mingw32-gcc' (and all the stuff it uses
as subprocesses, headers and libraries) or something.



I don't know much about the internals of GCC, but what
you're saying should be possible though a bit cumbersome.
Building everything in Phase 2 (compiler and libraries)
just gives a nice bundle that I can then redeploy as I
wish (but this is precisely the thing that seems to be
broken, on MinGW at least).
  


I would go as far as not even producing that special "native GCC", but
building instead a "MinGW-targeted and MinGW-hosted GCC"!  I have never
understood why the Windoze host should cause the MinGW-targeted GCC to
be in any way different, in its install scheme, from a Linux-hosted and
MinGW-targeted GCC... The MinGW-targeted GCC on Windoze really doesn't
need to mimic any proprietary "native 'cc'" which has its headers in
'/usr/include' and its libraries in '/usr/lib' or something... Maybe
some Unix sources could require the X11 stuff being in its "native"
places, but never that the C headers and libraries would be in some
virtual "native" places...

After one has 50 or so MinGW-hosted GCCs for all kinds of targets, that
very weird "native GCC" has so far sounded like an oddball among all
the other GCCs using the "cross GCC" scheme... But this opinion of mine
seems to be opposite to the current trend: if one has 50 or so GCCs,
each one of them should use its own "native" install scheme on the
cross host; the new "--with-sysroot" tries to enable this new bright
idea...

Anyway, if one standardizes the $prefix for all the GCCs one makes,
for instance using the SVR4-like standard '/opt/gcc' as the $prefix,
one could have quite identically installed GCCs on any host... Or the
$prefix could be $host dependent, on the Windoze/MinGW host for
instance '/mingw' being the chosen $prefix for all the MinGW-hosted
GCCs... So when one has only cross-GCCs everywhere and only one
standard $prefix in use everywhere, copying becomes very easy. If one
needs to copy the target libs from '/opt/gcc/lib/gcc/i686-mingw32/3.4.6'
on Linux onto just the same place on Windoze, how could this copying
be in any way cumbersome?

Ok, if the GCC configure command has for instance:

 --build=i686-linux-gnu  --host=i586-mingw32 --target=i686-mingw32

then the resulting GCC is a cross compiler from 'i586-mingw32' to
'i686-mingw32', because the $host is different from the $target...
And if the $prefix used is the same as on the $build host
('i686-linux-gnu' here), only the GCC binaries (the '.exe's for
Windoze) would differ between the two MinGW-targeted GCCs on the
Linux and Windoze hosts...

Generally it could be very informative to rebuild those libraries on
more than one $build system using different $build-to-MinGW GCCs (their
versions and the sources used to produce them being identical) and see
that the results really are identical when identical GCC options are used
in the compile.  So I really am not against all "reinventions", only
thinking that using just the same GCC once again for the same task isn't
that "informative".  But if you really use the new MinGW-hosted GCC for
the rebuild on Linux using Wine, that could give some new information
about the quality of the new compiler...




Re: a problem when building gcc3.2.1

2006-06-12 Thread Kai Ruottu

Eric Fisher wrote:

/src/gcc-3.2.1/configure --target=mipsel-elf \
 --prefix=/gnutools --enable-languages=c,c++ \
 --with-gnu-as --with-gnu-ld --with-newlib \
--without-headers --disable-shared --disable-threads

Build and install GCC

make

Wrong command, use 'make all-gcc ; make install-gcc', then
configure, build and install newlib, then continue with 'make'!
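The suggested sequence, spelled out as commands (the newlib version and
paths are illustrative, untested — they follow the configure options
quoted above):

```shell
# Stage 1: in the GCC build dir, build and install only the compiler proper
make all-gcc
make install-gcc

# Stage 2: in a separate build dir, configure, build and install newlib
# with the freshly installed cross compiler on the PATH
../newlib-1.x/configure --target=mipsel-elf --prefix=/gnutools
make && make install

# Stage 3: back in the GCC build dir, finish the build now that the
# target headers and C library exist
make
make install
```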

then it shows the error:

/src/gcc-3.2.1/libiberty/regex.c: At top level:
/src/gcc-3.2.1/libiberty/regex.c:7959: error: storage size of
`re_comp_buf' isn't known
make[1]: *** [regex.o] Error 1
make[1]: Leaving directory `/tmp/build/gcc/mipsel-elf/libiberty'


Can you guess whether producing newlib needs the newlib headers?
Producing 'libiberty' and 'libstdc++' to be in sync with the newlib?
Your choice is either having the newlib headers available, or trying to
avoid them via '--without-headers', but then you will get "only GCC" :-(



Re: gcc-4.1.0 cross-compile for MIPS

2006-06-19 Thread Kai Ruottu

David Daney kirjoitti:

kernel coder wrote:

hi,
   I'm trying to cross compile gcc-4.1.0 for mipsel
platform.Following is the sequence of commands which i'm using

../gcc-4.1.0/configure --target=mipsel --without-headres
--prefix=/home/shahzad/install/ --with-newlib --enable-languages=c



Perhaps you should try to disable libssp.  Try adding (untested) 
--disable-libmudflap  --disable-libssp
I tried the 'mipsel-elf' target (to which the bare 'mipsel' leads) with
gcc-4.1.1 and using '--with-newlib --enable-languages=c,c++
--disable-shared'.  The last (maybe) required because earlier builds with
other '-elf' targets stopped when trying to check the 'libgcc_s.so'
existence...  But no '--without-headers' was used; instead the generic
newlib headers were copied into the $tooldir ($prefix/$target).

After that everything succeeded: 'gcc' and 'libiberty', 'libstdc++-v3'
and 'libssp' for the target.  So disabling the libssp is in vain.  There
was no libmudflap build...


So, if forgetting that '--disable-shared', the build worked just as it
did with the earlier GCC versions!  And 'kernel coder' using :

../gcc-4.1.0/configure --target=mipsel --prefix=/home/shahzad/install  \
--with-newlib --enable-languages=c,c++

should have worked after copying those newlib headers, so that they are
ready for the fixinc, limits.h check etc. the GCC build tries to do with
them.
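As a sketch of that header-copying step (the newlib source path is
illustrative, untested — the destination is the $tooldir,
$prefix/$target, mentioned above):

```shell
# Copy the generic newlib headers into $tooldir/include so that the
# fixinc and limits.h checks in the GCC build can find them:
mkdir -p /home/shahzad/install/mipsel-elf/include
cp -r newlib-src/newlib/libc/include/. \
      /home/shahzad/install/mipsel-elf/include/
```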




Re: Upgrading a cross compiler from GCC 3.4.6 to 4.1.1

2006-09-22 Thread Kai Ruottu

Rohit Arul Raj wrote:

I am upgrading a cross-compiler from gcc3.4.6 to gcc 4.1.1. i am
getting some errors while trying to build the compiler.

The way in which i am building the compiler is :
$configure --target=  --prefix=/usr/crossgcc/ --with-newlib 
--disable-libssp

"--target="    "Let's the HAL read one's thoughts or what?


fp-bit.c: In function '__extendsfdf2':
fp-bit.c:1513: internal compiler error: in emit_push_insn, at expr.c:3722


It is interesting that the GCC build went this far without knowing what on
earth the $target should be

is there a way out of this?
is the internal compiler error generated due to gcc_assert?


This "building a crosscompiler without defining the $target" must be some
"rocket science" which I haven't yet seen after producing more than 1000
crosscompilers...  I apologize my ignorance but could someone explain what
on earth this kind of configuration should produce?  A "crosscompiler 
for the

native target" or what?



Re: How do I build C++ for xscale-elf?

2006-09-27 Thread Kai Ruottu

Jack Twilley wrote:

When I try to build C++ for xscale-elf, I get this as the last message:

configure: WARNING: No native atomic operations are provided for this platform.
configure: WARNING: They cannot be faked when thread support is disabled.
configure: WARNING: Thread-safety of certain classes is not guaranteed.
configure: error: No support for this host/target combination.

What version of gcc should I be trying to build?

Anything with the 'xscale-elf' support...


Here's the configure line I use:

../configure --with-included-gettext --target=xscale-elf 
--enable-languages=c,c++


The '--with-newlib' is obligatory when using 'newlib' as the target C
library!  If you have your own custom/proprietary C library for
'xscale-elf', then that should be preinstalled before starting the
configure & build!  The '--with-newlib' should remove all the link tests
against the existing target C library from the extra 'lib*' configures,
'libstdc++-v3' being one.  Your error seems to come from the
libstdc++-v3 configure...
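Putting the two hints from this reply together, the configure line might
become something like (an untested sketch combining the poster's original
options with the ones suggested here):

```shell
../configure --target=xscale-elf --with-included-gettext \
    --with-newlib --disable-shared --enable-languages=c,c++
```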

I am trying to build gcc on a FreeBSD 6.1-STABLE system.  If there's 
more information I can give you, please ask.


What was the GCC version tried?  The new gcc-4.1.1 seems to require the
'--disable-shared', for instance with ARM; otherwise it tries to link
against the "created" 'libgcc_s.so.1' despite the '--with-newlib' being
used.  A stupid bug and a stupid workaround (neither 'newlib' nor the
target, 'xscale-elf', supports shared libraries).  With gcc-4.1.1 the
'--disable-shared' is therefore obligatory...



Is 'mfcr' a legal opcode for RS6000 RIOS1?

2005-02-24 Thread Kai Ruottu
In the crossgcc list was a problem with gcc-3.4 generating the opcode
'mfcr' with '-mcpu=power' for the second created multilib, when the
GCC target is 'rs6000-ibm-aix4.3'. The other multilibs produced as
default are for '-pthread', '-mcpu=powerpc' and '-maix64'... The AIX
users could judge if all these are normally required, but when the
builder also used the '--without-threads', the first sounds vain or
even like it clashes with something. Building no multilibs at all using
'--disable-multilib' is of course possible...
But what is the case with the 'mfcr' and POWER ?  Bug in GNU as (the
Linux binutils-2.15.94.0.2.2 was tried) or in GCC (both gcc-3.3.5 and
gcc-3.4.3 were tried) ?


Re: About ARM-cross-compile

2012-04-11 Thread Kai Ruottu

30.3.2012 19:03, Mao Ito kirjoitti:


I got stuck on a problem.
Actually, I could install the "arm-eabi" cross-compiler for c, c++.
The problem is about "arm-eabi-gcj" (i.e. for Java). The "arm-elf"
version cross-compiler was successfully installed for c, c++, Java.
But, after that, I realized that my simulator does not accept OABI
binary code (i.e. binary code the "arm-elf" compiler generates).
So, I need to install the "arm-eabi" version cross-compiler because
the simulator can accept EABI binary code.

So, recently, I was struggling to install "arm-eabi-gcj" into my
laptop. The problem is about the "libgcj.jar" file.

Usually people don't build java for their compiler collection, only C
and C++, when the target is an embedded system like '*-elf' or your
chosen 'arm-eabi'. So these cases may be fully or somehow unsupported
:-(

Usually the problem isn't in getting '$target-gcj' or 'jc1' at all!
So your first claim, 'problem is about "arm-eabi-gcj"', sounds
untrue, this and the real Java compiler 'jc1' should succeed easily.
But using them to create 'libjava' ('--enable-libgcj') may be a big
problem and your second claim is more believable...

If looking at your "libjava arm-eabi" case via net search, the:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=32340

will appear. For curiosity I looked how well the recent gcc-4.6.3
works with the arm-eabi target. Some of the suggested fixes must
still be installed and producing 'libffi' enabled (compiling libjava
needs 'ffi.h', the only way was to fix 'configure' and remove
'target-libffi' from 'nonconfigdirs' in the 'arm-eabi' case). But
after these fixes the build still stopped at :

libtool: compile:  /home/src/gcc-4.6.3/build/./gcc/xgcc -shared-libgcc 
-B/home/src/gcc-4.6.3/build/./gcc -nostdinc++ 
-L/home/src/gcc-4.6.3/build/arm-eabi/libstdc++-v3/src 
-L/home/src/gcc-4.6.3/build/arm-eabi/libstdc++-v3/src/.libs -nostdinc 
-B/home/src/gcc-4.6.3/build/arm-eabi/newlib/ -isystem 
/home/src/gcc-4.6.3/build/arm-eabi/newlib/targ-include -isystem 
/home/src/gcc-4.6.3/newlib/libc/include 
-B/home/src/gcc-4.6.3/build/arm-eabi/libgloss/arm 
-L/home/src/gcc-4.6.3/build/arm-eabi/libgloss/libnosys 
-L/home/src/gcc-4.6.3/libgloss/arm -B/opt/cross/arm-eabi/bin/ 
-B/opt/cross/arm-eabi/lib/ -isystem /opt/cross/arm-eabi/include -isystem 
/opt/cross/arm-eabi/sys-include -DHAVE_CONFIG_H -I. -I../../../libjava 
-I./include -I./gcj -I../../../libjava -Iinclude 
-I../../../libjava/include -I../../../libjava/classpath/include 
-Iclasspath/include -I../../../libjava/classpath/native/fdlibm 
-I../../../libjava/../boehm-gc/include -I../boehm-gc/include 
-I../../../libjava/.././libjava/../gcc -I../../../libjava/../zlib 
-fno-rtti -fnon-call-exceptions -fdollars-in-identifiers -Wswitch-enum 
-D_FILE_OFFSET_BITS=64 -Wextra -Wall -D_GNU_SOURCE 
-DPREFIX=\"/opt/cross\" 
-DTOOLEXECLIBDIR=\"/opt/cross/lib/gcc/arm-eabi/4.6.3\" 
-DJAVA_HOME=\"/opt/cross\" 
-DBOOT_CLASS_PATH=\"/opt/cross/share/java/libgcj-4.6.3.jar\" 
-DJAVA_EXT_DIRS=\"/opt/cross/share/java/ext\" 
-DGCJ_ENDORSED_DIRS=\"/opt/cross/share/java/gcj-endorsed\" 
-DGCJ_VERSIONED_LIBDIR=\"/opt/cross/lib/gcj-4.6.3-12\" 
-DPATH_SEPARATOR=\":\" -DECJ_JAR_FILE=\"\" 
-DLIBGCJ_DEFAULT_DATABASE=\"/opt/cross/lib/gcj-4.6.3-12/classmap.db\" 
-DLIBGCJ_DEFAULT_DATABASE_PATH_TAIL=\"gcj-4.6.3-12/classmap.db\" -g -Os 
-MT java/net/natVMInetAddress.lo -MD -MP -MF 
java/net/.deps/natVMInetAddress.Tpo -c java/net/natVMInetAddress.cc -o 
java/net/natVMInetAddress.o

java/net/natVMInetAddress.cc:12:1: error: 'jstring' does not name a type
java/net/natVMInetAddress.cc:18:1: error: 'jbyteArray' does not name a type
java/net/natVMInetAddress.cc:24:1: error: 'jstring' does not name a type
java/net/natVMInetAddress.cc:30:1: error: 'JArray' does not name a type
java/net/natVMInetAddress.cc:36:1: error: 'jbyteArray' does not name a type
make[3]: *** [java/net/natVMInetAddress.lo] Error 1
make[3]: Leaving directory
"/home/src/gcc-4.6.3/build/arm-eabi/libjava"


My interest ended here :-(  Continuing from this could be your
homework...

With gcc-4.7.0 the situation could be even worse :

/bin/bash ../../libtool --tag=CC   --mode=compile 
/home/src/gcc-4.7.0/build/./gcc/xgcc -B/home/src/gcc-4.7.0/build/./gcc/ 
-nostdinc -B/home/src/gcc-4.7.0/build/arm-eabi/newlib/ -isystem 
/home/src/gcc-4.7.0/build/arm-eabi/newlib/targ-include -isystem 
/home/src/gcc-4.7.0/newlib/libc/include 
-B/home/src/gcc-4.7.0/build/arm-eabi/libgloss/arm 
-L/home/src/gcc-4.7.0/build/arm-eabi/libgloss/libnosys 
-L/home/src/gcc-4.7.0/libgloss/arm -B/opt/cross/arm-eabi/bin/ 
-B/opt/cross/arm-eabi/lib/ -isystem /opt/cross/arm-eabi/include -isystem 
/opt/cross/arm-eabi/sys-include-DHAVE_CONFIG_H -I. 
-I../../../../../../libjava/classpath/native/fdlibm -I../../include 
-fexceptions -fasynchronous-unwind-tables -g -Os -MT dtoa.lo -MD -MP -MF 
.deps/dtoa.Tpo -c -o dtoa.lo 
../../../../../../libjava/classpath/native/fdlibm/dtoa.c

libtool: compile: not configured to build any kind of library

Re: GCC 3.3.3 not on GNU servers...

2008-06-23 Thread Kai Ruottu

Paul Koning wrote:


I was looking for GCC 3.3.3 just now, and noticed that it doesn't
exist on the generic GNU FTP server or its mirrors
(ftp://ftp.gnu.org/gnu/gcc for example).  3.3.2 and 3.3.4 are there,
as well as lots of other releases, but 3.3.3 is mysteriously missing.


I looked too and was curious about the 'releases' subdirectory. There
was that 'gcc-3.3.3' fully alone...


It does exist on the specific GCC mirrors, fortunately, but this is a
puzzling glitch.


So it was in 'ftp.gnu.org/gnu/gcc/releases' but not where the other
releases were. Maybe someone was too "creative" and created this subdir
for the releases but forgot to move the others there...


Re: Why is building a cross compiler "out-of-the-box" always broken?

2007-08-20 Thread Kai Ruottu

Segher Boessenkool wrote:

The manual explicitly says you need to have target headers.  With
all those --disable-XXX and especially --enable-sjlj-exceptions it
all works fine without, though.
If one tries to produce everything in the 'gcc' subdirectory, including
'libgcc', then headers may be needed.  But if producing only the 'xgcc',
'cc1', 'collect2' etc. binaries for the host is enough, then nothing for
the target will be required.  Everything is totally dependent on how that
"cross compiler" will be defined...


Years ago in Berkeley university there was a system called "Nachos" which
required a "crosscompiler for DECstation/MIPS" but without any DEC's
target headers  or libraries and even without  any 'libgcc'.

So if one needs a "Nachos-like" crosscompiler for some target on one's host,
that could always succeed!  Only problems totally related to the $host could
be seen

Meanwhile a cross GCC for AIX, HP-UX, and even for the quite common
Mac OS X, for the commercial "Enterprise Linux" distros and for many
other systems could be either impossible or very hard if no free access
to the target stuff (copyrighted prebuilt libraries) is provided.  The
same goes when one thinks of a cross GCC as being something similar to
the native GCC, with all the base C libraries and all the extra termcap,
curses, X11 etc. libraries...  Or when there is not much or not enough
support in the GNU binutils for the target.  For instance a cross GCC
for the MS's "Interix" aka "Services for Unix" would fail, with the GNU
ld not working for this target :-(  So if one defines a "cross compiler"
to include the "as", "ld" etc. binutils and all the target libraries,
then there can be very serious problems; neither the GNU binutils nor
the GNU libc support all the targets GCC is seemingly "supporting"...
Someone recently mentioned that the GNU binutils don't work for AIX, and
quite surely there isn't any free access to the AIX C libraries...

Quite many crosscompiler builders don't have any clue about what GCC is
and what components belong to it...  Some will always mix producing a
cross GCC with producing a self-made Linux distro, where that GCC is
only a tool used to produce the final product, Linux :-(  These things
just should be much clearer in people's minds...





Re: Building Single Tree for a Specific Set of CFLAGS

2024-03-27 Thread Kai Ruottu via Gcc

Christophe Lyon via Gcc kirjoitti 27.3.2024 klo 10.52:

On 3/26/24 22:52, Joel Sherrill via Gcc wrote:

I am trying --disable-multlibs on the gcc configure and adding
CFLAGS_FOR_TARGET to make.

I would configure GCC with --disable-multilibs --with-cpu=XXX 
--with-mode=XXX --with-float=XXX [maybe --with-fpu=XXX]

This way GCC defaults to what you want.


The "multilibs" is a typo, actually the option is :

-- clip ---

|--disable-multilib|

   Specify that multiple target libraries to support different target
   variants, calling conventions, etc. should not be built. The default
   is to build a predefined set of them.
   Some targets provide finer-grained control over which multilibs are
   built (e.g., --disable-softfloat):

-- clip ---
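For example, a configure invocation along the lines suggested in the
quoted advice might look like this (the target and the --with-* values
are purely illustrative, untested):

```shell
../gcc/configure --target=arm-eabi --disable-multilib \
    --with-cpu=cortex-m4 --with-mode=thumb --with-float=hard \
    --with-fpu=fpv4-sp-d16 --enable-languages=c,c++
```

This way the single library set gets built with the intended CPU, mode
and float defaults, instead of relying on CFLAGS_FOR_TARGET.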


Re: The riscv compilation chain for your own operating system cannot recognize march.

2024-09-28 Thread Kai Ruottu via Gcc

Troy via Gcc kirjoitti 29.9.2024 klo 6.15:

I've created a Unix-like system, and although it's not very complete
yet, I want to make a cross-compilation chain for it so that I can use
some open source c libraries.

More important would be to see the -v output when you ran the compiler
and got the error message from the compiler.  Most likely it's not
finding a risc-v assembler.


Configured with: ../gcc/configure --target=riscv64-caffeinix
--prefix=/home/troy/repo/riscv-toolchain/riscv-gnu-toolchain/bin-gcc/



  as -v --traditional-format -march=rv64imafdc_zicsr
-march=rv64imafdc_zicsr -mabi=lp64d -misa-spec=20191213 -o /tmp/ccIwNEVC.o
/tmp/ccH3vrdP.s
GNU assembler version 2.38 (x86_64-linux-gnu) using BFD version (GNU
Binutils for Ubuntu) 2.38
Assembler messages:
Fatal error: invalid -march= option: `rv64imafdc_zicsr'
Using just the same $prefix value in both the GNU binutils and GCC
configures is very basic know-how; one shouldn't use different values
for them unless doing something extra to help the '$target-gcc' find the
right binutils.  The default place for the $target binutils is :

$prefix/$target/bin

with their "bare" names like 'as', 'ld', 'ar', 'nm' etc.  Meanwhile the
same tools, aimed at the GCC user, are in '$prefix/bin' with the names
'$target-as', '$target-ld' etc.  As you can see, the bare name 'as' was
tried for the risc-v target assembler, and when the right one wasn't
found, the build system's own 'as' was used.
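The layout described above can be spelled out with the poster's own
$prefix and $target values (the prefix path is taken from the quoted
configure line):

```shell
# Where GCC looks for the target binutils under a common $prefix:
prefix=/opt/cross            # illustrative; use the binutils/GCC prefix
target=riscv64-caffeinix

# Bare-named tools the compiler driver itself invokes:
echo "$prefix/$target/bin/as"       # -> /opt/cross/riscv64-caffeinix/bin/as

# Prefixed names meant for the user:
echo "$prefix/bin/$target-as"       # -> /opt/cross/bin/riscv64-caffeinix-as
```

If 'as' is missing from the first location, the driver falls back to
whatever 'as' is on the PATH — here the host's own x86_64 assembler,
which then rejects the risc-v -march= option.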