Build Failure for gcc-4.3-20071109

2007-11-15 Thread Tom Browder
I have been unable to build recent gcc versions on my i386 (AMD 64x2)
running Fedora 7 although I have no problems building them on other,
similar hosts running F7 and older Fedora releases and on both Intel
and AMD machines.

I have suspected my environment because I have noticed, for the first
time in years of problem-free building, that librt is required, and I
have another librt ahead of the "normal" one (though that situation also
exists on other hosts, where it causes no problems).  I have therefore set
LD_LIBRARY_PATH to /usr/local/lib:/usr/lib so the other librt is not
even seen.

Attached is a log of my build attempt (and the config.log).

It looks to me as if libiberty is getting compiled and tested OK, but,
for some reason, make reports an error which then prevents the build
from completing.

Can anyone tell what is happening or suggest the next step?

Thanks.

-Tom

Tom Browder
Niceville, Florida
USA


compile_gcc-4.3-20071109.log.bz2
Description: BZip2 compressed data


config.log.bz2
Description: BZip2 compressed data


Re: internal compiler error when build toolchains using gcc 4.1.2

2007-11-15 Thread Clemens Koller

马骅 wrote:

I thought it may be a bug for gcc 4.1.2.


Please don't top-post.


On Nov 15, 2007 11:11 AM, Tim Prince <[EMAIL PROTECTED]> wrote:

马骅 wrote:

hi,
 I am trying to build toolchains using buildroot, but when compiling
busybox, an internal compiler error shows up.


If you have questions about the advice gcc gave you, gcc-help mail list
is the place.



马骅, have a look at the output:

Please submit a full bug report,
with preprocessed source if appropriate.
See <http://gcc.gnu.org/bugs.html> for instructions.

If you run into problems with that link, feel free to ask on the
gcc-help list for details on how to submit a bug report.

Regards,

Clemens Koller
__
R&D Imaging Devices
Anagramm GmbH
Rupert-Mayer-Straße 45/1
Linhof Werksgelände
D-81379 München
Tel.089-741518-50
Fax 089-741518-19
http://www.anagramm-technology.com



Re: [RFC][modulo-sched] Fix scheduling order within a cycle

2007-11-15 Thread Ayal Zaks
Revital1 Eres/Haifa/IBM wrote on 14/11/2007 18:46:14:

>
> >
> > When scheduling insn 58, we calculate a window of possible cycles
> > according to already scheduled predecessors and successors. This window
> > looks like a parallelogram in general rather than a rectangle: in the
> > first cycle there may be predecessors (already scheduled in the first
> > cycle, or a multiple of II cycles away) which must_precede insn 58
> > (having tight dependence with insn 58 if it is placed in the first
> > cycle). So insn 58 can be placed in 'rightmost' slots of the first
> > cycle only. Similarly, in the last cycle, insn 58 might be placed in
> > 'leftmost' slots only, due to successors which must_follow insn 58.
> > Inside internal cycles (strictly between the first and last cycles),
> > insn 58 can be placed in any vacant slot.
> >
> > Now if (as in the above case) an already scheduled insn 61 is both a
> > successor and a predecessor of insn 58, it may be that (not in the
> > above case) insn 61 must_precede insn 58 (when looking for available
> > slots for insn 58 in the first cycle) and must_follow insn 58 (when
> > looking for available slots in the last cycle).
> >
> > Currently we apply the must_precede and must_follow restrictions to
> > all cycles of the window. This is overly conservative (i.e., it should
> > not produce the above wrong code!). One way to improve this is to
> > split the window into the first cycle (with must_precede), the
> > internal part (with neither must_precede nor must_follow), and the
> > last cycle (with must_follow). And, of course, if first cycle == last
> > cycle, apply both must_precede and must_follow for it.
> >
> >
> > Finally, note that in the above case we traverse the window backwards
> > with step -1, so 'start' is the last cycle 4, and 'end' is one past
> > the first cycle 0 (i.e. -1).
>
> Thanks for the explanation!  Is it reasonable to conclude that for the
> must_follow/must_precede calculation we should not rely on start/end
> rows (as they are influenced by the direction of the window), but
> use a win_boundary_close_to_preds row and a win_boundary_close_to_succs
> row, calculated from the start and end rows depending on the direction
> of the window (if step is 1 then win_boundary_close_to_preds
> = start; if step is -1 then win_boundary_close_to_preds = end, etc.)?
> win_boundary_close_to_preds will be used only for the must_precede
> calculation and the win_boundary_close_to_succs row only for must_follow,
> as you described above.
>

One way to do this is inside the

for (c = start; c != end; c += step)
  {
verify_partial_schedule (ps, sched_nodes);

psi = ps_add_node_check_conflicts (ps, u_node, c,
   must_precede,
   must_follow);
...

loop, set:

tmp_precede = zero bitmap
tmp_follow = zero bitmap
if (c == start)
  if (step == 1)
    tmp_precede = must_precede;
  else /* step == -1.  */
    tmp_follow = must_follow;
if (c == end - step)
  if (step == 1)
    tmp_follow = must_follow;
  else /* step == -1.  */
    tmp_precede = must_precede;

and then call
psi = ps_add_node_check_conflicts (ps, u_node, c,
   tmp_precede,
   tmp_follow);


Ayal.

> Thanks,
> Revital



Re: undocumented optimization options

2007-11-15 Thread Razya Ladelsky
Ian Lance Taylor <[EMAIL PROTECTED]> wrote on 13/11/2007 20:11:35:

> Razya Ladelsky <[EMAIL PROTECTED]> writes:
> 
> > This patch adds documentation for fipa-cp and -fipa-matrix-reorg.
> > 
> > 2007-11-12  Razya Ladelsky <[EMAIL PROTECTED]>
> > 
> > * doc/invoke.texi (fipa-cp, fipa-matrix-reorg): Add documentation.
> > 
> > Ok to commit?
> 
> This is OK.
> 

Should I add more detailed documentation for
these optimizations, though?

Thanks,
Razya

> Thanks.
> 
> Ian



Handling overloaded template functions with variadic parameters

2007-11-15 Thread Rob Quill
Hi,

I am trying to fix this bug:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=33962

The problem seems to be that more_specialized_fn() doesn't know how to
decide which of two variadic functions is more specialised.

I have narrowed the problem down to more_specialized_fn() being told
there are two parameters to look at, but the tree chain for the
arguments only having one. I am not sure if this is because:

a) the number of parameters should not include the variadic (...) parameter

or

b) the tree chains should include something to show the variadic parameter

Any advice you can offer is greatly appreciated.

Rob Quill


Re: Using crlibm as the default math library in GCC sources

2007-11-15 Thread Christoph Quirin Lauter

Hello,

It's time for the CRLibm developers to step into this discussion.

We confirm that CRLibm is as fast as other portable libraries, or
faster, and that it keeps improving (some benchmarks below). When we are
slower, it is because we wanted cleaner code or smaller tables, or we
tuned the code for a given processor and it later turned out to be a bad
choice on others. In any case, we can still copy the faster code. Not to
say that we are the best overall, but we are concerned with performance.


We confirm that CRLibm can be turned into a 0.503 ulp (more or less)
library at the cost of a few #ifs. We might even add a flag for that in
the next release. Still, it is proven 0.503 ulp; the proof won't go away.
On x86 you will gain 5-20 cycles on average (depending on the function),
and much more in worst-case time.


Our opinion is that reproducibility (through correct rounding and, in
general, C99 and standards compliance) should be the default for a
system, AND that flags should be available to disable it if wanted.


Now we would like to know what GCC people mean by "having a libm with
GCC". Is it:
 - a one-size-fits-all libm written in C? With GCC-controlled #ifs?
   With builtins, etc.?
 - a library written in GIMPLE?
 - a library generator within GCC?
 - etc.

What we are interested in writing at the moment is a library generator 
(let us call it metalibm) that can output libms for mainstream 
precisions and optimize for various objectives (correct rounding or not, 
speed, memory footprint, register usage, parallelism, whatever).


We could right now write a simple metalibm that mostly removes the
repetitive work from CRLibm development. This would be mostly printfs
and ifs, and it would be simple enough that one may imagine it ending
up in the GCC codebase.


Then, since we have fairly mature expertise with automatic generation
of optimized polynomial approximations, we could enhance it so that it
would also include such optimizations, and target single,
double-extended and quad. Range reduction exploration is also at hand,
but more target-specific optimisations are not. But then, this metalibm
begins to depend on so many libraries and has such unpredictable runtime
(it should even have an ATLAS-like profiling mode) that nobody would
want it in the GCC code base. And it is a lot more development, of
course. And it is more interesting.


So what should we start with ?

We also confirm that many functions are missing from CRLibm. This is
mostly a matter of workforce. We'd like to add some of the missing ones
as a demonstration of a metalibm. Some are more difficult than others,
but the only really difficult one is gamma.

We may start with asinh and acosh; is that OK?

Christoph and Florent

Here is a sample of performance results obtained on an Opteron
( Linux 2.6.17-2-amd64
  gcc (GCC) 4.1.2 20060901 (prerelease) (Debian 4.1.1-13) )

             log   avg time   max time
    default libm      210     150919   Rem: 12KB of tables
          CRLibm      146        903   Rem: using double-extended arithmetic
                                            (this was an experiment)
          CRLibm      266       1459   Rem: this one using SSE2 doubles

             exp   avg time   max time
    default libm      141    1105468   Rem: 13KB of tables
          CRLibm      184       1210

             sin   avg time   max time
    default libm      147    1018622
          CRLibm      171       3895

             cos   avg time   max time
    default libm      152028302
          CRLibm      171       3752

             tan   avg time   max time
    default libm      244    1101230
          CRLibm      280       9949

            asin   avg time   max time
    default libm      107    1018823   Rem: 20KB of tables shared with acos
          CRLibm      316       2296

            acos   avg time   max time
    default libm       83    1018786
          CRLibm      264       3660

            atan   avg time   max time
    default libm      144     100066   Rem: 43KB of tables
          CRLibm      243       4724

           log10   avg time   max time
    default libm      249       1061   NOT correctly rounded
          CRLibm      321       1706

            log2   avg time   max time
    default libm      174        274   NOT correctly rounded
          CRLibm      320       1663



(while building this table I just noticed a bug at line 27 of
sysdeps/ieee754/dbl-64/sincos.tbl: 40 should be 440)





The same, on an IBM power5 server (time units are arbitrary)


             log   avg time   max time
    default libm       61      23471
          CRLibm       57        307   Rem: using double-precision arithmetic

             exp   avg time   max time
    default libm       41      25019
          CRLibm       42        242

             sin   avg time   max time
    default libm       37     132435
          CRLibm       43       1910

             cos   avg time   max time
    default libm       38     134045
          CRLibm       44       1946

             tan   avg time   max time
    default libm       65     141885
          CRLibm       72       4671

            asin   avg time   max time
    default libm       29     132912
          CRLibm       62        465

            acos   avg time   max time
    default libm       24     132798
          CRLibm       57        765

 

Re: How to let GCC produce flat assembly

2007-11-15 Thread Joe Buck
On Thu, Nov 15, 2007 at 06:41:17AM -0800, Li Wang wrote:
> I wonder how to let GCC produce flat assembly, say, just like a .com
> file under DOS, without function calls and complicated executable
> file headers, only instructions. How can I modify the machine description
> file to achieve that? Thanks in advance.

Questions about the use of gcc should go to gcc-help, not this list.



Re: Progress on GCC plugins ?

2007-11-15 Thread Diego Novillo

Joe Buck wrote:

On Wed, Nov 07, 2007 at 09:20:21AM +0100, Emmanuel Fleury wrote:

Is there any progress in the gcc-plugin project ?


Non-technical holdups.  RMS is worried that this will make it too easy
to integrate proprietary code directly with GCC.


I don't believe this is a strong argument.  My contention is, and has 
always been, that GCC is _already_ trivial to integrate into a 
proprietary compiler.  There is at least one compiler I know that does this.


In fact, much, if not most, of the effort we have been spending on
making GCC easier to maintain necessarily translates into a system that
is easier to integrate into other code bases.


IMO, the benefits we gain in making GCC a more attractive code base, far 
outweigh the fears of someone co-opting it for their own proprietary uses.


We gain nothing by holding back infrastructure advances in GCC.  While
GCC still has the advantage of being widely used, its internal
infrastructure is still relatively arcane and hard to deal with.  We
have already kicked it into the mid 90s, but we still have a lot of
ground to cover.  An antiquated and arcane infrastructure will only help
turn new developers away.



Diego.


Re: Progress on GCC plugins ?

2007-11-15 Thread Richard Kenner
> I don't believe this is a strong argument.  My contention is, and has 
> always been, that GCC is _already_ trivial to integrate into a 
> proprietary compiler.  There is at least one compiler I know that does this.

I believe that any such compiler would violate the GPL.  But I also believe
it's not in the best interest of the FSF to litigate that matter if the
linkage between the compilers is anything other than being linked into a
single executable.  Therefore, I think it's important for us to make it as
technically hard as possible for people to do such a linkage by reading and
writing trees or communicating as different libraries or DLLs.  I'm very
much against any sort of "plug in" precisely for this reason.

> IMO, the benefits we gain in making GCC a more attractive code base, far 
> outweigh the fears of someone co-opting it for their own proprietary uses.

That depends on the importance you attach to the philosophy of free
software.  I suspect RMS attaches much more importance to that than anybody
on this list.


Re: Progress on GCC plugins ?

2007-11-15 Thread Joe Buck
On Thu, Nov 15, 2007 at 02:34:38PM -0500, Diego Novillo wrote:
> Joe Buck wrote:
> >On Wed, Nov 07, 2007 at 09:20:21AM +0100, Emmanuel Fleury wrote:
> >>Is there any progress in the gcc-plugin project ?
> >
> >Non-technical holdups.  RMS is worried that this will make it too easy
> >to integrate proprietary code directly with GCC.
> 
> I don't believe this is a strong argument.  My contention is, and has 
> always been, that GCC is _already_ trivial to integrate into a 
> proprietary compiler.  There is at least one compiler I know that does this.

I agree, but we still have the roadblock.

Maybe we need a group of volunteers to meet with RMS in person and work on
convincing him.  E-mail seems way too inefficient and frustrating a
mechanism.

RMS regularly points to the examples of C++ and Objective-C as an argument
for trying to force all extensions to be GPL (Mike Tiemann's employer
tried to figure out a way of making g++ proprietary; NeXT tried a
"user does the link" hack to get around the GPL for their original
Objective-C compiler).

The problem is, he hasn't really kept up; the trouble with being as pure
as he is is that you can become isolated from what's going on.


bootstrap failure with rev 130208

2007-11-15 Thread Thomas Koenig
This is on i686-pc-linux-gnu:

$ ../../gcc/trunk/configure --prefix=$HOME --enable-languages=c,fortran
--enable-maintainer-mode && make bootstrap

...

build/genmodes -h > tmp-modes.h
/bin/sh: build/genmodes: No such file or directory
make[3]: *** [s-modes-h] Error 127
make[3]: Leaving directory `/home/ig25/gcc-bin/trunk/gcc'
make[2]: *** [all-stage1-gcc] Error 2
make[2]: Leaving directory `/home/ig25/gcc-bin/trunk'
make[1]: *** [stage1-bubble] Error 2
make[1]: Leaving directory `/home/ig25/gcc-bin/trunk'
make: *** [bootstrap] Error 2

Does this ring any bells?

Thomas



Re: Progress on GCC plugins ?

2007-11-15 Thread Ian Lance Taylor
[EMAIL PROTECTED] (Richard Kenner) writes:

> > I don't believe this is a strong argument.  My contention is, and has 
> > always been, that GCC is _already_ trivial to integrate into a 
> > proprietary compiler.  There is at least one compiler I know that does this.
> 
> I believe that any such compiler would violate the GPL.  But I also believe
> it's not in the best interest of the FSF to litigate that matter if the
> linkage between the compiler is anything other than linked in a single
> executable.  Therefore, I think it's important for us to make it as
> technically hard as possible for people to do such a linkage by reading and
> writing trees or communicating as different libraries or DLLs.  I'm very
> much against any sort of "plug in" precisely for this reason.

We can make it as technically hard as possible, but it's way too late
to make it technically hard.  In fact, it's easy.  You have to write
some code to translate from tree to your proprietary IR, and then you
have to plug that code into passes.c.

If gcc supports plugins, then all we've eliminated is the need to plug
that code into passes.c.  But that is the easiest part of the job.
Adding plugins is not going to require us to support a stable tree
interface or anything along those lines; if it did, I would oppose
that.

So this seems to me to be a very weak argument against plugins.
Adding plugins does not make it noticeably easier to integrate gcc's
frontend with a proprietary compiler.  And adding plugins would not
change the issue of whether such a combination violated the GPL.

Do you disagree with this assessment?


I think it's quite important for gcc's long-term health to permit and
even encourage academic researchers and students to use it.  And I see
plugins as directly supporting that goal.  Note that I don't see any
problem with requiring (or attempting to require) that any plugin be
covered by the GPL.

So from my perspective the downside of plugins is very small, and the
upside is very large.

Ian


Re: Progress on GCC plugins ?

2007-11-15 Thread Diego Novillo

Richard Kenner wrote:


Therefore, I think it's important for us to make it as
technically hard as possible for people to do such a linkage by reading and
writing trees or communicating as different libraries or DLLs.  I'm very
much against any sort of "plug in" precisely for this reason.


That's the line of reasoning that I find weak.  It is, in fact, 
extremely simple to integrate GCC into another compiler.  All you need 
to do is insert a gimple or RTL converter somewhere in passes.c.  Having 
plug-ins will not make this step any easier.


If a third party is willing to violate the GPL, the presence of a 
plug-in infrastructure will _not_ make their job significantly easier.



That depends on the importance you attach to the philosophy of free
software.


That's precisely the reason why I think it is imperative for us to keep 
improving GCC's internal architecture.  If we offer a solid compiler 
framework for people to use, we will increase its attractiveness to 
universities and research institutions, which are the very pools where 
future compiler engineers will come from.  That will help the long-term 
survival of the project.


It is very easy for us to make code developed using plug-ins to be 
covered by the GPL.  They will be using GCC header files, after all.



Diego.


Re: Progress on GCC plugins ?

2007-11-15 Thread Richard Kenner
> In fact, it's easy.  You have to write some code to translate from
> tree to your proprietary IR, and then you have to plug that code
> into passes.c.

Well, first of all, that code becomes GPL, so the IR isn't truly "proprietary".

> So this seems to me to be a very weak argument against plugins.
> Adding plugins does not make it noticeably easier to integrate gcc's
> frontend with a proprietary compiler.  And adding plugins would not
> change the issue of whether such a combination violated the GPL.
> 
> Do you disagree with this assessment?

No, not in that case, but I don't see that as the only case.  Another
case would be somebody who wanted to keep an optimizer proprietary by
making it a plug-in.  My view is that because of the linkage with the
GCC IR, it can't be proprietary in that case, but that's the harder argument
to make legally.

> I think it's quite important for gcc's long-term health to permit and
> even encourage academic researchers and students to use it.  And I see
> plugins as directly supporting that goal.  

I don't see that.  Why is it that much harder to link in with GCC than doing
it as a plugin?


Re: Progress on GCC plugins ?

2007-11-15 Thread Diego Novillo

Richard Kenner wrote:


No, not in that case, but I don't see that as the only case.  Another
case would be somebody who wanted to keep an optimizer proprietary by
making it a plug-in.  My view is that because of the linkage with the
GCC IR, it can't be proprietary in that case, but that's the harder argument
to make legally.


I don't think we can argue legalities here.  All we can do is say that 
if they are using plug-ins, they are forced to include GPL'd header 
files and link against GPL'd code.  The rest is up to the FSF legal folks.



I don't see that.  Why is it that much harder to link in with GCC than doing
it as a plugin?


Limited time and steep learning curves.  Typically, researchers are
interested in rapid prototyping to keep the paper mill going.  Plug-ins
offer a simple method for avoiding the latencies of repeated bootstrap
cycles.


Several projects will survive the initial prototyping stages and become 
techniques we can apply in industrial settings.  We want to attract 
that.  Plus we want to attract the grad students that did the research 
and graduate with a favourable attitude towards using GCC in their 
future career.



Diego.


Re: Progress on GCC plugins ?

2007-11-15 Thread Benjamin Smedberg

Richard Kenner wrote:

>> I think it's quite important for gcc's long-term health to permit and
>> even encourage academic researchers and students to use it.  And I see
>> plugins as directly supporting that goal.  
> 
> I don't see that.  Why is it that much harder to link in with GCC than doing
> it as a plugin?

To provide an example: Mozilla has been using elsa/oink to do static
analysis and rewriting of the codebase. We would love to use a more mature
and correct frontend such as GCC... and we would like to eventually have
this static analysis be a standard part of the build process when using the
GCC compiler.

The way to avoid requiring everyone who does Mozilla hacking to also build
a custom GCC would be to write just the static analysis as a plugin,
compile that to a .so, and then build with an extra flag such as
g++ -static-analyze=/custom/libmozstaticanalysis.so... in many cases we
could even pre-compile this file for common versions of GCC.

--BDS

--

Benjamin Smedberg
Platform Guru
Mozilla Corporation
[EMAIL PROTECTED]
http://benjamin.smedbergs.us/


Re: Progress on GCC plugins ?

2007-11-15 Thread Richard Kenner
> Limited time and steep learning curves.  Typically, researchers are 
> interested in rapid-prototyping to keep the paper mill going.  Plug-ins 
> offers a simple method for avoiding the latencies of repeated bootstrap 
> cycles.

I don't follow.  If you're developing an optimizer, you need to do the
bootstrap to test the optimizer no matter how it connects to the rest
of the compiler.  All you save is that you do a smaller link, but that
time is measured in seconds on modern machines.


[LTO] LTO breaks if debug info is stripped from object files

2007-11-15 Thread William Maddox
It appears that portions of the LTO information are not emitted into
separate sections; rather, information that would already be present in
the usual debugging sections is shared.  This is great for reducing the
size of object files that contain both LTO info and debugging info, but
it means that LTO breaks if 'strip --strip-debug' has been run on object
files submitted to it as input.  Is this intended or desirable behavior?

--Bill


Re: Progress on GCC plugins ?

2007-11-15 Thread Diego Novillo

Richard Kenner wrote:


I don't follow.  If you're developing an optimizer, you need to do the
bootstrap to test the optimizer no matter how it connects to the rest
of the compiler.  All you save is that you do a smaller link, but that
time is measured in seconds on modern machines.


No, you don't.  All you need is an existing GCC binary installed 
somewhere in your path that accepts -fplugin=mypass.so.  All you compile 
is your pass, you don't need to even build GCC.


Plug-ins also facilitate other kinds of work that are not related to 
optimization passes.  Static analysis, refactoring tools, visualization, 
code navigation, etc.  Sean described a variety of different 
applications they had implemented with their plug-in framework at the 
last summit.  It was all pretty impressive.


I agree with your point of not getting into legal discussions here, so I 
won't comment on your previous posting.



Diego.


Re: Progress on GCC plugins ?

2007-11-15 Thread Ian Lance Taylor
[EMAIL PROTECTED] (Richard Kenner) writes:

> > In fact, it's easy.  You have to write some code to translate from
> > tree to your proprietary IR, and then you have to plug that code
> > into passes.c.
> 
> Well, first of all, that code becomes GPL, so the IR isn't truly "proprietary".

I'm with you on that.  The fact remains, people have done this
already.  Whether the result is legally proprietary or not is almost
irrelevant; as you said earlier, the FSF is unlikely to actually sue.


> > So this seems to me to be a very weak argument against plugins.
> > Adding plugins does not make it noticeably easier to integrate gcc's
> > frontend with a proprietary compiler.  And adding plugins would not
> > change the issue of whether such a combination violated the GPL.
> > 
> > Do you disagree with this assessment?
> 
> No, not in that case, but I don't see that as the only case.  Another
> case would be somebody who wanted to keep an optimizer proprietary by
> making it a plug-in.  My view is that because of the linkage with the
> GCC IR, it can't be proprietary in that case, but that's the harder argument
> to make legally.

I can not deny the possibility of somebody writing an (effectively)
proprietary optimization pass.  However, I consider that to be both
unlikely and irrelevant.  People can always make private changes to
gcc.  We care about proprietary changes if they start distributing
them.  But that is an unlikely scenario.  There is no money to be made
in compiler optimization.

If we make it clear that we consider any plugin to be covered by the
GPL, then any attempt to sell a proprietary optimization will face
community opposition, thus making even less business sense.

And in the end, what will they have?  A marginal improvement over the
standard compiler.  Since they can not hide the results, we can figure
out what they are doing and recreate it.  In fact, it will be easier
for us to do so than it would be for a private port.

So I think this is a groundless fear.  When considering hypothetical
consequences, we need to consider both their likelihood and their
benefit or cost.  This one is both unlikely and has low cost.


> > I think it's quite important for gcc's long-term health to permit and
> > even encourage academic researchers and students to use it.  And I see
> > plugins as directly supporting that goal.  
> 
> I don't see that.  Why is it that much harder to link in with GCC than doing
> it as a plugin?

It is harder for people who are not used to the myriad details of
building gcc themselves.  Most people these days do not build gcc
themselves, and on gcc-help you will see the plaintive cries of those
who do.  We're talking about grad students focused on their research
here, not people who want to learn the arcana of our build system.

Ian


Re: Progress on GCC plugins ?

2007-11-15 Thread Emmanuel Fleury
Ian Lance Taylor wrote:
> 
> I think it's quite important for gcc's long-term health to permit and
> even encourage academic researchers and students to use it.  And I see
> plugins as directly supporting that goal.  Note that I don't see any
> problem with requiring (or attempting to require) that any plugin be
> covered by the GPL.

Not only this but I'm also more and more concerned about closed source
static analysis and code verification tools such as Coverity
(http://www.coverity.com/), SLAM (http://research.microsoft.com/slam/)
and others (see the list on Wikipedia:
http://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis).

If our concern in the 90's was to build code with an open source
compiler, our concern nowadays should be to produce better code inside
the open source community... Not having a common open source toolkit to
perform *advanced* analysis of code and to build software on it will
end up leaving quality unbalanced between proprietary and open source
projects in a very unfair manner.

We are now only at the beginning of the rise of these tools, and yet
they already make quite a difference in code quality between the
projects using them and those that do not... Having GCC provide
plug-ins would (in my humble opinion) indirectly boost code quality in
open source development by allowing better tools to appear and helping
to fill the gap that has started to open between proprietary and open
source code.

Moreover, considering the GCC developers point-of-view, it becomes more
and more obvious (at least to me) that the next 'frontier' in modern
compiler design will be to export features to perform static analysis
and verification on code.

Lastly, I would say that I'm more concerned about the number of closed
source tools that perform code analysis (with or without the help of
GCC) than about the hypothetical use of GCC by proprietary software,
because not having these analysis tools will make us lose the race
anyway...

Well, of course, this is only my very personal point of view on this
topic... :)

Regards
-- 
Emmanuel Fleury

Research is an organized method for keeping you
reasonably dissatisfied with what you have.
  -- Charles Kettering


Re: Attributes on structs

2007-11-15 Thread Mark Mitchell
Jason Merrill wrote:

> may_alias and target attributes are the problematic case.  Most of these
> just get added to the TYPE_ATTRIBUTES list, and
> build_type_attribute_qual_variant creates a new TYPE_MAIN_VARIANT
> without copying the fields, which is why things break.
> 
> A simple solution might be to just have
> build_type_attribute_qual_variant refuse to make a variant of
> record/enum types.  This will reject the may_alias cases that break, but
> leave existing use of aligned/packed/unused/deprecated alone.

That seems reasonable to me.  The transparent_union trick (copying the
fields, along with making a new TYPE_MAIN_VARIANT) might work, but even
there you have to worry about making sure you get a different type_info
object, how you mangle the name, etc.  You're also likely to get
messages like "can't convert X to X" because one of them has an attribute.

If we can mandate that all semantic type attributes apply only at the
point of definition, then we can dodge all these issues; there will
always be only one "class X", whatever attributes it might happen to
have.  So, I like that answer.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


own target: combine emits invalid RTL

2007-11-15 Thread Michael_fogel
Hi

There is once again a problem I cannot solve on my own. I tried to
compile LwIP and discovered the following error.

tcp_in.c:1133: internal compiler error: in gen_reg_rtx, at emit-rtl.c:771
Please submit a full bug report,

A full dump of all passes showed that combine seems to make invalid
combinations.

After the life1 pass there are two instructions:

(insn 2058 2053 2059 144 (set (reg:QI 1255 [ .flags ])
(mem/s:QI (reg/f:SI 1250) [0 .flags+0 S1 A32])) 131
{*target.md:2289} (insn_list:REG_DEP_TRUE 2053 (nil))
(nil))

(insn 2059 2058 2060 144 (set (reg:SI 1256)
(ior:SI (subreg:SI (reg:QI 1255 [ .flags ]) 0)
(const_int 2 [0x2]))) 18 {iorsi3_internal1}
(insn_list:REG_DEP_TRUE 2058 (nil))
(expr_list:REG_DEAD (reg:QI 1255 [ .flags ])
(nil)))

but after the combine pass, the first instruction is deleted and combined
into the second:

(note 2058 2053 2059 144 NOTE_INSN_DELETED)

(insn 2059 2058 2060 144 (set (reg:SI 1256)
(ior:SI (subreg:SI (mem/s:QI (reg/f:SI 1250) [0
.flags+0 S1 A32]) 0)
(const_int 2 [0x2]))) 18 {iorsi3_internal1}
(insn_list:REG_DEP_TRUE 2053 (nil))
(nil))

This instruction is invalid, and there is no pattern that matches it.

(define_insn "iorsi3_internal1"
  [(set (match_operand:SI 0 "gp_reg_operand" "=r,r")
(ior:SI (match_operand:SI 1 "reg_or_0_operand" "%rJ,rJ")
(match_operand:SI 2 "uns_arith_operand" "r,K")))]

The predicates and constraints are similar to those of the MIPS target.

How can I stop combine from doing this?
What controls the behavior of combine? Only the pattern signatures, or is
there a target hook or macro?

Thanks for your help

best regards

Michael Fogel



Re: Progress on GCC plugins ?

2007-11-15 Thread Richard Kenner
> If a third party is willing to violate the GPL, the presence of a 
> plug-in infrastructure will _not_ make their job significantly easier.

The issue isn't the ease with which it violates the GPL, but the ease with
which you can show it *is* a violation!  If there's no plug-in and you
link directly with GCC, there's no question that such code is covered
by the GPL.  If you use a plug-in, there *is* such a question and people
might well believe it is *not* a GPL violation.

> It is very easy for us to make code developed using plug-ins to be 
> covered by the GPL.  They will be using GCC header files, after all.

I don't want to turn this into a legal discussion, but there is some legal
ambiguity about that statement too.


Re: own target: combine emits invalid RTL

2007-11-15 Thread Jim Wilson

Michael_fogel wrote:

> (ior:SI (subreg:SI (mem/s:QI (reg/f:SI 1250) [0
> .flags+0 S1 A32]) 0)


See register_operand and general_operand in recog.c.  (SUBREG (MEM)) is 
accepted by register_operand if INSN_SCHEDULING is not defined, for 
historical reasons.  This is something that should be fixed some day.


INSN_SCHEDULING is defined if you have any of the instruction scheduling 
related patterns in your md file.  If this is a new port, and you 
haven't tried to add instruction scheduling support, then 
INSN_SCHEDULING won't be defined yet.


Anyway, this means that the RTL is correct, and we expect reload to fix 
it.  The error from gen_reg_rtx implies that reload is failing, perhaps 
because of a bug in your port that doesn't handle (SUBREG (MEM)) correctly.


There are other legitimate cases where (SUBREG (MEM)) can occur during 
reload, when you have a subreg of a pseudo that did not get allocated 
to a hard register for instance, so even if register_operand and 
general_operand are changed, you still need to find and fix the bug in 
your port.

--
Jim Wilson, GNU Tools Support, http://www.specifix.com


Re: Progress on GCC plugins ?

2007-11-15 Thread Richard Kenner
> > I don't follow.  If you're developing an optimizer, you need to do the
> > bootstrap to test the optimizer no matter how it connects to the rest
> > of the compiler.  All you save is that you do a smaller link, but that
> > time is measured in seconds on modern machines.
> 
> No, you don't.  All you need is an existing GCC binary installed 
> somewhere in your path that accepts -fplugin=mypass.so.  All you compile 
> is your pass, you don't need to even build GCC.

No, I mean for *testing* you need to do a bootstrap.  I'm not talking
about the minimum actually needed to build.


Re: How to let GCC produce flat assembly

2007-11-15 Thread Li Wang
Hi,
I may need to explain this problem more clearly. Consider a backend which
runs as a coprocessor to a host processor, such as a GPU: it incorporates
large numbers of ALUs, processes only arithmetic operations and some other
simple operations, and runs in a VLIW fashion to accelerate the host
processor. Say this coprocessor is referred to as a 'raw processor'; note
that I don't mean a GPU exactly, a GPU is similar in mechanism but more
complex than this. It has a simple ISA, and no dedicated ESP or EBP to
support function calls. It fetches VLIW instructions from instruction
memory one by one and executes them. If I want to let GCC produce assembly
for it, how should I code the machine description file? Should I first let
cc1 produce ELF assembly for it, and then let binutils truncate it to flat
assembly? That seems like an ugly hack. Thanks.

Regards,
Li Wang
> Li Wang wrote:
>   
>> Hi,
>> I wonder how to let GCC produce flat assembly, say, just like a .com
>> file under DOS: without function calls or complicated executable
>> file headers, only instructions. How do I modify the machine description
>> file to achieve that? Thanks in advance.
>> 
>
> Perhaps you are asking on the wrong list.
>
> And what exactly do you want to achieve and why?
>
> What is your target system?
>
> Why using (and appropriately configuring) the binutils (in particular
> its linker, ld, implicitly invoked by gcc) not appropriate for your
> needs? I am sure that you can configure it appropriately (binutils is
> very powerful).
>
> You still will need other generated data than the instructions.
> Typically, constants such as strings. And many other stuff.
>
>
>   



Re: Build Failure for gcc-4.3-20071109

2007-11-15 Thread Jim Wilson

Tom Browder wrote:

> Attached is a log of my build attempt (and the config.log).


There is a config.log file in every directory that gets configured.  It 
looks like you attached the one from the top-level dir which is not 
where the problem is occurring.


The "make -j3" makes the output hard to read.  You might try a "make 
all" build from scratch to get a better look at what is wrong.


You can build a host libiberty with "make all-libiberty" and a target 
libiberty with "make all-target-libiberty".  The same is true for other 
things, e.g. "make all-gcc" will just build the gcc directory, and "make 
all-target-libstdc++" will build a target libstdc++.  But "make all" 
works just as well to get back to the error point.



> It looks to me as if libiberty is getting compiled and tested Ok, but,
> for some reason, make reports an error which then prevents a good
> build completion.


These lines in the output are suspect:
/bin/sh: /usr/bin/true: Success

I don't have a /usr/bin/true on my F7 machines.  There is a /bin/true. 
The program true should just return without error and not do anything else.
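A quick sanity check for the behavior described here (a working true exits with status 0 and prints nothing):

```shell
# A correct 'true' produces no output and exits 0; a line like
# "/bin/sh: /usr/bin/true: Success" in a build log suggests the file
# at that path is not a normal executable.
/bin/true
echo "exit status: $?"
```

If the echo prints anything other than 0, or /bin/true itself prints something, the binary has been replaced or broken.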

--
Jim Wilson, GNU Tools Support, http://www.specifix.com


Re: Progress on GCC plugins ?

2007-11-15 Thread Ian Lance Taylor
Andrew Haley <[EMAIL PROTECTED]> writes:

>  > We can make it as technically hard as possible, but it's way too late
>  > to make it technically hard.  In fact, it's easy.  You have to write
>  > some code to translate from tree to your proprietary IR, and then you
>  > have to plug that code into passes.c.
> 
> Sure, but you then have to maintain your port forever, and there is a
> substantial cost to this.  I am pretty sure that if there were a
> stable API to get trees out of GCC, people would have bolted gcc into
> proprietary compilers.  As there isn't a stable way to do it, it's
> easier not to do it that way, and instead to contribute to gcc.

I agree that there is a cost to maintaining your port.  I disagree
that plugins make it any cheaper.  See below.


>  > If gcc supports plugins, then all we've eliminated is the need to
>  > plug that code into passes.c.  But that is the easiest part of the
>  > job.  Adding plugins is not going to require us to support a stable
>  > tree interface or anything along those lines; if it did, I would
>  > oppose that.
> 
> Ahh.  I don't know about that: once we have a plugin
> infrastructure, we have to document it and there will be pressure to
> stabilize it.  I don't believe that an unstable plugin architecture
> has any viability at all.

I disagree.  In fact, if creating a plugin architecture comes with a
requirement to make a stable structure for trees, then I'm opposed to
it.  That would hurt us far more than it would help.  This is not a
slippery slope.

An unstable plugin architecture is still very useful for our users.
Correct installation of a patched gcc is an awkward affair that many
people get wrong.  Correct installation of a plugin requires no more
than a command line option.  Plugins make it easy for people to share
their gcc extensions across projects or across university departments.


>  > So this seems to me to be a very weak argument against plugins.
>  > Adding plugins does not make it noticeably easier to integrate gcc's
>  > frontend with a proprietary compiler.  And adding plugins would not
>  > change the issue of whether such a combination violated the GPL.
>  > 
>  > Do you disagree with this assessment?
> 
> I think there is a real possibility that, had we had such a plugin
> interface years ago, some of the gcc back-ends and optimization work
> we have would never have been paid for by some companies, and so gcc
> would be a worse compiler.

Most new gcc back-ends are private, so I don't buy that part of the
argument.  And in any case nobody is talking about plug-ins for gcc
backends.  We're talking about plugins at the tree/GIMPLE level.

And, frankly, very few people are paying for general new gcc
optimizations.  As far as I know, the only people doing so are
companies like IBM and Red Hat, and they would contribute their
changes anyhow.  Do you have any examples in mind?

When I was in the business of convincing people to pay for gcc work, I
had a laundry list of general gcc improvements to sell.  I was never
able to get a dime except for target specific improvements.  A plugin
architecture would not make any difference to that kind of work.

Ian


Re: Progress on GCC plugins ?

2007-11-15 Thread Andrew Haley
Ian Lance Taylor writes:
 > [EMAIL PROTECTED] (Richard Kenner) writes:
 > 
 > > > I don't believe this is a strong argument.  My contention is,
 > > > and has always been, that GCC is _already_ trivial to integrate
 > > > into a proprietary compiler.  There is at least one compiler I
 > > > know that does this.
 > > 

 > > I believe that any such compiler would violate the GPL.  But I
 > > also believe it's not in the best interest of the FSF to litigate
 > > that matter if the linkage between the compiler is anything other
 > > than linked in a single executable.  Therefore, I think it's
 > > important for us to make it as technically hard as possible for
 > > people to do such a linkage by reading and writing trees or
 > > communicating as different libraries or DLLs.  I'm very much
 > > against any sort of "plug in" precisely for this reason.
 > 
 > We can make it as technically hard as possible, but it's way too late
 > to make it technically hard.  In fact, it's easy.  You have to write
 > some code to translate from tree to your proprietary IR, and then you
 > have to plug that code into passes.c.

Sure, but you then have to maintain your port forever, and there is a
substantial cost to this.  I am pretty sure that if there were a
stable API to get trees out of GCC, people would have bolted gcc into
proprietary compilers.  As there isn't a stable way to do it, it's
easier not to do it that way, and instead to contribute to gcc.

 > If gcc supports plugins, then all we've eliminated is the need to
 > plug that code into passes.c.  But that is the easiest part of the
 > job.  Adding plugins is not going to require us to support a stable
 > tree interface or anything along those lines; if it did, I would
 > oppose that.

Ahh.  I don't know about that: once we have a plugin
infrastructure, we have to document it and there will be pressure to
stabilize it.  I don't believe that an unstable plugin architecture
has any viability at all.

 > So this seems to me to be a very weak argument against plugins.
 > Adding plugins does not make it noticeably easier to integrate gcc's
 > frontend with a proprietary compiler.  And adding plugins would not
 > change the issue of whether such a combination violated the GPL.
 > 
 > Do you disagree with this assessment?

I think there is a real possibility that, had we had such a plugin
interface years ago, some of the gcc back-ends and optimization work
we have would never have been paid for by some companies, and so gcc
would be a worse compiler.

Andrew.

-- 
Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SL4 
1TE, UK
Registered in England and Wales No. 3798903


Re: Progress on GCC plugins ?

2007-11-15 Thread Ian Lance Taylor
[EMAIL PROTECTED] (Richard Kenner) writes:

> > Limited time and steep learning curves.  Typically, researchers are 
> > interested in rapid-prototyping to keep the paper mill going.  Plug-ins 
> > offers a simple method for avoiding the latencies of repeated bootstrap 
> > cycles.
> 
> I don't follow.  If you're developing an optimizer, you need to do the
> bootstrap to test the optimizer no matter how it connects to the rest
> of the compiler.  All you save is that you do a smaller link, but that
> time is measured in seconds on modern machines.

But users who are not gcc developers have trouble doing a bootstrap.
Our build process is complicated, and it is not getting simpler.

And the sorts of researchers that dnovillo is talking about don't need
to do a bootstrap routinely.  They are experimenting with new
optimizations or with static analysis.  They aren't generating actual
code.  A bootstrap is beside the point.

Ian


Re: Build Failure for gcc-4.3-20071109

2007-11-15 Thread Tom Browder
On Nov 15, 2007 6:24 PM, Jim Wilson <[EMAIL PROTECTED]> wrote:
> Tom Browder wrote:
> > Attached is a log of my build attempt (and the config.log).
...
> These lines in the output are suspect:
> /bin/sh: /usr/bin/true: Success
> I don't have a /usr/bin/true on my F7 machines.  There is a /bin/true.
> The program true should just return without error and not do anything else.

Thanks, Jim, I think you've hit on it.  I'll check the hosts again
when I get back to work in the morning.

I now vaguely remember doing something with that /usr/bin/true because
of some daemon program I was fooling around with.  And it's on the two
hosts I'm having trouble with.

-Tom

Tom Browder
Niceville, Florida
USA


Re: Progress on GCC plugins ?

2007-11-15 Thread Diego Novillo

Richard Kenner wrote:


> No, I mean for *testing* you need to do a bootstrap.  I'm not talking
> about the minimum actually needed to build.


Nope, you don't.  If you are doing static analysis, for instance, you 
don't care nor need to bootstrap GCC.  You just need to load your module 
every time a file is compiled.


If you are doing a pass that you are testing and/or prototyping, you 
don't want to waste time rebuilding the whole compiler.  If/when your 
pass reaches certain maturity, you think it's ready for production, and 
people think it's a good idea to have it in the compiler, then you 
convert it into a static pass and you go through the traditional 
bootstrap process.


Similarly, for visualization plug-ins, you don't want/need to bootstrap 
the compiler.  That's the core idea with plug-ins, they allow more 
flexibility for experimentation.  They are not a means for implementing 
permanent features in the compiler.  We would not guarantee ABI nor API 
stability, there is just no point to doing that.



Diego.


Re: How to let GCC produce flat assembly

2007-11-15 Thread Jim Wilson

Li Wang wrote:

> and execute it. If I want to let GCC produce assembly for it, how should
> I code the machine description file? Should I first let cc1 produce
> ELF assembly for it, and then let binutils truncate it to flat
> assembly? It seems like an ugly hack. Thanks.


I don't know what a .com file is, but you can use objcopy from binutils 
to convert between file formats.  You could use objcopy to convert an 
ELF file into a binary file by using "-O binary" for instance.  See the 
objcopy documentation.  The binary file format may be what you are 
looking for.


If you want code without function calls, then you would have to write a 
C program without any function calls.  Neither gcc nor binutils will 
help you there.

--
Jim Wilson, GNU Tools Support, http://www.specifix.com


Re: How to let GCC produce flat assembly

2007-11-15 Thread Joe Buck
On Thu, Nov 15, 2007 at 04:20:49PM -0800, Li Wang wrote:
> I may need to explain this problem more clearly.

Yes, my earlier message directing you to gcc-help was because I thought
you didn't grasp what the compiler should do and what the linker should
do; sorry about that.

> For a backend which runs as
> coprocessor to a host processor, such as GPU, which incorporates large
> numbers of ALUs and processes only arithmetic operations and some other
> simple operations, runs in VLIW pattern to accelerate the host
> processor. Say, this coprocessor is referred as 'raw processor', note, I
> don't mention GPU, GPU is similar in mechanism but more complex than
> this. It owns simple ISA, and has no dedicated ESP, EBP to support
> function call.

But those registers aren't dedicated to support function calls on the x86
except by convention.  If your coprocessor has no ABI to describe a stack
and a function interface, you need to invent one, so that you can do
function calls.  gcc can inline the calls where it makes sense, and the
scores can be adjusted so that a lot of inlining happens if your stack is
inefficient.

> If I want to let GCC produce assembly for it, how should
> I code the machine description file? Should I first let cc1 produce
> ELF assembly for it, and then let binutils truncate it to flat
> assembly? It seems like an ugly hack. Thanks.

gcc produces assembler code.  as turns it into object code.  ld links
to form an executable.  That's the way that it works.



How to let GCC produce flat assembly

2007-11-15 Thread Li Wang
Hi,
I wonder how to let GCC produce flat assembly, say, just like a .com
file under DOS: without function calls or complicated executable
file headers, only instructions. How do I modify the machine description
file to achieve that? Thanks in advance.

Regards,
Li Wang


Re: bootstrap failure with rev 130208

2007-11-15 Thread Jim Wilson

Thomas Koenig wrote:

> build/genmodes -h > tmp-modes.h
> /bin/sh: build/genmodes: No such file or directory


Does the file build/genmodes exist?  If the file isn't there, then you
need to figure out what happened to it.

If the file is there, then this might mean that the interpreter for the
binary is missing.  For an ELF executable, the interpreter is the
dynamic linker ld.so.  This info is stored in the interp section.
localhost$ objdump --full-contents --section .interp /bin/ls

/bin/ls: file format elf32-i386

Contents of section .interp:
 8048134 2f6c6962 2f6c642d 6c696e75 782e736f  /lib/ld-linux.so
 8048144 2e3200   .2.

If the interp section points at a file that is non-existent, then you
may get a confusing message when you run the program.

A similar problem can occur with shell scripts.  E.g. if you have
#!/bin/foo
exit 0

in a script, make it executable, and try to run it directly, you may get
a confusing message saying the file does not exist.  The message means
the interpreter is missing, not the actual shell script.

The error message comes from the shell.  Bash prints a useful message, 
but some shells print a confusing one.

localhost$ echo $SHELL
/bin/bash
localhost$ cat tmp.script
#!/bin/foo
exit 0
localhost$ ./tmp.script
bash: ./tmp.script: /bin/foo: bad interpreter: No such file or directory
localhost$ csh
[EMAIL PROTECTED] ~/tmp]$ ./tmp.script
./tmp.script: Command not found.
[EMAIL PROTECTED] ~/tmp]$ 


--
Jim Wilson, GNU Tools Support, http://www.specifix.com



Modulo operation in C for -ve values

2007-11-15 Thread Deepak Gaur

The Modulo operation as specified in
http://xenia.media.mit.edu/~bdenckla/thesis/texts/htthe/node13.html says that
for a fraction like n/k which can be expressed as n/k = i + j/k, the C division
and mod operations should yield
n div k = i (integer part)
n mod k = j (remainder part)
For n +ve the above is true.
For n -ve:
-n/k = -i + j/k
-n div k = -i
-n mod k = j (+ve remainder)

But running a sample program on Redhat enterprise Linux EL4
with gcc version 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)
on an Intel PIV machine

#include <stdio.h>

int main()
{
int n,k,j;
n=-3;
k=8; /* k is power of 2 */
j=(n/k);
printf("\n n div k = %d", j);
j=(n%k);
printf("\n n mod k = %d", j);
j=(n) & (k-1);
printf("\n n & k-1 = %d", j);
return 0;
}
gives the following output for n = -3, k = 8:
n div k = 0
n mod k = -3
n & k-1 = 5
though, as per the hypothesis proposed in
http://xenia.media.mit.edu/~bdenckla/thesis/texts/htthe/node13.html,
it should have been
n div k = -1
n mod k = 5
n & k-1 = 5

Which is correct: (0, -3, 5) or (-1, 5, 5)?

Thanks

Deepak Gaur



Re: Modulo operation in C for -ve values

2007-11-15 Thread Alan Modra
On Fri, Nov 16, 2007 at 09:27:22AM +0530, Deepak Gaur wrote:
> The Modulo operation as specified in
> http://xenia.media.mit.edu/~bdenckla/thesis/texts/htthe/node13.html 

This is not the C % operator.  google "ISO/IEC 9899:1999" for a clue.

-- 
Alan Modra
Australia Development Lab, IBM


Re: How to let GCC produce flat assembly

2007-11-15 Thread Li Wang

Hi,
   Thanks for your attention and responses. I don't think I described 
what I want to do very accurately; in my haste I explained it poorly. 
Let me use a simple example: for the simple C program below, compiled by 
cc1 targeting the x86 platform, the assembly is as follows,


int main()
{
int a, b, c;
   
a = 2;

b = 2;
c = a + b;
return 0;
}

    .file   "test.c"
    .text
.globl main
    .type   main, @function
main:
    pushl   %ebp
    movl    %esp, %ebp
    subl    $24, %esp
    andl    $-16, %esp
    movl    $0, %eax
    subl    %eax, %esp
    movl    $2, -4(%ebp)
    movl    $2, -8(%ebp)
    movl    -8(%ebp), %eax
    addl    -4(%ebp), %eax
    movl    %eax, -12(%ebp)
    movl    $0, %eax
    leave
    ret

As you said, the coprocessor has no ABI describing a stack and a 
function interface, so inlining applies. But how could I inline 'main'? 
And I am sorry for misusing the term 'ELF assembly'; what I mean by that 
is how to omit the sections and any other information that helps the 
linker organize an executable from the cc1 output. In a word, code 
something like the following is what I want. Is it possible to let cc1 
produce such assembly? Thanks.


    movl    $2, -4(%ebp)
    movl    $2, -8(%ebp)
    movl    -8(%ebp), %eax
    addl    -4(%ebp), %eax

Regards,
Li Wang

On Thu, Nov 15, 2007 at 04:20:49PM -0800, Li Wang wrote:
  

I may need to explain this problem more clearly.



Yes, my earlier message directing you to gcc-help was because I thought
you didn't grasp what the compiler should do and what the linker should
do; sorry about that.

  

For a backend which runs as
coprocessor to a host processor, such as GPU, which incorporates large
numbers of ALUs and processes only arithmetic operations and some other
simple operations, runs in VLIW pattern to accelerate the host
processor. Say, this coprocessor is referred as 'raw processor', note, I
don't mention GPU, GPU is similar in mechanism but more complex than
this. It owns simple ISA, and has no dedicated ESP, EBP to support
function call.



But those registers aren't dedicated to support function calls on the x86
except by convention.  If your coprocessor has no ABI to describe a stack
and a function interface, you need to invent one, so that you can do
function calls.  gcc can inline the calls where it makes sense, and the
scores can be adjusted so that a lot of inlining happens if your stack is
inefficient.

  

If I want to let GCC produce assembly for it, how should
I code the machine description file? Should I first let cc1 produce
ELF assembly for it, and then let binutils truncate it to a flat
assembly? It seems ugly hacking. Thanks.



gcc produces assembler code.  as turns it into object code.  ld links
to form an executable.  That's the way that it works.


  




Help understanding overloaded templates

2007-11-15 Thread Rob Quill
Hi,

I was wondering if anyone could help me make sense of the
more_specialized_fn() function in pt.c (line 13281).

Specifically, I am trying to understand what each of these are:

  tree decl1 = DECL_TEMPLATE_RESULT (pat1);
  tree targs1 = make_tree_vec (DECL_NTPARMS (pat1));
  tree tparms1 = DECL_INNERMOST_TEMPLATE_PARMS (pat1);
  tree args1 = TYPE_ARG_TYPES (TREE_TYPE (decl1));

and how the function is supposed to deal with variadic functions in
terms of these. That is to say, if a function is variadic, how is that
represented in these data structures?

Any help is much appreciated.

Thanks.
Rob