Re: gcc-4.1-20080303 is now available

2008-03-16 Thread Eric Botcazou
> I understand and can support (up to a point) the desire of distributors to
> continue working within GPLv2 and I know that's why the 4.1 branch is in
> this situation.  However IMHO this position is in tension with the
> interests of users who don't get gcc from distributors (think
> non-linux-gnu platforms) and therefore leaving the 4.1 branch in this
> situation forever eventually is against the interests of the FSF.

I don't think anyone talked about leaving the 4.1 branch in this state 
forever, just that it would not be moved to GPLv3 before being closed.

If maintaining it is now too much of a burden, then let's close it.

-- 
Eric Botcazou


Re: [PATCH][RFC] Statistics "infrastructure"

2008-03-16 Thread Richard Guenther
On Sun, 16 Mar 2008, Zdenek Dvorak wrote:

> Hi,
> 
> > > > A statistics event consists of a function (optional), a statement
> > > > (optional) and the counter ID.  I converted the counters from
> > > > tree-ssa-propagate.c as an example, instead of
> > > > 
> > > > prop_stats.num_copy_prop++;
> > > > 
> > > > you now write
> > > > 
> > > > statistics_add ("copy propagations");
> > > > 
> > > > (function and statement omitted, you might prefer #defines for strings
> > > > that you use multiple times).
> > > 
> > > it would perhaps be better to use #defines with integer values?  Also,
> > > it would be more consistent to have statistics.def similar to
> > > timevar.def for this.  It would make creation of new counters a bit
> > > more difficult, but on the other hand, it would make it possible to
> > > classify the counters (by type of the counted operation/its
> > > expensiveness/...),
> > 
> > The difficulty of adding new counters is exactly why I didn't go
> > down that route.  I expect this to be mainly used for experimentation,
> > where it is IMHO inconvenient to go the .def route.
> 
> I thought of it more as an aid in debugging performance problems, as in,
> checking the dumps without introducing new statistics counters; in which
> case, having some description of what the counters mean and the metadata
> from the .def file would be useful.
> 
> On the other hand, I agree that for the purpose that you suggest
> avoiding .def is better.  Perhaps we could require that all the
> statistics strings are #defined and documented (and of course you can
> ignore this rule for the counters that you use for experimentation)?

Sure, we can even do a .def file for this purpose.

Richard.


Re: gcc-4.1-20080303 is now available

2008-03-16 Thread NightStrike
On 3/15/08, Kaveh R. GHAZI <[EMAIL PROTECTED]> wrote:
> On Sat, 15 Mar 2008, NightStrike wrote:
>
> > On 3/15/08, Kaveh R. GHAZI <[EMAIL PROTECTED]> wrote:
> > > I support the final-release-then-close approach.  But can we get a
> > > volunteer to convert that branch to GPLv3... ?
> >
> > How complicated is the task?
>
> It's not complicated, but perhaps it is tedious.

In that case, I'd volunteer.


Re: gcc-4.1-20080303 is now available

2008-03-16 Thread NightStrike
On 3/16/08, Eric Botcazou <[EMAIL PROTECTED]> wrote:
> > I understand and can support (up to a point) the desire of distributors to
> > continue working within GPLv2 and I know that's why the 4.1 branch is in
> > this situation.  However IMHO this position is in tension with the
> > interests of users who don't get gcc from distributors (think
> > non-linux-gnu platforms) and therefore leaving the 4.1 branch in this
> > situation forever eventually is against the interests of the FSF.
>
> I don't think anyone talked about leaving the 4.1 branch in this state
> forever, just that it would not be moved to GPLv3 before being closed.
>
> If maintaining it is now too much of a burden, then let's close it.

What is burdensome?


Re: libtool for shared objects?

2008-03-16 Thread Ralf Wildenhues
* Basile STARYNKEVITCH wrote on Wed, Mar 12, 2008 at 07:57:33AM CET:
> So I tried to add to gcc/configure.ac the following lines (which exist  
> in libmudflap/configure.ac)
>
>   AC_LIBTOOL_DLOPEN
>   AM_PROG_LIBTOOL
>   AC_SUBST(enable_shared)
>   AC_SUBST(enable_static)
>
> and it does not work:
>
> (cd /usr/src/Lang/basile-melt-gcc/gcc && autoconf)
> configure.ac:434: error: possibly undefined macro: AC_LIBTOOL_DLOPEN
>   If this token and others are legitimate, please use m4_pattern_allow.
>   See the Autoconf documentation.
> configure.ac:435: error: possibly undefined macro: AM_PROG_LIBTOOL
> (cd /usr/src/Lang/basile-melt-gcc/gcc && autoheader)

You would need to run
  aclocal -I .. -I ../config

first.  But really it'd be better if you put libtool-using code outside
of gcc/, I think.

Cheers,
Ralf


Re: gcc-4.1-20080303 is now available

2008-03-16 Thread Richard Guenther
On Sun, Mar 16, 2008 at 9:33 AM, Eric Botcazou <[EMAIL PROTECTED]> wrote:
> > I understand and can support (up to a point) the desire of distributors to
>  > continue working within GPLv2 and I know that's why the 4.1 branch is in
>  > this situation.  However IMHO this position is in tension with the
>  > interests of users who don't get gcc from distributors (think
>  > non-linux-gnu platforms) and therefore leaving the 4.1 branch in this
>  > situation forever eventually is against the interests of the FSF.
>
>  I don't think anyone talked about leaving the 4.1 branch in this state
>  forever, just that it would not be moved to GPLv3 before being closed.
>
>  If maintaining it is now too much of a burden, then let's close it.

Seconded.  Option 1 is the only viable one (apart from option 3, of course).

Then of course you'll see fixes going uncoordinated into the various
vendor branches.  That sounds like it's in the interest of the FSF, but
not necessarily of the GCC user community.

Richard.


Been Looking how gcc operates there is a major weaknesses in its optimiser.

2008-03-16 Thread Peter Dolding
Now let's take a simple build.
gcc -c test1.c
gcc -c test2.c
gcc test1.o test2.o -o final

--test1.c--
/* of course in the real world this would be some complex but solvable
   function */
int test (int a) {
  return a + 1;
}

--test2.c--
#include <stdio.h>
int test (int a); /* normally in a header somewhere; not bothering to
                     put it there */
int main (void) {
  printf ("hi value %i\n", test (20));
  return 0;
}
-- back to the point: the problem --
Since test is in a different object file, it is completely skipped by
the optimiser, even though the call should be optimised out.

Now if you build it as gcc test1.c test2.c -o final, the call does get
optimised out of existence.

The common argument is that the linker should do this.  Sorry, but that
means binutils would have to completely duplicate the optimisation
system in gcc.

Now I will run through the information that would be useful, first to
detect that something like this has happened, and second to allow
something to be done about it.

The first step is keeping two lists: one of solvable functions (i.e.
functions that, if passed constant values, will return a constant) and
another of solvable extern functions that are being passed constant
values.  Comparing these lists before the objects go to the linker will
tell you whether this event has happened.  The programmer can then be
informed and the code altered to avoid events like this.  Of course
this is not perfect.

This can go even further down the track, producing two files: .o and
.solve, where the .solve file contains all solvable functions.  The
.solve file can be appendable for the project.  Since test1.c was built
first and its solvable functions were put in the solve file, when
test2.o is processed the call is solved.  This also makes things
simpler for the linker: it now only has to strip unused functions, and
it avoids the performance hit of going through code paths that are not
needed.  I.e. gcc -c test1.c -solve project.solve; gcc -c test2.c
-solve project.solve.  This allows solvable functions to be found even
when they are spread across many .o files.  Of course this is not
perfect: the programmer would either have to build everything at least
twice or load the files into the solve file in the correct order.  It
is not a magic bullet.  Of course, printing a warning about calls to
functions that could have been solved would be a good thing.

If gcc supported import-only .solve files, these could be used with
libraries to reduce code size as much as possible.  This could cover
functions like crypt in glibc, and other functions like it that give
constant returns when passed constant values.

Even in C++ it could have major size and speed advantages.

The optimiser fails to find functions that should be optimised out of
existence, due to a minor design oversight.

Peter Dolding


Re: gcc-4.1-20080303 is now available

2008-03-16 Thread Mark Mitchell

Richard Guenther wrote:

>> I support the final-release-then-close approach.  But can we get a
>> volunteer to convert that branch to GPLv3... ?
>
> I strongly object to moving the 4.1 branch to GPLv3.


I too think that it would be a bad idea to switch the 4.1 branch to 
GPLv3, and, therefore, I think it would be a bad idea to do another 
release.  Instead, I think we should just leave the branch in its 
current state: people can put GPLv2 changes there to fix GCC 4.1.x 
regressions.  There's no particular benefit to actually "closing" the 
branch; anyone who wants can always take a checkout on the branch from 
whatever point they prefer.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713



Re: [RFC] GCC caret diagnostics

2008-03-16 Thread Mark Mitchell

Manuel López-Ibáñez wrote:

> That is a good point. The underlying mechanism can be fine-tuned
> later. What would be the main problems in getting caret diagnostics
> into trunk? The most common issue is probably bad locations, but I
> don't see that as a major problem. On the contrary, the only reliable
> way to fix this is to enable caret diagnostics; otherwise people
> hardly bother with the exact location.


I like caret diagnostics, so I'm in favor of the idea of the patch. 
But, I disagree with this style of argument.  Intentionally enabling a 
feature when we don't feel it's ready for public consumption in order to 
try to force people to fix it might be good for us as developers, but 
it's not good for users.  Carets should be on by default when -- only 
when -- we judge the accuracy to be sufficiently great that ordinary 
users will say "Wow, great new feature!" not "Gee, how frustrating, that 
sometimes works, but often just lies to me."


I wouldn't object to having the functionality in the compiler as an 
option (but off by default), until we fix the accuracy, though.


Thanks,

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713



xscale-elf-gcc: compilation of header file requested

2008-03-16 Thread Ajit Mittal
This command

$(CC) -M $(HOST_CFLAGS) $(CPPFLAGS) -MQ $@ include/common.h > [EMAIL PROTECTED]

is generating the following error:

Generating include/autoconf.mk
xscale-elf-gcc: compilation of header file requested

Here CC is xscale-elf-gcc and the target is autoconf.mk.

Any tips?

Regards


Re: gcc-4.1-20080303 is now available

2008-03-16 Thread NightStrike
On 3/16/08, Mark Mitchell <[EMAIL PROTECTED]> wrote:
> Richard Guenther wrote:
>
> >>  I support the final-release-then-close approach.  But can we get a
> >>  volunteer to convert that branch to GPLv3... ?
> >
> > I strongly object to moving the 4.1 branch to GPLv3.
>
> I too think that it would be a bad idea to switch the 4.1 branch to
> GPLv3, and, therefore, I think it would be a bad idea to do another

What exactly is the downside to upgrading the license?  I'm not
familiar with the implications of doing so.


Re: [trunk] Addition to subreg section of rtl.text.

2008-03-16 Thread Kenneth Zadeck

Jeff, DJ and Richard,

Richard Sandiford and I have taken on the task of trying to fully 
explain subregs in the gcc docs.  This is an area that has 
traditionally been very confusing to outsiders, and even to insiders 
who were not rtl maintainers.  As the community of active developers 
has evolved, the complexity of this area has become problematic for 
people new to the community.

We have attached a draft of the new section; however, there are many 
issues that we still do not understand.  Could you please review the 
draft and take a few moments to answer the remaining questions?  Also, 
if there are other subtleties that we have missed, please feel free to 
point us in the proper direction.


Of course, we also invite anyone in the community with experience here 
to comment.  All help is appreciated.


1) Is it possible to have a MODE_PARTIAL_INT inner register that is 
bigger than a word? If so, what restrictions (if any) apply to subregs 
that access the partial word? Is the inner register implicitly extended 
so that it is a whole number of words? If not, are we effectively 
allowing non-contiguous byte ranges when WORDS_BIG_ENDIAN != 
BYTES_BIG_ENDIAN?


E.g., suppose we have a 16-bit WORDS_BIG_ENDIAN, !BYTES_BIG_ENDIAN 
target in which PSImode is a 24-bit value. Is the layout of 0x543210 
"45..0123"?


2) Is it possible for the outer register in a normal subreg to be a 
superword MODE_PARTIAL_INT? (Our rules say "no".)


3) What about things like 80-bit FP modes on a 32-bit or 64-bit target? 
Is it valid to refer to pieces of an 80-bit FP pseudo? If so, are the 
rules we've got here right?


4) Do stores to subregs of a hard register invalidate just the 
registers mentioned in the outer mode, or do they invalidate the entire 
set of registers mentioned in the inner mode? (Our rules say only the 
outer mode.)


Thanks in advance,

Kenny
@findex subreg
@item (subreg:@var{m1} @var{reg:m2} @var{bytenum})

@code{subreg} expressions are used to refer to a register in a machine
mode other than its natural one, or to refer to one register of
a multi-part @code{reg} that actually refers to several registers.

Each pseudo-register has a natural mode.  If it is necessary to
operate on it in a different mode, the pseudo-register must be
enclosed in a @code{subreg}.  It is seldom necessary to wrap
hard registers in @code{subreg}s; such registers would normally
reduce to a single @code{reg} rtx.

Subregs come in two distinct flavors, each having its own distinct
usage and rules:

@table @asis
@item Paradoxical subregs
When @var{m1} is strictly wider than @var{m2}, the @code{subreg}
expression is called @dfn{paradoxical}.  The canonical test for this
class of subreg is:

@smallexample
GET_MODE_SIZE (m1) > GET_MODE_SIZE (m2)
@end smallexample

Paradoxical subregs are used in cases where we want to refer to an
object in a wider mode but do not care what value the additional high
order bits have.  The @var{reg} is mapped into the low order bits.
Paradoxical subregs can be used as both lvalues and rvalues.  When
used as an lvalue, the high order bits are discarded.  @var{bytenum}
is always zero for a paradoxical subreg, even for big-endian targets.
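
For example, this is a paradoxical subreg giving an SImode view of a
QImode pseudo (the register number is invented for illustration):

@smallexample
(subreg:SI (reg:QI 100) 0)
@end smallexample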

@item Normal subregs

When @var{m1} is at least as narrow as @var{m2}, the @code{subreg}
expression is called @dfn{normal}.

Normal subregs restrict consideration to certain bits of @var{reg}.
There are two cases.  If @var{m1} is smaller than a word, the
@code{subreg} refers to the least-significant part (or @dfn{lowpart})
of one word of @var{reg}.  If @var{m1} is word-sized or greater, the
@code{subreg} refers to one or more complete words.

When @var{m2} is larger than a word, the subreg is a @dfn{multi-word
outer subreg}.  When used as an lvalue, @code{subreg} is a word-based
accessor.  Storing to a @code{subreg} modifies all the words of
@var{reg} that overlap the @code{subreg}, but it leaves the other
words of @var{reg} alone.

When storing to a normal @code{subreg} that is smaller than a word,
the other bits of the referenced word are usually left in an undefined
state.  This laxity makes it easier to generate efficient code for
such instructions.  To represent an instruction that preserves all the
bits outside of those in the @code{subreg}, use @code{strict_low_part}
or @code{zero_extract} around the @code{subreg}.
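
For example, a store that sets only the low byte of an SImode pseudo
while preserving the remaining bits could be written as (register
numbers invented for illustration):

@smallexample
(set (strict_low_part (subreg:QI (reg:SI 100) 0))
     (reg:QI 101))
@end smallexample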

If @var{reg} is a hard register, the @code{subreg} must also represent
the lowpart of a particular hard register, or represent one or more
complete hard registers.

@var{bytenum} must identify the offset of the first byte of the
@code{subreg} from the start of @var{reg}, assuming that @var{reg} is
laid out in memory order as defined by:

@cindex @code{WORDS_BIG_ENDIAN}, effect on @code{subreg}
The compilation parameter @code{WORDS_BIG_ENDIAN}, if set to 1, says
that byte number zero is part of the most significant word; otherwise,
it is part of the least significant word.
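
For example, the lowpart QImode subreg of an SImode pseudo (register
number invented) has @var{bytenum} 0 on a little-endian target, but
@var{bytenum} 3 when both @code{WORDS_BIG_ENDIAN} and
@code{BYTES_BIG_ENDIAN} are 1:

@smallexample
(subreg:QI (reg:SI 100) 0)   ; lowpart on a little-endian target
(subreg:QI (reg:SI 100) 3)   ; lowpart on a big-endian target
@end smallexample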

@cindex @code{BYTES_BIG_ENDIAN}, effect on @code{subreg}
The compilation parameter @code{BYTES_BIG_ENDIAN}, if set to 1, says
that byte number zero is the most significant byte within a word;
otherwise, it is the least significant byte.

Re: gcc-4.1-20080303 is now available

2008-03-16 Thread Brian Dessent
NightStrike wrote:

> What exactly is the downside to upgrading the license?  I'm not
> familiar with the implications of doing so.

As I understand it, the concern is that many distros use the 4.1 branch
as the base for their main gcc system compiler.  If suddenly the branch
gets upgraded to GPLv3 that means they can no longer benefit from
backported fixes that get put on the branch since distros also tend to
have a number of local out-of-tree patches that are not necessarily
GPLv3.  The whole point of keeping an old release branch open would be
specifically to make their lives easier, but instead you'd be
multiplying the amount of work they'd have to do.  Now they must choose
between: a) ignoring all backports past the last GPLv2 point on the
branch, b) separately reimplementing each backport as a GPLv2 patch, or
c) auditing their whole local tree and deleting/reimplementing anything
not GPLv3 compatible.  It's an especially futile idea when the sole
reason for doing the v3 update is to be able to cut a final release off
the branch and then close it.  Anyone who wants the code on the branch
can easily just check it out or use a snapshot; there's no need to go
to all that trouble just to make a release.

Brian


Forward propagation before register allocation

2008-03-16 Thread Andy H
I have been working on the AVR port and have come across many instances 
where poor code is produced due to the absence of effective forward 
propagation of operands before register allocation.


The AVR target in particular benefits from the register lowering pass, 
as many physical registers and instructions are only 8 bits.  However, 
most of the opportunities this creates for register and instruction 
elimination are not realized, as the only following pass that can help 
is register allocation, which has minimal propagation capabilities and 
no instruction simplifications.


Similarly, instructions split at the combine stage and before reload do 
not realise the expected benefits from splitting into 8-bit operations. 
In fact, if any new pseudos are created by a split, these often turn 
into new hard registers.


Targets other than the AVR which have small sized registers (i86 and 
68HC11 for example) are likely to have the same issues.


There have been suggestions in the past to split all instructions via 
the RTL expanders.  Indeed, I and others have explored this.  However, 
it has proved fruitless, as it also requires converting to a non-CC0 
target.  At this time that cannot be realized, due to problems 
reloading addresses and reloading arithmetic carry operations.


Hybrid approaches that split some instructions using RTL expanders have 
also been tried.  But the resultant complexity (some split, some not) 
then defeats early RTL optimizations, so we lose more than we gain.


With an additional forward propagation pass prior to register 
allocation, the full benefits of subreg lowering and splitting are 
obtained.


I realise that nobody is keen on new passes, yet I can see no other way 
in which this problem can be addressed.

From a target viewpoint, moving fwprop2 later would be fine.  Adding a 
target-dependent pass (before reload) would also be fine.  Indeed, any 
way we could get better forward propagation/simplification after 
combine/lowering and before register allocation would be great.


Can I humbly ask that the maintainers of gcc seriously consider this 
request and provide some means by which to solve the issue?


regards

Andy


Re: [trunk] Addition to subreg section of rtl.text.

2008-03-16 Thread Richard Kenner
> It is seldom necessary to wrap hard registers in @code{subreg}s;
> such registers would normally reduce to a single @code{reg} rtx.

Are these valid?  I know we've gone back and forth, but I thought the
current position is that SUBREGs of hard regs are only allowed
transitorily (e.g., during reload), but must be converted into a hard
reg by the end of the pass.


Re: Been Looking how gcc operates there is a major weaknesses in its optimiser.

2008-03-16 Thread Ian Lance Taylor
"Peter Dolding" <[EMAIL PROTECTED]> writes:

> Since test is in a different object file, it is completely skipped by
> the optimiser, even though the call should be optimised out.

http://gcc.gnu.org/wiki/LTO_Driver

Ian


Re: Been Looking how gcc operates there is a major weaknesses in its optimiser.

2008-03-16 Thread Peter Dolding

Ian Lance Taylor wrote:

"Peter Dolding" <[EMAIL PROTECTED]> writes:

  

Since test is in a different object file it gets completely skiped
from optimising even that it should be optimised out.



http://gcc.gnu.org/wiki/LTO_Driver

Ian
  
OK, that is half of my idea: let it be sorted out at link stage.  But 
that does not cover things like libc and dynamic dependencies.


The advantage of being able to generate a list of solvable functions, 
together with what is needed to solve them, is that it allows the 
smallest and fastest code possible.


Just as gcc can already solve maths functions, solvable storage of 
some form would allow the same kind of solving to be done with all the 
libs on the system.


Yes, I had never found that page.

Peter Dolding


Re: gcc-4.1-20080303 is now available

2008-03-16 Thread Kaveh R. GHAZI
On Sun, 16 Mar 2008, Mark Mitchell wrote:

> Richard Guenther wrote:
>
> >>  I support the final-release-then-close approach.  But can we get a
> >>  volunteer to convert that branch to GPLv3... ?
> >
> > I strongly object to moving the 4.1 branch to GPLv3.
>
> I too think that it would be a bad idea to switch the 4.1 branch to
> GPLv3,

Can you please elaborate why?  I've heard one rationale, which is that
distributors would need to maintain separate forks of their 4.1 trees.  I
agree that's bad, up to a point.  But distributors have always maintained
their own set of internal patches to some extent.  E.g. the Debian results
list lots of internal patches:
http://gcc.gnu.org/ml/gcc-testresults/2008-03/msg01125.html
http://gcc.gnu.org/ml/gcc-testresults/2008-03/msg01123.html
http://gcc.gnu.org/ml/gcc-testresults/2008-03/msg01124.html

However there is a class of users who don't get their compiler from
distributors, but who also want the safety of using official releases and
not some random svn checkout.  These users are missing one year's worth of
bugfixes.  They may not want to upgrade to 4.2.x for technical reasons.

So how far do we, wearing our FSF maintainer hats, bend over to
accommodate distributors at the expense of other classes of users?
By "how far" I mean for how long do we maintain the status quo?  If we
agree at some point to close the branch, it's only fair to do a final
release so the latter class of users get the same benefit that the clients
of distributors have had.

Thanks,
--Kaveh
--
Kaveh R. Ghazi  [EMAIL PROTECTED]


Re: gcc-4.1-20080303 is now available

2008-03-16 Thread Gerald Pfeifer
On Sun, 16 Mar 2008, Kaveh R. GHAZI wrote:
> However there is a class of users who don't get their compiler from
> distributors, but who also want the safety of using official releases and
> not some random svn checkout.  These users are missing one year's worth of
> bugfixes.  They may not want to upgrade to 4.2.x for technical reasons.

My pragmatic answer is that we are doing those weekly snapshots, which
for a stable branch like 4.1 are more or less of the same quality as
any release would be at this point, and that updating the license for
a branch this mature is simply a bad thing, as it massively violates
the expectations anyone would have at this point in the lifecycle.

Gerald


Re: -B vs Multilib

2008-03-16 Thread Greg Schafer
On Wed, Mar 12, 2008 at 10:44:48PM +1100, Greg Schafer wrote:

> Currently, -B doesn't add the multilib search paths when processing
> startfile_prefixes. For example, -B $prefix/lib/ doesn't find startfiles in
> $prefix/lib/../lib64
> 
> Most other calls to add_prefix() in gcc.c that refer to startfile_prefixes
> do actually process the multilibs. Is there any good reason why -B needs to
> be different? Maybe there are assumptions in the GCC build itself that would
> break if this were to change.

Ok, finally got around to trying some builds. The change I tested was this:

diff -Naur gcc-4.3.0.orig/gcc/gcc.c gcc-4.3.0/gcc/gcc.c
--- gcc-4.3.0.orig/gcc/gcc.c 2008-03-02 22:55:19.0 +
+++ gcc-4.3.0/gcc/gcc.c 2008-03-16 21:39:07.0 +
@@ -3854,7 +3854,7 @@
add_prefix (&exec_prefixes, value, NULL,
PREFIX_PRIORITY_B_OPT, 0, 0);
add_prefix (&startfile_prefixes, value, NULL,
-   PREFIX_PRIORITY_B_OPT, 0, 0);
+   PREFIX_PRIORITY_B_OPT, 0, 1);
add_prefix (&include_prefixes, value, NULL,
PREFIX_PRIORITY_B_OPT, 0, 0);
n_switches++;

Initial signs were encouraging because a standard x86_64 bootstrap with
multilib enabled was successful. However, some non-standard build scenarios
resulted in build failure when building the 32-bit libgomp (configure test
failed to find -lgcc). It seems there are indeed some subtleties introduced
by the above as evidenced by this snippet of diffed -print-search-dirs -m32
output:

-/temptools/src/gcc-build/./gcc/32/
+/temptools/src/gcc-build/./gcc/../lib/

Therefore, for the time being I withdraw any proposal to modify the -B
behaviour.  'Tis a shame though, because it has the potential to solve
the other 4.3 driver problem I reported earlier.

Regards
Greg