RE: Information regarding -fPIC support for Interix gcc

2007-03-23 Thread Mayank Kumar
Ok, since I didn't get any pointers in this area,
I have a more generic question now for everybody:

 I am new to gcc development as well as its architecture. I am looking to
fix the -fPIC issue for Interix. So far I have found that a shared library
compiled with -fPIC crashes due to a wrong assembly instruction (a jmp
instruction) embedded in a function, which causes it to jump
unconditionally to a different function altogether, whereas the C code has
no such jumps or function calls.
Can somebody point me to the part of the source code that I should look
into for this? I mean:
1: the part which is responsible for generating this code from C code.
2: the part of the gcc code where -fPIC is handled.
3: any other pointers for investigating this would be helpful.

Thanks
Mayank


-----Original Message-----
From: Paul Brook [mailto:[EMAIL PROTECTED]
Sent: Friday, March 23, 2007 2:24 AM
To: gcc@gcc.gnu.org
Cc: Mayank Kumar
Subject: Re: Information regarding -fPIC support for Interix gcc

On Thursday 22 March 2007 20:20, Mayank Kumar wrote:
> I work for Microsoft SFU(services for unix) group and I am currently
> investigating this fPIC issue for gcc 3.3 which is available with sfu 3.5.

gcc3.3 is really quite old, and hasn't been maintained for quite some time.
You're unlikely to get a particularly useful response from this list (or any
volunteer gcc developers) unless you're working with current gcc.

Of course there are organisations who will provide you with commercial support
for older gcc releases. That's a separate issue though.

Paul


Re: We're out of tree codes; now what?

2007-03-23 Thread Benjamin Kosnik

> I don't have these around, and I mistakenly updated my tree, so the
> numbers below are, unfortunately, incomparable to the numbers above.
> The disturbing fact is that mainline seems to be significantly slower
> now than it was in my previous tests (from just a few days ago), and
> the slowdown (20%) is much greater than any of the slowdowns we've
> been discussing in this thread. Has anyone else noticed this, or
> perhaps it's something in my environment?

Yes. This first started oscillating about a week or so ago.

http://www.suse.de/~gcctest/c++bench/tramp3d/

Day-to-day resolution is hard to see on these graphs. And, what we
really need is version-to-version resolution anyway...

It's a bummer because up until this point, mainline was making real
progress on compile-time issues. (Of course, anything is going to look
fast compared to 4.2.0.)

FWIW, I liked your earlier idea that there could be a "speed credit",
whereby people who have achieved a 4% speed increase could then slow down
the compiler 2% in the future, or something similar. To enforce this,
we'd need a compile-time check as part of "make check", then.

I am too unfamiliar with this part of gcc to know if there are repeat
offenders, but this might be a way of making all developers more aware
of compile-time costs.

-benjamin


Re: GCC 4.2.0 Status Report (2007-03-22)

2007-03-23 Thread Richard Guenther

On 3/23/07, Mark Mitchell <[EMAIL PROTECTED]> wrote:

> Mark Mitchell wrote:
> > There are still a number of GCC 4.2.0 P1s, including the following which
> > are new in GCC 4.2.0 (i.e., did not occur in GCC 4.1.x), together with
> > -- as near as I can tell, based on Bugzilla -- the responsible parties.
> >
> > PR 29585 (Novillo): ICE-on-valid
> > PR 30700 (Sayle): Incorrect constant generation
>
> Sorry, that's PR 30704.
>
> PR 30700 is also critical:
>
> PR 30700 (Guenther): Incorrect undefined reference


Unfortunately Honza suggested my proposed fix is not the right one and I'm out
of cgraph-fu here.  Honza, can you have a look at that PR please?

Thanks,
Richard.


Re: We're out of tree codes; now what?

2007-03-23 Thread Richard Guenther

On 3/23/07, Benjamin Kosnik <[EMAIL PROTECTED]> wrote:


> > I don't have these around, and I mistakenly updated my tree, so the
> > numbers below are, unfortunately, incomparable to the numbers above.
> > The disturbing fact is that mainline seems to be significantly slower
> > now than it was in my previous tests (from just a few days ago), and
> > the slowdown (20%) is much greater than any of the slowdowns we've
> > been discussing in this thread. Has anyone else noticed this, or
> > perhaps it's something in my environment?
>
> Yes. This first started oscillating about a week or so ago.
>
> http://www.suse.de/~gcctest/c++bench/tramp3d/
>
> Day-to-day resolution is hard to see on these graphs. And, what we
> really need is version-to-version resolution anyway...


Note that this particular tester is also used to test effects of patches before
they hit mainline.  This one:
http://www.suse.de/~gcctest/c++bench-haydn/tramp3d/
isn't.

Version-to-version resolution is really hard, but usually it's easy to
pinpoint an
offending patch.

And btw, I also prefer option (2); on 64-bit hosts it will even have
zero memory usage impact.

Richard.


Can't bootstrap gcc (revision 123155) trunk on cygwin: configure: error: C compiler cannot create executables [configure-stage2-intl] Error 77

2007-03-23 Thread Christian Joensson

This was on

Windows XP/SP2 cygwin on pentium4 single i686:

binutils 20060817-1
bison2.3-1
cygwin   1.5.24-2
dejagnu  20021217-2
expect   20030128-1
gcc  3.4.4-3
gcc-ada  3.4.4-3
gcc-g++  3.4.4-3
gmp  4.2.1-1
make 3.81-1
tcltk20060202-1
w32api   3.8-1

LAST_UPDATED: Fri Mar 23 10:13:01 UTC 2007 (revision 123155)

configure: ../gcc/configure --enable-languages=c --disable-nls

For some reason, as yet unknown to me, I don't seem to be able to
bootstrap gcc trunk on cygwin due to some issue with configuring in
intl:

checking whether NLS is requested... no
checking for msgfmt... /usr/bin/msgfmt
checking for gmsgfmt... /usr/bin/msgfmt
checking for xgettext... /usr/bin/xgettext
checking for msgmerge... /usr/bin/msgmerge
checking for i686-pc-cygwin-gcc...
/usr/local/src/trunk/objdir/./prev-gcc/xgcc
-B/usr/local/src/trunk/objdir/./prev-gcc/
-B/usr/local/i686-pc-cygwin/bin/
checking for C compiler default output file name... configure: error:
C compiler cannot create executables
See `config.log' for more details.
make[2]: *** [configure-stage2-intl] Error 77
make[2]: Leaving directory `/usr/local/src/trunk/objdir'
make[1]: *** [stage2-bubble] Error 2
make[1]: Leaving directory `/usr/local/src/trunk/objdir'
make: *** [all] Error 2

the configure in intl works for stage1 (and stage0 so to speak),
attached is the intl/config.log

--
Cheers,

/ChJ


config.log
Description: Binary data


Re: We're out of tree codes; now what?

2007-03-23 Thread Kaveh R. GHAZI
On Thu, 22 Mar 2007, Mike Stump wrote:

> I did some quick C measurements compiling expr.o from the top of the
> tree, with an -O0 built compiler with checking:
> [...]
> I'll accept a  0.15% compiler.

Hi Mike,

When I brought up the 16-bit option earlier, Jakub replied that x86 would
get hosed worse because its 16-bit accesses are not as efficient as its
8- or 32-bit ones.

http://gcc.gnu.org/ml/gcc/2007-03/msg00763.html

I assume you tested on Darwin?  Can you tell me if it was ppc or x86?

Thanks,
--Kaveh
--
Kaveh R. Ghazi  [EMAIL PROTECTED]


Re: We're out of tree codes; now what?

2007-03-23 Thread Doug Gregor

On 3/23/07, Kaveh R. GHAZI <[EMAIL PROTECTED]> wrote:

> When I brought up the 16-bit option earlier, Jakub replied that x86 would
> get hosed worse because it's 16-bit accesses are not as efficient as it's
> 8 or 32 bit ones.
>
> http://gcc.gnu.org/ml/gcc/2007-03/msg00763.html
>
> I assume you tested on Darwin?  Can you tell me if it was ppc or x86?


I tested on x86 (i686-pc-linux-gnu; processor is an Intel Core 2 Duo
E6600) and found a 1% slowdown with 16-bit codes vs. 8-bit codes.

 Cheers,
 Doug


RE: Can't bootstrap gcc (revision 123155) trunk on cygwin: configure: error: C compiler cannot create executables [configure-stage2-intl] Error 77

2007-03-23 Thread Dave Korn
On 23 March 2007 12:00, Christian Joensson wrote:

> For some reason, yet unknow to me, I don't seem to be able to
> bootstrap gcc trunk on cygwin due to some issue with configuring in
> intl:

  It's generic.  
 
> checking for C compiler default output file name... configure: error:
> C compiler cannot create executables
> See `config.log' for more details.
> make[2]: *** [configure-stage2-intl] Error 77
> make[2]: Leaving directory `/usr/local/src/trunk/objdir'
> make[1]: *** [stage2-bubble] Error 2
> make[1]: Leaving directory `/usr/local/src/trunk/objdir'
> make: *** [all] Error 2
> 
> the configure in intl works for stage1 (and stage0 so to speak),
> attached is the intl/config.log

  See http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31039
 also http://cygwin.com/ml/cygwin/2007-03/msg00674.html
  http://cygwin.com/ml/cygwin/2007-03/msg00676.html
  http://cygwin.com/ml/cygwin/2007-03/msg00705.html

  (Running a build in the background myself to see if I can see what's going
on.)


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: We're out of tree codes; now what?

2007-03-23 Thread H. J. Lu
On Fri, Mar 23, 2007 at 09:29:05AM -0400, Doug Gregor wrote:
> On 3/23/07, Kaveh R. GHAZI <[EMAIL PROTECTED]> wrote:
> >When I brought up the 16-bit option earlier, Jakub replied that x86 would
> >get hosed worse because it's 16-bit accesses are not as efficient as it's
> >8 or 32 bit ones.
> >
> >http://gcc.gnu.org/ml/gcc/2007-03/msg00763.html
> >
> >I assume you tested on Darwin?  Can you tell me if it was ppc or x86?
> 
> I tested on x86 (i686-pc-linux-gnu; processor is an Intel Core 2 Duo
> E6600) and found a 1% slowdown with 16-bit codes vs. 8-bit codes.

Gcc isn't very efficient at bitfield operations. The main issue is
SLOW_BYTE_ACCESS. It is poorly documented and implemented:

http://gcc.gnu.org/ml/gcc-patches/2006-08/msg00885.html
http://gcc.gnu.org/ml/gcc-patches/2006-09/msg00029.html
http://gcc.gnu.org/ml/gcc-patches/2006-10/msg00705.html

Changing SLOW_BYTE_ACCESS on x86 will make some bitfield operations
go faster while slowing down others. I believe Apple has some testcases
for it. I may also have some testcases.


H.J.


Re: We're out of tree codes; now what?

2007-03-23 Thread Marc Espie
In article <[EMAIL PROTECTED]> you write:
>On 19 Mar 2007 19:12:35 -0500, Gabriel Dos Reis <[EMAIL PROTECTED]> wrote:
>> similar justifications for yet another small% of slowdown have been
>> given routinely for over 5 years now.  small% build up; and when they
>> build up, they don't need to be convincing ;-)
>
>But what is the solution? We can complain about performance all we
>want (and we all love to do this), but without a plan to fix it we're
>just wasting effort. Shall we reject every patch that causes a slow
>down? Hold up releases if they are slower than their predecessors?
>Stop work on extensions, optimizations, and bug fixes until we get our
>compile-time performance back to some predetermined level?

Simple sociology.

Working on new optimizations = sexy.
Trimming down excess weight = unsexy.

GCC being vastly a volunteer project, it's much easier to find people
who want to work on their pet project and implement a recent
optimization they found in a nice paper (one that will gain 0.5% in some
awkward case) than to find people who will track down slowdowns and
reverse them.

I remember when people started converting gcc from RTL to SSA; I was
truly excited. Finally we were doing the right thing, and compile times
were going to come back down to where they belong.

And then came disappointment, as the SSA stuff just got added on top of
the RTL stuff, and the RTL stuff that was supposed to vanish is taking
forever to go away...

Parts of GCC are over-engineered. I used to be able to read the 
__attribute__ stuff, then it got refactored, and the new code looks like 
it's going to be about 3 or 4 times slower than it was.

At some point, it's going to be really attractive to start again from
scratch, without all the backends/frontend complexities and interactions
that make cleaning up stuff harder and harder...

Also, I have the feeling that quite a few of gcc sponsors are in it for
the publicity mostly (oh look, we're nice people giving money to gcc),
and new optimization passes that get 0.02% out of SPEC are better bang
for their money.

Kudos go to the people who actually manage to reverse some of the
excesses of the new passes.


Re: Information regarding -fPIC support for Interix gcc

2007-03-23 Thread Ian Lance Taylor
Mayank Kumar <[EMAIL PROTECTED]> writes:

> Ok, since I didn't get any pointers in this area.
> I have a more generic question now to everybody:-
> 
>  I am new to gcc development as well as its architecture. I am looking 
> forward to fix the -fPIC issue for Interix. As of now I found that a shared 
> library compiled with fPIC crashes due to some wrong assembly instructions(a 
> jmp instruction) embedded into a function call which cause it to jump 
> unconditionally to a different function altogether whereas the c code has no 
> such jumps or function calls.
> Can some body point me to the part of source code that I should look into for 
> this. I mean:-

These are all rather difficult questions to answer succinctly.  gcc is
a large code base.  It is not organized in a way which makes it simple
to answer this sort of question.

> 1: the part which is responsible for generating this code from c code.

If by "this code" you mean inserting a jmp instruction, there are many
possibilities.  The first one you should look at is that, at least on
some x86 platforms, gcc intentionally calls __i686.get_pc_thunk.bx as
part of setting up the PIC register.  This looks like a different
function, but it is just a tiny helper routine.
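For reference, the pattern Ian describes typically looks like the following in gcc's i386 output (a sketch of the usual -fPIC prologue, not taken from the Interix port itself):

```asm
func:
        call    __i686.get_pc_thunk.bx        # looks like a jump away, but...
        addl    $_GLOBAL_OFFSET_TABLE_, %ebx  # ...it only sets up the GOT pointer

__i686.get_pc_thunk.bx:                       # the tiny helper routine
        movl    (%esp), %ebx                  # copy return address into %ebx
        ret
```

A crash here usually means the GOT pointer in %ebx ended up wrong for the target's object format, so later PLT/GOT-relative jumps land in the wrong function.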

> 2: the part of the gcc code where -fPIC is being handled.

It is handled in a number of places.  Search for flag_pic.  For i386
in particular the most exciting place is probably
legitimize_pic_address.

> 3: any other pointers to investigating this would be helpful.

Reading the gcc internals manual?

Ian


The Linux binutils 2.17.50.0.14 is released

2007-03-23 Thread H. J. Lu
This is the beta release of binutils 2.17.50.0.14 for Linux, which is
based on binutils 2007 0322 in CVS on sourceware.org plus various
changes. It is purely for Linux.

All relevant patches in the patches/ directory have been applied to the
source tree. You can take a look at patches/README to see what has been
applied and in what order.

Starting from the 2.17.50.0.4 release, the default output section LMA
(load memory address) has changed for allocatable sections from being
equal to VMA (virtual memory address), to keeping the difference between
LMA and VMA the same as the previous output section in the same region.

For

.data.init_task : { *(.data.init_task) }

LMA of .data.init_task section is equal to its VMA with the old linker.
With the new linker, it depends on the previous output section. You
can use

.data.init_task : AT (ADDR(.data.init_task)) { *(.data.init_task) }

to ensure that the LMA of the .data.init_task section is always equal to
its VMA. The linker script in older 2.6 x86-64 kernels depends on the
old behavior.  You can add AT (ADDR(section)) to force the LMA of the
.data.init_task section to equal its VMA; this works with both old
and new linkers. The x86-64 kernel linker script in kernel 2.6.13 and
above is OK.

The new x86_64 assembler no longer accepts

monitor %eax,%ecx,%edx

You should use

monitor %rax,%ecx,%edx

or
monitor

which works with both old and new x86_64 assemblers. They should
generate the same opcode.

The new i386/x86_64 assemblers no longer accept instructions for moving
between a segment register and a 32bit memory location, i.e.,

movl (%eax),%ds
movl %ds,(%eax)

To generate instructions for moving between a segment register and a
16bit memory location without the 16bit operand size prefix, 0x66,

mov (%eax),%ds
mov %ds,(%eax)

should be used. It will work with both new and old assemblers. The
assembler starting from 2.16.90.0.1 will also support

movw (%eax),%ds
movw %ds,(%eax)

without the 0x66 prefix. Patches for 2.4 and 2.6 Linux kernels are
available at

http://www.kernel.org/pub/linux/devel/binutils/linux-2.4-seg-4.patch
http://www.kernel.org/pub/linux/devel/binutils/linux-2.6-seg-5.patch

The ia64 assembler is now defaulted to tune for Itanium 2 processors.
To build a kernel for Itanium 1 processors, you will need to add

ifeq ($(CONFIG_ITANIUM),y)
CFLAGS += -Wa,-mtune=itanium1
AFLAGS += -Wa,-mtune=itanium1
endif

to arch/ia64/Makefile in your kernel source tree.

Please report any bugs related to binutils 2.17.50.0.14 to [EMAIL PROTECTED]

and

http://www.sourceware.org/bugzilla/

Changes from binutils 2.17.50.0.13:

1. Update from binutils 2007 0322.
2. Fix >16byte nop padding regression in x86 assembler.
3. Fix x86-64 disassembler for xchg. PR 4218.
4. Optimize opcode for x86-64 xchg.
5. Allow register operand with x86 nop.
6. Properly handle holes between sections for PE-COFF. PR 4210.
7. Print more PE-COFF info for objdump -p.
8. Report missing matching LO16 relocation for HI16 relocation in mips
linker.
9. Use PC-relative relocation for Win64.
10. Fix strip for Solaris. PR 3535.
11. Fix a C++ demangler crash.
12. Some m32c update.
13. Fix misc ARM bugs.

Changes from binutils 2.17.50.0.12:

1. Update from binutils 2007 0315.
2. Add EFI/x86-64 support.
3. Fix ELF linker for relocation against STN_UNDEF. PR 3958.
4. Fix ELF linker for SHT_NOBITS section whose VMA > page size. PR 4144.
5. Make assembler and disassembler consistent for "test %eax,%ebx". PR
4027.
6. Fix i386 32bit address wraparound. PR 3966.
7. Allow Linux/i386 linker to read FreeBSD/i386 object files.
8. Fix ELF linker crash upon use of .gnu.warning. sections. PR
3953.
9. Fix ELF linker to issue an error on bad section in segment. PR 4007.
10. Support enabling both x86_64-mingw32 and i386-mingw32. PR 3945.
11. Fix assembler to stabilize .gcc_except_table relaxation. PR 4029.
12. Fix a MIPS linker crash. PR 3852.
13. Fix readelf for h8300-elf. PR 3800.
14. Fix strip for Solaris. PR 3535.
15. Misc xtensa bug fixes.
16. Misc PPC bug fixes.
17. Misc SPU bug fixes.
18. Add support for Toshiba MeP.

Changes from binutils 2.17.50.0.11:

1. Update from binutils 2007 0128.
2. Remove duplicate code in x86 assembler.
3. Fix 32bit and 64bit HPPA/ELF.

Changes from binutils 2.17.50.0.10:

1. Update from binutils 2007 0125.
2. Support environment variables, LD_SYMBOLIC for -Bsymbolic and
LD_SYMBOLIC_FUNCTIONS for -Bsymbolic-functions.
3. Build binutils rpm with LD_SYMBOLIC_FUNCTIONS=1 and reduce PLT
relocations in libfd.so by 84%.
4. Enable sharable sections only for ia32, x86-64 and ia64.
5. Properly handle PT_GNU_RELRO segment for objcopy.

Changes from binutils 2.17.50.0.9:

1. Update from binutils 2007 0122.
2. Implement sharable section proposal for ia32, x86-64 and ia64:

http://groups-beta.google.com/group/generic-abi

3. Implement linker enhancement, -Bsymbolic-functions,
--dynamic-list-cpp-new and --dyna

A question on ACX_BUGURL

2007-03-23 Thread H. J. Lu
ACX_BUGURL has

   [case "$withval" in
  yes) AC_MSG_ERROR([bug URL not specified]) ;;
  no)  REPORT_BUGS_TO="";
   REPORT_BUGS_TEXI=""
   ;;
  *)   REPORT_BUGS_TO="<$withval>"
   REPORT_BUGS_TEXI="@uref{`echo $withval | sed 's/@/@@/g'`}"
   ;;
 esac],
 REPORT_BUGS_TO="<$1>"
 REPORT_BUGS_TEXI="@uref{$1}"

It assumes there is no @ in $1. Shouldn't it be

 REPORT_BUGS_TEXI="@uref{`echo $1 | sed 's/@/@@/g'`}"
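The substitution in question can be checked in a plain shell: Texinfo requires '@' to be doubled inside @uref, so a URL containing '@' must be escaped (the URL below is invented for illustration).

```shell
# Escape '@' for Texinfo's @uref, as the sed expression above does.
url='http://example.org/bugs?contact=user@host'
escaped=`echo "$url" | sed 's/@/@@/g'`
echo "@uref{$escaped}"
# A bare "@uref{$url}" would hand Texinfo an unescaped '@'.
```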


H.J.


Re: We're out of tree codes; now what?

2007-03-23 Thread Manuel López-Ibáñez

On 23/03/07, Marc Espie <[EMAIL PROTECTED]> wrote:


> GCC being vastly a volunteer project,


Actually, if you monitor gcc-patches and the subversion commits for
a while, you will realise that that statement is factually wrong. Most
of the code comes from individuals who are paid to work on GCC [*] by
Red Hat, Google, Sourceware, SUSE, Sony, IBM, etc., as happens in
the Linux kernel as well. Moreover, one problem that is mentioned from
time to time is the difficulty of attracting new volunteer contributors,
for various reasons.

Since from a false premise anything may be concluded...

Cheers,

Manuel.


[*] Notwithstanding that most of them put in more effort and time than
the hours they are paid for.


Re: A question on ACX_BUGURL

2007-03-23 Thread Paolo Bonzini

> It assumes there is no @ in $1. Shouldn't be
> 
>  REPORT_BUGS_TEXI="@uref{`echo $1 | sed 's/@/@@/g'`}"

Seems fair, but please check all the users, they might be escaping the
value already.

Paolo


Re: We're out of tree codes; now what?

2007-03-23 Thread Andrew Haley
Manuel López-Ibáñez writes:
 > On 23/03/07, Marc Espie <[EMAIL PROTECTED]> wrote:
 > >
 > > GCC being vastly a volunteer project,
 > 
 > Actually, if you monitored gcc-patches and the subversion commits for
 > a while, you will realise that that statement is factually wrong. Most
 > of the code comes from individuals that are paid to work in GCC [*] by
 > Red Hat, Google, Sourceware, Suse, Sony, IBM, etc,  as it happens in
 > the Linux kernel as well.

In which case, the companies concerned, rather than the individuals,
are volunteers: they have no contractual obligation to the FSF.  Marc
Espie's argument stands.

Andrew.


Re: A question on ACX_BUGURL

2007-03-23 Thread Joseph S. Myers
On Fri, 23 Mar 2007, H. J. Lu wrote:

> It assumes there is no @ in $1. Shouldn't be
> 
>  REPORT_BUGS_TEXI="@uref{`echo $1 | sed 's/@/@@/g'`}"

Feel free to refine it.  It's just that there are about three possible
users of these macros in the GCC and src trees, and I expected them all
to wish to use a bug database or instructions URL not containing '@' as
the default.

-- 
Joseph S. Myers
[EMAIL PROTECTED]


Re: We're out of tree codes; now what?

2007-03-23 Thread Marc Espie
In article <[EMAIL PROTECTED]> you write:
>On Mar 20, 2007, at 11:23 PM, Alexandre Oliva wrote:
>> As for configure scripts...  autoconf -j is long overdue ;-)

>Is that the option to compile autoconf stuff into fast running  
>efficient code?  :-)

>But seriously, I think we need to press autoconf into generating 100x  
>faster code 90% of the time.  Maybe prebundling answers for the  
>common targets...

Doesn't win all that much.

Over in OpenBSD, we tried to speed up the build of various software by
using a site cache for most of the system stuff.  Makes some configure
scripts go marginally faster, but you won't gain back more than about 5%.
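The site cache mentioned above is autoconf's config.site mechanism: a file of pre-seeded cache variables that configure sources before running its checks. A minimal sketch (the cache keys shown are common autoconf ones; the file name is illustrative, not OpenBSD's actual setup):

```shell
# Pre-seed autoconf cache variables so configure can skip those checks.
site=./config.site.example
cat > "$site" <<'EOF'
ac_cv_header_stdlib_h=${ac_cv_header_stdlib_h=yes}
ac_cv_c_bigendian=${ac_cv_c_bigendian=no}
EOF
# A build would then run:  CONFIG_SITE=$site ./configure ...
grep -c '^ac_cv_' "$site"
```

Only results that are genuinely host-invariant can safely go in such a file, which is exactly the limitation Marc describes next.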

Due to autoconf's design, there are lots of things you actually cannot
cache, because they interact in weird ways and are used in strange
contexts.
If you want to speed up autoconf, it needs to have some smarts about
what's going on. Not a huge list of generated variables, but what those
variables *mean*, semantically, and which value is linked to what
configuration option. You would then be able to avoid recomputing a lot
of things (and you would also probably be able to compress information
a lot... when I look at the list of configured stuff, there are so many
duplicates it's scary).  Autoconf also needs an actual database of
tests, so that people don't reinvent a square wheel all the time.

This would also solve the second autoconf plague: it's not only slow as
molasses, but it also `auto-detects' stuff when you don't want it to,
which leads to hard to reproduce builds, unless you start with an empty
machine every time (which has its own performance issue).

Even if it still has lots of shortcomings (a large pile of C++ code to
compile first), I believe a replacement like cmake shows a lot of
promise there...

In my opinion, after spending years *fighting* configure issues in
making programs compile correctly under OpenBSD, I believe the actual
database of tests is the only thing worth saving in autoconf.
I don't know what the actual `good design' would be, but I'm convinced
using m4 as a lisp interpreter to generate shell scripts is a really bad
idea.


Tobias Burnus and Brooks Moses appointed Fortran maintainers

2007-03-23 Thread David Edelsohn
I am pleased to announce that the GCC Steering Committee has
appointed Tobias Burnus and Brooks Moses as Fortran maintainers.

Please join me in congratulating Tobias and Brooks on their new role.
Tobias and Brooks, please update your listings in the MAINTAINERS file.

Happy hacking!
David



Ayal Zaks appointed Modulo Scheduler maintainer

2007-03-23 Thread David Edelsohn
I am pleased to announce that the GCC Steering Committee has
appointed Ayal Zaks as Modulo Scheduler maintainer.

Please join me in congratulating Ayal on his new role.
Ayal, please update your listings in the MAINTAINERS file.

Happy hacking!
David



Re: A question on ACX_BUGURL

2007-03-23 Thread H. J. Lu
On Fri, Mar 23, 2007 at 04:57:03PM +, Joseph S. Myers wrote:
> On Fri, 23 Mar 2007, H. J. Lu wrote:
> 
> > It assumes there is no @ in $1. Shouldn't be
> > 
> >  REPORT_BUGS_TEXI="@uref{`echo $1 | sed 's/@/@@/g'`}"
> 
> Feel free to refine it.  It's just there are about three possible users of 
> these macros in the GCC and src trees and I expected them all to wish to 
> use a bug database or instructions URL not containing '@' as the default.
> 

I am applying this patch and will regenerate bfd/configure.

H.J.

2007-03-23  H.J. Lu  <[EMAIL PROTECTED]>

* acx.m4 (ACX_BUGURL): Replace "@" with "@@" for
REPORT_BUGS_TEXI.

--- config/acx.m4.url   2007-03-23 09:35:15.0 -0700
+++ config/acx.m4   2007-03-23 09:49:08.0 -0700
@@ -585,7 +585,7 @@ AC_DEFUN([ACX_BUGURL],[
   ;;
  esac],
  REPORT_BUGS_TO="<$1>"
- REPORT_BUGS_TEXI="@uref{$1}"
+ REPORT_BUGS_TEXI="@uref{`echo $1 | sed 's/@/@@/g'`}"
   )
   AC_SUBST(REPORT_BUGS_TO)
   AC_SUBST(REPORT_BUGS_TEXI)


Re: We're out of tree codes; now what?

2007-03-23 Thread Richard Kenner
> In which case, the companies concerned, rather than the individuals,
> are volunteers: they have no contractual obligation to the FSF.  Marc
> Espie's argument stands.

I don't see that.  They are "volunteers" in terms of what they choose to
contribute to the FSF, but not at all such in terms of what they *work on*.
And I thought that discussion was about the latter.


Re: We're out of tree codes; now what?

2007-03-23 Thread Manuel López-Ibáñez

On 23/03/07, Richard Kenner <[EMAIL PROTECTED]> wrote:

> > In which case, the companies concerned, rather than the individuals,
> > are volunteers: they have no contractual obligation to the FSF.  Marc
> > Espie's argument stands.
>
> I don't see that.  They are "volunteers" in terms of what they choose to
> contribute to the FSF, but not at all such in terms of what they *work on*.
> And I thought that discussion was about the latter.



I refrained from replying that because I think discussing the
semantics of the word 'volunteer' when applied to for-profit companies
or more precisely for individuals that work in for-profit companies is
way off-topic and not very helpful in order to solve the problem at
hand.

Cheers,

Manuel.


RE: We're out of tree codes; now what?

2007-03-23 Thread Dave Korn
On 23 March 2007 17:01, Marc Espie wrote:

> In article <[EMAIL PROTECTED]> you write:
>> On Mar 20, 2007, at 11:23 PM, Alexandre Oliva wrote:
>>> As for configure scripts...  autoconf -j is long overdue ;-)
> 
>> Is that the option to compile autoconf stuff into fast running
>> efficient code?  :-)

  Nah, it's the parallel autoconf flag.

>> But seriously, I think we need to press autoconf into generating 100x
>> faster code 90% of the time.  Maybe prebundling answers for the
>> common targets...
> 
> Doesn't win all that much.

  Here's a thought, off the top of my head and not deeply considered because
it's friday afternoon...

  The main overhead in autoconf is all the forking and spawning: lots
and lots of operations, each involving large numbers of small files
processed one at a time with one or more of a number of tools.

  It would be kinda neat if, instead of repeating this endless cycle of
cat'ing here-docs into files, compiling those files, and executing or
grepping the results, it could cat the whole lot into one big file,
compile it once, then parse the error messages to spot the failing
compile tests (AC_COMPILE in particular), then re-cat the surviving
tests (well, just the execute tests) into another file, synthesize a
main() that invokes them one after another, and parse the output from
the whole lot in one go.

  I can see it being tricky to shoehorn execution tests all into the same
executable, but I do think it should be pretty likely that it would be
possible to teach it to do all the compile tests in a single compilation,
shouldn't it?
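The batching idea above can be sketched in a few lines of shell: gather several autoconf-style compile tests into one translation unit with per-test markers, so a single compiler invocation's diagnostics can be mapped back to individual tests. The test bodies, marker format, and file names here are all invented for illustration.

```shell
# Concatenate marked compile tests into one file for a single compile.
cat > conftest_all.c <<'EOF'
/* conftest:1: do we have long long? */
static long long conftest_1;
/* conftest:2: is __attribute__((unused)) accepted? */
static int conftest_2 __attribute__((unused));
EOF
# One compile for all tests; error line numbers map back to the markers:
#   cc -c conftest_all.c 2>errors.log
grep -c '/\* conftest:' conftest_all.c
```

The hard part Dave notes remains: one failing test can produce cascading diagnostics, so the batch would need to be re-split and the failures retried individually.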


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: A question on ACX_BUGURL

2007-03-23 Thread Andreas Schwab
"H. J. Lu" <[EMAIL PROTECTED]> writes:

>   REPORT_BUGS_TO="<$1>"
> - REPORT_BUGS_TEXI="@uref{$1}"
> + REPORT_BUGS_TEXI="@uref{`echo $1 | sed 's/@/@@/g'`}"

You need to quote $1.

Andreas.

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
PGP key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."


Re: A question on ACX_BUGURL

2007-03-23 Thread H. J. Lu
On Fri, Mar 23, 2007 at 06:55:38PM +0100, Andreas Schwab wrote:
> "H. J. Lu" <[EMAIL PROTECTED]> writes:
> 
> >   REPORT_BUGS_TO="<$1>"
> > - REPORT_BUGS_TEXI="@uref{$1}"
> > + REPORT_BUGS_TEXI="@uref{`echo $1 | sed 's/@/@@/g'`}"
> 
> You need to quote $1.

I treated it the same as

REPORT_BUGS_TEXI="@uref{`echo $withval | sed 's/@/@@/g'`}".

It works for me.


H.J.


Re: We're out of tree codes; now what?

2007-03-23 Thread Gabriel Dos Reis
"Manuel López-Ibáñez" <[EMAIL PROTECTED]> writes:

| On 23/03/07, Richard Kenner <[EMAIL PROTECTED]> wrote:
| > > In which case, the companies concerned, rather than the individuals,
| > > are volunteers: they have no contractual obligation to the FSF.  Marc
| > > Espie's argument stands.
| >
| > I don't see that.  They are "volunteers" in terms of what they choose to
| > contribute to the FSF, but not at all such in terms of what they *work on*.
| > And I thought that discussion was about the latter.
| >
| 
| I refrained from replying that because I think discussing the
| semantics of the word 'volunteer' when applied to for-profit companies
| or more precisely for individuals that work in for-profit companies is
| way off-topic and not very helpful in order to solve the problem at
| hand.

And you could have as well assumed that Marc Espie is not totally
ignorant of the GCC project social aspects.

[ as a matter of fact, his relaying of rumor reports surrounding EGCS
  back in summer 1997 partly convinced me to spend resources on EGCS,
  then GCC. ]

-- Gaby


RE: A question on ACX_BUGURL

2007-03-23 Thread Dave Korn
On 23 March 2007 18:11, H. J. Lu wrote:

> On Fri, Mar 23, 2007 at 06:55:38PM +0100, Andreas Schwab wrote:
>> "H. J. Lu" <[EMAIL PROTECTED]> writes:
>> 
>>>   REPORT_BUGS_TO="<$1>"
>>> - REPORT_BUGS_TEXI="@uref{$1}"
>>> + REPORT_BUGS_TEXI="@uref{`echo $1 | sed 's/@/@@/g'`}"
>> 
>> You need to quote $1.
> 
> I treated it the same as
> 
> REPORT_BUGS_TEXI="@uref{`echo $withval | sed 's/@/@@/g'`}".
> 
> It works for me.

  It's a url, right?  It could have colons and forward slashes, but I don't
think most of the remaining metacharacters would be valid in a URL, would
they?

cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: We're out of tree codes; now what?

2007-03-23 Thread Mike Stump

On Mar 23, 2007, at 6:08 AM, Kaveh R. GHAZI wrote:
> When I brought up the 16-bit option earlier, Jakub replied that x86
> would get hosed worse because it's 16-bit accesses

I'm happy to have experts make predictions.  I'm happy to look at
real numbers to double check things.  If an expert can predict what
we could measure to show us getting hosed, I'd be happy to run that
test.

> I assume you tested on Darwin?

Yes.

> Can you tell me if it was ppc or x86?

2.66 GHz x86.


Re: A question on ACX_BUGURL

2007-03-23 Thread H. J. Lu
On Fri, Mar 23, 2007 at 06:20:10PM -, Dave Korn wrote:
> On 23 March 2007 18:11, H. J. Lu wrote:
> 
> > On Fri, Mar 23, 2007 at 06:55:38PM +0100, Andreas Schwab wrote:
> >> "H. J. Lu" <[EMAIL PROTECTED]> writes:
> >> 
> >>>   REPORT_BUGS_TO="<$1>"
> >>> - REPORT_BUGS_TEXI="@uref{$1}"
> >>> + REPORT_BUGS_TEXI="@uref{`echo $1 | sed 's/@/@@/g'`}"
> >> 
> >> You need to quote $1.
> > 
> > I treated it the same as
> > 
> > REPORT_BUGS_TEXI="@uref{`echo $withval | sed 's/@/@@/g'`}".
> > 
> > It works for me.
> 
>   It's a url, right?  It could have colons and forward slashes, but I don't
> think most of the remaining metacharacters would be valid in a URL, would
> they?
> 

If we want to change it, we should first unify the argument processing
with something like the change below.


H.J.
---
--- acx.m4.url  2007-03-23 10:14:25.0 -0700
+++ acx.m4  2007-03-23 11:51:24.0 -0700
@@ -577,16 +577,23 @@ AC_DEFUN([ACX_BUGURL],[
[Direct users to URL to report a bug]),
 [case "$withval" in
   yes) AC_MSG_ERROR([bug URL not specified]) ;;
-  no)  REPORT_BUGS_TO="";
-  REPORT_BUGS_TEXI=""
+  no)  BUGURL="";
   ;;
-  *)   REPORT_BUGS_TO="<$withval>"
-  REPORT_BUGS_TEXI="@uref{`echo $withval | sed 's/@/@@/g'`}"
+  *)   BUGURL="$withval"
   ;;
  esac],
- REPORT_BUGS_TO="<$1>"
- REPORT_BUGS_TEXI="@uref{`echo $1 | sed 's/@/@@/g'`}"
+ BUGURL="$1"
   )
+  case ${BUGURL} in
+  "")
+REPORT_BUGS_TO=""
+REPORT_BUGS_TEXI=""
+;;
+  *)
+REPORT_BUGS_TO="<$BUGURL>"
+REPORT_BUGS_TEXI="@uref{`echo $BUGURL | sed 's/@/@@/g'`}"
+;;
+  esac;
   AC_SUBST(REPORT_BUGS_TO)
   AC_SUBST(REPORT_BUGS_TEXI)
 ])


SoC Project: Propagating array data dependencies from Tree-SSA to RTL

2007-03-23 Thread Alexander Monakov

Hello,

I would like to submit the following project for Google Summer of Code:

Propagating array data dependence information from Tree-SSA to RTL

Synopsis:

The RTL array data dependence analyzer was written specifically for the
swing modulo scheduling (SMS) implementation in GCC.  It is overly
conservative, because it uses RTL alias analysis to find intra- and
inter-loop memory dependencies.  It also assumes that the distance of an
inter-loop memory dependence equals one.


I propose to improve the quality of data dependence analysis on RTL by
propagating the information from the Tree-SSA dependence analyzer.  The
saved information will be used in constructing the data dependence graph
for SMS.  It can also be used for other optimizations, e.g. the scheduler.


Rationale:

In GCC, there are two analyses of array data dependencies, which run on
the Tree-SSA and RTL levels, respectively.  The Tree-SSA data dependence
analysis is located in the tree-data-ref.[ch] files, also using parts of
tree-chrec.c.  For a given loop, the analysis builds a vector of data
references (represented as struct data_reference) and a vector of
dependence relations (represented as struct data_dependence_relation).  A
data reference contains links to a memory reference and its containing
statement, the first accessed location, a base object, and other memory
attributes.  A dependence relation contains the data references it links,
its type, a distance vector, a direction vector, and subscript information.


The RTL array data dependence analyzer is located in the ddg.[ch] files and
was written specifically for the swing modulo scheduling (SMS)
implementation in GCC.  The analyzer builds a data dependence graph (DDG)
for a given basic block.  The DDG is represented as a vector of nodes.
Each DDG node contains vectors of incoming and outgoing dependence edges,
sets of successors and predecessors of the node in the DDG, and the
containing instruction.  Each DDG edge, analogously to the Tree-SSA
analysis, contains the source and destination nodes of the edge, a
dependence type, an edge latency, and a distance.  Additionally, the edges
going to/from the same node form a linked list, analogously to control flow
edges.  The analyzer uses the scheduler dependence analysis (located in
sched-deps.c) to build intra-loop dependencies and the data flow engine
(located in df-*.c) to build inter-loop dependencies.


The RTL analyzer has the following deficiencies:

 * The DDG is built only for single-basic-block loops.  This is because the
current SMS implementation only supports such loops.  This not only puts an
additional constraint on SMS, but also prevents other passes from using the
dependence information.


 * The distance of inter-loop dependencies is not calculated and is
conservatively set to one.  This limits the SMS implementation's ability to
interleave instructions from successive iterations.


 * Intra-loop dependencies are calculated using RTL alias analysis, which
is weaker than the Tree-SSA analysis (e.g. it is not able to disambiguate
array references on architectures that lack a base+offset addressing mode).


I propose to improve the quality of data dependence analysis on RTL by
propagating the information from the Tree-SSA dependence analyzer.  The
project will consist of the following steps:


 * Export the Tree-SSA data dependence graph as a global data structure
(or a field in struct function).  The information will be collected before
ivopts, to prevent ivopts from turning array references into pointer
references, which badly influences data dependence analysis.


 * Create the mapping between RTX MEMs and the original trees.  Part of an
existing patch[1] that propagates alias information to RTL could be used.
The patch saves links to the original trees in a MEM's attributes,
analogously to MEM_EXPRs.


 * Implement a verifier for the consistency of the saved information,
checking that it stays intact throughout the RTL pipeline.


 * Use the saved information when constructing the data dependence graph
in ddg.c.  When two memory references are found to be dependent, check
whether the MEMs contain the original trees and whether those trees are
array references.  If the trees are indeed ARRAY_REFs and information about
their dependence relation can be found in the exported graph, then it is
possible either to avoid creating a spurious DDG edge (when the data
references are independent) or to assign the correct distance value to the
edge (when that information is present in the exported graph).


 * Provide new testcases and test the patch for correctness and speedups.

I would be pleased to have Ayal Zaks as my mentor, because the proposed
improvement primarily targets modulo scheduling.  In case this is not
possible, I will seek guidance from Maxim Kuvyrkov.


Please feel free to share your thoughts and suggestions.

[1] Alias export patch by Dmitry Melnik

Application for Google Summer of Code with GCC.

2007-03-23 Thread Dmitry Zhurikhin
Hello, I want to propose a project for Google Summer of Code titled
"New static scheduling heuristic".  I hope that Vlad Makarov from
Red Hat or Andrey Belevantsev from ISP RAS will mentor this
application.
I would appreciate any feedback and will try to answer any questions
regarding my application.


   Abstract
  Most scheduling approaches consist of two steps: computing the set
of available instructions at the current scheduling point, and
choosing one of them to become the next instruction.  This approach is
used in GCC's current implementation of the list scheduler
(known as the Haifa scheduler) and in a new approach under development,
selective scheduling.  While most activities aimed at improving
scheduling quality try to increase the number of gathered available
instructions (such as the above-mentioned selective scheduling, treegion
scheduling, and improvements to region formation), the choosing step
plays an important role too.  The current implementation of the scheduler
uses static priorities, based on the critical-path heuristic, which is
known to behave well when scheduling single basic blocks and to become
worse when expanding scheduling boundaries to extended basic blocks or
regions.  This project aims at developing a new priority system, which
would consider not only the critical path, but also other possible exits
from the scheduling region and their probabilities.

   Current implementation
  First of all, depending on the area from which available instructions
are gathered, there are two schedulers present in GCC, namely sched-ebb
and sched-rgn.  The scheduling infrastructure of these schedulers is
essentially the same, driven by haifa-sched.
  The current implementation of scheduling priorities is divided into two
phases, static and dynamic.  The static part is the most important: it is
considered first, and in case of equal static priorities, the dynamic
priority is used to determine which instruction is better.
  Given a dependence graph, the static priority is calculated as the
default instruction latency for all instructions without successors in
the dependence graph, and, for other instructions, as the maximum over
all successors of the sum of the successor's priority and the cost of the
dependency.  This way, considering only one basic block, the instructions
on the critical path to the basic block exit are the most prioritized
ones.  It is important to note that for sched-rgn, dependencies leading
from one basic block to another are simply discarded.

   Drawbacks of current implementation
  The critical-path heuristic begins to fail to find good priorities when
used on areas bigger than a single basic block, i.e. with several exits.
For an extended basic block, this heuristic tends to give the most
priority to instructions that contribute to the last exit (the one with
the longest path from the entry in the dependence graph).  At the same
time, instructions contributing to other exits are delayed.  Consequently,
if the earliest exit from the extended basic block is taken most of the
time, this can lead to a performance regression.
  The behavior of the critical-path heuristic in the case of region
scheduling is trickier.  From the definition of priority, instructions
from the end of a basic block generally have lower priority than those
from the beginning.  Hence, when scheduling the end part of some basic
block in a region, instructions from the beginning of successive basic
blocks may have higher priority than instructions from the end of the
current basic block.  This, in turn, can also lead to a performance
regression.

   Ways to overcome drawbacks
  The drawbacks of the current priority scheme can be addressed in several
ways.  First, critical-path priorities can be adjusted dynamically with
regard to the current scheduling point.  Second, new static priorities
can be computed with regard to the probabilities of taking the
corresponding exits.  The idea behind this approach is that an
instruction's priority is increased by some amount for each exit it
contributes to.  This amount depends on the probability of taking that
exit and on the depth of the instruction in the dependence subgraph
consisting only of instructions contributing to that exit.  In this way,
instructions contributing to several exits will get higher priority.
Likewise, instructions situated higher in the dependence graph will get
lower priority.  Third, taking processor resources into account while
creating the data dependence graph can increase the accuracy of the
priority calculation.
I aim mostly at implementing the second approach, as it seems to solve
the known problems, and then at improving it with resource-aware
priority computation.

   Roadmap
  I plan to implement and test all three approaches to the static
scheduling priority problem on the ia64 platform for sched-ebb and
sched-rgn, and to do the same later for selective scheduling.




Re: Information regarding -fPIC support for Interix gcc

2007-03-23 Thread Murali Vemulapati

Please look at the patch:

http://gcc.gnu.org/ml/gcc-patches/2007-02/msg00855.html

It was intended for cygwin/mingw but should work for Interix too if
TARGET_CYGMING is defined.
The patch needs some changes in the GNU linker in order for it to work
correctly (see the thread in gcc-patches for details).  I am working on
the linker changes and will post them when the testing is complete.

Thanks
Murali



On 3/23/07, Mayank Kumar <[EMAIL PROTECTED]> wrote:

> Ok, since I didn't get any pointers in this area.
> I have a more generic question now to everybody:-
>
>  I am new to gcc development as well as its architecture. I am looking forward
> to fix the -fPIC issue for Interix. As of now I found that a shared library
> compiled with fPIC crashes due to some wrong assembly instructions(a jmp
> instruction) embedded into a function call which cause it to jump
> unconditionally to a different function altogether whereas the c code has no
> such jumps or function calls.
> Can some body point me to the part of source code that I should look into for
> this. I mean:-
> 1: the part which is responsible for generating this code from c code.
> 2: the part of the gcc code where -fPIC is being handled.
> 3: any other pointers to investigating this would be helpful.
>
> Thanks
> Mayank
>
>
> -----Original Message-----
> From: Paul Brook [mailto:[EMAIL PROTECTED]
> Sent: Friday, March 23, 2007 2:24 AM
> To: gcc@gcc.gnu.org
> Cc: Mayank Kumar
> Subject: Re: Information regarding -fPIC support for Interix gcc
>
> On Thursday 22 March 2007 20:20, Mayank Kumar wrote:
> > I work for Microsoft SFU(services for unix) group and I am currently
> > investigating this fPIC issue for gcc 3.3 which is available with sfu 3.5.
>
> gcc3.3 is really quite old, and hasn't been maintained for quite some time.
> You're unlikely to get a particularly useful response from this list (or any
> volunteer gcc developers) unless you're working with current gcc.
>
> Of course there are organisations who will provide you with commercial support
> for older gcc releases. That's a separate issue though.
>
> Paul



gcc-4.2.0 RC1 build report

2007-03-23 Thread Kate Minola

I was able to successfully build gcc-4.2.0-20070316 on
the following architectures:

 For these architectures I used --enable-languages=c

  alphaev56-unknown-linux-gnu
  alphaev68-dec-osf5.1b
  powerpc-ibm-aix5.2.0.0
  sparc-sun-solaris2.8

 For these architectures I used --enable-languages=c,c++

  x86_64-unknown-linux-gnu
  i386-pc-solaris2.10 (really x86_64-SunOS)
  i386-apple-darwin8.9.1
  i686-pc-linux-gnu
  i386-pc-solaris2.9
  ia64-unknown-linux-gnu
  powerpc-apple-darwin8.8.0

I was unsuccessful when I tried to build on

 alphaev56-dec-osf4.0f

The error message I get during 'make bootstrap' is

 virtual memory exhausted: Not enough space

I did not have this problem when I built gcc-4.1.2 on
this machine.

This is an old machine that I am afraid to power-cycle
as it may not come back up. 'limit' says that stacksize
is 4096 kbytes if that is related.

Kate Minola
University of Maryland, College Park


Summer of Code student applications

2007-03-23 Thread Ian Lance Taylor
I encourage people to post Summer of Code student applications to this
mailing list for comments.

But I also want to say clearly that applications must also be
submitted at http://code.google.com/soc/.  And after submitting your
application there, you should check periodically to see if there are
any comments there.

Thanks for your interest.

Ian


Hosed my maintainer's bugzilla account

2007-03-23 Thread Thomas Koenig
Hello world

it seems I hosed my developer's bugzilla account by changing
my E-Mail address there from [EMAIL PROTECTED] to
[EMAIL PROTECTED]:

- I can no longer change the e-mail address back

- I no longer have maintainer's rights on bugzilla.

Could somebody change it back, please?

Thanks a lot!

Thomas



Re: We're out of tree codes; now what?

2007-03-23 Thread Phil Edwards
On Wed, Mar 21, 2007 at 10:51:06AM -0700, Mike Stump wrote:
> But seriously, I think we need to press autoconf into generating 100x  
> faster code 90% of the time.  Maybe prebundling answers for the  
> common targets...

Eek, imake!  :-)

Every time I've played with precomputing cache answers, I almost immediately
run into problems where the answers need to be customized or recalculated,
even for situations which I would have labelled as "common".  Even preloading
the cache only saves a little bit of time compared to the time for all
the zillions of tiny files being created, compiled, and deleted.


-- 
 what does your robot do, sam?
 it collects data about the surrounding environment, then discards
it and drives into walls


Re: gcc 4.2 more strict check for "function called through a non-compatible type"

2007-03-23 Thread Ryan Hill
Mark Mitchell wrote:
> Ian Lance Taylor wrote:
> 
>> I realized that I am still not stating my position very clearly.  I
>> don't think we should make any extra effort to make this code work:
>> after all, the code is undefined.  I just think 1) we should not
>> insert a trap; 2) we should not ICE. 
> 
> I agree.  If the inlining thing is indeed a problem (and I can see how
> it could be, even though you could not immediately reproduce it), then
> we should mark the call as uninlinable.  Disabling an optimization in
> the face of such a cast seems more user-friendly than inserting a trap.
>  Since we know the code is undefined, we're not pessimizing correct
> code, so this is not a case where to support old code we'd be holding
> back performance for valid code.
> 
> I also agree with Gaby that we should document this as an extension.  If
> we go to the work of putting it back in, we should ensure that it
> continues to work for the foreseeable future.  Part of that is writing
> down what we've decided.

Was there ever any action on this?  AFAICS consensus was that the trap
would be removed and this behaviour be documented as an extension.
There was a bit more discussion of how exactly the documentation would
be worded[i] and the thread petered out.  Fast forwarding to today the
abort is still present and the 4.2 branch (4.2.0-pre20070317 (rev.
123016)) is still unable to build a working openssl (0.9.8e).

I apologize for bringing this up so late in the release cycle.  I only
found this discussion today while searching for some solution to our
openssl issue, which I believe is the only blocker we have left as far
as being gcc-4.2 ready.


[i]  http://permalink.gmane.org/gmane.comp.gcc.devel/80366

-- 
where to now? if i had to guess
dirtyepic gentoo org      i'm afraid to say antarctica's next
9B81 6C9F E791 83BB 3AB3  5B2D E625 A073 8379 37E8 (0x837937E8)



gcc-4.3-20070323 is now available

2007-03-23 Thread gccadmin
Snapshot gcc-4.3-20070323 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.3-20070323/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.3 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/trunk revision 123166

You'll find:

gcc-4.3-20070323.tar.bz2  Complete GCC (includes all of below)

gcc-core-4.3-20070323.tar.bz2 C front end and core compiler

gcc-ada-4.3-20070323.tar.bz2  Ada front end and runtime

gcc-fortran-4.3-20070323.tar.bz2  Fortran front end and runtime

gcc-g++-4.3-20070323.tar.bz2  C++ front end and runtime

gcc-java-4.3-20070323.tar.bz2 Java front end and runtime

gcc-objc-4.3-20070323.tar.bz2 Objective-C front end and runtime

gcc-testsuite-4.3-20070323.tar.bz2    The GCC testsuite

Diffs from 4.3-20070316 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.3
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: We're out of tree codes; now what?

2007-03-23 Thread Ian Lance Taylor
Phil Edwards <[EMAIL PROTECTED]> writes:

> On Wed, Mar 21, 2007 at 10:51:06AM -0700, Mike Stump wrote:
> > But seriously, I think we need to press autoconf into generating 100x  
> > faster code 90% of the time.  Maybe prebundling answers for the  
> > common targets...
> 
> Eek, imake!  :-)
>
> Every time I've played with precomputing cache answers, I almost immediately
> run into problems where the answers need to be customized or recalculated,
> even for situations which I would have labelled as "common".  Even preloading
> the cache only saves a little bit of time compared to the time for all
> the zillions of tiny files being created, compiled, and deleted.

Yes.

In my opinion the problem with autoconf is not the lack of a
system-wide cache.  It's that the tests can not be run in parallel.
Fixing this requires a complete redesign and rewrite, which is frankly
long over due.  Using m4 to generate portable shell scripts was
moderately cute when DJM did it fifteen years ago, but it's looking
pretty old now.

Ian


Re: We're out of tree codes; now what?

2007-03-23 Thread Tom Tromey
> "Alexandre" == Alexandre Oliva <[EMAIL PROTECTED]> writes:

Alexandre> As for configure scripts...  autoconf -j is long overdue ;-)

Yeah.  The ideal, I think, would be to have configure just record the
options the user passed it, and then have the majority of actual
checks integrated into the build.  That way, e.g., you could
parallelize tests, checks that are not needed for a particular
configuration would never be run, etc.  If there was a single make
instance, you could even share the results and avoid re-running
checks.

But, implementing this is a very big job and it seems unlikely to ever
happen.

One thing I've noticed is that we do run many checks multiple times.
We could probably reduce this by rearranging the source tree a bit.
Also there are some redundant checks for things like "build system
sanity", which aren't cached; perhaps a new autoconf feature to let us
disable this stuff in all the subdirs would help a little.  I suspect
this may all be tinkering around the margins though.

I also wonder how much dead code is in there.  In libjava I know we
basically never try to garbage collect old, unused checks.

Tom


Broken commits

2007-03-23 Thread Steven Bosscher

Hi,

I managed to commit a ChangeLog entry that included the entire patch,
and a change to ifcvt.c that shouldn't have been committed.  I use a
little script for checkins, but I passed the wrong file names to it.

I've fixed both issues, but anyone who has checked out or updated from
SVN between now and 15 minutes ago should probably update again to
pull in the fixes.

Sorry for the inconvenience.

Gr.
Steven


Re: gcc 4.2 more strict check for "function called through a non-compatible type"

2007-03-23 Thread Ian Lance Taylor
Ryan Hill <[EMAIL PROTECTED]> writes:

> Mark Mitchell wrote:
> > Ian Lance Taylor wrote:
> > 
> >> I realized that I am still not stating my position very clearly.  I
> >> don't think we should make any extra effort to make this code work:
> >> after all, the code is undefined.  I just think 1) we should not
> >> insert a trap; 2) we should not ICE. 
> > 
> > I agree.  If the inlining thing is indeed a problem (and I can see how
> > it could be, even though you could not immediately reproduce it), then
> > we should mark the call as uninlinable.  Disabling an optimization in
> > the face of such a cast seems more user-friendly than inserting a trap.
> >  Since we know the code is undefined, we're not pessimizing correct
> > code, so this is not a case where to support old code we'd be holding
> > back performance for valid code.
> > 
> > I also agree with Gaby that we should document this as an extension.  If
> > we go to the work of putting it back in, we should ensure that it
> > continues to work for the foreseeable future.  Part of that is writing
> > down what we've decided.
> 
> Was there ever any action on this?  AFAICS consensus was that the trap
> would be removed and this behaviour be documented as an extension.
> There was a bit more discussion of how exactly the documentation would
> be worded[i] and the thread petered out.  Fast forwarding to today the
> abort is still present and the 4.2 branch (4.2.0-pre20070317 (rev.
> 123016)) is still unable to build a working openssl (0.9.8e).

I don't think anything happened with this.

Is there a gcc bug report open about it?  If so, Mark can bump up the
priority.

Ian


Re: Ayal Zaks appointed Modulo Scheduler maintainer

2007-03-23 Thread Ayal Zaks
David Edelsohn <[EMAIL PROTECTED]> wrote on 23/03/2007 19:04:57:

>I am pleased to announce that the GCC Steering Committee has
> appointed Ayal Zaks as Modulo Scheduler maintainer.
>
>Please join me in congratulating Ayal on his new role.
> Ayal, please update your listings in the MAINTAINERS file.

Thanks!  I will look into the modulo-scheduler patch backlog shortly, and
I've updated the MAINTAINERS file with the attached patch.

Ayal.


Index: ChangeLog
===
--- ChangeLog   (revision 123172)
+++ ChangeLog   (working copy)
@@ -1,3 +1,7 @@
+2007-03-24  Ayal Zaks  <[EMAIL PROTECTED]>
+
+   * MAINTAINERS (Modulo Scheduler): Add myself.
+
 2007-03-23  Brooks Moses  <[EMAIL PROTECTED]>

* MAINTAINERS (fortran 95 front end): Add myself.
Index: MAINTAINERS
===
--- MAINTAINERS (revision 117617)
+++ MAINTAINERS (working copy)
@@ -158,6 +158,7 @@
 scheduler (+ haifa)    Michael Meissner    [EMAIL PROTECTED]
 scheduler (+ haifa)    Jeff Law            [EMAIL PROTECTED]
 scheduler (+ haifa)    Vladimir Makarov    [EMAIL PROTECTED]
+modulo-scheduler       Ayal Zaks           [EMAIL PROTECTED]
 reorg                  Jeff Law            [EMAIL PROTECTED]
 caller-save.c          Jeff Law            [EMAIL PROTECTED]
 callgraph              Jan Hubicka         [EMAIL PROTECTED]



[Martin Michlmayr <[EMAIL PROTECTED]>] Documenting GCC 4.2 changes

2007-03-23 Thread Ian Lance Taylor
Now that the gcc 4.2 release is getting closer, I am resending this
e-mail from Martin Michlmayr.  I've removed options which I believe
are sufficiently internal to not require mention in the changes file,
and I've removed options which are now documented there.

Many of our users only discover new options and capabilities because
of the changes files.  It behooves us to let people know about the new
features we have developed.  Otherwise, many people will not know
about them and will not use them.  I don't mean to imply that every
option on this list must be mentioned in the changes files.  There are
reasonable choices to be made.  But these options should all be
considered for mention.

Martin, thanks for sending the original list.

Ian

Date: Wed, 23 Aug 2006 23:15:43 +0200
From: Martin Michlmayr <[EMAIL PROTECTED]>
To: gcc@gcc.gnu.org
Subject: Documenting GCC 4.2 changes

I went through the ChangeLog (2006 only so far) to identify command
line options that should be documented in the GCC 4.2 changes page at
http://gcc.gnu.org/gcc-4.2/changes.html

It would be nice if people listed below could submit patches for the
command line options they added/removed during the GCC 4.2 cycle.  I'm
not sure how important it is to document options mostly aimed at GCC
developers, but it would be great if at least changes to common user
options could be documented.


removals
========


2006-02-26  Zdenek Dvorak <[EMAIL PROTECTED]>
(-fstrength-reduce, -fprefetch-loop-arrays-rtl,
-frerun-loop-opt): Remove.

2006-02-26  Steven Bosscher  <[EMAIL PROTECTED]>
Remove all references to -floop-optimize and -frerun-loop-opt.

2006-02-10  Zdenek Dvorak <[EMAIL PROTECTED]>
(-floop-optimize2): Removed.


target/language changes
===

2006-02-07  Dirk Mueller  <[EMAIL PROTECTED]>
(-Wsequence-point): Update documentation that -Wsequence-point is
implemented for C++ as well.

2006-02-07  Eric Botcazou  <[EMAIL PROTECTED]>
config/sol26.h (CPP_SUBTARGET_SPEC): Accept -pthread.
doc/invoke.texi (SPARC options): Document -pthread.


renames/changes
---

2006-01-21  Gabriel Dos Reis  <[EMAIL PROTECTED]>
Document that -Wnon-virtual-dtor is no longer included in -Wall.


additions
=========

2006-05-04  Leehod Baruch  <[EMAIL PROTECTED]>
(-fsee): Document.

2006-04-30  Roger Sayle  <[EMAIL PROTECTED]>
Document new command line option.  (Woverflow)

2006-03-24  Carlos O'Donell  <[EMAIL PROTECTED]>
Document -femit-class-debug-always

2006-03-21  Toon Moene  <[EMAIL PROTECTED]>
Document new flag -fargument-noalias-anything.

2006-02-19  Daniel Berlin  <[EMAIL PROTECTED]>
Document -fipa-pta.

2006-02-18  Richard Sandiford  <[EMAIL PROTECTED]>
(-fsection-anchors): Document.

2006-02-03  Andreas Krebbel  <[EMAIL PROTECTED]>
(-mlong-double-128, -mlong-double-64): Document the new options.

2006-01-31  Richard Guenther  <[EMAIL PROTECTED]>
(-msselibm): Document.

2006-01-28  Zack Weinberg  <[EMAIL PROTECTED]>
c.opt: Add -W(no-)overlength-strings.

2006-01-27  David Edelsohn  <[EMAIL PROTECTED]>
(-mabi): Collect options together.  Add ibmlongdouble and ieeelongdouble.

2006-01-18  DJ Delorie  <[EMAIL PROTECTED]>
Document -Werror=*

2006-01-16  Gabor Loki  <[EMAIL PROTECTED]>
Add documentation for -frtl-abstract-sequences.


-- 
Martin Michlmayr
http://www.cyrius.com/




Re: [Martin Michlmayr <[EMAIL PROTECTED]>] Documenting GCC 4.2 changes

2007-03-23 Thread Brooks Moses

(crossposting to fortran@)

Ian Lance Taylor wrote:

> Now that the gcc 4.2 release is getting closer, I am resending this
> e-mail from Martin Michlmayr.  I've removed options which I believe
> are sufficiently internal to not require mention in the changes file,
> and I've removed options which are now documented there.
>
> Many of our users only discover new options and capabilities because
> of the changes files.  It behooves us to let people know about the new
> features we have developed.  Otherwise, many people will not know
> about them and will not use them.  I don't mean to imply that every
> option on this list must be mentioned in the changes files.  There are
> reasonable choices to be made.  But these options should all be
> considered for mention.

[...]

There are also some option changes in the Fortran front end, not 
mentioned on the list that I snipped, which should also be considered 
for adding to http://gcc.gnu.org/gcc-4.2/changes.html.  (A couple of 
these, -frange-check and -Wampersand, appear to have been backported 
to 4.1 as well; I'm not sure where they should be mentioned.)


As was requested in Ian's original email, could the people who added 
these options submit brief patches documenting them in the changes.html 
file?  Or alternately post a sentence or two in reply to this describing 
them, and I'll collate all that and post a combined patch.


Thanks!
- Brooks


additions
=========
2007-01-14  Jerry DeLisle  <[EMAIL PROTECTED]>
Paul Thomas  <[EMAIL PROTECTED]>
* lang.opt: Add Wcharacter_truncation option.

2006-12-10  Thomas Koenig  <[EMAIL PROTECTED]>
* lang.opt:  Add option -fmax-subrecord-length=

2006-06-18  Jerry DeLisle  <[EMAIL PROTECTED]>
* lang.opt: Add option -frange-check.

2006-05-02  Steven G. Kargl  <[EMAIL PROTECTED]>
* lang.opt: New flag -fall-intrinsics.

2006-03-14  Jerry DeLisle  <[EMAIL PROTECTED]>
* lang.opt: Add Wampersand.

2006-02-06  Thomas Koenig  <[EMAIL PROTECTED]>
* lang.opt: Add fconvert=little-endian, fconvert=big-endian



removals
========


2006-10-15  Bernhard Fischer  <[EMAIL PROTECTED]>
* lang.opt (Wunused-labels): Remove.



Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."

2007-03-23 Thread Ian Lance Taylor
Paul Eggert <[EMAIL PROTECTED]> writes:

> >> GCC itself relies on wrapv semantics.  As does glibc.  And
> >> coreutils.  And GNU tar.  And Python.  I'm sure there are
> >> many other significant programs.  I don't have time to do a
> >> comprehensive survey right now.
> >
> > Where does GCC rely on that?  I don't see it anywhere?

> Thanks, but it's a lot of work to find bugs like this.
> Particularly if one is interested in finding them all, without a
> lot of false alarms.  I don't have time to wade through all of GCC
> with a fine-toothed comb right now, and I suspect nobody else does
> either.  Nor would I relish the prospect of keeping wrapv
> assumptions out of GCC as other developers make further
> contributions, as the wrapv assumption is so natural and
> pervasive.

I want to follow-up on this old thread.  For gcc 4.2 and later I have
implemented two new options.

The new option -fstrict-overflow tells gcc that it can assume the
strict signed overflow semantics prescribed by the language standard.
This option is enabled by default at -O2 and higher.  Using
-fno-strict-overflow will tell gcc that it can not assume that signed
overflow is undefined behaviour.  The general effect of using this
option will be that signed overflow will become implementation
defined.  This will disable a number of generally harmless
optimizations, but will not have the same effect as -fwrapv.

The new option -Wstrict-overflow will warn about cases where gcc
optimizes based on the fact that signed overflow is undefined.
Because this can produce many false positives--cases where gcc
uses undefined signed overflow to optimize that do not break any real
code--this option has five different levels.

-Wstrict-overflow=1
Warn about cases which are both questionable and easy to avoid.  For
example: x + 1 > x; with -fstrict-overflow, the compiler will simplify
this to 1.  This level of -Wstrict-overflow is enabled by -Wall;
higher levels are not, and must be explicitly requested.

-Wstrict-overflow=2
Also warn about other cases where a comparison is simplified to a
constant.  For example: abs (x) >= 0.  This can only be simplified
when -fstrict-overflow is in effect, because abs (INT_MIN) overflows
to INT_MIN, which is less than zero.  -Wstrict-overflow (with no
level) is the same as -Wstrict-overflow=2.

-Wstrict-overflow=3
Also warn about other cases where a comparison is simplified.  For
example: x + 1 > 1 will be simplified to x > 0.

-Wstrict-overflow=4
Also warn about other simplifications not covered by the above cases.
For example: (x * 10) / 5 will be simplified to x * 2.

-Wstrict-overflow=5
Also warn about cases where the compiler reduces the magnitude of a
constant involved in a comparison.  For example: x + 2 > y will be
simplified to x + 1 >= y.  This is reported only at the highest
warning level because this simplification applies to many comparisons,
so this warning level will give a very large number of false
positives.


For example, here is one of the test cases which started this
discussion:

extern int bigtime_test (int);
int
foo ()
{
  int j;
  for (j = 1; 0 < j; j *= 2)
if (! bigtime_test (j))
  return 1;
  return 0;
}

This still becomes an infinite loop with -O2 (depending on what
bigtime_test does), but now issues a warning when using -O2
-Wstrict-overflow=2 (or simply -O2 -Wstrict-overflow):

foo.c:6: warning: assuming signed overflow does not occur when simplifying 
conditional to constant

With -O1 or with -O2 -fno-strict-overflow the loop terminates when j
overflows.

Ian


Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."

2007-03-23 Thread Robert Dewar

Ian Lance Taylor wrote:


The new option -fstrict-overflow tells gcc that it can assume the
strict signed overflow semantics prescribed by the language standard.
This option is enabled by default at -O2 and higher.  Using
-fno-strict-overflow will tell gcc that it can not assume that signed
overflow is undefined behaviour.  The general effect of using this
option will be that signed overflow will become implementation
defined.  This will disable a number of generally harmless
optimizations, but will not have the same effect as -fwrapv.


Can you share the implementation definition?  (Implementation-defined
generally means that the implementation must define what it does.)
This seems awfully vague.


Re: question on verify_ssa failure due to ccp in dom3 (PR30784)

2007-03-23 Thread Dorit Nuzman
> On 3/14/07, Dorit Nuzman <[EMAIL PROTECTED]> wrote:
> >
> > Hi,
> >
> > We have a '{2,2}' expression (vector initializer) propagated by dom into a
> > BIT_FIELD_REF:
> >
> > before (bug.c.105t.vrp2):
> >
> >   vector long int vect_cst_.47;
> >   vect_cst_.47_66 = {2, 2};
> >   D.2103_79 = BIT_FIELD_REF <vect_cst_.47_66, 64, 0>;
> >
> > after (bug.c.106t.dom3):
> >   "
> >   Optimizing block #7
> >
> >   Optimizing statement <...>:;
> >   Optimizing statement D.2102_78 = BIT_FIELD_REF <..., 64, 0>;
> >   Optimizing statement D.2103_79 = BIT_FIELD_REF <vect_cst_.47_66, 64, 0>;
> >   Replaced 'vect_cst_.47_66' with constant '{2, 2}'
> >   "
> >
> >   D.2103_79 = BIT_FIELD_REF <{2, 2}, 64, 0>;
> >
> >
> > ...which causes he following ICE:
> > "
> >bug.c:8: error: invalid reference prefix
> >{2, 2}
> >bug.c:8: internal compiler error: verify_stmts failed
> > "
> >
> > Several testcases are available in the bugzilla report (PR30784).
> >
> > So, the question is - what needs to be fixed - is it the copy propagation
> > that allows propagating the initializer into a BIT_FIELD_REF, or the
> > vect_lower pass that creates these BIT_FIELD_REFs after vectorization?
>
> I think the BIT_FIELD_REF should be properly folded to a constant, or the
> propagation not done.  fold_stmt_inplace is the candidate to look at, and
> propagate_rhs_into_lhs in tree-ssa-dom.c to reject propagation into
> BIT_FIELD_REF.  fold_ternary should be able to fold the BIT_FIELD_REF in
> question; it would be interesting to know why it doesn't.
>

The problem is that, in the BIT_FIELD_REF case, fold_ternary folds only if
operand 0 is a VECTOR_CST.  In our case it's a CONSTRUCTOR.  I'm testing the
patch below (comments?).

Thanks a bunch!

dorit

(See attached file: fix.txt)


> Richard.

Index: fold-const.c
===================================================================
--- fold-const.c	(revision 123159)
+++ fold-const.c	(working copy)
@@ -12470,7 +12470,8 @@
   gcc_unreachable ();
 
 case BIT_FIELD_REF:
-  if (TREE_CODE (arg0) == VECTOR_CST
+  if ((TREE_CODE (arg0) == VECTOR_CST
+  || (TREE_CODE (arg0) == CONSTRUCTOR && TREE_CONSTANT (arg0)))
  && type == TREE_TYPE (TREE_TYPE (arg0))
  && host_integerp (arg1, 1)
  && host_integerp (op2, 1))
@@ -12484,7 +12485,18 @@
  && (idx = idx / width)
 < TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0)))
{
- tree elements = TREE_VECTOR_CST_ELTS (arg0);
+ tree elements = NULL_TREE;
+
+ if (TREE_CODE (arg0) == VECTOR_CST)
+   elements = TREE_VECTOR_CST_ELTS (arg0);
+ else
+   {
+ unsigned HOST_WIDE_INT idx;
+ tree value;
+
+ FOR_EACH_CONSTRUCTOR_VALUE (CONSTRUCTOR_ELTS (arg0), idx, value)
+   elements = tree_cons (NULL_TREE, value, elements);
+   }
  while (idx-- > 0 && elements)
elements = TREE_CHAIN (elements);
  if (elements)