Re: Autovectorizing HIRLAM; some progress.

2007-11-25 Thread Toon Moene

Dorit Nuzman wrote:


(would be interesting to see an example of a loop that is now vectorized
and didn't vectorize before, if you have that information available)


Unfortunately not - it caught me by surprise (on a routine recompile / 
rerun of a fixed test-case).  Otherwise I would at least have saved the 
compile log ...


--
Toon Moene - e-mail: [EMAIL PROTECTED] - phone: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
At home: http://moene.indiv.nluug.nl/~toon/
GNU Fortran's path to Fortran 2003: http://gcc.gnu.org/wiki/Fortran2003


[RFC] [PATCH] 32-bit pointers in x86-64

2007-11-25 Thread Luca
This proof of concept patch modifies GCC to have 32-bit pointers and
longs on x86-64.

This makes it possible to create an "x86-32" architecture that takes
advantage of the larger register set and the 64-bit computation support
of x86-64 long mode, while avoiding the increased memory usage caused
by 64-bit pointers and longs in structures.
Thus, such a GCC version could be used to produce a GNU/Linux
distribution with the performance of x86-64 and the reduced memory
usage of i386. Furthermore, programs requiring "large data" could use
special "64-bit pointer" attributes to only use 64-bit pointers to
reference the relevant large arrays/structures, using 32-bit pointers
for everything else.

The current patch is just a hack and should obviously be made
configurable and reimplemented properly.
Just setting POINTER_SIZE to 32 mostly works, but more hacks are
necessary to get PIC compilation working (note that the hacks are
probably at least partially wrong, since I'm not an experienced GCC
hacker).
A patch to binutils is also required to stop it from complaining about
32-bit relocations in shared objects.

Currently a simple "Hello world!" program will work using a standard
x86-64 dynamic loader and libc.
This is because the function call ABI is unchanged and thus anything
that doesn't use structures containing pointers or longs should be
binary compatible.

I do not really intend to work on this personally: I did this initial
work for fun, and I'm posting these patches to possibly stimulate
broader interest in this concept.

A possible roadmap for this would be:
1. Make it configurable
2. Fix the LEA hacks and allow proper PIC compilation
3. Fix everything else that may not work properly (e.g. PIC,
relocations, exception handling, TLS, debug info)
4. Add a "32-bit object" flag to x86-64 objects
5. Modify libc so that allocations are made in the lower 4GB space for
x86-32 shared objects and modify x86-64 assembly code to work with
32-bit pointers
6. Compile a native x86-32 libc and compile and test a full Debian or
Ubuntu distribution
7. Add support for loading x86-32 and x86-64 objects in the same
address space, using a single modified 64-bit libc (which for
compatibility actually generates pointers in the low 4GB)
7.1. Add __attribute__((pointer_size(XXX))) and #pragma pointer_size
to allow 64-bit pointers in 32-bit mode and vice versa
7.2. Surround glibc headers with #pragma pointer_size 64
7.3. Modify the dynamic loader to support different namespaces and
directories for x86-32 and x86-64. Symbols starting with "__64_" or
"__32_" or similar could be mapped to the other namespace. Also
support "multiarchitecture" objects that would be added to both.
7.4. Split malloc/mmap in __32_malloc, __32_mmap and similar in glibc.
glibc itself would use 32-bit allocations and be loaded in the low
4GB.
7.5. Compile the result and use a modified libc/dynamic loader
compiled in x86-64 mode flagged as multiarchitecture to load both
x86-32 and x86-64 objects
8. Modify popular programs to explicitly use 64-bit allocations and
pointers for potentially huge allocations (e.g. database caches,
compression program data structures, P2P software file mappings)

Patches are against GCC 4.2.2 and Binutils HEAD.
Index: bfd/elf64-x86-64.c
===
RCS file: /cvs/src/src/bfd/elf64-x86-64.c,v
retrieving revision 1.144
diff -u -r1.144 elf64-x86-64.c
--- bfd/elf64-x86-64.c	18 Oct 2007 09:13:51 -	1.144
+++ bfd/elf64-x86-64.c	25 Nov 2007 14:19:17 -
@@ -1038,6 +1038,8 @@
 	case R_X86_64_TPOFF32:
 	  if (info->shared)
 	{
+	if(0)
+	{
 	  (*_bfd_error_handler)
 		(_("%B: relocation %s against `%s' can not be used when making a shared object; recompile with -fPIC"),
 		 abfd,
@@ -1045,6 +1047,7 @@
 		 (h) ? h->root.root.string : "a local symbol");
 	  bfd_set_error (bfd_error_bad_value);
 	  return FALSE;
+	  }
 	}
 	  break;
 
@@ -1198,6 +1201,8 @@
 	  && (sec->flags & SEC_ALLOC) != 0
 	  && (sec->flags & SEC_READONLY) != 0)
 	{
+	if(0)
+	{
 	  (*_bfd_error_handler)
 		(_("%B: relocation %s against `%s' can not be used when making a shared object; recompile with -fPIC"),
 		 abfd,
@@ -1205,6 +1210,7 @@
 		 (h) ? h->root.root.string : "a local symbol");
 	  bfd_set_error (bfd_error_bad_value);
 	  return FALSE;
+	  }
 	}
 	  /* Fall through.  */
 
@@ -2599,6 +2605,8 @@
 		  || !is_32bit_relative_branch (contents,
 		rel->r_offset)))
 	{
+	if(0)
+	{
 	  if (h->def_regular
 		  && r_type == R_X86_64_PC32
 		  && h->type == STT_FUNC
@@ -2613,6 +2621,7 @@
 		   h->root.root.string);
 	  bfd_set_error (bfd_error_bad_value);
 	  return FALSE;
+	  }
 	}
 	  /* Fall through.  */
 
diff -ur g_orig/gcc-4.2.2/gcc/config/i386/i386.c gcc-4.2.2/gcc/config/i386/i386.c
--- g_orig/gcc-4.2.2/gcc/config/i386/i386.c	2007-09-01 17:28:30.0 +0200
+++ gcc-4.2.2/gcc/config/i386/i386.c	

Re: Progress on GCC plugins ?

2007-11-25 Thread Tom Tromey
> "Alexandre" == Alexandre Oliva <[EMAIL PROTECTED]> writes:

Alexandre> And then, once the underlying problem is addressed and we
Alexandre> have an API that is usable by regular users, maybe we will
Alexandre> find out that we don't need plugins, after all.

Plugins are about deployment, not development.

Plugins make it possible to redistribute useful things which are not
in GCC.  They don't -- and as you rightly point out, can't -- make it
simpler to actually develop these things.

The canonical example, which has been covered many times, is a pass
that does extra checking for a large program (e.g., Mozilla).


LD_PRELOAD would work just as well as having gcc directly support
plugins, provided that certain internal things are never made
file-local.  Someone could write a helper library to make it
relatively simple to hook in.  But... I looked at this recently, and
since gcc is not linked with -rdynamic, it is a non-starter.

Tom


[Fwd: Libiberty problem in gcc-4.3 snapshots]

2007-11-25 Thread Andris Pavenis



 Original Message 
Subject: Libiberty problem in gcc-4.3 snapshots
Date: Sun, 25 Nov 2007 17:14:47 -0500
From: Andris Pavenis <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]

Tried to build gcc-4.3-20071123 for DJGPP (native build). Ran into the
following libiberty-related problem:

vasprintf() was not found when building gcc stage 1.

vasprintf() should be provided by libiberty, as DJGPP does not have its own.
Inspecting the libiberty Makefile and the object files present, however,
showed that none of the procedures missing in DJGPP were included in the
built libiberty.a:

LIBOBJS =

For GCC 4.2.2 I had in the libiberty Makefile:

LIBOBJS =  ${LIBOBJDIR}./asprintf$U.o ${LIBOBJDIR}./mempcpy$U.o 
${LIBOBJDIR}./mkstemps$U.o
${LIBOBJDIR}./sigsetmask$U.o ${LIBOBJDIR}./strndup$U.o 
${LIBOBJDIR}./strverscmp$U.o
${LIBOBJDIR}./vasprintf$U.o

and as a result there was no such problem.

As the difference between configure.ac versions contains DJGPP-related
changes not from me (see attachment), I'm sending this message first to
the DJGPP-related list.

Andris


--- configure.ac.4.2.2	2007-11-25 15:59:52 +
+++ configure.ac	2007-07-17 17:52:28 +
@@ -109,30 +109,32 @@
 AC_CHECK_TOOL(AR, ar)
 AC_CHECK_TOOL(RANLIB, ranlib, :)
 
+dnl When switching to automake, replace the following with AM_ENABLE_MULTILIB.
+# Add --enable-multilib to configure.
+# Default to --enable-multilib
+AC_ARG_ENABLE(multilib,
+[  --enable-multilib   build many library versions (default)],
+[case "$enableval" in
+  yes) multilib=yes ;;
+  no)  multilib=no ;;
+  *)   AC_MSG_ERROR([bad value $enableval for multilib option]) ;;
+ esac],
+	  [multilib=yes])
+
+# Even if the default multilib is not a cross compilation,
+# it may be that some of the other multilibs are.
+if test $cross_compiling = no && test $multilib = yes \
+   && test "x${with_multisubdir}" != x ; then
+   cross_compiling=maybe
+fi
+
 GCC_NO_EXECUTABLES
 AC_PROG_CC
 AC_PROG_CPP_WERROR
 
-# Warn C++ incompatibilities if supported.
-AC_CACHE_CHECK(
-  [whether ${CC} accepts -Wc++-compat],
-  [ac_cv_prog_cc_w_cxx_compat],
-  [save_CFLAGS="$CFLAGS"
-  CFLAGS="-Wc++-compat"
-  AC_COMPILE_IFELSE([AC_LANG_SOURCE([[]])],
-[ac_cv_prog_cc_w_cxx_compat=yes],
-[ac_cv_prog_cc_w_cxx_compat=no])
-  CFLAGS="$save_CFLAGS"
-  ])
-
-
-if test x$GCC = xyes; then
-  ac_libiberty_warn_cflags='-W -Wall -pedantic -Wwrite-strings -Wstrict-prototypes'
-fi
-if test $ac_cv_prog_cc_w_cxx_compat = yes ; then
-  ac_libiberty_warn_cflags="${ac_libiberty_warn_cflags} -Wc++-compat"
-fi
-AC_SUBST(ac_libiberty_warn_cflags)
+ACX_PROG_CC_WARNING_OPTS([-W -Wall -Wwrite-strings -Wc++-compat \
+			  -Wstrict-prototypes], [ac_libiberty_warn_cflags])
+ACX_PROG_CC_WARNING_ALMOST_PEDANTIC([], [ac_libiberty_warn_cflags])
 
 AC_PROG_CC_C_O
 # autoconf is lame and doesn't give us any substitution variable for this.
@@ -545,6 +547,23 @@
 setobjs=yes
 ;;
 
+  *-*-msdosdjgpp)
+for f in atexit basename bcmp bcopy bsearch bzero calloc clock ffs \
+ getcwd getpagesize getrusage gettimeofday \
+ index insque memchr memcmp memcpy memmove memset psignal \
+ putenv random rename rindex sbrk setenv stpcpy strcasecmp \
+ strchr strdup strerror strncasecmp strrchr strstr strtod \
+ strtol strtoul sysconf times tmpnam vfprintf vprintf \
+ vsprintf waitpid
+do
+  n=HAVE_`echo $f | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'`
+  AC_DEFINE_UNQUOTED($n)
+done
+
+
+setobjs=yes
+;;
+
   esac
 fi
 



Re: [Fwd: Re: FW: matrix linking]

2007-11-25 Thread [EMAIL PROTECTED]

Thank you for your reply, Oliver.

Briefly speaking, the solution to the problems you have mentioned looks
like this:
1. Take a look at the first picture here:
http://docs.georgeshagov.com/twiki/tiki-index.php?page=Matrix+Linking+how+it+works

2. Pointers 1, 2, ... are vptrs.
3. The idea is that each module/library (.so) has a row of vptrs. When a
dynamic rebinding is required, this row is copied: the new vptrs are
applied to the new row, while the previous, old row is left unchanged.
The switch then looks like an atomic increment of an integer value that
serves as the version of the module (.so). This means that threads
executing code inside the 'old module' are unaffected, and the new code
is executed only when a new call is made into the module's functions.
One might say this does not answer the question, since there may be
loops that also need to be reloaded; but I believe it does, since that
concern belongs more to application architecture than to linkage :-)


This is a brief and imprecise explanation; in reality it does not look
quite like this. More details can be found here:
http://docs.georgeshagov.com/twiki/tiki-index.php?page=Matrix+Linking+how+it+works

I hope you find it worth reading.

In case of questions do not hesitate to ask.

Yours sincerely,
George.


Olivier Galibert wrote:

On Fri, Nov 23, 2007 at 11:49:03AM +0300, [EMAIL PROTECTED] wrote:
[Changing the _vptr or C equivalent dynamically]
  
I would like the community to consider the idea. I am ready to
answer all the questions you might have.



Changing the virtual function pointer dynamically using a serializing
instruction is, I'm afraid, just the tip of the iceberg.  Even
forgetting for a second that some architectures do not have
serializing instructions per se, there are some not-so-simple details
to take into account:

- the compiler can cache the vptr in a register, suddenly making your
  serialization less than serialized

- changing a group of functions is usually not enough.  A component
  version change usually means its internal representation of the state
  changes.  Which, in turn, means you need to serialize the object
  (whatever the programming language) in the older version and
  unserialize it in the newer, while deferring calls into the object
  from any thread

- the previous point means you also need to be able to know whether any
  thread is "inside" the object, in order to have it get out before you
  do a version change.  Which, in objects that use some sort of message
  FIFO for work dispatching, may never happen in the first place

Dynamic vptr binding is only the start of the solution.

  OG.

  




Bounce Test

2007-11-25 Thread Tom Watson
bounce


Re: [RFC] [PATCH] 32-bit pointers in x86-64

2007-11-25 Thread Paul Brook
> 7. Add support for loading x86-32 and x86-64 objects in the same
> address space, using a single modified 64-bit libc.

I'm not convinced this is practical, or even possible.  I'd expect the 
restrictions imposed on code to make it work properly to be too onerous for 
it to be of any real use. I recommend just having separate lp64 and ilp32 
modes, like e.g. MIPS does with n32/n64.

You can't use conventional 32-bit x86 code, so there seems little or no 
benefit in allowing 32 and 64-bit code to be mixed.

Paul


Re: [RFC] [PATCH] 32-bit pointers in x86-64

2007-11-25 Thread Andrew Pinski
On 11/25/07, Luca <[EMAIL PROTECTED]> wrote:
> 7.1. Add __attribute__((pointer_size(XXX))) and #pragma pointer_size
> to allow 64-bit pointers in 32-bit mode and viceversa

This is already there; try using __attribute__((mode(DI))).

-- Pinski


Re: Designs for better debug info in GCC

2007-11-25 Thread Mark Mitchell
Alexandre Oliva wrote:

>> You're again trying to make this a binary-value question.  Why?
> 
> Because in my mind, when we agree there is a bug, then a fix for it
> is easier to swallow even if it makes the compiler spend more
> resources, whereas a mere quality-of-implementation issue is subject
> to quite different standards.

Unfortunately, not all questions are black-and-white.  I don't think
you're going to get consensus that this issue is as important to fix as
wrong-code (in the traditional sense) problems.  So, arguing about
whether this is a "correctness issue" isn't very productive.

Neither is arguing that there is now some urgent need for machine-usable
debugging information in a way that there wasn't before.  Machines have
been using debugging information for various purposes other than
interactive debugging for ages.  But, they've always had to deal with
the kinds of problems that you're encountering, especially with
optimized code.

I think that at this point you're doing research.  I don't think we have
a well-defined notion of what exactly debugging information should be
for optimized code.  Robert Dewar's definition of -O1 as doing
optimizations that don't interfere with debugging is coherent (though
informal, of course), but you're asking for something more: full
optimization, and, somehow, accurate debugging information in the
presence of that.  I'm all for research, and the thinking that you're
doing is unquestionably valuable.  But, you're pushing hard for a
particular solution and that may be premature at this point.

Debugging information just isn't rich enough to describe the full
complexity of the optimization transformations.  There's no great way to
assign a line number to an instruction that was created by the compiler
when it inserted code on some flow-graph edge.  You can't get exact
information about variable lifetimes because the scope doesn't start at
a particular point in the generated code in the same way that it does in
the source code.

My suggestion (not as a GCC SC member or GCC RM, but just as a fellow
GCC developer with an interest in improving the compiler in the same way
that you're trying to do) is that you stop writing code and start
writing a paper about what you're trying to do.

Ignore the implementation.  Describe the problem in detail.  Narrow its
scope if necessary.  Describe the success criteria in detail.  Ideally,
the success criteria are mechanically checkable properties: i.e., given
a C program as input, and optimized code + debug information as output,
it should be possible to algorithmically prove whether the output is
correct.

For example, how do you define the correctness of debug information for
a variable's location at a given PC?  Perhaps we want to say that giving
the answer "no information available" is always correct, but that saying
"the value is here" when it's not is incorrect; that gives us a
conservative fallback.  How do you define the point in the source
program given a PC?  If the value of "x" changes on line 100, and we're
at an instruction which corresponds line 101, are we guaranteed to see
the changed value?  Or is seeing the previous value OK?  What about some
intermediate value if "x" is being changed byte-by-byte?  What about a
garbage value if the compiler happens to optimize by throwing away the
old value of "x" before assigning a new one?

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Progress on GCC plugins ?

2007-11-25 Thread Taras Glek

Tom Tromey wrote:

"Alexandre" == Alexandre Oliva <[EMAIL PROTECTED]> writes:



Alexandre> And then, once the underlying problem is addressed and we
Alexandre> have an API that is usable by regular users, maybe we will
Alexandre> find out that we don't need plugins, after all.

Plugins are about deployment, not development.

Plugins make it possible to redistribute useful things which are not
in GCC.  They don't -- and as you rightly point out, can't -- make it
simpler to actually develop these things.
  
The canonical example, which has been covered many times, is a pass

that does extra checking for a large program (e.g., Mozilla).
  
There are a lot of checks that could be implemented as plugins to 
benefit Mozilla and other projects. For future Mozilla development 
certain features that we are looking at are not feasible in C++ until 
the compiler can help with enforcing correct API usage. It would be 
extremely awesome to be able to utilize GCC internals for static 
analysis and source refactoring. Currently that isn't realistic as these 
features do not belong in the GCC mainline and distributing a gcc fork 
would be very burdensome.
Plugins would also encourage projects such as Mozilla to contribute to 
gcc to implement various backend improvements to make various plugins 
possible. I think GCC could gain some "accidental" contributors this way.
I believe that this would also have the additional effect of making GCC 
the strongly suggested compiler for development, as it would be able to 
provide development benefits not (yet?) available in most compilers.


LD_PRELOAD would work just as well as having gcc directly support
plugins, provided that certain internal things are never made
file-local.  Someone could write a helper library to make it
relatively simple to hook in.  But... I looked at this recently, and
since gcc is not linked with -rdynamic, it is a non-starter.
  
Tom, I don't know much about linkers and LD_PRELOAD. Would making 
LD_PRELOAD work be easier than making an unstable plugin API?


Taras



Idea to refine -Wtype-limits

2007-11-25 Thread Gerald Pfeifer
-Wtype-limits guards a very useful, in my experience, set of warnings
including the following:

  warning: comparison of unsigned expression < 0 is always false
  warning: comparison of unsigned expression >= 0 is always true

Based on these I identified and fixed several real bugs in Wine so far
and I'd like to enable -Wtype-limits by default.  However, there is one 
caveat:  -Wtype-limits also warns about seemingly harmless code like

  bool f(unsigned i) {
if( DEFINE_FIRST <= i || i <= DEFINE_LAST )
  return false;
return true;
  }

in case DEFINE_FIRST evaluates to 0.

In applications like Wine there are a couple of such cases, and it would 
not be good to remove the first half of the check to silence the warning, 
because DEFINE_FIRST might actually change at a later point in time, 
plus the simplification is not obvious when looking at the source.

So, I got a "creative" idea.  What if we disable this warning when comparing 
against 0U instead of 0?  Then we could use #define DEFINE_FIRST 0U and 
avoid the warning for cases like this, without losing real diagnostics, 
assuming that nobody would accidentally write such code.

Ah, and we wouldn't even have to adjust any of the messages in GCC, just 
the check itself. ;-)

Thoughts?

Gerald


Re: Idea to refine -Wtype-limits

2007-11-25 Thread Manuel López-Ibáñez
On 26/11/2007, Gerald Pfeifer <[EMAIL PROTECTED]> wrote:
>
> So, I got a "creative" idea.  What if we disable this warning if comparing
> against 0U instead of 0?  Then we could use #define DEFINE_FIRST 0U and
> avoid the warning for cases like this, without losing real diagnostics
> assuming that nobody would accidentally write such code.
>
> Ah, and we wouldn't even have to adjust any of the messages in GCC, just
> the check itself. ;-)
>
> Thoughts?
>

I would prefer it if the warning were avoided for an explicit cast of 0
to unsigned, that is, for:

((unsigned int) 0) <= c

I am thinking about code such as in PR11856:

#include <cassert>

template <class t>
void f(t c) {
  assert(0 <= c and c <= 2);
}
int main() {
  f(5);
}

It would be much easier to modify the above into:

template <class t>
void f(t c) {
  assert( t(0) <= c and c <= 2);
}

But I don't have any idea how this can be checked at the point where
the warning is generated. Perhaps your proposal is easier to
implement.

Cheers,

Manu.


Re: Idea to refine -Wtype-limits

2007-11-25 Thread Gabriel Dos Reis
Gerald Pfeifer <[EMAIL PROTECTED]> writes:

| -Wtype-limits guards a very useful, in my experience, set of warnings
| including the following:
| 
|   warning: comparison of unsigned expression < 0 is always false
|   warning: comparison of unsigned expression >= 0 is always true
| 
| Based on these I identified and fixed several real bugs in Wine so far
| and I'd like to enable -Wtype-limits by default.  However, there is one 
| caveat:  -Wtype-limits also warns about seemingly harmless code like
| 
|   bool f(unsigned i) {
| if( DEFINE_FIRST <= i || i <= DEFINE_LAST )
|   return false;
| return true;
|   }
| 
| in case DEFINE_FIRST evaluates to 0.
| 
| In applications like Wine there are a couple of such cases and it would 
| not be good to remove the first half of the check to silence the warning 
| because at a later point in time the DEFINE_FIRST might actually change,
| plus the simplification is not obvious looking at the source.
| 
| So, I got a "creative" idea.  What if we disable this warning if comparing 
| against 0U instead of 0?  Then we could use #define DEFINE_FIRST 0U and 
| avoid the warning for cases like this, without losing real diagnostics
| assuming that nobody would accidentally write such code.
| 
| Ah, and we wouldn't even have to adjust any of the messages in GCC, just 
| the check itself. ;-)

:-)

| Thoughts?

I have no implementation strategy at the moment.  But I do have a
request: please, whatever you do, don't make it noisy with
template code and `reasonable' code. (We have had PRs for that
behaviour in template code.)

-- Gaby


Re: Problem with ARM_DOUBLEWORD_ALIGN on ARM

2007-11-25 Thread Daniel Jacobowitz
On Wed, Nov 21, 2007 at 06:32:22PM -0500, Geert Bosch wrote:
> Richard, for the help). However,  we're not quite sure if this is
> right.  If you or other ARM-knowledgeable people have any feedback,
> that would be most welcome. I'll then make an updated patch against
> head and submit for review after testing etc.

Won't be me, I'm afraid - I don't know how this code works, just that
it does in our testing.  Sorry.

-- 
Daniel Jacobowitz
CodeSourcery


RE: Designs for better debug info in GCC. Choice A or B?

2007-11-25 Thread Ye, Joey
I like option B. It would be very helpful in reducing software product 
development time. Some software products just release with -O0 because the 
developers are not confident releasing a version different from the one they 
were debugging and testing.

Also, on some systems -O0 simply doesn't work: it is too slow, or the code is 
too big to fit into flash memory. Developers then have to suffer poor 
debuggability.

I believe it is valuable to have an option generating code with fair 
performance/code size but almost full debuggability, and I believe it is not 
technically impossible.

Thanks - Joey

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of J.C. Pizarro
Sent: 25 November 2007 7:46
To: gcc@gcc.gnu.org
Subject: Re: Designs for better debug info in GCC. Choice A or B?

Imagine that I'm using "-g -Os -finline-functions -funroll-loops".

There are different ways to generate "optimized AND debugged" code:

A) Whole-optimized, but with dirty debug information where possible.

When there is a core dump from a crash, its debug information may be
incomplete (lossy) but still readable for humans.
This kind of strategy can't work well in step-by-step debuggers like
gdb, ddd, kgdb, ..., but its code is whole-optimized, the same as the
stripped program.

B) Whole-debugged, but partially optimized because of the restricted
requirements of maintaining the full debug information without losses.

This kind of strategy works well in step-by-step debuggers like
gdb, ddd, kgdb, ..., but its code is less optimized and bigger than the
stripped program.

Sincerely, J.C.Pizarro


Re: Infinite loop when trying to bootstrap trunk

2007-11-25 Thread Ismail Dönmez
On Saturday 24 November 2007 03:44:04, I wrote:
> Hi all,
>
> I am trying to bootstrap gcc with the following config :
>
> ../configure --prefix=/usr --bindir=/usr/i686-pc-linux-gnu/gcc/4.3.0
> --includedir=/usr/lib/gcc/i686-pc-linux-gnu/4.3.0/include
> --datadir=/usr/share/gcc/i686-pc-linux-gnu/4.3.0
> --mandir=/usr/share/gcc/i686-pc-linux-gnu/4.3.0/man
> --infodir=/usr/share/gcc/i686-pc-linux-gnu/4.3.0/info
> --with-gxx-include-dir=/usr/lib/gcc/i686-pc-linux-gnu/4.3.0/include/g++-v3
> --host=i686-pc-linux-gnu --build=i686-pc-linux-gnu --disable-libgcj
> --disable-libssp --disable-multilib --disable-nls --disable-werror
> --disable-checking --enable-clocale=gnu --enable-__cxa_atexit
> --enable-languages=c,c++,fortran,objc,obj-c++,treelang
> --enable-libstdcxx-allocator=new --enable-shared --enable-ssp
> --enable-threads=posix --enable-version-specific-runtime-libs
> --without-included-gettext --without-system-libunwind --with-system-zlib
>
> And build freezes at this point (stuck for ~1 hour and going on 2GB RAM
> Quad Core Xeon ) :
>
> /var/pisi/gcc-4.3.0_pre20071123-30/work/gcc-4.3.0_20071123/build-default-i6
>86-pc-linux-gnu/./prev-gcc/xgcc
> -B/var/pisi/gcc-4.3.0_pre20071123-30/work/gcc-4.3.0_20071123/build-default-
>i686-pc-linux-gnu/./prev-gcc/ -B/usr/i686-pc-linux-gnu/bin/ -c -march=i686
> -ftree-vectorize -O2 -pipe -fomit-frame-pointer -U_FORTIFY_SOURCE
> -fprofile-generate -DIN_GCC   -W -Wall -Wwrite-strings -Wstrict-prototypes
> -Wmissing-prototypes -Wold-style-definition -Wmissing-format-attribute
> -pedantic -Wno-long-long -Wno-variadic-macros -Wno-overlength-strings   
> -DHAVE_CONFIG_H -I. -I. -I../../gcc -I../../gcc/. -I../../gcc/../include
> -I../../gcc/../libcpp/include  -I../../gcc/../libdecnumber
> -I../../gcc/../libdecnumber/bid -I../libdecnumber insn-attrtab.c -o
> insn-attrtab.o
>
>
> I attach gdb to build-default-i686-pc-linux-gnu/./prev-gcc/xgcc and break
> to get this backtrace :
>
> #0  0xa7fae410 in __kernel_vsyscall ()
> #1  0xa7e5bf93 in waitpid () from /lib/libc.so.6
> #2  0x0806608a in pex_wait (obj=0x808aca0, pid=966, status=0x80850d0,
> time=0x0) at ../../libiberty/pex-unix.c:100
> #3  0x0801 in pex_unix_wait (obj=0x808aca0, pid=966, status=0x80850d0,
> time=0x0, done=0, errmsg=0xafc05a54, err=0xafc05a50)
> at ../../libiberty/pex-unix.c:486
> #4  0x08065d31 in pex_get_status_and_time (obj=0x808aca0, done=0,
> errmsg=0xafc05a54, err=0xafc05a50) at ../../libiberty/pex-common.c:531
> #5  0x08065d94 in pex_get_status (obj=0x808aca0, count=2,
> vector=0xafc05a80) at ../../libiberty/pex-common.c:551
> #6  0x0804c6b2 in execute () at ../../gcc/gcc.c:3012
> #7  0x08050f44 in do_spec (
> spec=0x806828c "%{E|M|MM:
> %(trad_capable_cpp) %(cpp_options) %(cpp_debug_options)}  %{!E:%{!M:
> %{!MM:  %{traditional|ftraditional:%eGNU C no longer
> supports -traditional without -E}   %{!combine:\t  %{sa"...)
> at ../../gcc/gcc.c:4436
> #8  0x0805654e in main (argc=36, argv=0xafc05dc4) at ../../gcc/gcc.c:6684

OK, looks like this is not a heisenbug. I left the GCC bootstrap going, and 
it took 4+ hours to compile insn-attrtab.c on this machine, which looks really 
bad since it takes 2-3 minutes on my laptop.

Any idea where to look?

Regards,
ismail

-- 
Faith is believing what you know isn't so -- Mark Twain


Re: Designs for better debug info in GCC

2007-11-25 Thread Alexandre Oliva
On Nov 24, 2007, "Richard Guenther" <[EMAIL PROTECTED]> wrote:

> No, hashing is fine, but doing walks over a hashtable when your algorithm
> depends on ordering is not.

Point.

> I have patches to fix the instance of walking over all referenced
> vars.  Which is in the case of UIDs using bitmaps and a walk over a
> bitmap (which ensures walks in UID order).

Why is such memory and CPU overhead better than avoiding the
divergence of UIDs in the first place?

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}