[Fwd: Re: FW: matrix linking]

2007-11-23 Thread [EMAIL PROTECTED]

I appreciate your reply, Joe.
But I do not think this is off-topic. If we were going to discuss the
details of your project, Ptolemy, then it would indeed have been
off-topic, I think. But I am talking about GCC, so I believe this is
the right place to post these ideas.
What I am trying to say is that there is an additional realization of
dynamic binding, which is called matrix linking. There are other
realizations of the idea, the Darwin project for instance, but all of
them share one serious stumbling block that makes their use almost
impossible, so that they are of academic interest only and practically
useless: you need to 'suspend' the application when you do the dynamic
binding, and moreover all threads must have finished inside the code
that is about to be re-bound. This is a huge problem. The new
approach, which I have called matrix linking, resolves it: binding can
be done right on the fly, without suspending any threads, directly in
a multi-threaded application. I believe this idea is interesting
enough to be shared with the GCC community. I do not think 'Nothing
new here: add a level of indirection...' is a professional reply. I
believe the idea has a future; just imagine what opportunities it
gives to developers, managers, analysts and clients.
I would like the community to consider the idea. I am ready to answer
any questions you might have.


Yours sincerely
George.
http://docs.georgeshagov.com/twiki/tiki-index.php?page=Matrix+Linking

Joe Buck wrote:

I wrote:
 

Nothing new here: add a level of indirection (or use C++ virtual
functions), and dynamically load code.  In the Ptolemy project
(http://ptolemy.eecs.berkeley.edu/) we were doing that in 1990:
we could define new classes and load them into a running application,
without restarting.



On Sun, Nov 18, 2007 at 09:33:16AM +0300, [EMAIL PROTECTED] wrote:
 

Is this a thread-safe operation in your Ptolemy project?
Must you suspend the application in order to load 'new classes' there?



We're getting off-topic for the list, so I'm answering off-list.

The operation could be made thread-safe with appropriate locking (the
15-year-old code did not have this locking, but it's trivial to add).  It
used the object factory pattern, in which each class has a clone method
and there's support for adding a new, dynamically linked class to the
master list.  If other threads are running and these threads are not in
the act of creating new objects from the object factory, they can run in
parallel.
  





Status of GCC 4.3 on IA64 (Debian)

2007-11-23 Thread Martin Michlmayr
I recently compiled the Debian archive (around 7000 packages that need
to be compiled) on IA64 using trunk to identify new issues before GCC
4.3 is released.  I compiled the archive with optimization set to -O3
and found the following ICEs with trunk from 20071116:

 - PR34138: verify_ssa failed (found real variable when subvariables
   should have appeared) (5 failures)
   No progress yet.

 - PR34122: ICE in get_addr_dereference_operands, at tree-ssa-operands.c:1746
   (44 failures)
   Already fixed.

 - PR34206: Segfault in htab_find_with_hash (1 failure)
   Reported today.

 - PR33453: ICE in build2_stat, at tree.c:3110 (1 failure)
   A known issue that hasn't seen any action recently.

 - PR31976: ssa_operand_alloc, at tree-ssa-operands.c:484 (5 failures)
   A known issue that is being investigated.

None of these bugs is IA64 specific.  I'm happy to say that all of the
13 IA64 specific bugs I filed since March have been fixed.

The testing was done with 4.3.0 20071116 r127646 from 2007-11-16 to
2007-11-23.

Thanks to Gelato and HP for supporting my GCC testing efforts on IA64.
Since March, 13 IA64 specific and 52 generic compiler bugs were
reported, see http://cyrius.com/tmp/gcc-4.3-ia64.pdf for a full report.
-- 
Martin Michlmayr
http://www.cyrius.com/


Re: Re: Does gcc support compiling for windows x86-64?

2007-11-23 Thread Howard Chu

Ali, Muhammad wrote:

but the preliminary gcc/gfortran for mingw 64-bit mode which FX Coudert
supplied was a version of gcc-4.3.



Maybe you can take a look at the 'mingw-w64' developer project on
SourceForge for more details.



Thanks for pointing me to the mingw-w64 sourceforge project. As Tim
said, there isn't much documentation available for it, so I guess I'll
just download and try it out. Although, at this point, I'm just
thinking that downloading a trial version of Visual C++ and using it
to compile my dll would be much easier :(. But even with that option,
I'm not sure if it's legal to distribute that dll with our package.


I started to download the trial Visual C++ but it says it's 32-bit only.


Ali didn't say if he meant g++ rather than gcc, but I guess all of this
has missed his intended topic.

gcc. I'm working with a java application which is using JNI to call
native libraries. We want to port our software to 64-bit platforms,
and hence here I am, trying to figure out how to compile 64-bit dlls
on my amd64.

Thanks for all the comments,
Ali.


I've downloaded a couple of the binary tarballs from the mingw-w64 project 
page. Had a lot of trouble getting usable code out of them. I finally figured 
out that I had to compile without any optimization to get anything to run. 
It's not clear whether this is a problem specific to the win64 support, or if 
it's gcc 4.3.0's immaturity. I was also frustrated by the lack of a working 
debugger, giving up half way through building gdb. I'm thinking it may be 
quicker to write a tool that converts the gcc stabs stuff to rudimentary PDB 
format to provide function and variable names for WinDbg.


The cross-compiler runs pretty well hosted on Linux but for some reason some 
of the configure scripts I ran were accessing my Linux header files and so 
detecting features they shouldn't have. My only other choice was to run under 
Cygwin on the Windows side, but shell scripts run about 100 times slower 
there, making configure/libtool/etc unbearable. (Normally I would use MSYS but 
the last one I tried just crashes immediately on Win64.) And it looks like 
current bash on cygwin doesn't handle case/esac constructs correctly, so e.g. 
the configure script for BerkeleyDB 4.6.21 fails there.


Gotta hand it to Microsoft, they've sure made it hard to support their 
platform...
--
  -- Howard Chu
  Chief Architect, Symas Corp.  http://www.symas.com
  Director, Highland Sun    http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP http://www.openldap.org/project/


Re: Designs for better debug info in GCC

2007-11-23 Thread Robert Dewar

Richard Guenther wrote:

On Nov 22, 2007 8:22 PM, Frank Ch. Eigler <[EMAIL PROTECTED]> wrote:

Mark Mitchell <[EMAIL PROTECTED]> writes:


[...]

 Who is "we"?  What better debugging are GCC users demanding?  What
debugging difficulties are they experiencing?  Who is that set of users?
What functional changes would improve those cases?  What is the cost of
those improvements in complexity, maintainability, compile time, object
file size, GDB start-up time, etc.?

That's what I'm asking.  First and foremost, I want to know what,
concretely, Alexandre is trying to achieve, beyond "better debugging
info for optimized code".  Until we understand that, I don't see how we
can sensibly debate any methods of implementation, possible costs, etc.

It may be belabouring the obvious.  GCC users do not want to
have to compile with "-O0 -g" just to debug during development (or
during crash analysis *after deployment*!).  Developers would like to
be able to place breakpoints anywhere by reference to the source code,
and would like to access any variables logically present there.
Developers will accept that optimized code will by its nature make
some of these fuzzy, but incorrect data must be, and incomplete data
should be, minimized.

That they put up with the status quo at all is a historical artifact
of being told so long not to expect any better.


As it is impossible to do both (without serious overhead), you have to
live with debug information for optimized code that is either
elaborate but possibly incorrect, or correct but incomplete.  Choose one ;)


I don't think you can use the phrase "serious overhead" without rather
extensive statistics. To me, -O1 should be reasonably debuggable, as it
always was back in earlier gcc days. It is nice that -O1 is somewhat
more efficient than it was in those earlier days, but not nice enough
to warrant a severe regression in debug capabilities. To me anyone who
is so concerned about performance as to really appreciate this
difference will likely be using -O2 anyway.

The trouble is that we have set as the criterion for -O1 all the
optimizations that are reasonably cheap in compile time. I think
it is essential that there be an optimization level that means

All the optimizations that are reasonably cheap to implement
and that do not impact debugging information significantly
(except I would say it is OK to impact the ability to change
variables).

For me it would be fine for -O1 to mean that, but if there is a
consensus that an extra level (-Od or whatever) is worthwhile,
that's fine by me.

I find working on the Ada front end that it used to be that I could
always use -O1, OK for debugging, and OK for performance. Now I have
to switch between -O0 for debugging, and then I use -O2 for performance
(for me, the debuggability of -O1 and -O2 are equivalent in this
context, both hopeless, so I might as well use -O2). So I no longer
use -O1 at all (the extra compile time for -O2 is negligible on my
fast notebook).



What we (Matz and myself) are trying to do is provide elaborate debug
information with the chance of wrong (I'd call it superfluous, or extra)
debug information.  Alexandre seems to aim at the world-domination
solution (with the serious overhead in terms of implementation and
verboseness).

Richard.




Re: Infinite loop when trying to bootstrap trunk

2007-11-23 Thread Ismail Dönmez
On Saturday 24 November 2007 03:44:04, I wrote:
> Hi all,
>
> I am trying to bootstrap gcc with the following config :
[...]

Sorry for the noise; it looks like my snapshot tarball, built from the
git repo using git-archive, has some problems, as the gcc-4.3-20071123
snapshot bootstrapped fine.

Thanks,
ismail

-- 
Faith is believing what you know isn't so -- Mark Twain


Infinite loop when trying to bootstrap trunk

2007-11-23 Thread Ismail Dönmez
Hi all,

I am trying to bootstrap gcc with the following config :

../configure --prefix=/usr --bindir=/usr/i686-pc-linux-gnu/gcc/4.3.0 
--includedir=/usr/lib/gcc/i686-pc-linux-gnu/4.3.0/include 
--datadir=/usr/share/gcc/i686-pc-linux-gnu/4.3.0 
--mandir=/usr/share/gcc/i686-pc-linux-gnu/4.3.0/man 
--infodir=/usr/share/gcc/i686-pc-linux-gnu/4.3.0/info 
--with-gxx-include-dir=/usr/lib/gcc/i686-pc-linux-gnu/4.3.0/include/g++-v3 
--host=i686-pc-linux-gnu --build=i686-pc-linux-gnu --disable-libgcj 
--disable-libssp --disable-multilib --disable-nls --disable-werror 
--disable-checking 
--enable-clocale=gnu --enable-__cxa_atexit 
--enable-languages=c,c++,fortran,objc,obj-c++,treelang 
--enable-libstdcxx-allocator=new 
--enable-shared --enable-ssp --enable-threads=posix 
--enable-version-specific-runtime-libs --without-included-gettext 
--without-system-libunwind --with-system-zlib

And the build freezes at this point (stuck for ~1 hour and counting,
on a quad-core Xeon with 2 GB RAM):

/var/pisi/gcc-4.3.0_pre20071123-30/work/gcc-4.3.0_20071123/build-default-i686-pc-linux-gnu/./prev-gcc/xgcc
-B/var/pisi/gcc-4.3.0_pre20071123-30/work/gcc-4.3.0_20071123/build-default-i686-pc-linux-gnu/./prev-gcc/
-B/usr/i686-pc-linux-gnu/bin/ -c
-march=i686 -ftree-vectorize -O2 -pipe -fomit-frame-pointer -U_FORTIFY_SOURCE
-fprofile-generate -DIN_GCC -W -Wall -Wwrite-strings -Wstrict-prototypes
-Wmissing-prototypes -Wold-style-definition -Wmissing-format-attribute
-pedantic -Wno-long-long -Wno-variadic-macros
-Wno-overlength-strings -DHAVE_CONFIG_H -I. -I. -I../../gcc -I../../gcc/.
-I../../gcc/../include -I../../gcc/../libcpp/include
-I../../gcc/../libdecnumber
-I../../gcc/../libdecnumber/bid -I../libdecnumber insn-attrtab.c -o
insn-attrtab.o


I attach gdb to build-default-i686-pc-linux-gnu/./prev-gcc/xgcc and break to 
get this backtrace :

#0  0xa7fae410 in __kernel_vsyscall ()
#1  0xa7e5bf93 in waitpid () from /lib/libc.so.6
#2  0x0806608a in pex_wait (obj=0x808aca0, pid=966, status=0x80850d0, 
time=0x0) at ../../libiberty/pex-unix.c:100
#3  0x0801 in pex_unix_wait (obj=0x808aca0, pid=966, status=0x80850d0, 
time=0x0, done=0, errmsg=0xafc05a54, err=0xafc05a50)
at ../../libiberty/pex-unix.c:486
#4  0x08065d31 in pex_get_status_and_time (obj=0x808aca0, done=0, 
errmsg=0xafc05a54, err=0xafc05a50) at ../../libiberty/pex-common.c:531
#5  0x08065d94 in pex_get_status (obj=0x808aca0, count=2, vector=0xafc05a80) 
at ../../libiberty/pex-common.c:551
#6  0x0804c6b2 in execute () at ../../gcc/gcc.c:3012
#7  0x08050f44 in do_spec (
spec=0x806828c "%{E|M|MM:
%(trad_capable_cpp) %(cpp_options) %(cpp_debug_options)}  %{!E:%{!M:
%{!MM:  %{traditional|ftraditional:%eGNU C no longer 
supports -traditional without -E}   %{!combine:\t  %{sa"...) 
at ../../gcc/gcc.c:4436
#8  0x0805654e in main (argc=36, argv=0xafc05dc4) at ../../gcc/gcc.c:6684


For the record, a two-day-old trunk bootstrapped fine. Any ideas are appreciated.

Regards,
ismail

-- 
Faith is believing what you know isn't so -- Mark Twain


Re: Designs for better debug info in GCC

2007-11-23 Thread Alexandre Oliva
On Nov 23, 2007, "Steven Bosscher" <[EMAIL PROTECTED]> wrote:

>> So, what's this prejudice against debug insns?  Why do you regard them
>> as notes rather than insns?

> What worries me is that GCC will have to special-case DEBUG_INSN
> everywhere where it looks at INSNs.

This is just not true.  Anywhere that simply wants to update insns for
the effects of other transformations won't have to do that.  Only
places in which we need the weak-use semantics of debug_insns need to
give them special treatment.  Not because they're not insns, but
because they're weak uses, i.e., uses that shouldn't interfere with
optimizations.

Yes, catching all such cases hasn't been trivial.  If we miss some,
then what happens is that -O2 -g -fvar-tracking-assignments outputs
different executable code than -O2.  Everything still works just fine,
we eventually get a bug report, we fix it and move on.

This is *much* better than starting out with notes, which nearly
nothing cares about, and trying to add code to update the notes as
code transformations are performed.  In that case, we get incorrect,
non-functional compiler output unless we catch absolutely all bugs
upfront.

> Apparently, you can't treat DEBUG_INSN just like any other normal
> insn.

Obviously not.  They're weaker uses than anything else.  We haven't
had any such thing in the compiler before.

> but for the moment I fear you're just going to see a lot of
> duplication of ugly conditionals

Your fear is understandable but not justified.  Go look at the
patches.  x86_64-linux-gnu now bootstraps and produces exactly the
same code with and without -fvar-tracking-assignments.  And no complex
conditionals were needed.  The most I've needed so far was to ignore
debug insns at certain spots.

It's true that in a number of situations this is an oversimplified
course of action, and some additional effort might be needed to
actually update the debug insns when they would have interfered with
optimizations.  Time will tell, I guess.  So far, it doesn't look like
it's been a problem, and I don't foresee these duplicated or ugly
conditionals you fear.

> and bugs where such conditionals are forgotten/overlooked/missing.

See above.  One of the reasons for the approach I've taken is that
such cases will, in the worst case, cause missed optimizations, not
incorrect compiler output.

> And the benefit, well, let's just say I'm not convinced that less
> elaborate efforts are not sufficient.

Sufficient for what?  Efforts towards what?  Generating more incorrect
debug information just for the sake of it?  Adding more debug
information while breaking some that's just fine now?  Is that really
progress?

> (And to be perfectly honest, I think GCC has bigger issues to solve
> than getting perfect debug info -- such as getting compile times of a
> linux kernel down ;-))

Compile speed is a quality-of-implementation issue.  Output
correctness and standards compliance come first in my book.

And then, I'm supposed to fix this correctness problem, not other
issues that others might find more important.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: Re: Does gcc support compiling for windows x86-64?

2007-11-23 Thread NightStrike
On Nov 23, 2007 6:31 PM, Howard Chu <[EMAIL PROTECTED]> wrote:
> I've downloaded a couple of the binary tarballs from the mingw-w64 project
> page. Had a lot of trouble getting usable code out of them. I finally figured
> out that I had to compile without any optimization to get anything to run.
> It's not clear whether this is a problem specific to the win64 support, or if
> it's gcc 4.3.0's immaturity. I was also frustrated by the lack of a working
> debugger, giving up half way through building gdb. I'm thinking it may be
> quicker to write a tool that converts the gcc stabs stuff to rudimentary PDB
> format to provide function and variable names for WinDbg.

GDB support isn't done yet.  Optimization issues will be more
difficult to debug, also.  Please remember that the project has only
two or three people on it, only one of whom really understands this
stuff.  If you'd like to help out, we would be very grateful.

Also, please try a more current version of the project.  As soon as I
get through with some very pressing family emergencies, I will be
uploading new versions of everything.  Anything that you can test and
find problems with will be very helpful.

> The cross-compiler runs pretty well hosted on Linux but for some reason some
> of the configure scripts I ran were accessing my Linux header files and so
> detecting features they shouldn't have. My only other choice was to run under
> Cygwin on the Windows side, but shell scripts run about 100 times slower
> there, making configure/libtool/etc unbearable. (Normally I would use MSYS but
> the last one I tried just crashes immediately on Win64.) And it looks like
> current bash on cygwin doesn't handle case/esac constructs correctly, so e.g.
> the configure script for BerkeleyDB 4.6.21 fails there.

I made a binary release that runs on i686-pc-mingw32.  That may allow
you to step away from Cygwin.  Regarding the Linux release, however,
can you describe in more detail what you are seeing in terms of
accessing the linux header files?  It's entirely possible that I am
building the sysroot incorrectly (hey, we all make mistakes :) ).  If
you could provide more feedback, I'd love to try to fix it.  You can
email me directly, post to the mingw-w64 mailing list, file a bug
report on the mingw-w64 project, etc etc.


Re: Designs for better debug info in GCC

2007-11-23 Thread Alexandre Oliva
On Nov 13, 2007, Michael Matz <[EMAIL PROTECTED]> wrote:

> Hi,
> On Mon, 12 Nov 2007, Alexandre Oliva wrote:

>> With the design I've proposed, it is possible to compute the value of i, 

> No.  Only if the function is reversible.

Of course.  I meant it for that particular case.  The generalization
is obvious, but I didn't mean it would be always possible.

>> As I wrote before, I'm not aware of any systemtap bug report about a
>> situation in which an argument was actually optimized away.

> I think it all started from PR23551.

Yep.  Nowhere does that bug report request parameters to be forced
live.  What it does request is that parameters that are not completely
optimized away be present in debug information.

Now, consider these cases:

1. function is not inlined

At its entry point, we bind the argument to the register or stack slot
in which the argument is live.  Worst case, it's clobbered at the
entry point instruction itself, because it's entirely unused.  By
emitting a live range from the entry point to the death point, we're
emitting accurate and complete debug information for the argument.  We
win.

2. function is inlined, the argument is unused and thus optimized
away, but the function does some other useful computation

At the inlined entry point, we have a note that binds the argument to
its expected value.  As we transform the program and optimize away the
argument, we retain and update the note, such that we can still
represent the value of the inlined argument for as long as it's
available.

3. function is inlined and completely optimized away

No instruction remains in which the argument is in scope, so we might
as well refrain from emitting location information for it.  Even
though we can figure out where the value lives, there's no code to
attach this information to.  So there's no place to set a breakpoint
on to inspect the variable location anyway.

> For us it also happened in the kernel in namei.c, where real_lookup
> is inlined sometimes, and its arguments are missing.  Those might or
> might not be reversible functions, so your scheme perhaps would have
> helped there.  But generally it won't solve the problem for good.

It looks like you're trying to solve a different problem.

I'm not trying to find a way to ensure that arguments are live.

I'm trying to get GCC to emit debug information that correctly matches
the instructions it generated.

If the value of a variable is completely optimized away at a point in
the program, the correct representation of its location at that point
is the empty set.

>> I wouldn't go as far as stopping the optimization just so that systemtap 
>> can monitor the code.

> Like I said, at some point you have to, or accept that some code
> remains non-introspectable.

Yep.  It's easy enough to tweak the code to keep a variable live, if
you absolutely need it.  But this is not something I'm working to get
the compiler to do by itself.  Quite the opposite, in fact.  I'm going
to set the compiler free to perform some optimizations that it
currently refrains from performing for the sake of debug information,
when the conflict is only apparent because of past implementation
decisions that I'm working to fix.

> Then I'm probably still confused what problem you're actually trying to 
> solve.  If you don't want to be sure you get precise location information 
> 100% of the time, then what percentage are you required to get?

Accuracy comes first.  If we ever emit debug information saying 'this
variable is here' for a point in the program in which it's in fact
elsewhere or unavailable, that's a bug to be fixed.

Completeness comes second.  If we could have emitted debug information
saying 'the value of this variable is here' for a point in the
program, and we instead claim the variable is unavailable at that
point, that's an improvement that can be made.

> And how do you measure this?

Good question.  The implementation approach I've taken, that exposes
debug annotations as actual code, starts out with 100% accuracy
(that's the theory, anyway, otherwise generated code would change,
and, even though we still don't have a complete framework to ensure
code doesn't change, if it does, then at least debug information will
model the change accurately), and we can then grow completeness
incrementally.

> Or is the task rather "emit better debug info"?

Nope.  That's a secondary goal that will be achieved as we get
accurate and sufficiently complete debug information.  I don't have
completeness goals set, but I have reasons to expect we're going to
get much better results than we have now without too much additional
effort.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: Designs for better debug info in GCC

2007-11-23 Thread Alexandre Oliva
On Nov 12, 2007, Mark Mitchell <[EMAIL PROTECTED]> wrote:

> Alexandre Oliva wrote:
>> On Nov 12, 2007, Mark Mitchell <[EMAIL PROTECTED]> wrote:
>> 
>>> Clearly, for some users, incorrect debugging information on optimized
>>> code is not a terribly big deal.  It's certainly less important to many
>>> users than that the program get the right answer.  On the other hand,
>>> there are no doubt users where, whether for debugging, certification, or
>>> whatever, it's vitally important that the debugging information meet
>>> some standard of accuracy.
>> 
>> How is this different from a port of the compiler for a CPU that few
>> people care about?  That many users couldn't care less whether the
>> compiler output on that port works at all doesn't make it any less of
>> a correctness issue.

> You're again trying to make this a binary-value question.  Why?

Because in my mind, once we agree there is a bug, a fix for it is
easier to swallow even if it makes the compiler spend more resources,
whereas a mere quality-of-implementation issue is subject to quite
different standards.

> Lots of things are "a correctness issue".  But, some categories tend to
> be worse than others.  There is certainly a qualitative difference in
> the severity of a defect that results in the compiler generating code
> that computes the wrong answer and a defect that results in the compiler
> generating wrong debugging information for optimized code.

That depends a lot on whether your application uses the incorrect
compiler output or not.

If the compiler produces incorrect code, but your application doesn't
ever exercise that error, would you argue for leaving the bug unfixed?

These days, applications are built that depend on the correctness of
the compiler output in certain sections that historically weren't all
that functionally essential, namely, the meta-information sections
that we got used to calling debug information.

I.e., these days, applications exercise the "code paths" that formerly
weren't exercised.  This exposes bugs in the compiler.  Worse: bugs
that we have no infrastructure to test, and that we don't even agree
are actual bugs, because the standards that specify the "ISA and ABI"
in which such code ought to be output are apparently regarded as
irrelevant by some.

That is just because their perception is distorted by a single use of
such information, one which involves a high amount of human
interaction, and humans are able to tolerate and adapt to error
conditions.

But as more and more uses of such information are actual production
systems rather than humans behind debuggers, such errors can no longer
be tolerated, because when the debug output is wrong, the system
breaks.  It's that simple.  It's really no different from any other
compiler bug.

> Let's put it this way: if a user has to choose whether the compiler will
> (a) generate code that runs correctly for their application, or (b)
> generate debugging information that's accurate, which one will they choose?

(a), for sure.  But bear in mind that, when the application's correct
execution depends on the correctness of debugging information, then
(a) implies (b).

> But what's the point of this argument?  It sounds like you're trying to
> argue that debug info for optimized code is a correctness issue, and
> therefore we should work as hard on it as we would on code-generation
> bugs.

I'm working hard on it.  I'm not asking others to join me.  I'm just
asking people to understand how serious a problem it is, and that,
even though fixing these bugs may have a cost, it's bugs we're talking
about: incorrect compiler output that causes applications to break,
not a mere inconvenience for debuggers.

> I'd like better debugging for optimized code, but I'm certainly more
> concerned that (a) we generate correct, fast code when optimizing,
> and (b) we generate good debugging information when not optimizing.

This just goes to show that you're not concerned with the kind of
application that *depends* on correct debug information for
functioning.  And it's not debuggers I'm talking about here.

That's a reasonable point of view.  Maybe the GCC community can decide
that the debug information it produces is just for (poor) consumption
by debug programs, and that we have no interest in *complying* with
the debug information standards that document the debug information
that other applications depend on.  And I mean *complying* with the
standards, rather than merely outputting whatever seems to be easy and
approximately close to what the standard mandates.

I just hope the GCC community doesn't make this decision, and that it
accepts fixes to these bugs even when they impose some overhead,
especially when such overhead can easily be avoided with command-line
options, or is even disabled by default (because debug info is not
emitted by default, after all).

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member http://www.fs

Re: Designs for better debug info in GCC

2007-11-23 Thread Alexandre Oliva
On Nov 13, 2007, Mark Mitchell <[EMAIL PROTECTED]> wrote:

> Alexandre Oliva wrote:
>>> What I don't understand is how it's actually going to work.  What
>>> are the notes you're inserting?
>> 
>> They're always of the form
>> 
>> DEBUG user-variable = expression

> Good, I understand that now.

> Why is this better than associating user variables with assignments?

I've already explained that, but let me try to sum it up again.

If we annotate assignments, then not only do the annotations move
around along with assignments (I don't think that's desirable), but
when we optimize such assignments away, the annotations are either
dropped or have to stand on their own.

Since dropping annotations and moving them around are precisely
opposed to the goal of making debug information accurate, keeping the
annotations in place and enabling them to stand on their own is the
right thing to do.

Now, since we have to enable them to stand on their own, then we're
faced with the following decision: either we make that the canonical
annotation representation all the way from the beginning, or we
piggyback the annotations on assignments until they're moved or
removed, at which point they become stand-alone annotations.  The
former seems much more maintainable and simpler to deal with, and I
don't see that there's a significant memory or performance penalty to
this.

>> That said, growing SET to add to it a list of variables (or components
>> thereof) that the variable is assigned to could be made to work, to
>> some extent.  But when you optimize away such a set, you'd still have
>> to keep the note around

> Why?  It seems to me that if we're no longer doing the assignment, then
> the location where the value of the user variable can be found (if any)
> is not changing at this point.

The thing is that the *location* of the user variable is changing at
that point.  Either its previous value was unavailable, or it had
remained only at a different location.  Only at the point of the
assignment should we associate the variable with the location that
holds its current value.

>> (set (reg i) (const_int 3)) ;; assigns to i
>> (set (reg P1) (reg i))
>> (call (mem f))
>> (set (reg i) (const_int 7)) ;; assigns to i
>> (set (reg i) (const_int 2)) ;; assigns to i
>> (set (reg P1) (reg i))
>> (call (mem g))
>> 
>> could have been optimized to:
>> 
>> (set (reg P1) (const_int 3))
>> (call (mem f))
>> (set (reg P1) (const_int 2))
>> (call (mem g))
>> 
>> and then you wouldn't have any debug information left for variable i.

> Actually, you would, in the method I'm making up.  In particular, both
> of the first two lines in the top example (setting "i" and setting "P1")
> would be marked as providing the value of the user variable "i".

Yes, this works in this very simple case.  But it doesn't when i is
assigned, at different points, the values of two separate variables
that are live and initialized much earlier in the program.  Using the
method you seem to be envisioning would extend the binding of
variable 'i' over the lifetimes of the two other variables, ending up
with two overlapping and conflicting live ranges for i, or having to
drop one in favor of the other.  You can't possibly retain correct
(non-overlapping) live ranges for both unless you keep notes at the
points of assignment.

To make the example clear, consider:

(set (reg x [x]) ???1)
(set (reg y [y]) ???2)
(set (reg i [i]) (reg x [x]))
(set (reg P1) (reg i))
(call (mem f))
(set (reg i [i]) (reg y [y]))
(call (mem g))
(set (reg P1) (reg i))
(call (mem f))

if it gets optimized to:

(set (reg P1 [x, i]) ???1)
(set (reg y [y, i]) ???2)
(call (mem f))
(call (mem g))
(set (reg P1) (reg y))
(call (mem f))

then we lose.  There's no way you can emit debug information for i
based on these annotations such that, at the call to g, the value of i
is correct.  Even if you annotate the copy from y to P1, you still
won't have it right, and, worse, you won't even be able to tell that,
before the call to g, i should have held a different value.  So you'll
necessarily emit incorrect debug information for this case: you'll
state that i still holds a value at a point where it shouldn't hold
that value any more.  This is worse than stating that you don't know
what the value of i is.

> What I'm suggesting is that this is something akin to a dataflow
> problem.  We start by marking user variables, in the original TREE
> representation.  Then, any time we copy the value of a user variable, we
> know that what we're doing is providing another place where we can find
> the value of that user variable.  Then, when generating debug
> information, for every program region, we can find the location(s) where
> the value of the user variable is available, and we can output any one
> of those locations for the debugger.

That's exactly what I have in mind.

> This method gives us accurate debug information, in the sense that if we
> say that the value of V is at location X,

Re: Designs for better debug info in GCC

2007-11-23 Thread Alexandre Oliva
On Nov 13, 2007, Michael Matz <[EMAIL PROTECTED]> wrote:

> The nice thing is, that there are only few places which really get rid of 
> SETs: remove_insn.  You have to tweak that to keep the information around, 
> not much else (though that claim remains to be proven :) ).

And then, you have to tweak everything else to keep the note that
replaced the set up to date as you further optimize the code.  So what
was the point of adding the note to the SET, again?

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: Designs for better debug info in GCC

2007-11-23 Thread Alexandre Oliva
On Nov 12, 2007, "Steven Bosscher" <[EMAIL PROTECTED]> wrote:

> DEBUG_INSN in RTL (with one noteworthy difference, namely that having
> note-like GIMPLE statements is a totally new concept

Not quite.  There were codeless gimple constructs before (think
labels, for one).  Or empty asm statements.  But then, I'm not sure
what you mean by note-like; maybe it's something else.  As I explained
before, debug insns and debug stmts are more like code than like
notes, because notes generally don't need adjusting as code is
modified elsewhere, whereas code does.  And debug insns and stmts
definitely need adjusting like regular insns.

> while DEBUG_INSN is just a wannabe-real-insn INSN_NOTE).

Except for this tiny detail that INSN_NOTEs are never adjusted as code
is modified, because in general they don't even contain RTL.
VAR_LOCATION is a recent exception, and it used to be introduced so
late precisely because there's no infrastructure to keep notes
up-to-date as code transformations are performed.

So, yes, debug stmts and insns are notes in the sense that they don't
output code.  Like USE insns, labels, empty asm insns and other
UNSPECs.  But wait, those are insns, not notes.  And they do generate
code, just not in the .text section, but rather in .debug sections.

So, what's this prejudice against debug insns?  Why do you regard them
as notes rather than insns?

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


gcc-4.3-20071123 is now available

2007-11-23 Thread gccadmin
Snapshot gcc-4.3-20071123 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.3-20071123/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.3 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/trunk revision 130384

You'll find:

gcc-4.3-20071123.tar.bz2  Complete GCC (includes all of below)

gcc-core-4.3-20071123.tar.bz2 C front end and core compiler

gcc-ada-4.3-20071123.tar.bz2  Ada front end and runtime

gcc-fortran-4.3-20071123.tar.bz2  Fortran front end and runtime

gcc-g++-4.3-20071123.tar.bz2  C++ front end and runtime

gcc-java-4.3-20071123.tar.bz2 Java front end and runtime

gcc-objc-4.3-20071123.tar.bz2 Objective-C front end and runtime

gcc-testsuite-4.3-20071123.tar.bz2The GCC testsuite

Diffs from 4.3-20071116 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.3
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: Designs for better debug info in GCC

2007-11-23 Thread Richard Kenner
> Yes, catching all such cases hasn't been trivial.  If we miss some,
> then what happens is that -O2 -g -fvar-tracking-assignments outputs
> different executable code than -O2.

But that's a very serious type of bug, because it means you have
situations where a program fails and you can't debug it: when you
turn on debugging information, it no longer fails.  We need to make
an absolute rule that this *cannot* happen, and luckily this is one
of the easiest types of error to protect against.


Re: Designs for better debug info in GCC

2007-11-23 Thread Frank Ch. Eigler
Hi -

(BTW, sorry for reopening this old thread if people are sick & tired of it.)

> > Mark Mitchell <[EMAIL PROTECTED]> writes:
> > > [...]
> > > That's what I'm asking.  First and foremost, I want to know what,
> > > concretely, Alexandre is trying to achieve, beyond "better debugging
> > > info for optimized code".  [...]
> >
> > It may be asking to belabour the obvious.  GCC users do not want to
> > have to compile with "-O0 -g" just to debug during development [...]
> > Developers will accept that optimized code will by its nature make
> > some of these fuzzy, but incorrect data must be and incomplete data
> > should be minimized. [...]
> 
> As it is (without serious overhead) impossible to do both, you either have
> to live with possibly incorrect but elaborate or incomplete but correct
> debug information for optimized code.  Choose one ;)

I did say "minimized", not "eliminated".  It needs to be good enough
that a semi-knowledgeable person or a dumb but heuristic-laden program
that processes debugging info can nevertheless extract reliable
information.


> What we (Matz and myself) are trying to do is provide elaborate
> debug information with the chance of wrong (I'd call it superfluous,
> or extra) debug information.

(I will need to reread the thread to see what this extra information
can do in terms of misleading users or tools, such as giving incorrect
variable values/locations.  I'd appreciate a link if you have one
handy.)

> Alexandre seems to aim at the world-domination solution (with the
> serious overhead in terms of implementation and verboseness).

That ("world-domination") seems an overly unkind characterization - we
could simply say he's trying an exhaustive, straining-to-be-correct
solution.

It seems to me that we will shortly see the actual impacts of both of
these approaches in terms of compiler complexity as well as any
improvements in data quality.  It does not seem to me like there is
substantial disagreement over the ideal of correct and to a lesser
extent complete information, so let's see the implementations and then
compare.

- FChE


Re: [Fwd: Re: FW: matrix linking]

2007-11-23 Thread Olivier Galibert
On Fri, Nov 23, 2007 at 11:49:03AM +0300, [EMAIL PROTECTED] wrote:
[Changing the _vptr or C equivalent dynamically]
> I would like the community would have considered the idea. I am ready to 
> answer all the questions you might have.

Changing the virtual function pointer dynamically using a serializing
instruction is, I'm afraid, just the tip of the iceberg.  Even
forgetting for a second that some architectures do not have
serializing instructions per se, there are some not-so-simple details
to take into account:

- the compiler can cache the vptr in a register, suddenly making your
  serialization less than serialized

- changing a group of functions is usually not enough.  A component
  version change usually means its internal representation of the state
  changes.  That, in turn, means you need to serialize the object
  (whatever the programming language) in the older version and
  unserialize it in the newer one, while deferring calls into the
  object from any thread

- the previous point means you also need to be able to know whether
  any thread is "inside" the object, so you can have it get out before
  you do a version change.  In objects that use some sort of message
  FIFO for work dispatching, that may never happen in the first place

Dynamic vptr binding is only the start of the solution.

  OG.



Re: Does gcc support compiling for windows x86-64?

2007-11-23 Thread Ali, Muhammad
> >> Why not read the archives of more relevant lists before posting here?  I
> >> don't know what you are driving at, nor do I think anyone here cares,
I guess my initial posting was somewhat misleading. I only mentioned
MinGW because MinGW (and Cygwin) are the only ports of gcc I know of
that work on windows. But I realize that these have an older version
of gcc built into them. The reason I posted to this mailing list was
to find out if the current version supported x86-64 platforms, and if
so, then I would be interested in building the latest version.
And I did spend two days searching (and trying out possible) solutions
before posting to this list.

> >> but the preliminary gcc/gfortran for mingw 64-bit mode which FX Coudert
> >> supplied was a version of gcc-4.3.
Are you talking about the gcc supplied with the mingw-w64 project that
Tim mentioned?

> > May you can take a look at the developer project 'mingw-w64' on
> > sourceforge for more details.
Thanks for pointing me to the mingw-w64 sourceforge project. As Tim
said, there isn't much documentation available for it, so I guess I'll
just download and try it out. Although, at this point, I'm just
thinking that downloading a trial version of Visual C++ and using it
to compile my dll would be much easier :(. But even with that option,
I'm not sure if it's legal to distribute that dll with our package.

> Ali didn't say if he meant g++ rather than gcc, but I guess all of this
> has missed his intended topic.
gcc.  I'm working with a Java application that uses JNI to call
native libraries.  We want to port our software to 64-bit platforms,
and hence here I am, trying to figure out how to compile 64-bit dlls
on my amd64.

Thanks for all the comments,
Ali.


Re: Designs for better debug info in GCC

2007-11-23 Thread Alexandre Oliva
On Nov 12, 2007, Ian Lance Taylor <[EMAIL PROTECTED]> wrote:

> Alexandre Oliva <[EMAIL PROTECTED]> writes:

>> And then, optimizations move instructions around, but I don't think
>> they should move the assignment notes around, for they should
>> reflect the structure of the source program, rather than the
>> mangled representation that the optimizers turn it into.

> I'm not sure I follow this.  If the equivalent of some source code
> line is hoisted out of a loop, shouldn't the user variable assignments
> follow it?

Why should it?  The user is entitled to expect the variable to be set
to that value at the right point in the program, no earlier than that.
Before the assignment point in the program, we ought to note that the
variable holds its previous value, or that its previous value is no
longer available.  But noting it holds a value it should only hold at
a later point doesn't seem right to me.

Consider, again, the example:

f(int x, int y) {
  int c;

  c = x;
  do_something_with_c();

  c = y;
  do_something_with_c();
}

If we optimize away the assignments c=x and c=y, and just use x and y
instead (assume c is not otherwise modified), what should we note in
debug info?  Should we pretend that c is dead all over, just because
it was optimized away?  Should we note that it's live in both x and y
registers/stack slots?  Or should we vary its location between x and
y, at the assignment points, as expected by the user?

Now, what if f() is inlined into a loop, such that c could be
versioned and the assignments to it could be hoisted, because x and y
don't vary?  Should this then change the debug information generated
for variable c from the IMHO correct points to the loop entry points?

> After the scheduler has run over a large basic block, the
> structure of the source program is gone.

The mapping becomes more difficult, yes.  But the structure of the
source program remains untouched, in the source program.  And debug
information is about mapping source concepts to implementation
concepts.  So we should try to map source concepts that remain in the
implementation to the remaining implementation concepts.

> Side note: I think it would be unwise to discuss specific patents on
> this public mailing list.  I think that where we have specific patent
> concerns, the steering committee should raise them on a telephone call
> with the FSF and/or the SFLC.  If you have concerns about a specific
> patent, I recommend that you telephone some member of the SC, or send
> e-mail directly to that person.

That makes sense.  I hadn't actually seen that patent before the day I
mentioned it, and I still haven't got 'round to reading it.  I just
thought it would be wise to inform people about the danger of going
down that path, but now I realize it may not have been wise at all.
Sorry for not thinking about it.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: Designs for better debug info in GCC

2007-11-23 Thread Alexandre Oliva
On Nov 23, 2007, "Frank Ch. Eigler" <[EMAIL PROTECTED]> wrote:

>> > It may be asking to belabour the obvious.  GCC users do not want to
>> > have to compile with "-O0 -g" just to debug during development [...]
>> > Developers will accept that optimized code will by its nature make
>> > some of these fuzzy, but incorrect data must be and incomplete data
^avoided?
>> > should be minimized. [...]

Richard Guenther replied:

>> As it is (without serious overhead) impossible to do both,

Is it?  

>> you either have to live with possibly incorrect but elaborate or
>> incomplete but correct debug information for optimized code.

You have proof of that?

>> Choose one ;)

As in, command line options?  Or are we going to make a choice and
impose that on all our users, as if it fit all?

Frank followed up:

>> What we (Matz and myself) are trying to do is provide elaborate
>> debug information with the chance of wrong (I'd call it superfluous,
>> or extra) debug information.

It's not just superfluous or extra.  Your approach actively regresses
debug information for some cases, while it's arguable whether it
actually improves others.

> That ("world-domination") seems an overly unkind characterization

+1

It would be like myself pointing out that, for every problem, there's
a solution that's simple, elegant and wrong ;-)

Given the problems with sequential live ranges being made parallel and
conflicting, values subject to conditions being made unconditional,
and overwritten values remaining noted as live, I wouldn't think the
characterization above would be unfair, but I'd managed to resist it
so far.

I don't think pulling the blanket such that it covers your face while
it uncovers your feet is the way to go.  It's even worse, because
then, with your face covered, you won't even see that your feet are
uncovered ;-)

Regressions are bad, and this proposed approach guarantees
regressions, while it might fix a few trivial cases.  This is not
enough for me.  I'm not just hacking up a quick fix for a
poorly-worded problem.  I'm doing actual software engineering here,
trying to get GCC to comply with existing debug info standards.

> It does not seem to me like there is
> substantial disagreement over the ideal of correct

Unfortunately, that is indeed up for debate.  There are even those who
dispute that there's any correctness issue involved.  Most other
approaches are actually overreaching in completeness, trading
correctness for more information, as if more unreliable information
were any better than no information at all.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: Designs for better debug info in GCC

2007-11-23 Thread Steven Bosscher
On Nov 23, 2007 9:45 PM, Alexandre Oliva <[EMAIL PROTECTED]> wrote:
> So, yes, debug stmts and insns are notes in the sense that they don't
> output code.  Like USE insns, labels, empty asm insns and other
> UNSPECs.  But wait, those are insns, not notes.  And they do generate
> code, just not in the .text section, but rather in .debug sections.

All of them relate to code generation though.  Without them, we create
wrong code.  I'm aware of how you feel about debug info and
correctness and so on.

> So, what's this prejudice against debug insns?  Why do you regard them
> as notes rather than insns?

What worries me is that GCC will have to special-case DEBUG_INSN
everywhere where it looks at INSNs.  One can already see some of that
happening on your branch.  Apparently, you can't treat DEBUG_INSN just
like any other normal insn.

What I see happening with your DEBUG_INSN approach, is that all passes
that use NEXT_INSN/PREV_INSN will have to special-case DEBUG_INSN in
addition to the NOTE_P or INSN_P checks that they already have.  I
have seen too many bugs in passes that forgot to look through notes
to feel comfortable about adding another
not-a-note-but-also-not-an-insn-like thing to the insn stream.  The
fact that DEBUG_INSN also has real operands that are not really real
operands is bound to confuse the matter even more.  Life with proper
insn and operands iterators for RTL would be so much easier, but for
the moment I fear you're just going to see a lot of duplication of
ugly conditionals and bugs where such conditionals are
forgotten/overlooked/missing.

So to summarize: I'm just worried your approach is going to make GCC
even slower, buggier, more difficult to maintain and more difficult to
understand and modify.  And the benefit, well, let's just say I'm not
convinced that less elaborate efforts are not sufficient.

(And to be perfectly honest, I think GCC has bigger issues to solve
than getting perfect debug info -- such as getting compile times of a
linux kernel down ;-))

Gr.
Steven