new cctools for powerpc-darwin7 required for HEAD

2005-03-25 Thread Geoff Keating
To be used conveniently on Panther, the recent stfiwx change in HEAD  
requires a later version of cctools than the 528.5 version that's  
currently on gcc.gnu.org.  So, I've put cctools-576 on gcc.gnu.org.   
You can install it by clicking on the link below, or by running these  
commands:

ftp ftp://gcc.gnu.org/pub/gcc/infrastructure/cctools-576.dmg
hdiutil attach cctools-576.dmg
sudo installer -verbose -pkg /Volumes/cctools-576/cctools-576.pkg -target /

This version also handles 8-bit characters in identifiers properly, so  
all those ucnid* testcases should pass.
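
As a concrete illustration (a hypothetical example in the spirit of the
ucnid* tests, not one of the actual testsuite files), an identifier written
with a universal character name ends up as a non-ASCII byte in the symbol
name, which is exactly what the assembler has to accept; compilers of that
era may also need -fextended-identifiers:

/* Hypothetical ucnid-style test: the identifier contains U+00E9,
   so the assembler sees a non-ASCII symbol name.  */
int \u00e9l\u00e9ment = 1;

int
main (void)
{
  return \u00e9l\u00e9ment - 1;
}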

It's not necessary to upgrade cctools to use 4.0, since the features  
that need the cctools fixes aren't there.

Source for the new cctools is at  
.  You  
can also get it from  
 (they're the same tarfile, just different compression).

The checksums are:
0ebb1a56b1af0e21d9de30b644c1c059  cctools-576.dmg
3b9a5dd3db6b4a7e9c8de02198faea25  cctools-576.tar.bz2




Re: Bootstrap error on powerpc-apple-darwin: stfiwx

2005-03-27 Thread Geoff Keating
On 27/03/2005, at 4:00 AM, Geert Bosch wrote:
%cat LAST_UPDATED
Sat Mar 26 21:31:28 EST 2005
Sun Mar 27 02:31:28 UTC 2005
stage1/xgcc -Bstage1/ -B/opt/gcc-head//powerpc-apple-darwin7.8.0/bin/ 
-c   -g -O2 -mdynamic-no-pic -DIN_GCC   -W -Wall -Wwrite-strings 
-Wstrict-prototypes -Wmissing-prototypes -pedantic -Wno-long-long 
-Wno-variadic-macros -Wold-style-definition -Werror -DHAVE_CONFIG_H 
   -I. -I. -I/Users/bosch/gcc/gcc -I/Users/bosch/gcc/gcc/. 
-I/Users/bosch/gcc/gcc/../include -I./../intl 
-I/Users/bosch/gcc/gcc/../libcpp/include -I/opt/include -I/opt/include 
/Users/bosch/gcc/gcc/c-cppbuiltin.c -o c-cppbuiltin.o
/var/tmp//ccsCOTn9.s:874:stfiwx instruction is optional for the 
PowerPC (not allowed without -force_cpusubtype_ALL option)
/var/tmp//ccsCOTn9.s:924:stfiwx instruction is optional for the 
PowerPC (not allowed without -force_cpusubtype_ALL option)
/var/tmp//ccsCOTn9.s:970:stfiwx instruction is optional for the 
PowerPC (not allowed without -force_cpusubtype_ALL option)
/var/tmp//ccsCOTn9.s:997:stfiwx instruction is optional for the 
PowerPC (not allowed without -force_cpusubtype_ALL option)
make[2]: *** [c-cppbuiltin.o] Error 1
make[1]: *** [stage2_build] Error 2
make: *** [bootstrap] Error 2
You need to upgrade your cctools, see 
.



Re: PCH versus --enable-mapped-location

2005-03-31 Thread Geoff Keating
On 30/03/2005, at 10:36 PM, Per Bothner wrote:
* Note that we compile the gch file as if it were the main file
- i.e. it has the MAIN_FILE_P property, and it is not included
from any file.  This means the restored line_table is slightly
anomalous.  One solution to this is when we generate the gch file,
we pretend the .h file is included in a dummy file, which we
may call .  This adds two line_map entries: one for 
before the LC_ENTER of the .h file, and one at the end to LC_LEAVE
the .h file.  Then when we restore, we patch both of these to
replace "" by the real main file name.
This is PR 9471, and your solution sounds like a good one.
* Any source_location values handed out before the #include
that restores the gch will become invalid.  They will be re-mapped
to that in the pre-compiled header.  Presumably that's ok - there's
no declarations or expressions in the main file at that point, or
the restore would have failed.  Any declarations that are 
or in the  will presumably be the same either way.
Another way of putting this is that source_location values should be in 
GCed memory.  I think they all are, certainly all declarations and 
macro definitions are.

* Presumably we need custom code to save and restore the line-map.
Zack experimented with having the entire line_table be gc-allocated,
and thus making use of the GTY machinery.  This adds an extra
indirection for each access to the line_table (probably not a big
deal - cpplib accesses are via the cpp_reader and thus unchanged).
But note we still need custom restore code.
Custom restore code is problematic because it means doing something at 
PCH restore time, and so it's always speed-critical.  Changing two 
pointers is not too expensive (especially if you can arrange for those 
pointers to be on the same page), but copying a data structure 
(especially a large one) is not a good idea.

You may be able to avoid even changing the two pointers, by pointing 
them at a static variable (a char array) and marking them as "skip".  
If you do that they'll continue to point to the static variable after 
the PCH is loaded, and you can just change the static variable.  (You 
can get away with this because they are char *.)
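
A minimal sketch of that arrangement, with hypothetical names (not actual
GCC identifiers), assuming the pointer field is excluded from PCH
save/restore the way a GTY((skip)) field would be:

#include <string.h>

/* Static storage; the PCH machinery never relocates this buffer.  */
static char restored_main_file[1024];

struct line_info
{
  /* In GCC this field would carry a "skip" marker so save/restore
     leaves the pointer alone.  */
  const char *main_file_name;
};

static struct line_info info = { restored_main_file };

static void
set_main_file_after_pch_restore (const char *real_name)
{
  /* The pointer itself is untouched; only the buffer contents change.  */
  strncpy (restored_main_file, real_name, sizeof restored_main_file - 1);
  restored_main_file[sizeof restored_main_file - 1] = '\0';
}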



Re: GCC 4.0 RC1 Available

2005-04-12 Thread Geoff Keating
On 12/04/2005, at 6:31 AM, Andrew Haley wrote:
Eric Botcazou writes:
which I see you've already committed a patch for, and a large number
of Java failures.

for 4.0.0-20050410.
Same failure as on Solaris.
Andrew, do you have a Darwin machine at hand?
Surprisingly enough, yes.  But I'm having trouble finding a compiler
binary for it.
Assuming it's a powerpc Darwin machine, you can get compiler binaries
from <http://developer.apple.com/tools/download/>.




Re: GCC 4.0 RC1 Available

2005-04-12 Thread Geoff Keating
On 12/04/2005, at 6:47 AM, Andrew Haley wrote:
Geoff Keating writes:
On 12/04/2005, at 6:31 AM, Andrew Haley wrote:
Eric Botcazou writes:
which I see you've already committed a patch for, and a large 
number
of Java failures.

<http://gcc.gnu.org/ml/gcc-testresults/2005-04/msg00814.html>
for 4.0.0-20050410.
Same failure as on Solaris.
Andrew, do you have a Darwin machine at hand?
Surprisingly enough, yes.  But I'm having trouble finding a compiler
binary for it.
Assuming it's a powerpc Darwin machine, you can get compiler binaries
from <http://developer.apple.com/tools/download/>.
I'm there, but all I can see is Xcode and CHUD.  No compilers or other 
tools.
Xcode is Apple's name for IDE+compiler+debugger+binutils+etc.



Re: GCC 4.0 RC1 Available

2005-04-12 Thread Geoff Keating
On 11/04/2005, at 11:23 PM, Ranjit Mathew wrote:
Geoffrey Keating wrote:
[...]
which I see you've already committed a patch for, and a large number
of Java failures.
You can see full test results at
[...]

for 4.0.0-20050410.
It might be helpful to put your "libjava.log" somewhere
or if all the Java failures seem similar, to post
the error messages around the "FAIL" lines from your
libjava.log.
Many seem to be NoClassDefFoundError exceptions.  Some examples of  
those plus all the unusual cases are:

Setting LD_LIBRARY_PATH to .:/Volumes/Unused/geoffk/gcc-4.0.0-20050410/powerpc-apple-darwin8.0.0/./libjava/.libs:/Volumes/Unused/geoffk/gcc-4.0.0-20050410/gcc:.:/Volumes/Unused/geoffk/gcc-4.0.0-20050410/powerpc-apple-darwin8.0.0/./libjava/.libs:/Volumes/Unused/geoffk/gcc-4.0.0-20050410/gcc:.:/Volumes/Unused/geoffk/gcc-4.0.0-20050410/powerpc-apple-darwin8.0.0/./libjava/.libs:/Volumes/Unused/geoffk/gcc-4.0.0-20050410/gcc
Exception in thread "main" java.lang.RuntimeException: test failed:5
   <>
FAIL: Array_3 -O3 execution - bytecode->native test
UNTESTED: Array_3 -O3 output - bytecode->native test
...
Testing class `Class_1'...
Exception in thread "main" java.lang.NoClassDefFoundError: C
   <>
FAIL: Class_1 execution - gij test
...
Exception in thread "Thread-1" Exception in thread "Thread-2" Exception  
in thread "Thread-3" Exception in thread "Thread-4"  
java.lang.NoClassDefFoundError: SyncTest
   <>
java.lang.NoClassDefFoundError: SyncTest
   <>
java.lang.NoClassDefFoundError: SyncTest
   <>
java.lang.NoClassDefFoundError: SyncTest
   <>
fail: 0
FAIL: SyncTest execution - gij test
UNTESTED: SyncTest output - gij test
...
PASS: stringconst2 execution - gij test
FAIL: stringconst2 output - gij test

I can mail the full log to anyone interested, but I don't think the  
list needs it.




Re: GCC 4.0 RC2 Available

2005-04-19 Thread Geoff Keating
On 19/04/2005, at 6:24 AM, Andrew Haley wrote:
Geoffrey Keating writes:
Mark Mitchell <[EMAIL PROTECTED]> writes:
RC2 is available here:
  ftp://gcc.gnu.org/pub/gcc/prerelease-4.0.0-20050417/
As before, I'd very much appreciate it if people would test these 
bits
on primary and secondary platforms, post test results with the
contrib/test_summary script, and send me a message saying whether or
not there are any regressions, together with a pointer to the 
results.
Bad news, I'm afraid.
It's a bug in dbxout.  A field is marked as DECL_IGNORED_P, but
dbxout_type_fields() still tries to access it.
This patch works for me.
Andrew.
2005-04-19  Andrew Haley  <[EMAIL PROTECTED]>
* dbxout.c (dbxout_type_fields): Check DECL_IGNORED_P before
looking at a field's bitpos.





Re: Sine and Cosine Accuracy

2005-05-31 Thread Geoff Keating


On 31/05/2005, at 6:34 AM, Paul Koning wrote:


"Geoffrey" == Geoffrey Keating <[EMAIL PROTECTED]> writes:



 Geoffrey> Paul Koning <[EMAIL PROTECTED]> writes:


After some off-line exchanges with Dave Korn, it seems to me that
part of the problem is that the documentation for
-funsafe-math-optimizations is so vague as to have no discernable
meaning.



 Geoffrey> I believe that (b) is intended to include:

 Geoffrey> ... - limited ranges of elementary functions

You mean limited range or limited domain?  The x87 discussion suggests
limiting the domain.


Both.  (For instance, this option would also cover the case of an exp()
which refuses to return zero.)



  And "limited" how far?  Scott likes 0 to 2pi;
equally sensibly one might recommend -pi to +pi.


I guess another way to put it is that the results may become  
increasingly inaccurate for values away from zero or one (or  
whatever).  (Or they might just be very inaccurate to start with.)
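
A small illustration of the kind of degradation being described (the exact
behaviour is an assumption; it depends on the libm and flags in use):

#include <math.h>
#include <stdio.h>

int
main (void)
{
  /* Far from zero, an implementation compiled with
     -funsafe-math-optimizations is allowed to skip full argument
     reduction, so the result may be much less accurate than a
     correctly rounded sin().  */
  double x = 1.0e22;
  printf ("sin(%g) = %.17g\n", x, sin (x));
  return 0;
}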



All these things may well make sense, but few or none of them are
implied by the text of the documentation.


I think they're all covered by (b).

It is intentional that the documentation doesn't specify exactly how  
the results differ from IEEE.  The idea is that if you need to know  
such things, this is not the option for you.




Re: GCC 4.0.1 RC2

2005-06-19 Thread Geoff Keating


On 19/06/2005, at 3:45 PM, Gabriel Dos Reis wrote:


Geoffrey Keating <[EMAIL PROTECTED]> writes:

| libstdc++-v3/testsuite/26_numerics/cmath/c99_classification_macros_c.cc

|
| appears to fail, with lots of complaints like
|
| c99_classification_macros_c.cc:49:21: error: macro "isgreaterequal" requires 2 arguments, but only 1 given

|
| but the actual file did this with previous versions too, I think
| something changed in the test harness.  As far as I can tell, this
| testcase is in fact invalid and should produce exactly this error
| message.

Why?

(The only thing I see wrong right now is that the function definitions
should be part of a class, instead of being at the global scope.)


The testcase includes math.h, which we've said should supply the C99  
functions (or, in this case, macros) even in C++ mode.  C99 says that  
'isgreaterequal' is a macro which takes 2 arguments.
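
A minimal illustration (not the libstdc++ testcase itself) of why the
diagnostic is correct once <math.h> supplies the C99 macros:

#include <math.h>

int
compare (double a, double b)
{
  return isgreaterequal (a, b);   /* valid: the macro takes two arguments */
}

/* A declaration along these lines (hypothetical, not quoted from the
   testcase) cannot survive preprocessing, because the one-argument use of
   the name does not match the two-argument macro:

     int isgreaterequal (double);
*/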






updating libtool, etc.

2005-06-30 Thread Geoff Keating
Does anyone mind if I update libtool to the latest released version,  
1.5.18, and regenerate everything with automake 1.9.5?






Re: updating libtool, etc.

2005-06-30 Thread Geoff Keating


On 30/06/2005, at 6:26 PM, David Edelsohn wrote:


Geoff Keating writes:



Geoff> Does anyone mind if I update libtool to the latest released  
version,

Geoff> 1.5.18, and regenerate everything with automake 1.9.5?

If everyone agrees to go forward with this


Oh, I should have said:  "and if you don't mind, how do you feel  
about a GCC project fork of libtool?"







Re: updating libtool, etc.

2005-06-30 Thread Geoff Keating

On 30/06/2005, at 6:41 PM, Daniel Jacobowitz wrote:


On Thu, Jun 30, 2005 at 06:37:07PM -0700, Geoff Keating wrote:



On 30/06/2005, at 6:26 PM, David Edelsohn wrote:



Geoff Keating writes:




Geoff> Does anyone mind if I update libtool to the latest released
version,
Geoff> 1.5.18, and regenerate everything with automake 1.9.5?

   If everyone agrees to go forward with this



Oh, I should have said:  "and if you don't mind, how do you feel
about a GCC project fork of libtool?"



Do you mean "do mind" or "don't mind" there?


What I meant to say was "and if you don't want libtool updated..."


If you want to update libtool, you get to play the all-of-src-uses-it
game.  I have been updating src directories to more recent autoconf
versions in the hope of getting rid of our outdated libtool someday. I
believe the only remaining holdout is newlib, but I didn't check
everything.


OK.  I don't want to update newlib to current tools; that would take  
too long and I don't have the ability to test the result.  Nor am I  
very enthusiastic about updating, for instance, gdb.


I guess that means that I should just work around the existing  
libtool, or if necessary use the new version in just the directories  
that need it.  Yuk.






Re: volatile semantics

2005-07-22 Thread Geoff Keating


On 22/07/2005, at 4:33 PM, Ian Lance Taylor wrote:


Geoffrey Keating <[EMAIL PROTECTED]> writes:


Although I can see that this is how you might think about the
semantics of 'const' and 'volatile', I don't think they're an exact
match for the model in the standard.  In fact, I think you could
exchange the words 'const' and 'volatile' in the above and they would
be equally accurate.



Sure, and I think my ultimate point would still be accurate: gcc
should handle the access using the qualification of the pointer, not
of the underlying object.  The rest of the argument is just
motivation.


By "equally accurate", I also meant "equally inaccurate".

You've successfully argued that 'const' and 'volatile' are the same  
in lots of ways; but 'const' and 'volatile' do differ.  In order to  
be successful in this argument, you need to argue that the  
differences don't matter.


And, unfortunately, the differences *do* matter.  The standard does  
not say "any expression referring to a volatile-qualified type must  
be evaluated strictly according to the rules of the abstract  
machine".  What it says is that "An object that has volatile  
qualified type may be modified in ways unknown to the implementation"  
and then lists some of the consequences of an object being modifiable  
in that way.  So there are no "semantics of the access"; the  
semantics attach to the object.


This is part of what I meant by saying that your model isn't a match  
for the model in the standard.  Your model had semantics attached to  
the access.



In fact const and volatile are analogous here.  If you have a
non-const object, and you attempt to access it with a const-qualified
pointer, the compiler will apply the semantics of const (i.e., it will
reject an attempt to modify the object through the pointer).


There are no general differences in semantics of a 'const' access.   
There are only special semantics of an assignment to a 'const' lvalue  
(it's not allowed).  Nor are there any special semantics of a 'const'  
object; in particular, it is not true that a 'const' object cannot  
change, only that it can't be changed from C.  So it's not true that  
const and volatile are analogous here, nor is it true that an access  
to a non-const object with a const-qualified pointer is different to  
accessing it with a non-const-qualified pointer.
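
A short example of the distinction being drawn (a sketch, not taken from
the thread):

int x = 0;                /* a non-const object */

void
f (void)
{
  const int *p = &x;
  int v = *p;             /* same access semantics as reading through &x */
  *(int *) p = v + 1;     /* legal: the underlying object is not const */
  /* *p = 1;                 rejected: assignment to a const lvalue */
}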


I am discussing here only with what GCC *could* do, and still be  
standards-conforming.  What it *should* do is a different conversation.






Re: volatile semantics

2005-07-23 Thread Geoff Keating


On 22/07/2005, at 7:57 PM, Gabriel Dos Reis wrote:


There is a "semantics of access".  It is implementation-defined.


I think you're thinking of "what constitutes an access", which is
implementation-defined, but that is not the same as the semantics of an
access.


The standard describes things like side effect and such in terms of  
*access*.


That's true, but this actually makes the model less useful, not  
more.  There is no requirement on the implementation that it actually  
perform any side effects if they are not needed.






Re: volatile semantics

2005-07-23 Thread Geoff Keating


On 22/07/2005, at 7:15 PM, Paul Schlie wrote:


Geoffrey Keating writes:


without 'volatile', then this object cannot be modified unknown to the
implementation, even if someone also writes '(*(volatile int *)&i) = 1'.




- merely means: treat the object being referenced as volatile qualified int
  object (as the standard specifies, although it may result in an undefined
  behavior, nothing more or less; as although the object may have not been
  initially declared as being volatile, the program within the context of
  this particular references has asserted that it must be treated as such,
  thereby implying it's value must be assigned, and/or presumed to have been
  possibly modified beyond the logical view of the program).


It doesn't imply that.  All it implies is that *from this access* the  
compiler cannot assume that the object is not "modified in ways  
unknown to the implementation".  From other accesses, including from  
the original declaration (and its initializer if any), the  
implementation may be able to make that deduction.  If, and only if,  
the implementation can make that deduction, then it can perform  
optimizations.  In this example:


int i = 0;
while (*(volatile int *)&i == 0) ;

then the implementation can make that assumption, and optimise the  
loop into an infinite loop that does not test 'i', because the '= 0;'  
performs a store to a non-volatile object 'i' which therefore cannot  
be modified in ways unknown to the implementation and therefore will  
always be zero.
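
For contrast (an added illustration, not part of the original message): if
the object itself is declared volatile, that deduction is no longer
available and the load must be performed on every iteration:

volatile int j = 0;

void
wait_for_j (void)
{
  while (*(volatile int *)&j == 0)
    ;   /* j may be "modified in ways unknown to the implementation" */
}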



Re: Pointers in comparison expressions

2005-07-25 Thread Geoff Keating


On 23/07/2005, at 6:12 PM, Paul Schlie wrote:


Geoffrey Keating wrote:


Mirco Lorenzon wrote:

.., are comparisons in the following program legal code?



No.



...
void *a, *b;
...
if (a < b)



Because 'a' and 'b' are not part of the same array,
the behaviour is undefined.



Although I don't mean to contest the conclusion, I do find it curious that
as all pointer values referencing unique objects must be correspondingly
unique, it would follow that they will be correspondingly ordered with
respect to each other.  Therefore although technically undefined, it seems
quite reasonable to expect an implementation to support ordered inequality
comparisons between arbitrary pointers to equivalent effective types?


Consider an implementation which does garbage collection and  
compaction.  In such an implementation, it might be quite  
inconvenient to have to maintain a consistent ordering for all pointers.


As it would seem otherwise impossible for an implementation to support the
ability to write code which enables the relative comparison of generalized
memory pointers unless one were to explicitly declare a union of an array
of all potentially allocateable memory, and all explicitly and implicitly
declared objects; which doesn't seem reasonable?


Although the C language doesn't guarantee the availability of such  
support, there is nothing that prevents an implementation from saying  
that it supports it, and in fact GCC does say that it supports it,  
through the use of a construct like


((intptr_t) a) < ((intptr_t) b)
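
A self-contained version of that idiom (a sketch of the construct mentioned
above, not code from the thread):

#include <stdint.h>

int
ptr_less (void *a, void *b)
{
  /* Comparing the converted integer values avoids the undefined behaviour
     of comparing unrelated pointers directly.  */
  return (intptr_t) a < (intptr_t) b;
}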





Re: -b vs -bundle

2005-08-01 Thread Geoff Keating


On 31/07/2005, at 12:03 PM, Jack Howarth wrote:



  In compiling xplor-nih under the gcc/g++ of 4.1 branch instead
of Apple's gcc/g++ 4.0 compilers from Xcode 2.1, I noticed that the
gnu gcc compiler doesn't gracefully handle the -bundle flag. On  
Apple's

compiler I can have a Makefile entry like...

createSharedModule = $(CXX) -bundle  \
-flat_namespace -undefined suppress $^ -o $@

and it compiles the shared module without error. However I see the
error...

g++-4 -bundle -flat_namespace -undefined suppress _xplorWrap.o
libswigpy-xplor.dylib -o _xplorWrap.so
-L/Users/howarth/Desktop/xplor-nih-2.11.2.1/bin.Darwin_8/ -lcommon
-lnmrPot -lintVar -lvmd -lpy -lswigpy-xplor \
 -lcrypto -L/Users/howarth/Desktop/xplor-nih-2.11.2.1/bin.Darwin_8/

g++-4: couldn't run 'undle-gcc-4.1.0': No such file or directory

with the gnu gcc compiler. I noticed that you rejected a proposed  
patch

a few years ago...

http://gcc.gnu.org/ml/gcc-patches/2002-12/msg00655.html
http://gcc.gnu.org/ml/gcc-patches/2002-10/msg01961.html

Could you revisit this issue and see if something could be done for  
4.0
and 4.1 branch? I would think that either the compiler should  
require the

-b flag to have a space before the machine name. Alternatively if the
gnu gcc compiler mustn't allow -bundle to be the first argument passed
to the compiler, it should at least treat that as a defined error  
rather
than producing the cryptic one it does now. Thanks in advance for  
looking

at this again.



Hi Jack,

I believe everything I said back then is still valid.  See especially  
.


I don't think we can require -b to have a space; that would break  
existing scripts.




Re: -b vs -bundle

2005-08-01 Thread Geoff Keating


On 01/08/2005, at 1:44 PM, Jack Howarth wrote:


Geoff,
   What I don't understand is how Apple's compiler can parse the
-bundle as the first argument and the gnu gcc compiler can't.
Shouldn't the same mechanism Apple uses to allow this to work
be backportable into gnu gcc?


No.  There's lots of stuff in Apple's compiler that isn't written  
portably enough to work in GNU GCC.


My suggestion, if you're interested in fixing this, is to read

http://gcc.gnu.org/ml/gcc-patches/2002-12/msg00736.html

and fix the problems I mentioned there in the patch that Devang  
submitted, and submit a fixed version.






Re: proposed Opengroup action for c99 command (XCU ERN 76)

2005-09-16 Thread Geoff Keating


On 16/09/2005, at 5:12 AM, Joseph S. Myers wrote:


On Fri, 16 Sep 2005, Geoffrey Keating wrote:



What this means in practise, I think, is that the structure that
represents a token, 'struct cpp_token' will grow from 16 bytes to 20
bytes, which makes it 2 cache lines rather than 1, and a subsequent
memory use increase and compiler performance decrease.  It might be
that someone will think of some clever way to avoid this, but I
couldn't think of any that would be likely to be a win overall, since
a significant proportion of tokens are identifiers.  (I especially
didn't like the alternative that required a second hash lookup for
every identifier.)



There are plenty of spare bits in cpp_token to flag extended identifiers
and handle them specially (as a slow path, marked as such with
__builtin_expect).  There's one bit in the flags byte, two unused bytes
after it and a whole word not used in the case of identifiers (identifiers
use a cpp_hashnode * where strings and numbers use a struct cpp_string
which is bigger) which could store a canonical form of an identifier (or
could store the noncanonical spelling for the use of the specific places
which care about the original spelling).


Yes, I think this can be made to work efficiently.

Adding salt to the wound, of course, is that for C the only difference
between an (A) or (B) and a (C) implementation is that a (C)
implementation is less expressive: there are some programs, all of
which are erroneous and require a diagnostic, that can't be written.
So you lose compiler performance just so users have another bullet
to shoot their feet with.



C++ requires (A)


This is true, but only in the sense that C requires (B).  Either  
language can be supported by any of the three implementations with an  
appropriate phase 1 rule.


Implementation of (A) could start by a (slow path, if there are extended
characters present) conversion of the whole input to UCNs, or a more
efficient conversion that avoids the need to convert within comments.


Although UCNs would be the most convenient form for the preprocessor,  
the backend would like strings to be in UTF-8, to avoid the need for  
conversion when outputting names to the assembler.


But if any normalisation of UCNs is documented for C++ it does need to be
documented in the form of transforming UCNs to other UCNs (not to UTF-8).


Yes; but this is not a difficult problem.  For C++, you would just  
say (following my proposed wording) that after they're converted to  
UTF-8, they are converted back to some canonical form of UCN ('the  
version with the most lower-case characters', for instance).  Then,  
when stringifying, you would convert UTF-8 characters in identifiers  
to that canonical UCN.




Re: long double on ppc-darwin

2005-12-17 Thread Geoff Keating


On 17/12/2005, at 5:56 PM, Mike Stump wrote:


On Dec 17, 2005, at 6:08 AM, FX Coudert wrote:

I'm trying to understand the gfortran failure  
large_real_kind_2.F90 on ppc-darwin7.9, which can be reduced to:


$ cat large_real_kind_2.F90
  real(kind=16) :: x
  real(8) :: y

  x = 1
  y = x
  x = cos (x)
  y = cos (y)
  print *, x, y, y-x

  end
$ ./usr/local/gfortran/bin/gfortran -g large_real_kind_2.F90 && ./a.out
  0.540302305868139765010482700  0.540302305868140  1.984153572718682756025749000E-0004


But I can't make a C testcase for that. Is "long double" supposed  
to be usable on ppc-darwin7.9 ?




The trick is that things like sinl have a linker name like _sinl$LDBL128.
This is to enable 8 byte long doubles to continue to work, and _sin is an
8 byte long double routine.  Don't ask.  Binary compatibility is so very
much fun.


Since Geoff invented the scheme, I'm sure he had an idea of how he  
thought it should work for Fortran.  My guess would be that gcc has  
to pick the right library names internally for all languages, thus,  
obviating the need for the asm () fun to remap the names in C.


Yes; to do this right, GCC's builtins need to know about the  
different names.
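
A sketch of the asm() renaming being referred to (illustrative only; it
only links on a system that actually provides the $LDBL128 entry points):

/* Reach the 128-bit long double variant explicitly by assembler name.  */
extern long double sinl (long double) __asm__ ("_sinl$LDBL128");

long double
sin_of_one (void)
{
  return sinl (1.0L);   /* calls _sinl$LDBL128, not the old 8-byte entry */
}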


If you're interested in fixing this, I can tell you what to do...



Re: long double on ppc-darwin

2005-12-18 Thread Geoff Keating


On 18/12/2005, at 10:57 AM, Mike Stump wrote:


On Dec 17, 2005, at 10:27 PM, Andrew Pinski wrote:


On Dec 18, 2005, at 1:13 AM, Geoff Keating wrote:


Yes; to do this right, GCC's builtins need to know about the  
different names.


If you're interested in fixing this, I can tell you what to do...



I figured out how to fix it and will be posting a patch later this  
week.

but a function like:



Yes, looks about right to me, the only modification would be  
testing 128 bit long double flag (TARGET_LONG_DOUBLE_128) and be  
aware, this is a ppc only thang.


You also need to be sure to test macos_version_min.  If you're  
compiling to target < 10.3.9, then you need to make sure to use the  
original version of at least printf and the other variadic functions.





And then go through all the builtins which need to be fixed.



If there are functions which aren't built in that are also  
modified, I'm not sure what we should do with them.  Thoughts?  If  
nothing, how do we know that they aren't used now and how do we be  
sure they won't be used in the future?


If you're matching by name, you might as well have all the names.   
But either way, the list would need to be maintained, since the OS  
might add more functions in future versions.






Re: weakref miscompiling libgfortran

2005-12-25 Thread Geoff Keating


On 25/12/2005, at 3:08 PM, Richard Henderson wrote:


This test case, simplified from libgfortran, currently results in a
tail call to pthread_mutex_unlock on i686 with -fpic, and is the cause
of all the libgomp fortran failures on the branch.  That this isn't
seen on mainline is simply a consequence of not using any threaded
fortran code on mainline.


It looks like ix86_function_ok_for_sibcall is using !TREE_PUBLIC as a  
proxy for 'does not call through the PLT', which is not right;  
probably it should be using binds_local_p instead, although it's  
possible that some calls to binds_local_p symbols are going through  
the PLT even though they don't need to.



That targetm.binds_local_p is no longer reliable is a serious bug.


What's wrong with it?  It should be answering 'false' for a weakref,  
because the underlying object to which it refers might not be local  
(even though this particular name for the object is local).


When thinking about aliases there's a fundamental difference between  
the definition of TREE_PUBLIC, which only talks about names, and the  
definition of binds_local_p, which also talks about objects.



And unless GeoffK can be convinced that the current setting of
TREE_PUBLIC is in fact ON, then we'll have to audit every single
use of that symbol, and determine if it actually should be testing
targetm.binds_local_p, or some new predicate yet to be determined.


(I presume you meant 'correct' not 'current'.)

You can't avoid looking at every use of TREE_PUBLIC just by saying  
that TREE_PUBLIC should be set to true.  There are places that are  
expecting the documented definition of TREE_PUBLIC to hold, and won't  
work right if weakrefs are marked as TREE_PUBLIC even though they're  
not; for instance, in the IMA name-resolution logic, and in the stabs  
output code.






Re: weakref miscompiling libgfortran

2005-12-26 Thread Geoff Keating

On 25/12/2005, at 8:51 PM, Richard Henderson wrote:


On Sun, Dec 25, 2005 at 07:36:16PM -0800, Geoff Keating wrote:


That targetm.binds_local_p is no longer reliable is a serious bug.



What's wrong with it?  It should be answering 'false' for a  
weakref...




It doesn't.  Exchanging the TREE_PUBLIC check for the binds_local_p
check was the first thing I tried.  At which point I realized that
we have Real Problems with this feature at the moment.


The code in default_binds_local_p_1 says:

  /* Weakrefs may not bind locally, even though the weakref itself is
 always static and therefore local.  */
  else if (lookup_attribute ("weakref", DECL_ATTRIBUTES (exp)))
local_p = false;

which comment, now that I reread it, is a little confused, but the  
code looks fine, and it worked for me on Darwin.  Perhaps you have a  
local patch which is affecting this?


We've had Real Problems with this feature since it was introduced.  I  
expect it'll take at least another two or three months before it  
settles down and starts to work on most targets; that's only to be  
expected for such a far-reaching change to GCC's capabilities.



Re: weakref miscompiling libgfortran

2005-12-27 Thread Geoff Keating


On 27/12/2005, at 1:49 AM, Jakub Jelinek wrote:


On Mon, Dec 26, 2005 at 11:34:16PM -0800, Geoff Keating wrote:


We've had Real Problems with this feature since it was introduced.  I
expect it'll take at least another two or three months before it
settles down and starts to work on most targets; that's only to be
expected for such a far-reaching change to GCC's capabilities.



It works just fine on ELF targets on gcc-4_1-branch, where you haven't
applied your set of changes (as well as in the 4.0, 3.4 and 3.2 backports
Alex did).


I'm not sure what "just fine" definition you're using here.  I don't  
think you can say it's been extensively tested, and I'm fairly sure I  
can find a bunch of bugs in it.  I have already filed one as
<http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25140>; I understand that also
occurs on ELF targets.  I don't believe it's been tested with IMA  
(and obviously the TREE_PUBLIC stuff is an issue there, because IMA  
will honour TREE_PUBLIC even if varasm.c won't).  I don't know what  
kind of debugging output is produced, and don't think anyone has  
tested that.  You can probably do really strange things with it and
C++ templates.  Probably there are other interactions with GCC features
that I haven't thought of yet.


I also note that GCC supports targets that do not have ELF, and on  
some of those targets the testcases fail in 4.1, which surely doesn't  
count as 'just fine' by anyone's definition.  I see this happens on  
HPUX, <http://gcc.gnu.org/ml/gcc-testresults/2005-12/msg01442.html>;  
it also happens on Darwin.  I don't see any AIX results either way.


I would describe the version on the 4.1 branch as a hack that's to work
only for libgfortran on ELF targets, and that's OK; but that version is
not ready for general use.  I'd like to see such a general-use feature
available in 4.2, and that means working our way through the issues.



More importantly, it is very wrong if GCC 4.1 and 4.2 weakrefs are
incompatible in their user visible part (trunk requires
static __typeof (bar) foo __attribute__ ((weakref ("bar")))
while 4.1 requires
extern __typeof (bar) foo __attribute__ ((weakref ("bar")))
).

IMHO this needs to be resolved before GCC 4.1 is released, so very  
soon.


I agree that it would be desirable if 4.1 and 4.2 had a consistent  
syntax for this feature.


Either someone needs to investigate all TREE_PUBLIC checks in whole GCC
and backport trunk changes to 4.1; I'd say this is pretty dangerous.


I think that you're assuming that if TREE_PUBLIC is set to true, then  
no investigation is necessary, everything will 'just work'.  It's  
pretty clear that is not the case; it is just that a different  
collection of things will break.  It's not like someone investigated  
all the TREE_PUBLIC checks in GCC before committing the feature in  
the first place.



Or the trunk change needs to be reverted.  Or perhaps 4.1 could keep
the static __typeof (bar) ... syntax, but internally set TREE_PUBLIC
in those cases (this will need some changes if we want to allow say
static __typeof (bar) foo;
...
static __typeof (bar) foo __attribute__ ((weakref ("bar")));
or
static __typeof (bar) foo __attribute__ ((weakref ("bar")));
...
static __typeof (bar) foo;
but I don't think they really need to be allowed).


Clearly the safest thing to do is for 4.1 to back out the change that  
introduced it, and say that this is a 4.2-only feature.  If it is  
made to work on all platforms that libgfortran supports, maybe you  
could keep it but say that in 4.1 it's for internal libgfortran use  
only; perhaps you could rename the attribute to, say,  
__internal_weakref for 4.1, so that users don't get confused.






Re: weakref miscompiling libgfortran

2005-12-28 Thread Geoff Keating


On 27/12/2005, at 3:36 PM, Jakub Jelinek wrote:


On Tue, Dec 27, 2005 at 02:20:44PM -0800, Geoff Keating wrote:


I'm not sure what "just fine" definition you're using here.  I don't
think you can say it's been extensively tested, and I'm fairly sure I
can find a bunch of bugs in it.  I have already filed one as
<http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25140>; I understand that also
occurs on ELF targets.  I don't believe it's been tested with IMA



PR25140 talks about something common to all kinds of aliases (replace
`weakref' with `alias' or `weak, alias' and you'll get exactly the  
same

result), so I don't know why you make a case against weakref from it.


Well, there are two reasons:
1. Normally, with aliases, you only use one of the aliases in any  
given source file.  With weakrefs, you are supposed to be able to use  
both, in fact it would be common to do so.
2. On Darwin aliases don't work at all, so I don't care how broken  
they would have been if they did work.



Weakrefs aren't really needed on Darwin for the purpose they have been
added, so they could very well just be unsupported there too (if the
target object file format doesn't support any kinds of aliases, why should
it support weakrefs?).


I think that's what I said below; the current implementation is  
really only for ELF targets.


However, the general feature *is* useful on Darwin.  We haven't yet  
hit a case where the libstdc++ headers would prefer to use a function  
defined only in a later Darwin version, and so would like to make it  
weak unless the user uses it directly, but in the future we probably  
will.  Likewise, user C++ libraries might want to do the same thing.   
So, I'd like to see this work in 4.2.  I think it actually does work  
on Darwin now in 4.2.



I would describe the version on the 4.1 branch as a hack that's to
work only for libgfortran on ELF targets, and that's OK; but that
version is not ready for general use.  I'd like to see such a  
general-




It has nothing to do with libgfortran actually, libgfortran only ever
uses the weak pthread function aliases within libgfortran.
The reason why weakref attribute has been added is primarily libstdc++,

see PR4372, because unlike libgfortran or libobjc, libstdc++ installed
headers were using #pragma weak on all pthread_* functions it wanted
to use.




Re: [HELP] GCC 4.1 branch Ada status on powerpc-darwin?

2006-01-19 Thread Geoff Keating


On 19/01/2006, at 9:08 AM, Peter O'Gorman wrote:



Eric Botcazou wrote:
|>Yes the workaround is to add -fexceptions or -shared-libgcc to the
|>command line when linking libgnat but I don't know if that is the correct
|>fix or some hacking to config/darwin.h is needed.
|
| Thanks.  However, that's not sufficient because the tools fail to build too:


I'm adding Geoff Keating to the CC, hoping that he'll both shout at me while
explaining why this change to darwin.h is broken, and suggest a real fix.


If ADA is going to use exceptions, then it needs to do what G++ does,  
which is pass -shared-libgcc in its driver.



This change allows gcc to build on powerpc-apple-darwin8.4 with ada.


It's not OK because for forwards binary compatibility the shared  
libgcc must be used for exception handling.



Peter
Index: gcc/config/darwin.h
===================================================================
--- gcc/config/darwin.h (revision 109965)
+++ gcc/config/darwin.h (working copy)
@@ -324,6 +324,7 @@
      -lgcc;                                                                \
     :%:version-compare(>< 10.3.9 10.5 mmacosx-version-min= -lgcc_s.10.4)  \
      %:version-compare(>= 10.5 mmacosx-version-min= -lgcc_s.10.5)         \
+    %:version-compare(!> 10.3.9 mmacosx-version-min= -lgcc_eh)            \
      -lgcc}"
 
 /* We specify crt0.o as -lcrt0.o so that ld will search the library path.  */






Re: [PATCH]: bump minimum MPFR version, (includes some fortran bits)

2008-10-26 Thread Geoff Keating
"Kaveh R. GHAZI" <[EMAIL PROTECTED]> writes:

> Since we're in stage3, I'm raising the issue of the MPFR version we
> require for GCC, just as in last year's stage3 for gcc-4.3:
> http://gcc.gnu.org/ml/gcc/2007-12/msg00298.html
>
> I'd like to increase the "minimum" MPFR version to 2.3.0, (which has been
> released since Aug 2007).  The "recommended" version of MPFR can be bumped
> to the latest which is 2.3.2.

I note that the configure script, and
, says

error: Building GCC requires GMP 4.1+ and MPFR 2.3.2+.

not "MPFR 2.3.0+".

I found this gave me significant trouble attempting to build GCC from
SVN sources on the regression tester, .

The regression tester is now running CentOS 5.2, basically the same as
RHEL 5.2; this is the latest available CentOS.  On that distribution,
an older version of mpfr is included with the system.  It is provided
as a static library but in the same RPM as gmp, which is a dynamic
library and used by at least gfortran (of course) and php.

There appears to be no Red Hat-based distribution that comes with
2.3.2 or later.  Even Fedora 10, which is not yet released, does not
appear to include it.

I found that simply building MPFR in a non-default location (configure
--prefix && make) and then pointing GCC at it with --with-mpfr, as in
the installation instructions, causes the bootstrap to fail when first
running xgcc, because xgcc can't find the built MPFR dynamic library.

I eventually resolved this by uninstalling php, gfortran, and gmp, and
installing gmp, gmp-devel, and mpfr packages built (by my lovely
assistant) by taking the Fedora (10, I think) packaging and upgrading
the contained upstream sources to the latest versions.

If this is what users (or even developers) of GCC are supposed to be
doing, I'd suggest more documentation on what to do and how to do it.


Re: [PATCH]: bump minimum MPFR version, (includes some fortran bits)

2008-10-26 Thread Geoff Keating


On 26/10/2008, at 12:25 PM, Kaveh R. Ghazi wrote:


From: "Geoff Keating" <[EMAIL PROTECTED]>

I found that simply building MPFR in a non-default location (configure
--prefix && make) and then pointing GCC at it with --with-mpfr, as in
the installation instructions, causes the bootstrap to fail when first
running xgcc, because xgcc can't find the built MPFR dynamic library.


First I'd like to thank you for running your regression tester, and I'm
sorry this upgrade caused you so much difficulty.

The issue you describe is PR 21547.
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21547

It's important IMHO to point out that this issue is not confined to  
MPFR.
While MPFR's more stringent version requirements expose the problem  
more

often, you'll find the same issue with GMP as well if you compile and
install your own copy of that library in a non-default location.

Basically, the problem is that the bootstrap process doesn't encode
the -rpath during the link process of cc1.  Doing this in a portable  
fashion
and resolving the PR requires linking cc1 et al. with libtool.  I  
don't have
any experience with libtool myself.  However since we use it  
elsewhere in

GCC, it may be easy to incorporate.

(Any libtool experts available to help out, or at least talk us  
through the process?)



If this is what users (or even developers) of GCC are supposed to be
doing, I'd suggest more documentation on what to do and how to do it.


I think the steps you (or your assistant) went through were unfortunately
unnecessarily complicated.  The solution I use is to build MPFR and/or GMP
using --disable-shared.  Then you can install them anywhere you like, pass
the location with --with-{mpfr,gmp} to GCC and since only the static
library is available everything links and runs fine.


I did consider this but wasn't sure it would work, and it didn't  
really seem less complicated, because then I'd have had to change the  
build scripts to figure out where MPFR was in order to pass the flag.


Do you think putting this recommendation in the docs somewhere would  
have
been useful to you in this situation?  If so, I would be happy to  
prepare a

patch.


Yes, if that's what people are expected to do, it would be good to  
document it.


I guess my real question was, "what do typical developers actually do,  
or are expected to do, to handle this"?  I'd like the regression  
tester to be doing what everyone else does, so as to be a more  
accurate test of how GCC is really built.


Does everyone really type --with-mpfr= on every build?  Looking at  
gcc-testresults, I see:


Kaveh: yes
Joey Ye: no
HJ: no
Guerby: yes
Andreas Jaeger: no
Dave Anglin: no
Janis: yes
Gerald Pfeifer: no
Diego: yes
Andreas Krebbel: no

It seems like mostly, anyone using the compiler farm has to use
--with-mpfr, and otherwise people avoid it.






Re: __Unwind_GetIPInfo on Darwin 8.11

2008-11-26 Thread Geoff Keating


On 26/11/2008, at 4:16 PM, Jack Howarth wrote:


On Wed, Nov 26, 2008 at 01:22:35PM -0800, Geoffrey Keating wrote:

Jack Howarth <[EMAIL PROTECTED]> writes:


Iain,
 The use of the system libgcc simply won't work on Mac OS X 10.4.
The missing __Unwind_GetIPInfo only exists in libgcc_s.10.5.dylib
and not libgcc_s.10.4.dylib...


Replacing or modifying the system libgcc is not recommended and may
break in the next version of Mac OS X.  It's not clear to me what  
this

will mean for GCC development.

You can see the exact commands the regression tester used in the  
build

log file at
;  
basically,


+ /Users/regress/tbox/svn-gcc/configure --prefix=/Users/regress/tbox/objs --target=powerpc-apple-darwin8.5.0

+ make -j2 bootstrap
+ make -j2 -k check

No extra flags, no moving stuff around, nothing added or deleted from
the GCC source tree; that would defeat the purpose of the regression
tester, which is to test the actual GCC in the repository.  There is
some strangeness in the system configuration: GMP and MPFR are
installed in /usr/local as static libraries, and I seem to
remember the system is running with a modified kernel, containing a
patch which makes dejagnu work, which is why it's running 10.4.5.

10.4.11 is significantly different from 10.4.5 and from 10.5.  I
believe it adds a shared libgcc and libstdc++.  It may be that GCC
does not work on 10.4.11.

You can find the exact scripts the tester uses to run the build in
contrib/regression in the GCC source tree.  The tester checks out the
tree and runs the scripts from the checkout.


Geoff,
 I think you misunderstood my intention with that statement. I wasn't
suggesting that Iain move a libgcc.so.10.5.dylib onto a different  
machine.
Rather I meant that the offending symbol, __Unwind_GetIPInfo, was  
only added
to the system libgcc for 10.5 so that Tiger's system libgcc would  
never

be able to provide it.


I should correct an earlier statement of mine: 10.4.11 does not add a  
shared libgcc.  My 10.4.5 machine has a shared libgcc which indeed  
does not have __Unwind_GetIPInfo.  However, this does not prevent GCC  
building on it.



Re: [PATCH, DARWIN] fix emutls exports in libgcc_s10.{4,5}.dylib

2008-12-10 Thread Geoff Keating


On Dec 10, 2008, at 7:32 AM, Jack Howarth wrote:


On Wed, Dec 10, 2008 at 02:55:11PM +, IainS wrote:


On 10 Dec 2008, at 14:43, Jack Howarth wrote:


shipped by Apple with its OS releases. I think what you want to do is
make sure you are using the FSF libgcc's and not the system ones
while having environmental MACOSX_DEPLOYMENT_TARGET unset. The  
latter

step will cause the unversioned libgcc to be used with all the newer
symbols specific to the FSF gcc 4.4 release (that are not listed in
the darwin-libgcc.10.4.ver and darwin-libgcc.10.5.ver files).


I have not found a way (MACOSX_DEPLOYMENT_TARGET set or unset) of
getting the loader to look at the FSF libgcc_s.1.dylib unless I
specifically name it on the command line with -lgcc_s.1

That does work ( as does -lgcc_eh. )

I guess I misunderstood the purpose of libgcc_s10.x as being "all the
symbols added since this release"

It would help if someone could point me at some clear documentation
about what

libgcc_eh
libgcc_s.1
libgcc_s.10.X

*should* contain.

other than that, it's a case of returning to some solution which
involves a tls.exp... etc.

thanks,
Iain


Iain,
  Actually, on reflection, I'm not really sure how one gets the
complete set of symbols out of libgcc on darwin any more. The patch...

http://gcc.gnu.org/ml/gcc-patches/2007-06/msg00475.html

would suggest that the compiler now defaults to the libgcc of
the system it is running on, so it is unclear what unsetting
MAC_OS_X_DEPLOYMENT_TARGET would achieve. Perhaps Geoff can
clarify this behavior and explain how the unversioned set
of libgcc symbols can be used?



All this is documented in darwin.h:

/* Support -mmacosx-version-min by supplying different (stub) libgcc_s.dylib
   libraries to link against, and by not linking against libgcc_s on
   earlier-than-10.3.9.

   Note that by default, -lgcc_eh is not linked against!  This is
   because in a future version of Darwin the EH frame information may
   be in a new format, or the fallback routine might be changed; if
   you want to explicitly link against the static version of those
   routines, because you know you don't need to unwind through system
   libraries, you need to explicitly say -static-libgcc.

   If it is linked against, it has to be before -lgcc, because it may
   need symbols from -lgcc.  */

libgcc_s.10.x.dylib are stub libraries that list the symbols that were  
shipped in libgcc_s.1.dylib in Mac OS version 10.x.  The compiler  
links with '-lgcc_s.10.x -lgcc' and so any particular routine comes  
either from libgcc_s.10.x.dylib, if it's there, or from libgcc.a, if  
it wasn't present on that system.


The particular 'x' is based on the -mmacosx-version-min flag.  A long  
time ago the MACOSX_DEPLOYMENT_TARGET environment variable was used  
for this but it should not be used today, because environment  
variables are bad.


To use the 'unversioned set' implies that you're compiling for a  
version of Mac OS that Apple has not yet created and most likely will  
never exist.  This is not useful.


One way to get extra runtime support is put routines in libgcc.a which  
can be statically linked into executables if they aren't present in  
the system.


The routines in libgcc_eh.a are routines which should normally never  
be statically linked into executables, because they won't work if you  
do that; they must be in the system.  If you need a routine that's in  
libgcc_eh.a, and it's not in the system, you're out of luck; you can't  
use it.  You'd need to rewrite it so it can handle being linked into  
multiple executables, or you'd need to create a new shared library,  
put the routine in it, and ship that shared library with every  
executable you create.






Re: [PATCH, DARWIN] fix emutls exports in libgcc_s10.{4,5}.dylib

2008-12-10 Thread Geoff Keating


On Dec 10, 2008, at 3:24 PM, IainS wrote:


Thanks Geoff,
that's v. useful doc.

On 10 Dec 2008, at 22:36, Geoff Keating wrote:



On Dec 10, 2008, at 7:32 AM, Jack Howarth wrote:


On Wed, Dec 10, 2008 at 02:55:11PM +, IainS wrote:




To use the 'unversioned set' implies that you're compiling for a  
version of Mac OS that Apple has not yet created and most likely  
will never exist.  This is not useful.


One way to get extra runtime support is put routines in libgcc.a  
which can be statically linked into executables if they aren't  
present in the system.



if one did -lgcc_s.10.x -lgcc_s.1   would that break it?
... should it not pick up only the unresolved symbols from s.1

( you would also have to be  prepared to install libgcc_s.1 in a  
suitable place).


The second part here is the tricky part.  I would not recommend  
installing your libgcc_s.1.dylib in /usr/lib, or anywhere else really,  
since you don't know that it'll be compatible with the system one.


The routines in libgcc_eh.a are routines which should normally  
never be statically linked into executables, because they won't  
work if you do that; they must be in the system.  If you need a  
routine that's in libgcc_eh.a, and it's not in the system, you're  
out of luck; you can't use it.  You'd need to rewrite it so it can  
handle being linked into multiple executables, or you'd need to  
create a new shared library, put the routine in it, and ship that  
shared library with every executable you create.


OK, we've got quite a bit of work to do then, all the runtime libs  
(gfortran, stdc++v3, gomp, ffi, java)  link on libgcc_s.1.dylib


That's normal, every program and shared library on Darwin does that.





Re: GCC build failure, h...@149166 on native

2009-07-02 Thread Geoff Keating
On Jul 2, 2009, at 2:35 AM, Richard Guenther wrote:


On Thu, Jul 2, 2009 at 11:19 AM, Dominique Dhumieres wrote:

In http://gcc.gnu.org/ml/gcc-regression/2009-07/msg00038.html
Arnaud Charlet wrote:

Can someone please fix or disable these runs? They are getting very
irritating.


What I find extremely irritating is that it takes so long to
fix bootstrap failures. Meanwhile I hope to see such mails
until the problem(s) is (are) fixed.


Well, I suppose "native" is *-darwin which then boils down to the fact
that this is not a freely available host operating system and the
respective maintainers of that target/host combination seem to not
care.

But yes, Geoff - can you adjust the regression mails to not blame
people for build failures that persist for some time?


The regression tester's mail should not be taken as blaming anyone for  
anything. It's a machine and not yet capable of such complex thoughts.


Indeed, it's not capable of reliably working out whether a build  
failure has persisted for some time or has changed. At present it  
looks at the log file length but because the log file length can vary  
for many reasons it can have both false positives and false negatives.


The powerpc tester won't do a run more often than once every 15  
minutes, and that only if the build fails that quickly (indicating a  
pretty bad build breakage) and there are commits since the previous  
run. In those circumstances I think it's reasonable to hope that the  
commit will fix the build breakage and so should be tested as quickly  
as possible...


If anyone is having trouble reproducing a problem reported by the  
tester, send me mail!




Re: bootstrap failure on i686-pc-linux-gnu

2006-04-17 Thread Geoff Keating


On 17/04/2006, at 9:55 PM, Ben Elliston wrote:


Hi Geoff

I'm seeing a bootstrap failure on x86 Linux that looks to be due to
your change (noted below):

/home/bje/build/gcc-clean/./gcc/xgcc -B/home/bje/build/gcc-clean/./gcc/
-B/usr/local/i686-pc-linux-gnu/bin/ -B/usr/local/i686-pc-linux-gnu/lib/
-isystem /usr/local/i686-pc-linux-gnu/include
-isystem /usr/local/i686-pc-linux-gnu/sys-include -O2  -O2 -g -O2
-DIN_GCC -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes
-Wold-style-definition -isystem ./include -fPIC -g -DHAVE_GTHR_DEFAULT
-DIN_LIBGCC2 -D__GCC_FLOAT_NOT_NEEDED -msse -c \
    /home/bje/source/gcc-clean/gcc/config/i386/crtfastmath.c \
    -o crtfastmath.o
/home/bje/source/gcc-clean/gcc/config/i386/crtfastmath.c:110: internal
compiler error: in prune_unused_types_update_strings, at dwarf2out.c:14009

Please submit a full bug report,
with preprocessed source if appropriate.
See <http://gcc.gnu.org/bugs.html> for instructions.

2006-04-17  Geoffrey Keating  <[EMAIL PROTECTED]>

* dwarf2out.c (free_AT): Delete.
(remove_AT): Update string ref counts.
(remove_child_TAG): Don't call free_die.
(free_die): Delete.
(break_out_includes): Don't call free_die on DW_TAG_GNU_BINCL
or DW_TAG_GNU_EINCL.
(prune_unused_types_walk_attribs): Reset string refcounts.


Does this help?

@@ -13802,9 +13777,8 @@
s->refcount++;
/* Avoid unnecessarily putting strings that are used less than
   twice in the hash table.  */
-   if (s->refcount == 2
-   || (s->refcount == 1
-   && (DEBUG_STR_SECTION_FLAGS & SECTION_MERGE) != 0))
+   if (s->refcount
+   == ((DEBUG_STR_SECTION_FLAGS & SECTION_MERGE) ? 1 : 2))
  {
void ** slot;
slot = htab_find_slot_with_hash (debug_str_hash, s->str,





Re: ___divti3 and ___umodti3 missing on Darwin

2006-08-06 Thread Geoff Keating


On 05/08/2006, at 5:19 PM, Jack Howarth wrote:

While testing the state of gfortran in gcc trunk at -m64 on  
MacOS X 10.4
I discovered a huge number of test failures (848 compared to 26  
with -m32).
Almost all of these failures appear to be due to two undefined  
symbols in

libgfortran's shared library in the ppc64 version...

http://gcc.gnu.org/ml/fortran/2006-08/msg00112.html

The symbols, ___divti3 and ___umodti3, are not present in darwin-libgcc.10.4.ver
or darwin-libgcc.10.5.ver found in gcc/config/rs6000 but they are  
present
in the libgcc-std.ver in the gcc directory. I haven't found  
anything in
bugzilla about this issue, however it really should be addressed  
before

gcc 4.2 is released.


I believe this is a problem in libgcc.a and/or optabs.c; it should support __divti3.  Apparently TImode is supposed to be supported by generic code; I see this comment in targhooks.c:


   By default we guess this means that any C type is supported.  If
   we can't map the mode back to a type that would be available in C,
   then reject it.  Special case, here, is the double-word arithmetic
   supported by optabs.c.  */

Here 'double-word' means 128 bits, i.e. TImode, since a word is 64 bits on ppc64.
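
As a minimal sketch (not taken from the gfortran testsuite) of the kind of code that exposes this: 128-bit division and modulus on a 64-bit target have no inline expansion, so optabs emits libcalls, which then have to be exported from libgcc's shared library, i.e. listed in the darwin-libgcc.10.*.ver version scripts:

  #include <stdio.h>

  __int128_t quot (__int128_t a, __int128_t b)
  {
    return a / b;   /* on ppc64 this becomes a call to __divti3 */
  }

  __uint128_t rem (__uint128_t a, __uint128_t b)
  {
    return a % b;   /* and this one a call to __umodti3 */
  }

  int main (void)
  {
    __int128_t q = quot ((__int128_t) 1 << 70, 3);
    __uint128_t r = rem ((__uint128_t) 1 << 70, 7);
    printf ("%lld %lld\n", (long long) (q >> 32), (long long) r);
    return 0;
  }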



Re: RFC: Make dllimport/dllexport imply default visibility

2007-07-05 Thread Geoff Keating


On 03/07/2007, at 9:18 PM, Mark Mitchell wrote:


Geoffrey Keating wrote:

On 03/07/2007, at 7:37 PM, Mark Mitchell wrote:


Geoffrey Keating wrote:


Yes.  __attribute__((visibility)) has consistent GNU semantics, and
other features (e.g. -fvisibility-ms-compat, __declspec) match other
compilers.


The only semantics that make sense on SymbianOS are the ones that allow default visibility within a hidden class.


I think we're talking past each other.  What semantics exactly are
these?  Who created them?  Where are they implemented?  Are they
documented?  Who documented them?  Why can't they be changed?


SymbianOS allows you to declare a member dllimport or dllexport even
though the class is declared with hidden visibility.  dllimport and
dllexport imply default visibility.
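
For concreteness, a hypothetical C++ sketch of the usage being described (the class and member names are invented, and it only compiles meaningfully on a target that accepts __declspec(dllexport)):

  // The class itself is hidden, but one member is explicitly exported;
  // the dllexport is taken to imply default visibility for that member.
  class __attribute__ ((visibility ("hidden"))) S
  {
  public:
    void internal_only ();                   // inherits the class's hidden visibility
    __declspec(dllexport) void exported ();  // explicitly given default visibility
  };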


OK.  So we are *not* talking about __attribute__((visibility)), we are  
talking about dllimport, dllexport, and (importantly) notshared.



The semantics, as I described earlier in this thread, are simple: the
visibility you specify for the class is the default for all members,
including vtables and other compiler-generated stuff, but the user may
explicitly override that by specifying a visibility for the member.


... these are not exactly 'semantics'.  They are a description of an implementation, and they rely on other undescribed features of that implementation, such as "this implementation does not strictly check the ODR".  If the implementation changed in future, it would be very unclear how, or whether, to preserve these properties.



The semantics I describe are a conservative extension to the semantics you describe; they are operationally identical, except that the semantics you describe forbid giving a member more visibility than its class.  The change that I made was to stop G++ from silently ignoring such a request.


I'm not sure you've really described the semantics.  For this extension to be actually useful, you're going to want an ODR exception like the one for -fvisibility-ms-compat; otherwise you won't be able to do things like access fields of the class or other members without invoking undefined behaviour.  Then, what happens if you write this and it turns out to not quite be the same as what the other compiler does, but people are already relying on the meaning you documented/implemented?



So, whether or not there's a command-line option, it's going to be on by default, and therefore is going to be inconsistent with a system on which GCC disallows (or ignores) that.


Maybe, and maybe not.  In particular, an option that changes defaults is one thing, but if the user overrides the default with GCC-specific syntax and the compiler does something different, that's another thing altogether.


Here, the compiler was silently ignoring an attribute explicitly given by the user.  I changed the compiler not to ignore the attribute.  That did not alter the GNU semantics, unless we consider it an intentional feature that we ignored the attribute.  (I can't imagine that we would consider that a feature; if for some reason we are going to refuse to let users declare members with more visibility than specified for the class, we should at least issue an error message.)


Yes, you've convinced me that there should be an error message.

RealView 3.0.x doesn't support the visibility attribute, but it does support other GNU attributes.


So it can't conflict.


I don't find that a convincing argument.  The two compilers have
different syntaxes for specifying ELF visibility.  But, they meant the
same thing in all respects (so far as I know) except for this case.


I expect they're different in a lot of other cases too.

For example, in the GNU semantics it's clear that you can have:

struct S __attribute__((visibility("hidden")));
void f(struct S *p) { };

in two different shared libraries, and they do not conflict, and it so  
happens that on ELF systems this is implemented by automatically  
marking 'f' as hidden.


It also so happens that if you write

struct S __attribute__((visibility("hidden")));
inline void f() {
  struct S * p;
};

that the compiler is permitted to optimise this by making 'f' hidden,  
because 'f' can be defined and used only in this shared object.  The  
compiler does not presently do this, but it could, and in future it  
probably will.


This is why I was asking about documentation.  From GCC's visibility  
documentation, you can deduce what is a valid program, what is an  
invalid program, and what the valid programs do.



We could invent some alternative attribute to mark a class "hidden, but in the SymbianOS way that allows you to override the visibility of members, not in the GNU way that doesn't", but that seems needlessly complex.


It's really not that complex.  In fact, we already have syntax for  
this attribute, "__declspec(notshared)".  Or, at least, it's only as  
complex as implementing the semantics in the first place.



Concretely, are 

Re: Test gcc.c-torture/execute/align-3.c

2007-07-11 Thread Geoff Keating


On 11/07/2007, at 4:48 PM, Steve Ellcey wrote:



The test gcc.c-torture/execute/align-3.c is failing on most of my
platforms, including IA64 HP-UX and Linux.  The test consists of:

  void func(void) __attribute__((aligned(256)));

  void func(void)
  {
  }

  int main()
  {
    if (((long)func & 0xFF) != 0)
      abort ();
    if (__alignof__(func) != 256)
      abort ();
    return 0;
  }

The problem I am having is with the first test, checking that the address of func is aligned.  The problem is that using func in this way on IA64 gives me the address of the function descriptor, not the address of the function itself.  And while the function address does appear to be aligned on a 256-byte boundary, the function descriptor has its normal 8-byte alignment.  My question is, should function descriptors have the same alignment as functions, or is this just an invalid test on systems that use function descriptors?


I knew there was a reason that first test wouldn't work on some platform; it seemed too good to be true.


The compiler understands that a pointer-to-function need not actually  
have the same low bits clear as the address of the function itself, so  
it's OK for the descriptor to not be aligned as much as the function  
to which it points.
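
A simplified, hypothetical picture of why that is so (these are not the real IA64 ABI types or values): on a descriptor target the value of 'func' is the address of a small data record, which needs only that record's own alignment, while the code it refers to can be aligned far more strictly.

  #include <stdio.h>

  struct fake_fdesc
  {
    unsigned long long code_address;  /* entry point: this is what aligned(256) affects */
    unsigned long long gp_value;      /* global pointer the callee expects */
  };

  int main (void)
  {
    /* Made-up addresses: the code address ends in 0x100, but the descriptor
       itself only has the struct's natural 8-byte alignment.  */
    static struct fake_fdesc d = { 0x4000000000000100ull, 0x6000000000000000ull };
    printf ("descriptor at %p, code at 0x%llx\n", (void *) &d, d.code_address);
    return 0;
  }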


Feel free to either (a) #ifdef out the first part of the test on IA64,  
or (b) delete the first part of the test altogether.
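
A sketch of option (a), assuming only IA64 needs the guard (other descriptor targets would need their own macros; this is not necessarily the change that was actually made to the testsuite):

  #include <stdlib.h>

  void func (void) __attribute__ ((aligned (256)));

  void func (void)
  {
  }

  int main (void)
  {
  #ifndef __ia64__
    /* Skip the address check where 'func' yields a function-descriptor
       address rather than the code address.  */
    if (((long) func & 0xFF) != 0)
      abort ();
  #endif
    if (__alignof__ (func) != 256)
      abort ();
    return 0;
  }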