Howto biarch compiler default to -m32?

2005-06-08 Thread René Rebe
Hi all,

I did this once in the past but lost my transcript ... What was the 
recommended way to get a sparc64-gnu-linux (or other biarch) compiler that 
defaults to -m32? Is there a config or was the way to patch the linux64.h in 
the arch config dir?

Thanks in advance,

-- 
René Rebe - Rubensstr. 64 - 12157 Berlin (Europe / Germany)
http://www.exactcode.de | http://www.t2-project.org
+49 (0)30  255 897 45




Re: Howto biarch compiler default to -m32?

2005-06-08 Thread Andreas Jaeger
On Wednesday 08 June 2005 09:56, René Rebe wrote:
> Hi all,
>
> I did this once in the past but lost my transcript ... What was the
> recommended way to get a sparc64-gnu-linux (or other biarch) compiler that
> defaults to -m32? Is there a config or was the way to patch the linux64.h
> in the arch config dir?

For PowerPC I just add:
--with-cpu=default32

But I do not know whether this is generic,
Andreas
-- 
 Andreas Jaeger, [EMAIL PROTECTED], http://www.suse.de/~aj
  SUSE Linux Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: Howto biarch compiler default to -m32?

2005-06-08 Thread René Rebe
Hi again,

On Wednesday 08 June 2005 09:56, René Rebe wrote:

> I did this once in the past but lost my transcript ... What was the
> recommended way to get a sparc64-gnu-linux (or other biarch) compiler that
> defaults to -m32? Is there a config or was the way to patch the linux64.h
> in the arch config dir?

Well - at least this patch:

--- gcc/config/sparc/linux64.h.orig	2005-06-08 09:17:32.000000000 +0200
+++ gcc/config/sparc/linux64.h	2005-06-08 09:26:59.000000000 +0200
@@ -45,11 +45,13 @@
 /* A 64 bit v9 compiler with stack-bias,
in a Medium/Low code model environment.  */
 
+#ifdef SPARC64_USERSPACE_BECAME_ROCK_SOLID_AND_X_USABLE_RENER
 #undef TARGET_DEFAULT
 #define TARGET_DEFAULT \
   (MASK_V9 + MASK_PTR64 + MASK_64BIT /* + MASK_HARD_QUAD */ \
+ MASK_STACK_BIAS + MASK_APP_REGS + MASK_FPU + MASK_LONG_DOUBLE_128)
 #endif
+#endif
 
 #undef ASM_CPU_DEFAULT_SPEC
 #define ASM_CPU_DEFAULT_SPEC "-Av9a"


Just finished and it defaults to -m32:

sparc64-t2-linux-gnu-gcc -dumpspecs  | grep -A 1 multilib_default
*multilib_defaults:
m32

Linking 32 and 64 bit libraries worked for hello-world.c style programs. 
Whether it survives a system bootstrap will take some time to find out @ 
360 MHz ...

I hope it is not too much of a hack ;-)

Yours,

-- 
René Rebe - Rubensstr. 64 - 12157 Berlin (Europe / Germany)
http://www.exactcode.de | http://www.t2-project.org
+49 (0)30  255 897 45




Re: Howto biarch compiler default to -m32?

2005-06-08 Thread Jakub Jelinek
On Wed, Jun 08, 2005 at 10:50:16AM +0200, Andreas Jaeger wrote:
> On Wednesday 08 June 2005 09:56, René Rebe wrote:
> > Hi all,
> >
> > I did this once in the past but lost my transcript ... What was the
> > recommended way to get a sparc64-gnu-linux (or other biarch) compiler that
> > defaults to -m32? Is there a config or was the way to patch the linux64.h
> > in the arch config dir?
> 
> For PowerPC I just add:
> --with-cpu=default32
> 
> But I do not know whether this is generic,

It is not generic (yet).
On sparc64-linux, --with-cpu=v7 can be used to create a compiler defaulting
to -m32, but being able to handle -m64 as well.

Jakub


Re: Andreas Schwab m68k Maintainer

2005-06-08 Thread Andreas Schwab
"Joel Sherrill <[EMAIL PROTECTED]>" <[EMAIL PROTECTED]> writes:

> I'm happy to announce Andreas Schwab <[EMAIL PROTECTED]>
> as the new m68k port maintainer.

Thank you for the appointment.  I have installed this patch to
MAINTAINERS.

Andreas.

2005-06-08  Andreas Schwab  <[EMAIL PROTECTED]>

* MAINTAINERS: Move myself from 'Write After Approval' to
'CPU Port Maintainers' section as m68k maintainer.

--- MAINTAINERS.~1.423.~	2005-06-06 12:11:49.000000000 +0200
+++ MAINTAINERS	2005-06-08 11:01:20.000000000 +0200
@@ -60,6 +60,7 @@ iq2000 port   Nick Clifton  [EMAIL PROTECTED]
 m32r port  Nick Clifton[EMAIL PROTECTED]
 m68hc11 port   Stephane Carrez [EMAIL PROTECTED]
 m68k port (?)  Jeff Law[EMAIL PROTECTED]
+m68k port  Andreas Schwab  [EMAIL PROTECTED]
 m68k-motorola-sysv portPhilippe De Muyter  [EMAIL PROTECTED]
 mcore port Nick Clifton[EMAIL PROTECTED]
 mips port   Eric Christopher[EMAIL PROTECTED]
@@ -309,7 +310,6 @@ Douglas Rupp  [EMAIL PROTECTED]
 Matthew Sachs  [EMAIL PROTECTED]
 Alex Samuel[EMAIL PROTECTED]
 Tobias Schlüter  [EMAIL PROTECTED]
-Andreas Schwab [EMAIL PROTECTED]
 Svein Seldal[EMAIL PROTECTED]
 Franz Sirl [EMAIL PROTECTED]
 Michael Sokolov[EMAIL PROTECTED]

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
Key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."


tree-ssa-address ICE

2005-06-08 Thread Nick Burrett

The inclusion of this patch:

2005-06-07  Zdenek Dvorak  <[EMAIL PROTECTED]>

* tree-ssa-address.c: New file.
* Makefile.in (tree-ssa-address.o): Add.
* expr.c (expand_expr_real_1): Do not handle REF_ORIGINAL on
INDIRECT_REFs.  Handle TARGET_MEM_REFs.
...

causes an ICE when compiling the following code for the ARM/Thumb 
instruction set:


int
foo (char *s1, char *s2, int size)
{
  while (size > 0)
{
  char c1 = *s1++, c2 = *s2++;
  if (c1 != c2)
return c1 - c2;
  size--;
}
  return 0;
}

$ ../configure --target=arm-elf-linux --enable-languages=c
$ ./cc1 -quiet test.c -mthumb -O2
../../bug.c: In function ‘foo’:
../../bug.c:3: internal compiler error: in create_mem_ref, at 
tree-ssa-address.c:585

Please submit a full bug report,

Nick.



Re: ARM and __attribute__ long_call error

2005-06-08 Thread Jani Monoses

void pig(void) __attribute__ ((long_call));
void pig(void)
{
}


Yes, that's the way it's currently coded.

The problem, it seems to me, is that we want to fault:

void pig(void) __attribute__ ((long_call));
...
void pig(void);

and


void pig(void);
...
void pig(void) __attribute__((long_call));

both of which would be potentially problematical (which do we believe?)

from the case that you have.  AFAICT there is nothing on the types
passed into the back-end to distinguish a declaration from a definition
in this context.


The cases you mention are two declarations, right? My example had the attribute on the 
declaration and not on the definition just like with the other attribute types.
I suppose your explanation went way over my head, since I still don't understand what is 
different with the long-call attribute that the same approach which works with other 
function attributes yields an error in this case.

Is there anything tricky that prevents an easy implementation?

thanks
Jani



Re: tree-ssa-address ICE

2005-06-08 Thread Steven Bosscher
On Wednesday 08 June 2005 12:01, Nick Burrett wrote:
> $ ./cc1 -quiet test.c -mthumb -O2
> ../../bug.c: In function ‘foo’:
> ../../bug.c:3: internal compiler error: in create_mem_ref, at
> tree-ssa-address.c:585
> Please submit a full bug report,
  ^^^
;-)


And some more information in the mean time:

Starting program: /home/steven/devel/build-test/gcc/cc1 -mthumb t.c -O
 foo

Breakpoint 1, fancy_abort (file=0xa19258 
"../../mainline/gcc/tree-ssa-address.c", line=585,
function=0xa192d3 "create_mem_ref") at diagnostic.c:588
588   internal_error ("in %s, at %s:%d", function, trim_filename (file), 
line);
(gdb) up
#1  0x005930da in create_mem_ref (bsi=0x7fbfffd370, type=0x2a95896750, 
addr=0x7fbfffd390)
at tree-ssa-address.c:585
585   gcc_unreachable ();
(gdb) p debug_generic_stmt(bsi.tsi.ptr.stmt)
#   VUSE ;
c1D.1197_10 = *s1D.1192_27;

$12 = void
(gdb) p parts
$13 = {symbol = 0x0, base = 0x0, index = 0x2a95981380, step = 0x0, offset = 0x0}

Gr.
Steven



MEMBER_TYPE_FORCES_BLK on IA-64/HP-UX

2005-06-08 Thread Eric Botcazou
Hi,

The HP-UX port on the IA-64 architecture defines the MEMBER_TYPE_FORCES_BLK 
macro with this comment:

/* This needs to be set to force structure arguments with a single
   integer field to be treated as structures and not as the type of
   their field.  Without this a structure with a single char will be
   returned just like a char variable, instead of being returned at the
   top of the register as specified for big-endian IA64.  */

#define MEMBER_TYPE_FORCES_BLK(FIELD, MODE) \
  (!FLOAT_MODE_P (MODE) || (MODE) == TFmode)

That's problematic for Ada because record types with BLKmode are far less easy 
to manipulate (in particular to pack in containing records) than record types 
with integer modes.

It seems to me that it's an implementation bias that can be eliminated.  
Firstly, SPARC64 has the same set of constraints and doesn't define 
MEMBER_TYPE_FORCES_BLK.  Secondly, ia64_function_value reads:

  else
{
  if (BYTES_BIG_ENDIAN
  && (mode == BLKmode || (valtype && AGGREGATE_TYPE_P (valtype))))
{
  rtx loc[8];
  int offset;
  int bytesize;
  int i;

  offset = 0;
  bytesize = int_size_in_bytes (valtype);
  for (i = 0; offset < bytesize; i++)
{
  loc[i] = gen_rtx_EXPR_LIST (VOIDmode,
  gen_rtx_REG (DImode,
   GR_RET_FIRST + i),
  GEN_INT (offset));
  offset += UNITS_PER_WORD;
}
  return gen_rtx_PARALLEL (mode, gen_rtvec_v (i, loc));
}
  else
return gen_rtx_REG (mode, GR_RET_FIRST);
}

Note that we already test the type 'valtype'.  Moreover, int_size_in_bytes is 
invoked unconditionally on 'valtype' and would segfault if it was 0, so 
valtype is set every time mode == BLKmode.  So I think having a non-BLKmode 
on records with a single integer field would not change anything as far as 
the return value is concerned.

What do you think?  Thanks in advance.

-- 
Eric Botcazou


Re: ARM and __attribute__ long_call error

2005-06-08 Thread Richard Earnshaw
On Wed, 2005-06-08 at 11:11, Jani Monoses wrote:
> >>void pig(void) __attribute__ ((long_call));
> >>void pig(void)
> >>{
> >>}
> > 
> > Yes, that's the way it's currently coded.
> > 
> > The problem, it seems to me, is that we want to fault:
> > 
> > void pig(void) __attribute__ ((long_call));
> > ...
> > void pig(void);
> > 
> > and
> > 
> > void pig(void);
> > ...
> > void pig(void) __attribute__((long_call));
> > 
> > both of which would be potentially problematical (which do we believe?)
> > from the case that you have.  AFAICT there is nothing on the types
> > passed into the back-end to distinguish a declaration from a definition
> > in this context.
> 
> The cases you mention are two declarations, right? 

Erm, yes (though I'd intended to write the second example as a
declaration followed by a definition).

> My example had the attribute on the 
> declaration and not on the definition just like with the other attribute 
> types.
> I suppose your explanation went way over my head, since I still don't 
> understand what is 
> different with the long-call attribute that the same approach which works 
> with other 
> function attributes yields an error in this case.
> Is there anything tricky that prevents an easy implementation?

Yes, internally the routine that's doing the comparison can't
distinguish declarations from definitions.  We need to diagnose
conflicting declarations, and also where the *definition* is attributed
but the declaration was not.

We could relax the rules to allow your case, but only at the expense of
losing checking for the above scenarios.  I'm not convinced that is a
good trade-off.

R.


Re: ARM and __attribute__ long_call error

2005-06-08 Thread Jani Monoses

Is there anything tricky that prevents an easy implementation?



Yes, internally the routine that's doing the comparison can't
distinguish declarations from definitions.  We need to diagnose


Is the routine arm_comp_type_attributes() in gcc/config/arm/arm.c by any chance?


conflicting declarations, and also where the *definition* is attributed
but the declaration was not.


How can you attribute the definition? This results in a syntax error:

void pig(void) __attribute__ ((long_call))
{
}


We could relax the rules to allow your case, but only at the expense of
losing checking for the above scenarios.  I'm not convinced that is a
good trade-off.


I don't need changes to the existing rules, just to find out how the long_call attribute is 
_supposed_ to be used :). I see something similar exists for the RS6000 back-end.

Do these not work yet, and were they just added awaiting a future implementation?

thanks
Jani



Re: ARM and __attribute__ long_call error

2005-06-08 Thread Richard Earnshaw
On Wed, 2005-06-08 at 11:51, Jani Monoses wrote:
> >>Is there anything tricky that prevents an easy implementation?
> > 
> > Yes, internally the routine that's doing the comparison can't
> > distinguish declarations from definitions.  We need to diagnose
> 
> Is the routine arm_comp_type_attributes() in gcc/config/arm/arm.c by any 
> chance?
> 
Yes.

> > conflicting declarations, and also where the *definition* is attributed
> > but the declaration was not.
> 
> How can you attribute the definition? This results in a syntax error:
> 
> void pig(void) __attribute__ ((long_call))
> {
> }
> 

void __attribute__ ((long_call)) pig(void)
{
}

R.


Re: ARM and __attribute__ long_call error

2005-06-08 Thread Jani Monoses


void __attribute__ ((long_call)) pig(void)
{
}



Ok thanks, with that change to the definition the file compiles.
I didn't know one could attribute definitions too, and info gcc says:

"
 The keyword `__attribute__' allows you to specify special attributes
when making a declaration.
"

So I had supposed that any function with an attribute needs an explicit 
declaration stating it, and then the definition as usual.


Jani



Re: Proposed obsoletions

2005-06-08 Thread Paul Koning
> "Nathanael" == Nathanael Nerode <[EMAIL PROTECTED]> writes:

 Nathanael> Paul Koning wrote:
 >>> "Nathanael" == Nathanael Nerode <[EMAIL PROTECTED]>
 >>> writes:
 >>
 Nathanael> * pdp11-*-* (generic only) Useless generic.
 >> I believe this one generates DEC (as opposed to BSD) calling
 >> conventions, so I'd rather keep it around.  It also generates .s
 >> files that can (modulo a few bugfixes I need to get in) be
 >> assembled by gas.

 Nathanael> Hmm, OK.  Could it be given a slightly more descriptive
 Nathanael> name, perhaps?

Sure, I'll add that to my list.

  paul



Re: Ada front-end depends on signed overflow

2005-06-08 Thread Paul Schlie
> From: Robert Dewar <[EMAIL PROTECTED]>
> Paul Schlie wrote:
> 
>> - yes, it certainly enables an implementation to generate more efficient
>>   code which has no required behavior; so in effect basically produce more
>>   efficient programs which don't reliably do anything in particular; which
>>   doesn't seem particularly useful?
> 
> You keep saying this over and over, but it does not make it true. Once
> again, the whole idea of making certain constructs undefined, is to
> ensure that efficient code can be generated for well defined constructs.

- Can you give an example of an operation which may yield an undefined
  non-deterministic result which is reliably useful for anything?

>> - Essentially yes; as FP is an approximate not absolute representation
>>   of a value, therefore seems reasonable to accept optimizations which
>>   may result in some least significant bits of ambiguity.
> 
> Rubbish, this shows a real misunderstanding of floating-point. FP values
> are not "approximations", they are well defined values in a system of
> arithmetic with precisely defined semantics, just as well defined as
> integer operations. Any compiler that followed your misguided ideas
> above would be a real menace and completely useless for any serious
> fpt work.

- I'm sorry I wasn't aware that C/C++ for example specified the bit exact
  representation and semantic requirements for FP. (possibly because it's
  not defined as being so?)

   What's silly, is claiming that such operations are bit exact when even
   something as basic as their representational base radix number systems
   isn't even defined by the standard, nor need necessarily be the same
   between different FP types; thereby an arbitrary value is never always
   guaranteed to be exactly representable as an FP value in all
   implementations (therefore test for equivalence with an arbitrary value
   is equally ambiguous, as would any operations on that value, unless it is
   known that within a particular implementation that it's value and any
   resulting intermediate operation values are correspondingly precisely
   representable, which is both implementation and target specific, although
   hopefully constrained to be as closely approximated as possible within
   it's representational constraints.)

> As it is, the actual situation is that most serious fpt programmers
> find themselves in the same position you are with integer arithmetic.
> They often don't like the freedom given by the language to e.g. allow
> extra precision (although they tend to be efficiency hungry, so one
> doesn't know in practice that this is what they really want, since they
> want it without paying for it, and they don't know how much they would
> have to pay :-)

- Agreed, therefore because FP is inherently an imprecise representation,
  and bit exact equivalences between arbitrary real numbers and their
  representational form is not warranted, therefore should never be relied
  upon; therefore seems reasonable to enable optimizations which may alter
  the computed results as long as they are reasonably known to constrain
  the result's divergence to some few number least significant bits
  of precision. (as no arbitrary value is warranted to be representable,
  with the possible exception of some implementation/target specific
   whole number integer values, but whose overflow semantics are also
  correspondingly undefined.)

>>   Where integer operations are relied upon for state representations,
>>   which are in general must remain precisely and deterministically
>>   calculated, as otherwise catastrophic semantic divergences may result.
> 
> Nonsense, losing the last bit in an FP value can be fatal to many algorithms.
> Indeed, some languages allow what seems to FP programmers to be too much
> freedom, but not for a moment can a compiler writer contemplate doing an
> optimization which is not allowed. For instance, in general replacing
> (a+b)+c by a+(b+c) is an absolute no-no in most languages.

- only if it's naively relied upon to be precise to some arbitrary
  precision, which as above is not warranted in general, so an algorithm's
  implementation should not assume it to be in general, as given in your
  example, neither operation is warranted to compute to an equivalent value
  in any two arbitrary implementations (although hopefully consistent within
  their respective implementations).

>> - No, exactly the opposite, the definition of an order of evaluation
>>   eliminates ambiguities, it does not prohibit anything other than the
>>   compiler applying optimizations which would otherwise alter the meaning
>>   of the specified expression.
> 
> No, the optimizations do not alter the meaning of any C expression. If the
> meaning is undefined, then

- yes I understand C/C++ etc. has chosen to define overflow and
  evaluation order (among a few other things) as being undefined.

> a) the programmer should not have written this rubbish.

- or the language need not have enabled a potentially well defined
  expression to be turned into rubbish by enabling an implementation
  to do things like arbitrarily evaluate interdependent sub-expressions
  in arbitrary orders, or not require an implementation to at least
  optimize expressions consistently with their target's native semantics.

RE: Ada front-end depends on signed overflow

2005-06-08 Thread Dave Korn
Original Message
>From: Paul Schlie
>Sent: 08 June 2005 14:40


> - Can you give an example of an operation which may yield an undefined
>   non-deterministic result which is reliably useful for anything?


  Random number generation?



cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: Ada front-end depends on signed overflow

2005-06-08 Thread Paul Schlie
> From: Dave Korn <[EMAIL PROTECTED]>
> Original Message
>> From: Paul Schlie
>> Sent: 08 June 2005 14:40
>  
>> - Can you give an example of an operation which may yield an undefined
>>   non-deterministic result which is reliably useful for anything?
> 
>   Random number generation?

- which if algorithmically computed relies on deterministic semantics.
  as otherwise the pseudo random distribution the algorithm is relying
  on may break.




RE: Ada front-end depends on signed overflow

2005-06-08 Thread Dave Korn
Original Message
>From: Paul Schlie
>Sent: 08 June 2005 14:49

>> From: Dave Korn <[EMAIL PROTECTED]>
>> Original Message
>>> From: Paul Schlie
>>> Sent: 08 June 2005 14:40
>> 
>>> - Can you give an example of an operation which may yield an undefined
>>>   non-deterministic result which is reliably useful for anything?
>> 
>>   Random number generation?
> 
> - which if algorithmically computed relies on deterministic semantics.
>   as otherwise the pseudo random distribution the algorithm is relying
>   on may break.



  I didn't say "Pseudo random number generation".  I said "Random number
generation".


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: Ada front-end depends on signed overflow

2005-06-08 Thread Robert Dewar

Paul Schlie wrote:

From: Dave Korn <[EMAIL PROTECTED]>
Original Message


From: Paul Schlie
Sent: 08 June 2005 





- Can you give an example of an operation which may yield an undefined
 non-deterministic result which is reliably useful for anything?


 Random number generation?


randomness has nothing whatever to do with non-determinism. They
are completely different concepts.

But there are of course many examples.

There are many examples in concurrent programming where non-determinism
is useful, and in set programming, arbitrary non-deterministic selection
from a set is fundamental.

But this is a complete red herring in this discussion

The reason that for example in Ada we say that
a+b means non-determinisitically either compute a then b, or
b then a, is not that it is useful for these results to be
different, but precisely because we expect NO reasonable
program to ever write an expression a+b in which the two
semantic meanings that are possible are different, and
we want the compiler to take advantage of this to generate
better code.

For example

  a + f(b)

we typically expect f(b) to be called first, even though
formally it might be the case that f(b) modifies a, so
this choice could have some effect from the formal
non-determinism of the semantics. Our actual attitude
is that if anyone writes code like this in which f(b)
modifies a, they are highly incompetent and we don't
care what happens.




Re: Ada front-end depends on signed overflow

2005-06-08 Thread Robert Dewar

Dave Korn wrote:


  I didn't say "Pseudo random number generation".  I said "Random number
generation".


which once again has nothing whatever to do with non-determinism.

To illustrate this, suppose I have a language which has sets of
integers. I have an operator ARB whose semantics is to select
an element of such a set non-deterministically. It is a fine
implementation to always return the smallest number in the
set (of course no legitimate program can rely on this artifact
of the implementation). It would not be at all fine to use
this implementation for the RAND operator that selects a
random element from the set.

Many people mix these concepts up, it always causes trouble.
I once remember a very senior member of the CS community
(I will keep the name to myself), during a discussion of
Ada semantics being dismayed at the overhead required for
non-deterministic selection of an open SELECT alternative,
since he assumed it meant that a random number generator
would have to be used :-)




Re: Ada front-end depends on signed overflow

2005-06-08 Thread Robert Dewar

Paul Schlie wrote:


   What's silly, is claiming that such operations are bit exact when even
   something as basic as their representational base radix number systems
   isn't even defined by the standard, nor need necessarily be the same
   between different FP types; thereby an arbitrary value is never always
   guaranteed to be exactly representable as an FP value in all
   implementations (therefore test for equivalence with an arbitrary value
   is equally ambiguous, as would any operations on that value, unless it is
   known that within a particular implementation that it's value and any
   resulting intermediate operation values are correspondingly precisely
   representable, which is both implementation and target specific, although
   hopefully constrained to be as closely approximated as possible within
   it's representational constraints.)


You are really just digging yourself into a hole here. It is clear
that you know very little about floating-point arithmetic. If you
are interested in learning, there are quite a lot of good references.
I would suggest Michael Overton's new book as a good starting point.


- Agreed, therefore because FP is inherently an imprecise representation,
  and bit exact equivalences between arbitrary real numbers and their
  representational form is not warranted, therefore should never be relied
  upon; therefore seems reasonable to enable optimizations which may alter
  the computed results as long as they are reasonably known to constrain
  the result's divergence to some few number least significant bits
  of precision. (as no arbitrary value is warranted to be representable,
  with the possible exception of some implementation/target specific
   whole number integer values, but whose overflow semantics are also
  correspondingly undefined.)


There is nothing imprecise about IEEE floating-point operations.


- only if it's naively relied upon to be precise to some arbitrary
  precision, which as above is not warranted in general, so an algorithm's
  implementation should not assume it to be in general, as given in your
  example, neither operation is warranted to compute to an equivalent value
  in any two arbitrary implementations (although hopefully consistent within
  their respective implementations).


More complete nonsense. Of course we do not rely on fpt operations being
precise to arbitrary precision, we just expect well defined IEEE results
which are defined to the last bit, and all modern hardware provides this
capability.


- yes I understand C/C++ etc. has chosen to define overflow and
  evaluation order (among a few other things) as being undefined.



a) the programmer should not have written this rubbish.



- or the language need not have enabled a potentially well defined
  expression to be turned into rubbish by enabling an implementation
  to do things like arbitrarily evaluate interdependent sub-expressions
  in arbitrary orders, or not require an implementation to at least
  optimize expressions consistently with their target's native semantics.


Well it is clear that language designers generally disagree with you.
Are you saying they are all idiots and you know better, or are you
willing to try to learn why they disagree with you?


- agreed, an operation defined as being undefined enables an implementation
  to produce an arbitrary result (which therefore is reliably useless).


Distressing nonsense, sounds like you have learned nothing from this
thread. Well hopefully others have, but anyway, last contribution from
me, I think everything has been said that is useful.




Re: Ada front-end depends on signed overflow

2005-06-08 Thread Lassi A . Tuura

- Can you give an example of an operation which may yield an undefined
  non-deterministic result which is reliably useful for anything?


Hm.  int foo (const char *x, int y) { return printf (x, y); }

Lassi
--
If you would know the value of money, go try to borrow some.
--Ben Franklin



Re: tree-ssa-address ICE

2005-06-08 Thread Zdenek Dvorak
Hello,

> On Wednesday 08 June 2005 12:01, Nick Burrett wrote:
> > $ ./cc1 -quiet test.c -mthumb -O2
> > ../../bug.c: In function ‘foo’:
> > ../../bug.c:3: internal compiler error: in create_mem_ref, at
> > tree-ssa-address.c:585
> > Please submit a full bug report,
>   ^^^
> ;-)
> 
> 
> And some more information in the mean time:
> 
> Starting program: /home/steven/devel/build-test/gcc/cc1 -mthumb t.c -O
>  foo
> 
> Breakpoint 1, fancy_abort (file=0xa19258 
> "../../mainline/gcc/tree-ssa-address.c", line=585,
> function=0xa192d3 "create_mem_ref") at diagnostic.c:588
> 588   internal_error ("in %s, at %s:%d", function, trim_filename (file), 
> line);
> (gdb) up
> #1  0x005930da in create_mem_ref (bsi=0x7fbfffd370, 
> type=0x2a95896750, addr=0x7fbfffd390)
> at tree-ssa-address.c:585
> 585   gcc_unreachable ();
> (gdb) p debug_generic_stmt(bsi.tsi.ptr.stmt)
> #   VUSE ;
> c1D.1197_10 = *s1D.1192_27;
> 
> $12 = void
> (gdb) p parts
> $13 = {symbol = 0x0, base = 0x0, index = 0x2a95981380, step = 0x0, offset = 
> 0x0}

Here is a patch; I will submit it once it passes regtesting (only the
tree-ssa-address.c:addr_for_mem_ref part is important, but I have also
changed the tree-ssa-loop-ivopts.c parts that could cause similar
problems).

Zdenek

Index: tree-ssa-address.c
===
RCS file: /cvs/gcc/gcc/gcc/tree-ssa-address.c,v
retrieving revision 2.1
diff -c -3 -p -r2.1 tree-ssa-address.c
*** tree-ssa-address.c  7 Jun 2005 12:01:28 -   2.1
--- tree-ssa-address.c  8 Jun 2005 14:34:35 -
*** addr_for_mem_ref (struct mem_address *ad
*** 198,205 
  
  templates_initialized = true;
  sym = gen_rtx_SYMBOL_REF (Pmode, ggc_strdup ("test_symbol"));
! bse = gen_raw_REG (Pmode, FIRST_PSEUDO_REGISTER);
! idx = gen_raw_REG (Pmode, FIRST_PSEUDO_REGISTER + 1);
  
  for (i = 0; i < 32; i++)
gen_addr_rtx ((i & 16 ? sym : NULL_RTX),
--- 198,205 
  
  templates_initialized = true;
  sym = gen_rtx_SYMBOL_REF (Pmode, ggc_strdup ("test_symbol"));
! bse = gen_raw_REG (Pmode, LAST_VIRTUAL_REGISTER + 1);
! idx = gen_raw_REG (Pmode, LAST_VIRTUAL_REGISTER + 2);
  
  for (i = 0; i < 32; i++)
gen_addr_rtx ((i & 16 ? sym : NULL_RTX),
Index: tree-ssa-loop-ivopts.c
===
RCS file: /cvs/gcc/gcc/gcc/tree-ssa-loop-ivopts.c,v
retrieving revision 2.78
diff -c -3 -p -r2.78 tree-ssa-loop-ivopts.c
*** tree-ssa-loop-ivopts.c  7 Jun 2005 22:44:56 -   2.78
--- tree-ssa-loop-ivopts.c  8 Jun 2005 14:34:35 -
*** add_cost (enum machine_mode mode)
*** 3149,3156 
  
start_sequence ();
force_operand (gen_rtx_fmt_ee (PLUS, mode,
!gen_raw_REG (mode, FIRST_PSEUDO_REGISTER),
!gen_raw_REG (mode, FIRST_PSEUDO_REGISTER + 1)),
 NULL_RTX);
seq = get_insns ();
end_sequence ();
--- 3149,3156 
  
start_sequence ();
force_operand (gen_rtx_fmt_ee (PLUS, mode,
!gen_raw_REG (mode, LAST_VIRTUAL_REGISTER + 1),
!gen_raw_REG (mode, LAST_VIRTUAL_REGISTER + 2)),
 NULL_RTX);
seq = get_insns ();
end_sequence ();
*** multiply_by_cost (HOST_WIDE_INT cst, enu
*** 3221,3228 
(*cached)->cst = cst;
  
start_sequence ();
!   expand_mult (mode, gen_raw_REG (mode, FIRST_PSEUDO_REGISTER), GEN_INT (cst),
!  NULL_RTX, 0);
seq = get_insns ();
end_sequence ();

--- 3221,3228 
(*cached)->cst = cst;
  
start_sequence ();
!   expand_mult (mode, gen_raw_REG (mode, LAST_VIRTUAL_REGISTER + 1),
!  gen_int_mode (cst, mode), NULL_RTX, 0);
seq = get_insns ();
end_sequence ();

*** multiplier_allowed_in_address_p (HOST_WI
*** 3247,3253 

if (!valid_mult)
  {
!   rtx reg1 = gen_raw_REG (Pmode, FIRST_PSEUDO_REGISTER);
rtx addr;
HOST_WIDE_INT i;
  
--- 3247,3253 

if (!valid_mult)
  {
!   rtx reg1 = gen_raw_REG (Pmode, LAST_VIRTUAL_REGISTER + 1);
rtx addr;
HOST_WIDE_INT i;
  
*** get_address_cost (bool symbol_present, b
*** 3305,3316 
HOST_WIDE_INT i;
initialized = true;
  
!   reg1 = gen_raw_REG (Pmode, FIRST_PSEUDO_REGISTER);
  
addr = gen_rtx_fmt_ee (PLUS, Pmode, reg1, NULL_RTX);
for (i = 1; i <= 1 << 20; i <<= 1)
{
! XEXP (addr, 1) = GEN_INT (i);
  if (!memory_address_p (Pmode, addr))
break;
}
--- 3305,3316 
HOST_WIDE_INT i;
initialized = true;
  
!   reg1 = gen_raw_REG (Pmode, LAST_VIRTUAL_REGISTER + 1);
  
addr = gen_rtx_fmt_ee (PLUS, Pmode, reg1, NULL_RTX);
for (i = 1; i <= 1 << 20; i <<= 1)

Re: Ada front-end depends on signed overflow

2005-06-08 Thread Gabriel Dos Reis
Robert Dewar <[EMAIL PROTECTED]> writes:

[...]

| You are really just digging yourself into a hole here. It is clear
| that you know very little about floating-point arithmetic.

[...]

| More complete nonsense.

[...]

| Are you saying they are all idiots and you know better, or are you
| willing to try to learn why they disagree with you?

[...]

| Distressing nonsense, sounds like you have learned nothing from this
| thread. Well hopefully others have, but anyway, last contribution from
| me, I think everything has been said that is useful.

Ahem.

-- Gaby


Re: Ada front-end depends on signed overflow

2005-06-08 Thread Paul Schlie
> From: Robert Dewar <[EMAIL PROTECTED]>
> Date: Wed, 08 Jun 2005 10:16:23 -0400
> To: Paul Schlie <[EMAIL PROTECTED]>
> Cc: Florian Weimer <[EMAIL PROTECTED]>, Andrew Pinski <[EMAIL PROTECTED]>,
> GCC List , <[EMAIL PROTECTED]>
> Subject: Re: Ada front-end depends on signed overflow
>
> You are really just digging yourself into a hole here. It is clear
> that you know very little about floating-point arithmetic. If you
> are interested in learning, there are quite a lot of good references.
> I would suggest Michael Overton's new book as a good starting point.
> 
>> - Agreed, therefore because FP is inherently an imprecise representation,
>>   and bit exact equivalences between arbitrary real numbers and their
>>   representational form is not warranted, therefore should never be relied
>>   upon; therefore seems reasonable to enable optimizations which may alter
>>   the computed results as long as they are reasonably known to constrain
>>   the result's divergence to some small number of least significant bits
>>   of precision. (as no arbitrary value is warranted to be representable,
>>   with the possible exception of some implementation/target specific
>>   whole number integer values, but whose overflow semantics are also
>>   correspondingly undefined.)
> 
> There is nothing imprecise about IEEE floating-point operations

- agreed, however nor is it mandated by most language specifications,
  so seemingly irrelevant.

>> - only if it's naively relied upon to be precise to some arbitrary
>>   precision, which as above is not warranted in general, so an algorithm's
>>   implementation should not assume it to be in general, as given in your
>>   example, neither operation is warranted to compute to an equivalent value
>>   in any two arbitrary implementations (although hopefully consistent within
>>   their respective implementations).
> 
> More complete nonsense. Of course we do not rely on fpt operations being
> precise to arbitrary precision, we just expect well defined IEEE results
> which are defined to the last bit, and all modern hardware provides this
> capability.

- as above (actually most, if inclusive of all processors in production,
  don't directly implement fully compliant IEEE FP math, although many
  closely approximate it, or simply provide no FP support at all; and
  as an aside: far more processors implement wrapping signed overflow
  semantics than provide IEEE fp support, as most do not differentiate
  between basic signed and unsigned 2's complement integer operations,
  so if expectations are based on likelihood of an arbitrary production
  processor supporting one vs. the other, one would expect wrapped overflow
  with a high likelihood, and fully compliant IEEE support with a lower
  likelihood).

>>> a) the programmer should not have written this rubbish.
> 
>> - or the language need not have enabled a potentially well defined
>>   expression to be turned into rubbish by enabling an implementation
>>   to do things like arbitrarily evaluate interdependent sub-expressions
>>   in arbitrary orders, or not require an implementation to at least
>>   optimize expressions consistently with their target's native semantics.
> 
> Well it is clear that language designers generally disagree with you.
> Are you saying they are all idiots and you know better, or are you
> willing to try to learn why they disagree with you?

- I'm saying/implying nothing of that sort; as I happen to believe that
  the reason things are the way they are for the most part, is that although
  most knew better, the committees needed to politically accommodate
  varied implementation practices and assumptions, as otherwise would end
  up forcing some companies to invest a great deal of time and money to
  either re-implement their existing compilers, processor implementations,
  or application programs to accommodate a stricter set of specifications.
  (which most commercial organizations would lobby strongly against), which
  is one of the things that Java had the luxury of being able to somewhat
  side step.
 
>> - agreed, an operation defined as being undefined enables an implementation
>>   to produce an arbitrary result (which therefore is reliably useless).
> 
> Distressing nonsense, sounds like you have learned nothing from this
> thread. Well hopefully others have, but anyway, last contribution from
> me, I think everything has been said that is useful.

- I would have if someone could provide a concrete example of an undefined
  behavior which produces a reliably useful/predictable result.




RE: Ada front-end depends on signed overflow

2005-06-08 Thread Dave Korn
Original Message
>From: Paul Schlie
>Sent: 08 June 2005 15:53

>> From: Robert Dewar

>> There is nothing imprecise about IEEE floating-point operations
> 
> - agreed, however nor is it mandated by most language specifications,
>   so seemingly irrelevant.

I refer you to "Annex F (normative) IEC 60559 floating-point arithmetic" of the 
C language spec.  "Normative" implies a mandate, does it not?

F.1 Introduction
1 This annex specifies C language support for the IEC 60559 floating-point 
standard. The IEC 60559 floating-point standard is specifically Binary 
floating-point arithmetic for microprocessor systems, second edition (IEC 
60559:1989), previously designated IEC 559:1989 and as IEEE Standard for Binary 
Floating-Point Arithmetic (ANSI/IEEE 754−1985). IEEE Standard for 
Radix-Independent Floating-Point Arithmetic (ANSI/IEEE 854−1987) generalizes 
the binary standard to remove dependencies on radix and word length. IEC 60559 
generally refers to the floating-point standard, as in IEC 60559 operation, IEC 
60559 format, etc. An implementation that defines
__STDC_IEC_559__ conforms to the specifications in this annex. Where a binding 
between the C language and IEC 60559 is indicated, the IEC 60559-specified 
behavior is adopted by reference, unless stated otherwise.

> - as above (actually most, if inclusive of all processors in production,
>   don't directly implement fully compliant IEEE FP math, although many
>   closely approximate it, or simply provide no FP support at all; 

  Pretty much every single ix86 and rs6000, and many m68 arch CPUs provide 
last-bit-exact IEEE implementations in hardware these days.  Your statement is 
simply factually incorrect.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: Ada front-end depends on signed overflow

2005-06-08 Thread Michael Veksler






Paul Schlie wrote on 08/06/2005 17:53:04:

>
> - I would have if someone could provide a concrete example of an
>   undefined behavior which produces a reliably useful/predictable result.
>

Well this is a simple hackery quiz, which is irrelevant to GCC.


  1: int a, b;
  2: int f() {   return b++; }
  3: int main(int argc)
  4: {
  5:   b= argc;
  6:   a= b + f();  /* a==b*2  or  a==b*2+1 */
  7:   a= a/2;  /* a=b */
  8:   return a;
  9: }

If one would claim that a is totally unconstrained at line 6, then this
example will be invalid. In that case, I can give a more restricted
example, where 'a' is computed speculatively and is discarded
in exactly the same cases when it is undefined.

Oh, well here it is.
  1: int a, b, c;
  2: int f() {  return a ? b++ : b; }
  3: int main()
  4: {
  5:   scanf("%d %d", &a, &b);
  6:   c= b + f();   /* C is undefined if a != 0*/
  7:   if (a)  c = b;
  8:   return c;
  9: }

This example is predictable. I argue that it may be also
useful performance-wise to do speculative computations.



Re: Ada front-end depends on signed overflow

2005-06-08 Thread Bernd Schmidt

Paul Schlie wrote:

From: Robert Dewar <[EMAIL PROTECTED]>
You keep saying this over and over, but it does not make it true. Once
again, the whole idea of making certain constructs undefined, is to
ensure that efficient code can be generated for well defined constructs.



- Can you give an example of an operation which may yield an undefined
  non-deterministic result which is reliably useful for anything?


The simplest example is a shift operation, x << n, where the compiler 
may assume that the shift count is smaller than the width of the type. 
All sane machines agree on shift behaviour for 0 <= n < width, but there 
are differences between machines for n >= width.  Since this case is 
undefined by the language, it is ensured that shifts take only a single 
instruction on essentially all machines.



Bernd


Re: Ada front-end depends on signed overflow

2005-06-08 Thread Joe Buck
On Wed, Jun 08, 2005 at 10:53:04AM -0400, Paul Schlie wrote:
> > From: Robert Dewar <[EMAIL PROTECTED]>
> > There is nothing imprecise about IEEE floating-point operations
> 
> - agreed, however nor is it mandated by most language specifications,
>   so seemingly irrelevant.

In real life, there are no longer any significant non-embedded
architectures out there that don't use IEEE floating point, so
it is a widely used practice to assume it and document the requirement.
The resulting programs might not work on a Vax, Cray, or IBM 370.
C'est la vie.



What to do with (known) ABI mismatches during compilation

2005-06-08 Thread Richard Guenther
Consider the (two) case(s)

 double foo(double) __attribute__((sseregparm));
 static double bar(double) __attribute__((sseregparm));
 static double bar(double x) { return x; }

now, with -mno-sse we have the following choices for call
to function foo:
 1 Emit the call with SSE arguments regardless of -mno-sse
   (we get (maybe) SIGILL at runtime)
 2 Emit the call with FP stack arguments regardless of
   attribute sseregparm (bad - we will get wrong results
   if SSE is actually supported at runtime, else we either
   get SIGILL or correct result(?))
 3 Error out at compile-time (bad - the call may actually
   be reached at runtime)
 4 Emit a call to abort()

with a call to function bar there is a fifth possibility:
 5 Ignore the sseregparm attribute for both emitting the
   function and the call.

Currently we do 2, which is not really better than 3 which
I implemented before.  Bonzini suggested creating an
"unreachable" attribute and transforming calls to such
function to abort() in the middle-end; I'm currently
trying to see if I can make the backend emit SSE code for the
call regardless of -mno-sse, but it looks complicated.

So, the first (separate) question is, do we want to support 5?
Which leaves us to decide between 1 and 4.

Thoughts?
Richard.


GCC 4.01 RC1 Available

2005-06-08 Thread Mark Mitchell

The GCC 4.0.1 RC1 prerelease is available here:

  ftp://gcc.gnu.org/pub/gcc/prerelease-4.0.1-20050607/

Please test these tarballs, and let me know about showstoppers.

I'm aware of the request to fix PR 21364 before the final release, and 
I'll be looking into that.


I'll also consider patches for critical PRs, but the bar will be pretty 
high.


I want to avoid another situation where people feel we need to do an 
early release, which means that if we find similarly severe PRs we might 
try to fix them -- but I also want to get this release out the door so 
that we can close the door on the code-generation bugs that are causing 
us to do *this* release.


I plan to make the final release this weekend, unless major problems arise.

Thanks,

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: GCC 4.01 RC1 Available

2005-06-08 Thread Mws
On Wednesday 08 June 2005 18:57, Mark Mitchell wrote:
> The GCC 4.0.1 RC1 prerelease is available here:
> 
>ftp://gcc.gnu.org/pub/gcc/prerelease-4.0.1-20050607/
> 
> Please test these tarballs, and let me know about showstoppers.

already done a changelog in advance?

i am looking for further optimizations for code generation on mpc/ppc 
architectures.

regards 
mws



Re: GCC 4.01 RC1 Available

2005-06-08 Thread Andrew Pinski


On Jun 8, 2005, at 12:57 PM, Mark Mitchell wrote:


The GCC 4.0.1 RC1 prerelease is available here:

  ftp://gcc.gnu.org/pub/gcc/prerelease-4.0.1-20050607/

Please test these tarballs, and let me know about showstoppers.


Can I revert a patch which I accidentally applied with another patch?
See .

Thanks,
Andrew Pinski



Re: Ada front-end depends on signed overflow

2005-06-08 Thread Paul Schlie
> From: Joe Buck <[EMAIL PROTECTED]>
>> On Wed, Jun 08, 2005 at 10:53:04AM -0400, Paul Schlie wrote:
>>> From: Robert Dewar <[EMAIL PROTECTED]>
>>> There is nothing imprecise about IEEE floating-point operations
>> 
>> - agreed, however nor is it mandated by most language specifications,
>>   so seemingly irrelevant.
> 
> In real life, there are no longer any significant non-embedded
> architectures out there that don't use IEEE floating point, so
> it is a widely used practice to assume it and document the requirement.
> The resulting programs might not work on a Vax, Cray, or IBM 370.
> C'est la vie.

- With the not so minor exception that IEEE strict semantics are basically
  counter productive for most real-time fp signal processing tasks, as
  simple saturation and consistent reciprocal semantics tend to be preferred
  as they lose precision gracefully, not catastrophically as sticky inf and
  nan semantics do at the limits of its representation bounds; which is why
  fp signal processor architectures tend not to implement IEEE semantics,
  and arguably rely on its "imprecision" in lieu of failure at its bounds.
  (also most "embedded" processors do not have any FP support, and often
  benefit from looser soft implementations of IEEE, as strict bit-exact
  behavior is typically of less significance when all that may occasionally
  be required is just a bit more dynamic range than a fixed-point
  representation may reasonably provide, and such code will gladly trade a
  few bits of precision for a less bulky implementation and never need to
  fool with nan or inf semantics).




Re: GCC 4.01 RC1 Available

2005-06-08 Thread Mark Mitchell

Andrew Pinski wrote:


On Jun 8, 2005, at 12:57 PM, Mark Mitchell wrote:


The GCC 4.0.1 RC1 prerelease is available here:

  ftp://gcc.gnu.org/pub/gcc/prerelease-4.0.1-20050607/

Please test these tarballs, and let me know about showstoppers.



Can I revert a patch which I accidentally applied with another patch?
See .


Yes, please.

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: Ada front-end depends on signed overflow

2005-06-08 Thread Paul Schlie
> From: Bernd Schmidt <[EMAIL PROTECTED]>
> Paul Schlie wrote:
>>> From: Robert Dewar <[EMAIL PROTECTED]>
>>> You keep saying this over and over, but it does not make it true. Once
>>> again, the whole idea of making certain constructs undefined, is to
>>> ensure that efficient code can be generated for well defined constructs.
>> 
>> - Can you give an example of an operation which may yield an undefined
>>   non-deterministic result which is reliably useful for anything?
> 
> The simplest example is a shift operation, x << n, where the compiler
> may assume that the shift count is smaller than the width of the type.
> All sane machines agree on shift behaviour for 0 <= n < width, but there
> are differences between machines for n >= width.  Since this case is
> undefined by the language, it is ensured that shifts take only a single
> instruction on essentially all machines.

- How is it necessary or desirable to define that the result is undefined
  vs. being target defined? As "undefined" seems to give an implementation
  license to presume the operation will yield any specific value it
  chooses (such as 0 for example, and to rely on that value in subsequent
  optimizations), where the target implementation may actually yield
  something reliably different?

  For example, given that an implementation knows how a runtime shift
  count will be treated, it seems much more reasonable to treat
  compile-time constant shift counts consistently, as otherwise constant
  values will have different semantic effects than non-constant values
  may? (Which doesn't seem like a good thing, or strictly necessary to
  enable an arbitrary implementation to efficiently implement consistent
  bit-shift semantics, although its behavior beyond
  implementation-independent bounds may differ between differing
  implementations?)




Re: GCC 4.01 RC1 Available

2005-06-08 Thread Andrew Pinski


On Jun 8, 2005, at 1:24 PM, Mark Mitchell wrote:


Andrew Pinski wrote:

On Jun 8, 2005, at 12:57 PM, Mark Mitchell wrote:

The GCC 4.0.1 RC1 prerelease is available here:

  ftp://gcc.gnu.org/pub/gcc/prerelease-4.0.1-20050607/

Please test these tarballs, and let me know about showstoppers.

Can I revert a patch which I accidentally applied with another patch?
See .


Yes, please.


And this is the patch which I applied to fix this:
2005-06-08  Andrew Pinski  <[EMAIL PROTECTED]>

PR tree-opt/19768
* tree-ssa-dse.c: Revert accidental committed patch.


Index: ChangeLog
===
RCS file: /cvs/gcc/gcc/gcc/ChangeLog,v
retrieving revision 2.7592.2.283
diff -u -p -r2.7592.2.283 ChangeLog
--- ChangeLog   7 Jun 2005 23:46:24 -   2.7592.2.283
+++ ChangeLog   8 Jun 2005 18:01:58 -
@@ -1,3 +1,8 @@
+2005-06-08  Andrew Pinski  <[EMAIL PROTECTED]>
+
+   PR tree-opt/19768
+   * tree-ssa-dse.c: Revert accidental committed patch.
+
 2005-06-07  Richard Henderson  <[EMAIL PROTECTED]>
 
PR rtl-opt/21528
Index: tree-ssa-dse.c
===
RCS file: /cvs/gcc/gcc/gcc/tree-ssa-dse.c,v
retrieving revision 2.17.4.1
diff -u -p -r2.17.4.1 tree-ssa-dse.c
--- tree-ssa-dse.c  11 May 2005 12:46:07 -  2.17.4.1
+++ tree-ssa-dse.c  8 Jun 2005 18:01:58 -
@@ -134,7 +134,13 @@ fix_phi_uses (tree phi, tree stmt)
   def_operand_p def_p;
   ssa_op_iter iter;
   int i;
-
+  edge e;
+  edge_iterator ei;
+  
+  FOR_EACH_EDGE (e, ei, PHI_BB (phi)->preds) 
+  if (e->flags & EDGE_ABNORMAL)
+break;
+  
   get_stmt_operands (stmt);
 
   FOR_EACH_SSA_MAYDEF_OPERAND (def_p, use_p, stmt, iter)
@@ -146,7 +152,12 @@ fix_phi_uses (tree phi, tree stmt)
 them with the appropriate V_MAY_DEF_OP.  */
   for (i = 0; i < PHI_NUM_ARGS (phi); i++)
if (v_may_def == PHI_ARG_DEF (phi, i))
- SET_PHI_ARG_DEF (phi, i, v_may_use);
+ {
+   SET_PHI_ARG_DEF (phi, i, v_may_use);
+   /* Update if the new phi argument is an abnormal phi.  */
+   if (e != NULL)
+ SSA_NAME_OCCURS_IN_ABNORMAL_PHI (v_may_use) = 1;
+ }
 }
 }
 




Thanks,
Andrew Pinski


Re: What to do with (known) ABI mismatches during compilation

2005-06-08 Thread Richard Henderson
On Wed, Jun 08, 2005 at 06:36:49PM +0200, Richard Guenther wrote:
> now, with -mno-sse we have the following choices for call
> to function foo:

With -mno-sse and explicit use of sseregparm, error at compile time.
This is no different from any other ISA related compile time error
that we generate.


r~


Re: Ada front-end depends on signed overflow

2005-06-08 Thread Paul Schlie
> From: Michael Veksler <[EMAIL PROTECTED]>
>> Paul Schlie wrote on 08/06/2005 17:53:04:
>> - I would have if someone could provide a concrete example of an
>>   undefined behavior which produces a reliably useful/predictable result.
>> 
> Well this is a simple hackery quiz, which is irrelevant to GCC.
> 
>   1: int a, b;
>   2: int f() {   return b++; }
>   3: int main(int argc)
>   4: {
>   5:   b= argc;
>   6:   a= b + f();  /* a==b*2  or  a==b*2+1 */
>   7:   a= a/2;  /* a=b */
>   8:   return a;
>   9: }
> 
> If one would claim that a is totally unconstrained at line 6, then this
> example will be invalid. In that case, I can give a more restricted
> example, where 'a' is computed speculatively and is discarded
> in exactly the same cases when it is undefined.

- unless I misunderstand, it actually is undefined if the value of argc
  may be as large as the largest representable positive int value, if
  both signed int overflows and evaluation order are undefined/unspecified.

  (although confess to missing your point, as it's not clear how an
  undefined behavior is productively contributing to the observation that
  a == argc if argc doesn't overflow or is evaluated l-r; as opposed to
  observing that if the evaluation order were fixed l-r and not undefined,
  then the optimization may be derived?)

> Oh, well here it is.
>   1: int a, b, c;
>   2: int f() {  return a ? b++ : b; }
>   3: int main()
>   4: {
>   5:   scanf("%d %d", &a, &b);
>   6:   c= b + f();   /* C is undefined if a != 0*/
>   7:   if (a)  c = b;
>   8:   return c;
>   9: }
> 
> This example is predictable. I argue that it may be also
> useful performance-wise to do speculative computations.

- I believe it's subject to the same problem?




vector alignment question

2005-06-08 Thread Steve Ellcey

I noticed that vectors are always aligned based on their size, i.e.  an
8 byte vector has an aligment of 8 bytes, 16 byte vectors an alignment
of 16, a 256 byte vector an alignment of 256, etc.

Is this really intended?

I looked in stor-layout.c and found:

  /* Always naturally align vectors.  This prevents ABI changes
 depending on whether or not native vector modes are supported.  */
  TYPE_ALIGN (type) = tree_low_cst (TYPE_SIZE (type), 0);

so it seems to be intentional, but it still seems odd to me, especially
for very large vectors.

Steve Ellcey
[EMAIL PROTECTED]


Re: Ada front-end depends on signed overflow

2005-06-08 Thread Michael Veksler






Paul Schlie <[EMAIL PROTECTED]> wrote on 08/06/2005 21:16:46:

> > From: Michael Veksler <[EMAIL PROTECTED]>
> >> Paul Schlie wrote on 08/06/2005 17:53:04:
> >> - I would have if someone could provide a concrete example of an
> >>   undefined behavior which produces a reliably useful/predictable
> >>   result.
> >>
> > Well this is a simple hackery quiz, which is irrelevant to GCC.
> >
> >   1: int a, b;
> >   2: int f() {   return b++; }
> >   3: int main(int argc)
> >   4: {
> >   5:   b= argc;
> >   6:   a= b + f();  /* a==b*2  or  a==b*2+1 */
> >   7:   a= a/2;  /* a=b */
> >   8:   return a;
> >   9: }
> >
> > If one would claim that a is totally unconstrained at line 6, then this
> > example will be invalid. In that case, I can give a more restricted
> > example, where 'a' is computed speculatively and is discarded
> > in exactly the same cases when it is undefined.
>
> - unless I misunderstand, it actually is undefined if the value of argc
>   may be as large as the largest representable positive int value, if
>   both signed int overflows and evaluation order are
>   undefined/unspecified.
>

It is undefined due to evaluation order.
Is it:
  tmp= b; // b is fetched first.
  a= tmp + b++; // then f() is called and added
or is it:
  tmp= b++;   // f() is called first.
  a= b + tmp; // then b is fetched and added

Please note that this has nothing to do with overflow.
I argue that although 'a' is undefined due to
evaluation order, it may still be used as long as
its undefined-ness is constrained (which the std may
or may not guarantee).
Anyway, as I said, it is more of a quiz than a practical
question and example. You may come up with a bizarre
example where it makes some sense.


>   (although confess to missing your point, as it's not clear how an
>   undefined behavior is productively contributing to the observation that
>   a == argc if argc doesn't overflow or is evaluated l-r; as opposed to
>   observing that if the evaluation order were fixed l-r and not
>   undefined,
>   then the optimization may be derived?)
>

Now it is me who missed the above point. What is 1-r ?

> > Oh, well here it is.
> >   1: int a, b, c;
> >   2: int f() {  return a ? b++ : b; }
> >   3: int main()
> >   4: {
> >   5:   scanf("%d %d", &a, &b);
> >   6:   c= b + f();   /* C is undefined if a != 0*/
> >   7:   if (a)  c = b;
> >   8:   return c;
> >   9: }
> >
> > This example is predictable. I argue that it may be also
> > useful performance-wise to do speculative computations.
>
> - I believe it's subject to the same problem?
>
No, here also the result is undefined due to evaluation
order. My point in this example is that you may operate
on 'c' which is defined for some inputs and undefined
for others, and still get an observably consistent results.
If you know that the result may be undefined only when you
do not intend to use it, then you don't care about it
(if guaranteed not to get SIGBUS).

As other have noted, the effects of constraining the
compiler to "platform defined" behavior, you lose
performance.
For example:
  double a, b;
  int f();
  int main()
  {
int i;
for (i=0 ; i < 100 ; ++i)
   b += (double)f() + sin(a);
return foo(b);
  }
Let's assume that on a given platform the FPU
can calculate sin(a) in parallel to f().

The compiler cannot prove that f() does not
modify a, because f() is not given here or
because it is intractable.

If, calls to f() and sin(a) take 10 ns each, then
calculating them in parallel will save 50% of the
total run-time. To do that, the compiler will have
to change the order of function calls.

You may argue that even a x100 speed up is not worth
an undefined behavior. I will argue that in this case
most programmers will avoid modifying 'a' inside f()
anyway (due to maintainability) and will avoid the
undefined result.

I don't think that normative code should be penalized for
the possibility of bizarre code. This bizarre code
should be fixed anyway to avoid maintenance
nightmares.




[wwwdocs] IEEE 754r

2005-06-08 Thread Steven Bosscher
Hi,

This adds a link in "Further readings" to the Wikipedia page
about IEEE 754r.  Seems interesting enough...

The change to the existing link is necessary to make the page
render without unnecessary spaces before "Differences".

OK?

Gr.
Steven


Index: htdocs/readings.html
===
RCS file: /cvs/gcc/wwwdocs/htdocs/readings.html,v
retrieving revision 1.144
diff -u -4 -p -r1.144 readings.html
--- htdocs/readings.html29 May 2005 20:58:23 -  1.144
+++ htdocs/readings.html8 Jun 2005 21:06:44 -
@@ -541,12 +541,15 @@ names.
   http://www.validlab.com/goldberg/paper.pdf";>What Every
   Computer Scientist Should Know about Floating-Point Arithmetic
   by David Goldberg, including Doug Priest's supplement (PDF format)
 
-  http://www.validlab.com/goldberg/addendum.html";>
-  Differences Among IEEE 754 Implementations
+  http://www.validlab.com/goldberg/addendum.html";>Differences
+  Among IEEE 754 Implementations
   by Doug Priest (included in the PostScript-format document above)
 
+  http://en.wikipedia.org/wiki/IEEE_754r";>IEEE 754r, an
+  ongoing revision to the IEEE 754 floating point standard.
+
   ftp://cs.rice.edu/public/preston/optimizer";>Massively
   Scalar Compiler Project
 
   http://www.tru64unix.compaq.com/docs/base_doc/DOCUMENTATION/V51_HTML/SUPPDOCS/OBJSPEC/TITLE.HTM";>


Re: vector alignment question

2005-06-08 Thread Richard Henderson
On Wed, Jun 08, 2005 at 12:50:32PM -0700, Steve Ellcey wrote:
> I noticed that vectors are always aligned based on their size, i.e.  an
> 8 byte vector has an aligment of 8 bytes, 16 byte vectors an alignment
> of 16, a 256 byte vector an alignment of 256, etc.
> 
> Is this really intended?

Yes.

> so it seems to be intentional, but it still seems odd to me, especially
> for very large vectors.

Hardware usually requires such alignment.  Most folk don't use vectors
larger than some bit of hardware supports.  One wouldn't want the ABI
to depend on whether that bit of hardware were actually present, IMO.


r~


Re: vector alignment question

2005-06-08 Thread Steve Ellcey
> On Wed, Jun 08, 2005 at 12:50:32PM -0700, Steve Ellcey wrote:
> > I noticed that vectors are always aligned based on their size, i.e.  an
> > 8 byte vector has an aligment of 8 bytes, 16 byte vectors an alignment
> > of 16, a 256 byte vector an alignment of 256, etc.
> > 
> > Is this really intended?
> 
> Yes.
> 
> > so it seems to be intentional, but it still seems odd to me, especially
> > for very large vectors.
> 
> Hardware usually requires such alignment.  Most folk don't use vectors
> larger than some bit of hardware supports.  One wouldn't want the ABI
> to depend on whether that bit of hardware were actually present, IMO.
> 
> r~

I guess that makes sense but I wonder if the default alignment should be
set to "MIN (size of vector, BIGGEST_ALIGNMENT)" instead so that we
don't default to an alignment larger than we know we can support.  Or
perhaps there should be a way to override the default alignment for
vectors on systems that don't require natural alignment.

Steve Ellcey
[EMAIL PROTECTED]


Tracking down source of libgcc_s.so compatibility?

2005-06-08 Thread Daniel Kegel

Can somebody suggest a place to start looking for
why the libgcc_s.so built by crosstool's gcc-3.4 can't handle
exceptions from apps built by fc3's gcc-3.4?

The C++ program

#include 
void foo() throw (int) {
 std::cout << "In foo()" << std::endl;
 throw 1;
}
int main() {
 try {
   foo();
 } catch (int i) {
   std::cout << "Caught " << i << std::endl;
 }
 std::cout << "Finished" << std::endl;
 return 0;
}

works fine when built by FC3's gcc-3.4.
It also works fine when built by crosstool's gcc-3.4.

But when you add the libgcc_s.so built by crosstool into
ld.so.conf, the results are different; apps built
by fc3's gcc-3.4 die when they try to throw exceptions,
but apps built by crosstool's gcc-3.4 keep working.
Help!

Thanks,
Dan

p.s. here's a log of a failure with crosstool's libgcc, and
a success with fc3's libgcc.  It doesn't seem to matter
whether I use gcc-3.4 or gcc-4.0 to build crosstool's libgcc_s.so.
Similar results are obtained with rh9.

$ sudo rpm -i 
crosstool-gcc-4.0.0-glibc-2.2.2-hdrs-2.6.11.2-i686-libgcc-0.35-1.i386.rpm
$ g++ x.cc
$ ./a.out
In foo()
terminate called after throwing an instance of 'int'
Aborted
$ ldd a.out
libstdc++.so.6 => /usr/lib/i686-unknown-linux-gnu/libstdc++.so.6 
(0xf6f18000)
libm.so.6 => /lib/tls/libm.so.6 (0x00cb1000)
libgcc_s.so.1 => /lib/i686-unknown-linux-gnu/libgcc_s.so.1 (0xf6f0c000)
libc.so.6 => /lib/tls/libc.so.6 (0x00b88000)
/lib/ld-linux.so.2 (0x00b6f000)

$ sudo rpm -e crosstool-gcc-4.0.0-glibc-2.2.2-hdrs-2.6.11.2-i686-libgcc-0.35-1
$ ./a.out
In foo()
Caught 1
Finished
$ ldd a.out
libstdc++.so.6 => /usr/lib/i686-unknown-linux-gnu/libstdc++.so.6 
(0xf6f18000)
libm.so.6 => /lib/tls/libm.so.6 (0x00cb1000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x009e4000)
libc.so.6 => /lib/tls/libc.so.6 (0x00b88000)
/lib/ld-linux.so.2 (0x00b6f000)


Re: Tracking down source of libgcc_s.so compatibility?

2005-06-08 Thread Daniel Jacobowitz
On Wed, Jun 08, 2005 at 03:57:26PM -0700, Daniel Kegel wrote:
> Can somebody suggest a place to start looking for
> why the libgcc_s.so built by crosstool's gcc-3.4 can't handle
> exceptions from apps built by fc3's gcc-3.4?

Try diffing the output of configure from building one and the other.
Probably some important linker feature is misdetected.

-- 
Daniel Jacobowitz
CodeSourcery, LLC


Re: Tracking down source of libgcc_s.so compatibility?

2005-06-08 Thread Daniel Kegel

Daniel Jacobowitz wrote:

Daniel Kegel wrote:

Can somebody suggest a place to start looking for
why the libgcc_s.so built by crosstool's gcc-3.4 can't handle
exceptions from apps built by fc3's gcc-3.4?


Try diffing the output of configure from building one and the other.
Probably some important linker feature is misdetected.


OK, good idea.

One other data point: the crosstool gcc was built against glibc-2.2.2,
but the fc3 gcc was of course built against glibc-2.3.x.
How likely is that to cause problems in exception handling?

Thanks,
Dan



Re: Ada front-end depends on signed overflow

2005-06-08 Thread Georg Bauhaus

Paul Schlie wrote:

> - How is it necessary or desirable to define that the result is undefined
>   vs. being target defined?

What does C say about how a target performs an instruction?
And why shouldn't GCC take advantage of this?


combine register note problem with multi-word values

2005-06-08 Thread Miles Bader
When `try_combine' combines two instructions, it adds the register notes
from the old insns to the new combined insn.  It also adds new register
notes calculated from the new insn's clobbers by `recog_for_combine'.

In this process, it (`distribute_notes') suppresses duplicate register
notes, but in doing so, seems to basically ignore the mode used in the
notes' REG expressions -- so if the new note really refers to multiple
registers because the mode is larger than word size, and the old note
does _not_ (but does refer to the same starting register), then the
result will only contain the register note for the smaller value.

Since the new notes calculated by `recog_for_combine', which contain
precise information from the new insn's clobbers, are added _last_, it
seems as if this may result in incorrect register notes.

Would it be OK to add the new reg notes calculated from clobbers first,
before the notes from the old insns?

Thanks,

-Miles
-- 
Come now, if we were really planning to harm you, would we be waiting here,
 beside the path, in the very darkest part of the forest?


Re: Will Apple still support GCC development?

2005-06-08 Thread Dale Johannesen

On Jun 6, 2005, at 12:17 PM, Samuel Smythe wrote:

> It is well-known that Apple has been a significant provider of GCC
> enhancements. But it is also probably now well-known that they have
> opted to drop the PPC architecture in favor of an x86-based
> architecture. Will Apple continue to contribute to the PPC-related
> componentry of GCC, or will such contributions be phased out as the
> transition is made to the x86-based systems? In turn, will Apple be
> providing more x86-related contributions to GCC?


Nobody from Apple has yet responded to this because Apple does not
generally like its employees to make public statements about future
plans.  I have been authorized to say this, however: Apple will be
using gcc as its development compiler for producing Mac OS X Universal
Binaries which target both PowerPC and Intel architectures.

We will continue to contribute patches to both efforts.