Re: GCC optimizes integer overflow: bug or feature?

2006-12-19 Thread Florian Weimer
* Andrew Pinski:

> A simple loop like:
> int foo ()
> {
>   int a[N];
>   int i;
>   int n;
>
>   for (i = 0; i <= n; i++)
>   ca[i] = 2;
> }
>
> we cannot find how many iterations it runs without knowing that signed
> types overflow.

In this case, the assumption is not needed because the lack of
overflow can be inferred from the validity of the expression ca[i] for
all relevant i.  However, in the general case, such information might
not be available.  I wonder if it is feasible to duplicate the loop
code, once for positive n, and once for negative, or if this would
lead to too much code bloat in real-world applications.

By the way, as I've tried to describe here:

variable range tracking can result in reintroduction of
supposedly-fixed security vulnerabilities. 8-(


Re: GCC optimizes integer overflow: bug or feature?

2006-12-19 Thread Florian Weimer
* Paolo Bonzini:

> Interesting read.  I agree with the proposed fix; however, note that
> GCC does not make the result of overflowing signed left-shifts
> undefined, exactly because in this case the overflow is relied upon by
> too many existing programs

Is this documented somewhere?  Without documentation, it could also be
an oversight (lack of optimization) from a programmer perspective.


Re: GCC optimizes integer overflow: bug or feature?

2006-12-19 Thread Florian Weimer
* Joseph S. Myers:

> On Tue, 19 Dec 2006, Florian Weimer wrote:
>
>> * Paolo Bonzini:
>> 
>> > Interesting read.  I agree with the proposed fix; however, note that
>> > GCC does not make the result of overflowing signed left-shifts
>> > undefined, exactly because in this case the overflow is relied upon by
>> > too many existing programs
>> 
>> Is this documented somewhere?  Without documentation, it could also be
>> an oversight (lack of optimization) from a programmer perspective.
>
> Certainly, in implement-c.texi:
>
>   GCC does not use the latitude given in C99 only to treat certain
>   aspects of signed @samp{<<} as undefined, but this is subject to
>   change.

Thanks, I missed that paragraph.  But I fail to see how this is
helpful in any way to a programmer.  To me, it reads like "we've got
some predictable semantics in some cases, but we don't tell you what
they are, and if you rely on them by chance, your code may break with
the next compiler release, without notice".

Something like:

GCC does not use the latitude given in C99 only to treat
certain aspects of signed @samp{<<} as undefined: If the right
operand @var{n} is non-negative and less than the width of the
left operand @var{val}, the resulting value @code{@var{val} <<
@var{n}} is guaranteed to be equal to @var{val} times 2 to the
@var{n}th power under two's-complement arithmetic.

would probably settle the issue, but it's unwieldy.


Re: GCC optimizes integer overflow: bug or feature?

2006-12-19 Thread Florian Weimer
* Robert Dewar:

> What is (a*2)/2 optimized to? certainly it has the value a if you
> wrap,

Huh?  Isn't (INT_MAX*2)/2 negative (and thus not equal to INT_MAX) in
two's-complement arithmetic?


Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."

2006-12-29 Thread Florian Weimer
* Andrew Pinski:

>> If what you propose is the only way out, and there is no way to make
>> GCC optimizers reasonable, then I believe Paul's proposal is the next
>> option. 
>
> But that still does not address the issue that this is not just about
> GCC any more, since autoconf can be used with many different compilers and is
> right now.  So if you change autoconf to default to -fwrapv and someone comes
> along and tries to use it with, say, ACC (some made-up compiler).  The loop
> goes into an infinite loop because they treat (like GCC did) signed type
> overflow as undefined, and autoconf still becomes an issue.

Does autoconf enable higher optimization levels for other compilers by
default?

(BTW, I would be somewhat disappointed if this had to be papered over
on the autoconf side.  If the GNU project needs -fwrapv for its own
software by default, this should be reflected in the compiler's
defaults.  I wish more C programs could be moved towards better
conformance, but this could be unrealistic, especially in the short
term.)


Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."

2006-12-29 Thread Florian Weimer
* Daniel Berlin:

> OTOH, people who rely on signed overflow being wraparound generally
> *know* they are relying on it.
> Given this seems to be some  small number of people and some small
> amount of code (since nobody has produced any examples showing this
> problem is rampant, in which case i'm happy to be proven wrong), why
> don't they just compile *their* code with -fwrapv?

A lot of security patches to address integer overflow issues use
post-overflow checks, unfortunately.  Even if GCC optimizes them away,
it's unlikely that it'll break applications in an obvious way.
(Security-related test cases are typically not publicly available.)


Re: Signed int overflow behaviour in the security context

2007-01-23 Thread Florian Weimer
* Joe Buck:

> You appear to mistakenly believe that wrapping around on overflow is
> a more secure option.  It might be, but it usually is not.  There
> are many CERT security flaws involving integer overflow; the fact
> that they are security bugs has nothing to do with the way gcc
> generates code, as the "wrapv" output is insecure.

These flaws are typically fixed by post-overflow checking.  A more
recent example from PCRE:

| /* Read the minimum value and do a paranoid check: a negative value indicates
| an integer overflow. */
| 
| while ((digitab[*p] & ctype_digit) != 0) min = min * 10 + *p++ - '0';
| if (min < 0 || min > 65535)
|   {
|   *errorcodeptr = ERR5;
|   return p;
|   }

Philip Hazel is quite a diligent programmer, and if he gets it wrong
(and the OpenSSL and Apache developers, who are supposed to do code
review on their own, not relying on random external input), maybe this
should tell us something.

Of course, it might be possible that the performance gains are worth
reintroducing security bugs into object code (where previously,
testing and perhaps even manual code inspection had shown that they
were fixed).  And -fwrapv does not magically make security defects
involving integer overflow disappear (as you point out).  It's the
fixes which require -fwrapv semantics that concern me.


Re: Signed int overflow behavior in the security context

2007-01-27 Thread Florian Weimer
* Paul Schlie:

>> People always say this, but they don't really realize what they are
>> saying. This would mean you could not put variables in registers, and
>> would essentially totally disable optimization.
>
> - can you provide an example of a single threaded program where the
> assignment of variable to a machine register validly changes its
> observable logical results?

Here's the usual suspect: 

Although the "validly" part is somewhat debatable.


Re: vsftpd 2.0.5 vs. gcc 4.1.2

2007-02-27 Thread Florian Weimer
* BuraphaLinux Server:

> Does anybody have a patch or know the trick to fix this?

Debian has got a patch.  I think the error message is wrong, it's a
const mismatch in pointer conversion, not an actual assignment.


Re: vsftpd 2.0.5 vs. gcc 4.1.2

2007-02-27 Thread Florian Weimer
* Andrew Pinski:

>> 
>> * BuraphaLinux Server:
>> 
>> > Does anybody have a patch or know the trick to fix this?
>> 
>> Debian has got a patch.  I think the error message is wrong, it's a
>> const mismatch in pointer conversion, not an actual assignment.
>
> Actually it is a bug in glibc's header with WIFEXITED, WEXITSTATUS, etc.
>
> See http://sourceware.org/bugzilla/show_bug.cgi?id=1392 .

Ah, I was looking at the half-fixed headers which still suffer from
the -Wcast-qual issue, but have the assignment fixed.  Thanks for
setting me straight.


Building mainline and 4.2 on Debian/amd64

2007-03-18 Thread Florian Weimer
Is there a convenient switch to make GCC bootstrap on Debian/amd64
without patching the build infrastructure?  Apparently, GCC tries to
build 32-bit variants of all libraries (using -m32), but the new
compiler uses the 64-bit libc instead of the 32-bit libc, hence
building them fails.

I don't need the 32-bit libraries, so disabling their compilation
would be fine. --enable-targets at configure time might do the trick,
but I don't know what arguments are accepted.


Re: Building mainline and 4.2 on Debian/amd64

2007-03-18 Thread Florian Weimer
* Steven Bosscher:

> On 3/18/07, Florian Weimer <[EMAIL PROTECTED]> wrote:
>> I don't need the 32-bit libraries, so disabling their compilation
>> would be fine. --enable-targets at configure time might do the trick,
>> but I don't know what arguments are accepted.
>
> Would --disable-multilib work?

I'll try, but I doubt it.  According to the installation
documentation, amd64 is not a multilib target.


Re: Building mainline and 4.2 on Debian/amd64

2007-03-18 Thread Florian Weimer
* Andrew Pinski:

> On 3/18/07, Florian Weimer <[EMAIL PROTECTED]> wrote:
>>
>> I'll try, but I doubt it.  According to the installation
>> documentation, amd64 is not a multilib target.
>
> HUH??? Which documentation?

I misinterpreted the installation manual, sorry.  I thought that all
the multilib targets were listed there, but the list only covers those
with more fine-grained configuration options.

Anyway, thanks to both of you.  Bootstrapping now works for me.


Re: Building mainline and 4.2 on Debian/amd64

2007-03-19 Thread Florian Weimer
* Andrew Pinski:

> Actually it brings up an even more important thing, distros that don't
> include a 32bit user land is really just broken.

Are they?  I haven't run into problems yet.

(And pretty please, I misread the documentation.  It does *not* state
that amd64 is not a multilib target.  Sorry about that.)


Re: error: "no newline at end of file"

2007-03-27 Thread Florian Weimer
* Manuel López-Ibáñez:

> C++ preprocessor emits errors by default for nonconformant code,
> following the C++ front-end default behaviour.

Neither the C standard nor the C++ standard imposes any requirements
on concrete source code representation, so it's not quite right to
blame this issue on nonconformant code.

Unless there are other compilers which require a LF character at the
end of a file by default (and following this rule would increase
portability as a result), I don't think it's a good idea to impose
such a backwards incompatibility on users.


Re: error: "no newline at end of file"

2007-03-27 Thread Florian Weimer
* Ian Lance Taylor:

> I don't think we necessarily have to change anything.

Yes, I think that the standard does not require a particular approach
to this problem.

> But I think that Florian's point is that we don't have to confuse the
> concrete implementation with the abstract source representation.  We
> could define gcc such that when it sees a concrete file which does not
> end in ASCII 0x0a, it automatically appends ASCII 0x0a in the abstract
> representation.

Exactly.  During the first translation phase, end-of-line indicators
are converted to new-line characters.  If the majority of compilers
treats end-of-file as an end-of-line indicator (which wouldn't
surprise me), GCC can and should follow the crowd.


Re: Integer overflow in operator new

2007-04-07 Thread Florian Weimer
* Karl Chen:

> "4 * n", unchecked, is vulnerable to integer overflow.  On IA-32,
> "new int[0x40000001]" becomes equivalent to "new int[1]".  I've
> verified this on gcc-2.95 through 4.1.  For larger objects the
> effects are exaggerated; smaller counts are needed to overflow.

This is PR19351, by the way.

The most widespread interpretation of the standard is that conforming
implementations aren't allowed to raise an exception in this case:
the arithmetic is defined to occur in terms of an unsigned type.

See the 2005 discussion on comp.std.c++ on this topic.

> This is similar to the calloc integer overflow vulnerability in
> glibc, which was fixed back in 2002.  Interestingly, RUS-CERT
> 2002-08:02 did mention 'operator new', and so did Bugtraq 5398.
> http://cert.uni-stuttgart.de/advisories/calloc.php
> http://www.securityfocus.com/bid/5398/discuss

Yeah, I've essentially given up on this one.

The official response from the C++ folks is here:

http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_closed.html#256

| Each implementation is required to document the maximum size of an
| object (Annex B limits). It is not difficult for a program to check
| array allocations to ensure that they are smaller than this
| quantity. Implementations can provide a mechanism in which users
| concerned with this problem can request extra checking before array
| allocations, just as some implementations provide checking for array
| index and pointer validity. However, it would not be appropriate to
| require this overhead for every array allocation in every program.


Re: Integer overflow in operator new

2007-04-08 Thread Florian Weimer
* Joe Buck:

> Consider an implementation that, when given
>
>Foo* array_of_foo = new Foo[n_elements];
>
> passes __compute_size(elements, sizeof Foo) instead of n_elements*sizeof Foo
> to operator new, where __compute_size is
>
> inline size_t __compute_size(size_t num, size_t size) {
> size_t product = num * size;
> return product >= num ? product : ~size_t(0);
> }

I don't think this check is correct.  Consider num = 0x33333334 and
size = 6.  It seems that the check is difficult to perform efficiently
unless the architecture provides unsigned multiplication with overflow
detection, or an instruction to implement __builtin_clz.


Re: Integer overflow in operator new

2007-04-08 Thread Florian Weimer
* Ross Ridge:

> Florian Weimer writes:
>>I don't think this check is correct.  Consider num = 0x33333334 and
>>size = 6.  It seems that the check is difficult to perform efficiently
>>unless the architecture provides unsigned multiplication with overflow
>>detection, or an instruction to implement __builtin_clz.
>
> This should work instead:
>
>   inline size_t __compute_size(size_t num, size_t size) {
>   if (num > ~size_t(0) / size) 
>   return ~size_t(0);
>   return num * size;
>   }

Yeah, but that division is fairly expensive if it can't be performed
at compile time.  OTOH, if __compute_size is inlined in all places,
code size does increase somewhat.


Re: Question w.r.t. `'class Foo' has virtual functions but non-virtual destructor` warning.

2005-03-04 Thread Florian Weimer
* Jonathan Wakely:

> e.g. this is undefined behaviour:
>
> class Base {};
> class Derived : public Base {};
>
> Base* p = new Derived;
> delete p;

Wouldn't it make more sense to issue the warning at the point of the
delete, then?


Re: MetaC++ announcement

2005-03-05 Thread Florian Weimer
* Stefan Strasser:

> What?
> -
> MetaC++
> It is a library which is able to read and write C++ source code and
> makes the tree available to its clients by API and by a XML format it
> can read and write.
> Parsing is done by a patched GCC.
> The tree is a representation of language constructs, not a syntax tree.

Does it include representation information, e.g. offsets of class
members?  Unfortunately, the XML sample is too unwieldy to tell. 8-/


Re: __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Florian Weimer
* Robert Dewar:

> Marcin Dalecki wrote:
>
>> There is no reason here and you presented no reasoning. But still
>> there is a
>> *convenient* extension of the definition domain for the power of
>> function for the
>> zero exponent.
>
> The trouble is that there are *two* possible convenient extensions.

From a mathematical point of view, 0^0 = 1 is the more convenient one
in most contexts.  Otherwise, you suddenly lack a compact notation for
polynomials (and power series).  However, this definition is only used
in contexts where the exponent is an integer, so it's not really
relevant to the current discussion.


Re: GCC Status Report (2005-03-09)

2005-03-11 Thread Florian Weimer
* Joe Buck:

> If it is only Debian on non-shipped platforms, it would be reasonable to
> ask the Debian x86-64 people to apply the one-line patch to glibc pointed
> to by the PR.  It could be a hassle for them now because of the sarge
> freeze, though, so maybe fixincludes would be the way to go.

AMD64 is not covered by the freeze.  sarge will be released with
GNU libc 2.3.2, I believe, so it's not clear to me if it is affected at all.


Re: Copyright status of example code in Bugzilla - how to deal with when writing testcases.

2005-03-29 Thread Florian Weimer
* Robert Dewar:

> Unfortunately, you can't rely on sane judges, since the plaintiff can
> always demand a jury trial, and you would be surprised what juries think.
> Furthermore, deleting the test case makes no sense as a remedy. Either
> there is or there is not a copyright violation. The judge could only
> require the removal of the test case if there is a copyright violation,
> but then the plaintiff is entitled to substantial compensation without
> having to show any harm.

Even if the plaintiff submitted the test case in the first place, for
inclusion or not?


Re: how small can gcc get?

2005-04-24 Thread Florian Weimer
* Philip George:

> it needs only to be able to compile extremely simple c apps from a 
> shell opened from within the gui app.

Have a look at tcc.  It might be more suited to your needs than GCC.


Ada test suite

2005-04-28 Thread Florian Weimer
Some time ago, someone posted a patch which provided beginnings of a
general-purpose Ada test suite infrastructure (in addition to the
current ACATS tests, which cannot be used for regression tests).  The
patch was not integrated, and I can't find it at the moment. 8-(

Does anybody know which patch I'm talking about?


Re: Ada test suite

2005-04-28 Thread Florian Weimer
* Arnaud Charlet:

>> Some time ago, someone posted a patch which provided beginnings of a
>> general-purpose Ada test suite infrastructure (in addition to the
>> current ACATS tests, which cannot be used for regression tests).  The
>
> Note that this is technically incorrect: the ACATS infrastructure can
> be used for regression tests, as long as they are using the few acats packages
> to report success/failure. See the directory tests/gcc.

I thought that there were some reservations about changing the ACATS
test suite.

>> patch was not integrated, and I can't find it at the moment. 8-(
>> 
>> Does anybody know which patch I'm talking about?
>
> There is a GCC PR about it, so it should be fairly easy to find.

Yes, indeed, it's PR 18692.

> I have no knowledge on expect nor the dejagnu framework, so that's
> why I haven't commented on the proposed patch, otherwise the general
> idea is fine.

So how can we make sure that this work is not lost?  Who would be in a
position to approve a patch?


Re: Borland software patent restricting GNU compiler development

2005-05-11 Thread Florian Weimer
* Ingrid Marson:

> The Borland patent is a patent for standard exception handling
> http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL
> &p=1&u=/netahtml/srchnum.htm&r=1&f=G&l=50&s1=5,628,016.WKU.&OS=PN/5,628,
> 016&RS=PN/5,628,016

At least on Linux, GCC implements a different scheme which is
optimized for the more common case of execution without throwing any
exception.  Borland's approach doesn't even work without frame
pointers, so it's quite limited.

Maybe the Windows ABI prescribes the use of Borland's approach?  This
could explain the gripes of the WINE developers.


Re: Borland software patent restricting GNU compiler development

2005-05-11 Thread Florian Weimer
* Ranjit Mathew:

> Without looking at the patent, I would hazard the guess
> that it is about Win32 Structured Exception Handling (SEH):
>
>   http://www.microsoft.com/msj/0197/exception/exception.aspx
>
> (The linked-to mail from the GCC mailing list seems to
> confirm this.)

Indeed.  Explicitly pushing an exception frame pointer on the main
stack is covered by the patent.


Re: Borland software patent restricting GNU compiler development

2005-05-11 Thread Florian Weimer
* Paul Koning:

>>>>>> "Florian" == Florian Weimer <[EMAIL PROTECTED]> writes:
>
>  Florian> Indeed.  Explicitly pushing an exception frame pointer on
>  Florian> the main stack is covered by the patent.
>
> Oh, like VMS has done since V1.0, in 1978?

I fail to see the inventive step in that patent, too.  But as it's
worded, it seems to cover SEH when used to implement implicit
destruction of local variables. 8-(

I'm not sure if it makes sense to worry about this patent.  There
seems to be a straightforward workaround: always push the same
exception frame which points to cxxabi-like unwinder.  This should be
interoperable with other SEH users.


Exporting structure layout

2005-05-11 Thread Florian Weimer
When interfacing with C with other languages, it is often necessary to
write wrapper functions to access C structs because there are no
general (and free) tools available which can generate structure layout
information from C sources.

For example, suppose that I want to write Ada code which uses the
poll(2) from POSIX.  POSIX only guarantees that the following fields
are available in struct pollfd:

| int    fd       The following descriptor being polled.
| short  events   The input event flags (see below).
| short  revents  The output event flags (see below).

Additional members are permitted, and the fields can be in any order.  In
order to create a portable Ada interface, I have to write a short C
program which uses sizeof and offsetof to extract the structure
layout.  In theory, it is possible to create compile-time-only tests
suitable for cross-compilation (compile-time constant expressions,
invalid zero-length arrays, and a binary search), but this is rather
messy.  This is probably one of the reasons why the GNAT run-time
library currently uses manually translated record definitions, which
reduces its portability.

Are there any objections to exporting structure layout from GCC, in a
format which can be parsed in a straightforward manner?  Such a patch
could be used as a GPL circumvention device, but I'm not sure how
relevant this is in practice because GCC follows published ABIs, so a
clean-room reimplementation would be straightforward (IOW, there isn't
much to lose for us because our competitive edge is pretty minimal).


Re: Exporting structure layout

2005-05-11 Thread Florian Weimer
* Paolo Bonzini:

>> Additional members are permitted, and the fields can be in any order.  In
>> order to create a portable Ada interface, I have to write a short C
>> program which uses sizeof and offsetof to extract the structure
>> layout.  In theory, it is possible to create compile-time-only tests
>> suitable for cross-compilation (compile-time constant expressions,
>> invalid zero-length arrays, and a binary search), but this is rather
>> messy.
>
> Look at AC_COMPUTE_INT.  

This is exactly the thing I'm scared of. 8-)

> It should be easy to start from there and do a macro like
>
> GCC_COMPUTE_MEMBERS_SIZE_OFFSET([#include <poll.h>],
>   [struct pollfd],
>   [fd events revents]).

If we'd use such a macro to extract values of various E* constants,
bootstrap times would increase significantly, even while not
cross-compiling.  That's why I think it's not a viable approach.


Re: Exporting structure layout

2005-05-11 Thread Florian Weimer
* Ian Lance Taylor:

> I actually have a vague recollection that gcc used to implement
> something along these lines, but I couldn't find it in five minutes of
> searching.

There are several patches for (or forks of) GCC which implement
similar functionality.


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Florian Weimer
* Russ Allbery:

> Seriously, though, I think the above only tests things out to the degree
> that Autoconf would already be warning about no default specified for
> cross-compiling, yes?  Wouldn't you have to at least cross-compile from a
> system with one endianness and int size to a system with a different
> endianness and int size and then try to run the resulting binaries to
> really see if the package would cross-compile?

Is this really necessary?  I would think that a LD_PRELOADed DSO which
prevents execution of freshly compiled binaries would be sufficient to
catch the most obvious errors.

If configure is broken, you can still bypass it and manually write a
config.h.  Even I can remember the days when this was a rather common
task, even when you were not cross-compiling.

> It's not just that it's perceived as hard.  It's that it's perceived as
> hard *and* obscure.

Well, it's hard to keep something working which you cannot test
reliably.  I think it would be pretty straightforward to support some
form of cross-compiling for the software I currently maintain
(especially if I go ahead and write that GCC patch for exporting
structure layout and compile-time constants), but there's no point in
doing so if it's not requested by users, I cannot test it as part of
the release procedures, and anybody who needs a binary can typically
cross-compile it without much trouble anyway ("vi config.h ; gcc
*/*.o").


Re: GPU-aware compiling?

2005-05-20 Thread Florian Weimer
* Tomasz Chmielewski:

> Well, wouldn't it be a GCC improvement? :)

I think it's mostly a GPU issue because GPU implementation details are
highly proprietary and usually treated as trade secrets.


Re: Compiling GCC with g++: a report

2005-05-24 Thread Florian Weimer
* Gabriel Dos Reis:

> The first resistance seems to come from the pervasive use of the implicit
> conversion void* -> T*, mostly with storage allocating functions.

This can be worked around on the C++ side, see the example code below.
It's a kludge, but it's not too bad IMHO.

#include <stdlib.h>   /* for malloc and size_t */

class xmalloc_result;
xmalloc_result xmalloc (size_t);

class xmalloc_result
{
  friend xmalloc_result xmalloc (size_t);
  const size_t size_;

  xmalloc_result (size_t size)
: size_ (size)
  {
  }

  xmalloc_result operator= (const xmalloc_result&);
  // not implemented

public:
  template <typename T> operator T* () const
  {
    return static_cast<T *> (malloc (size_));
  }
};

inline xmalloc_result
xmalloc (size_t size)
{
  return xmalloc_result (size);
}

char *
foo (void)
{
  return xmalloc (1);
}

void
bar (int **result)
{
  *result = xmalloc (sizeof (int));
}


Re: Compiling GCC with g++: a report

2005-05-25 Thread Florian Weimer
* Christoph Hellwig:

> Why can't libstdc++ use symbol versioning?

Via stack allocation, templates and inline functions, the internal
representation of libstdc++ data structures is exported.  All of its
users would have to be versioned, too, and you'd need bridging code
between the ABIs (e.g. to pass a std::string from a v6 library to a v7
library).


Re: %dil in x86 inline asm

2005-05-25 Thread Florian Weimer
* Phil Endecott:

> I've found bug 10153 and its duplicates which seem to be describing 
> essentially the same thing, but say that the input is invalid because it 
> uses "r" rather than "q".  I don't know enough about x86 to understand 
> this; presumably only certain registers can be used with this 
> instruction, or something.
>

The "q" registers form a proper subset of the "r" registers, precisely
those for which your assembler statement seems valid (the "a", "b",
"c", "d" registers).

Questions of this kind should be sent to the gcc-help mailing list.


Stickiness of TYPE_MIN_VALUE/TYPE_MAX_VALUE

2005-05-30 Thread Florian Weimer
How sticky are TYPE_MIN_VALUE and TYPE_MAX_VALUE?  Is it possible to
get rid of their effect using a NOP_EXPR, CONVERT_EXPR or
VIEW_CONVERT_EXPR?

If this is impossible, the Ada front end should probably stop setting
these fields because it assumes that it can use values outside that
range:



Current mainline does not optimize array range checks away (even with
-ftree-vrp), but I'm not sure if this is just a missed optimization
opportunity as far as the optimizers are concerned, or something which
is guaranteed to work in the future.


Re: Stickiness of TYPE_MIN_VALUE/TYPE_MAX_VALUE

2005-05-30 Thread Florian Weimer
* Diego Novillo:

>> 
>> 
>> Current mainline does not optimize array range checks away (even with
>> -ftree-vrp), but I'm not sure if this is just a missed optimization
>> opportunity as far as the optimizers are concerned, or something which
>> is guaranteed to work in the future.
>>
> Missed on purpose because of the bootstrap bug I worked-around a
> few weeks ago.  See the comment in tree-vrp.c:extract_range_from_assert
> regarding integral types with super-types.

Thanks, but this doesn't really answer my question. 8-) Do you
consider your patch as a temporary workaround, or should front ends
refrain from emitting TYPE_MIN_VALUE/TYPE_MAX_VALUE fields if they
cannot prove that no out-of-bounds values exist at run time?

> http://gcc.gnu.org/ml/gcc-patches/2005-05/msg00127.html
>
> If this is not what's biting you, send me a test case?

This is PR21573, a miscompiled SWITCH_EXPR.  TYPE_MIN_VALUE and
TYPE_MAX_VALUE are used to avoid some comparisons in the expand_case
machinery in stmt.c.  It's not VRP-related, and it also fails on
GCC 4.0.


Re: Stickiness of TYPE_MIN_VALUE/TYPE_MAX_VALUE

2005-05-30 Thread Florian Weimer
* Richard Kenner:

> How sticky are TYPE_MIN_VALUE and TYPE_MAX_VALUE?  Is it possible to
> get rid of their effect using a NOP_EXPR, CONVERT_EXPR or
> VIEW_CONVERT_EXPR?
>
> I don't really understand either question.  Also, as to the second,
> keep in mind their role in array indexes.

I'll try to phrase it differently: If you access an object whose bit
pattern does not represent a value in the range given by
TYPE_MIN_VALUE .. TYPE_MAX_VALUE of the corresponding type, does this
result in erroneous execution/undefined behavior?  If not, what is the
exact behavior WRT out-of-bounds values?

> If this is impossible, the Ada front end should probably stop setting
> these fields because it assumes that it can use values outside that
> range:
>
> 
> http://gcc.gnu.org/onlinedocs/gcc-4.0.0/gnat_ugn_unw/Validity-Checking.html
>
> I don't understand what that chapter has to do with your statement.

These checks are implemented using the 'Valid attribute (see
Checks.Ensure_Valid and Checks.Insert_Valid_Check).


Re: configure test for a mmap flag

2005-05-30 Thread Florian Weimer
* Steve Kargl:

> I need to add a configure test to determine if MAP_NOSYNC is
> present.

What about "#ifdef MAP_NOSYNC" in the code?  Or do you invoke mmap
directly from Fortran?


Re: Ada front-end depends on signed overflow

2005-06-03 Thread Florian Weimer
* Andrew Pinski:

> The Ada front-end is still being miscompiled by VRP but VRP is doing
> the correct thing as the type is signed and overflow on signed is 
> undefined
> (-fwrapv is not turned on by default for Ada).

It probably makes sense to turn on -fwrapv for Ada because even
without -gnato, the behavior is not really undefined:

| The reason that we distinguish overflow checking from other kinds of
| range constraint checking is that a failure of an overflow check can
| generate an incorrect value, but cannot cause erroneous behavior.



(Without -fwrapv, integer overflow is undefined, and subsequent range
checks can be optimized away, so that it might cause erroneous
behavior.)



Re: Ada front-end depends on signed overflow

2005-06-03 Thread Florian Weimer
* Paul Schlie:

>> (Without -fwrapv, integer overflow is undefined, and subsequent range
>> checks can be optimized away, so that it might cause erroneous
>> behavior.)
>
> - Since for all practical purposes most (if not all) target's use
>   2's complement integer representations which naturally "wrap", might
>   it be simply best to presume that all do "wrap" by default, but allow
>   -fnowrapv to disable it if ever required by the odd target/language?

Enabling -fwrapv disables quite a few optimizations on signed integer
types in C code.  OTOH, you should compile most real-world C code with
-fwrapv anyway.  See my security advisory on incorrect overflow
checking in C; this is a rather widespread issue, even in new code.


Re: Ada front-end depends on signed overflow

2005-06-03 Thread Florian Weimer
* Paul Schlie:

>> No they should be using -ftrapv instead which traps on overflow and then
>> make sure they are not trapping when testing.
>
> - why? what language or who's code/target ever expects such a behavior?

I think Andrew wants programmers to fix their code, instead of
papering over problems. 8-)

All code has effectively seen wide testing with -fwrapv enabled
because in previous GCC versions, -fwrapv had only a limited effect,
especially across multiple statements.  That's why I don't prefer the
-ftrapv approach, even though it's technically the correct one.

It's a real pity that we have to trust so much C code which has been
written and reviewed by developers who aren't aware that signed
integer overflow is undefined.


Re: Ada front-end depends on signed overflow

2005-06-03 Thread Florian Weimer
* Joe Buck:

> I'm sure there are plenty of production codes that assume signed integer
> overflow wraps, or at least make the weaker assumption that in
>
>a = b + c + d;
>
> where all variables are integers, if one of the intermediate terms
> can't be represented in an integer, we still get the correct result
> if the final result is representable in an integer.

There's also a fair amount of code which relies on -1 ==
(int)0x.

Or is there any truly portable and efficient way to convert a sequence
of bytes (in big-endian order) to a signed integer?


Re: collab.net have a spam open relay in operation at the moment. please give them grief about it.

2005-06-05 Thread Florian Weimer
>> the collab.net server in question has been utilised to forge messages
>> from openoffice.org to [EMAIL PROTECTED], which i am guessing
>> is a sf.net internal email-absolutely-everybody mailing list.
>
> We opened an internal issue at CollabNet to look into this, and found
> that we don't have an open relay.

Luke and I analyzed this issue.  I'm surprised he hasn't told you
about the result.  It's not an open relay.


Re: Ada front-end depends on signed overflow

2005-06-07 Thread Florian Weimer
* Robert Dewar:

> Defintiely not, integer overflow generates async traps on very few
> architectures. Certainly synchronous traps can be generated (e.g.
> with into on the x86).

Or the JO jump instruction.  Modern CPUs choke on the INTO
instruction.


Re: Ada front-end depends on signed overflow

2005-06-07 Thread Florian Weimer
* Paul Schlie:

> - I'm not attempting to design a language, but just defend the statement
>   that I made earlier; which was in effect that I contest the assertion
>   that undefined evaluation semantics enable compilers to generate more
>   efficient useful code by enabling them to arbitrarily destructively alter
>   evaluation order of interdependent sub-expressions, and/or base the
>   optimizations on behaviors which are not representative of their target
>   machines.

But the assertion is trivially true.  If you impose fewer constraints
on an implementation by leaving some cases undefined, it always has
got more choices when generating code, and some choices might yield
better code.  So code generation never gets worse.

Whether an implementation should exercise the liberties granted by the
standard in a particular case is a different question, and has to be
decided on a case-by-case basis.

>   (With an exception being FP optimization, as FP is itself based
>only on the approximate not absolute representation of values.)

FP has well-defined semantics, and it's absolutely required for
compilers to implement them correctly because otherwise, a lot of
real-world code will break.

Actually, this is a very interesting example.  You don't care about
proper floating point arithmetic and are willing to sacrifice obvious
behavior for a speed or code size gain.  Others feel the same about
signed integer arithmetic.

>>> - Agreed, I would classify any expression as being ambiguous if any of
>>>   it's operand values (or side effects) were sensitive to the allowable
>>>   order of evaluation of it's remaining operands, but not otherwise.
>> 
>> But this predicate cannot be evaluated at compile time!
>
> - Why not?

In general, it's undecidable.

>   The compiler should be able to statically determine if an
>   expression's operands are interdependent, by determining if any of
>   its operand's sub-expressions are themselves dependant on a variable
>   value potentially modifiable by any of the other operand's sub-
>   expressions.

Phrased this way, you make a lot of code illegal.  I doubt this is
feasible.


Re: Ada front-end depends on signed overflow

2005-06-07 Thread Florian Weimer
* Paul Schlie:

>> But the assertion is trivially true.  If you impose fewer constraints
>> on an implementation by leaving some cases undefined, it always has
>> got more choices when generating code, and some choices might yield
>> better code.  So code generation never gets worse.
>
> - yes, it certainly enables an implementation to generate more efficient
>   code which has no required behavior; so in effect basically produce more
>   efficient programs which don't reliably do anything in particular; which
>   doesn't seem particularly useful?

The quality of an implementation can't be judged only based on its
conformance to the standard, but this does not mean that the
implementation gets better if you introduce additional constraints
which the standard doesn't impose.

Some people want faster code, others want better debugging
information.  A few people only want optimizations which do not change
anything which is practically observable but execution time (which is
a contradiction), and so on.

> - Essentially yes; as FP is an approximate not absolute representation
>   of a value, therefore seems reasonable to accept optimizations which
>   may result in some least significant bits of ambiguity.

But the same is true for C's integers, they do not behave like the
real thing.  Actually, without this discrepancy, we wouldn't have to
worry about overflow semantics, which once was the topic of this
thread!

>>>   The compiler should be able to statically determine if an
>>>   expression's operands are interdependent, by determining if any of
>>>   its operand's sub-expressions are themselves dependant on a variable
>>>   value potentially modifiable by any of the other operand's sub-
>>>   expressions.
>> 
>> Phrased this way, you make a lot of code illegal.  I doubt this is
>> feasible.
>
> - No, exactly the opposite, the definition of an order of evaluation
>   eliminates ambiguities, it does not prohibit anything other than the
>   compiler applying optimizations which would otherwise alter the meaning
>   of the specified expression.

Ah, so you want to prescribe the evaluation order and allow reordering
under the as-if rule.  This wasn't clear to me, sorry.

It shouldn't be too hard to implement this (especially if your order
matches the Java order), so you could create a switch to fit your
needs.  I don't think it should be enabled by default because it
encourages developers to write non-portable code which breaks when
compiled with older GCC versions, and it inevitably introduces a
performance regression on some targets.


Re: strange double comparison results with -O[12] on x86(-32)

2005-06-13 Thread Florian Weimer
* Andrew Pinski:

> This is known as GCC PR 323 which is not a bug: 
> .

It is a bug in GCC, the C standard, or the x86 FP hardware.  I'm
leaning towards the C standard or the hardware. 8-)


Re: strange double comparison results with -O[12] on x86(-32)

2005-06-13 Thread Florian Weimer
* Dave Korn:

>> It is a bug in GCC, the C standard, or the x86 FP hardware.  I'm
>> leaning towards the C standard or the hardware. 8-)
>
>
>   ... or it's a bug in the libc/crt-startup, which is where the
> hardware rounding mode is (or should be) set up ...

I think you'd still experience latent excess precision for floats,
though.


Re: PATCH: Explicitly pass --64 to assembler on AMD64 targets

2005-06-13 Thread Florian Weimer
* Daniel Jacobowitz:

> How would you feel about a patch that made us always pass --64
> as appropriate, at least if the assembler in question is gas?  I
> periodically bootstrap on a 64-bit kernel with a 32-bit root FS.  But
> the assembler and linker are biarch, and the 64-bit libs are installed,
> so it's just the defaults that are wrong.  If we were explicit, an
> x86_64-linux compiler would Just Work.

You still need some script hackery for objcopy and friends, I think,
that's why I didn't pursue the matter further.

Apart from that, the following entry from the specs file seems to
work (GAS does the right thing with multiple --32/--64 options):

*asm:
%{v:-V} %{Qy:} %{!Qn:-Qy} %{n} %{T} %{Ym,*} %{Yd,*} %{Wa,*:%*} %{!m64:--32} 
%{m64:--64} %{m32:--32}



Re: copyright assignment

2005-06-14 Thread Florian Weimer
* James A. Morrison:

>  The form is here:
> http://gcc.gnu.org/ml/gcc/2003-06/msg02298.html
>
>  If you have any questions feel free to ask.

It's better to request the current version from the FSF, see:



Re: Reporting bugs: there is nothing to gain in frustrating reporters

2005-06-15 Thread Florian Weimer
* Gabriel Dos Reis:

> Scott Robert Ladd <[EMAIL PROTECTED]> writes:
>
> | Giovanni Bajo wrote:
> | > Agreed. But keep in mind that it is not necessary to reply: once the bug 
> is
> | > open and confirmed, the last comment "wins", in a way. If the bugmaster
> | > wanted to close it, he would just do it, so an objection in a comment does
> | > not make the bug invalid per se.
> | 
> | But an objection from one of the bugmasters *is* enough to keep people
> | from presenting a patch.
>
> Well, I'm not sure.  If the report is closed, then you're right.

Not necessarily, wrongly closed reports are reopened in my experience
(and not by the submitter 8-).


Re: basic VRP min/max range overflow question

2005-06-18 Thread Florian Weimer
* Paul Schlie:

> So in effect the standard committee have chosen to allow any program which
> invokes any undefined behavior to behave arbitrarily without diagnosis?
>
> This is a good thing?

It's the way things are.  There isn't a real market for
bounds-checking C compilers, for example, which offer well-defined
semantics even for completely botched pointer arithmetic and pointer
dereference.

C isn't a programming language which protects its own abstractions
(unlike Java, or certain Ada or Common Lisp subsets), and C never was
intended to work this way.  Consequently, the committee was right to
deal with undefined behavior in the way it did.  Otherwise, the
industry would not have adopted C as we know it, and the ISO C
standard would have had the same relevance to the evolution of C as,
say, the ISO Pascal standard had to the evolution of Pascal.

Keep in mind that the interest in "safe" languages (which protect
their abstractions) for commercial production code is a very, very
recent development, and I'm not sure if this is just an accident.


Re: Template declaration inside function

2005-06-21 Thread Florian Weimer
* Mattias Karlsson:

> Given:
>
> void f(void)
> {
>template class A
>{
>};
> }
>
> g++ 4.0/3.4 gives: 
> bug.cpp:4: error: expected primary-expression before 'template'
>
> Can a language lawer please confirm that this is even valid before I
> create a PR?

It's not valid (local template declarations are not allowed, see
14(2)), but it makes sense to create a PR.  The error message is very
hard to understand.


Re: signed is undefined and has been since 1992 (in GCC)

2005-07-02 Thread Florian Weimer
* Dave Korn:

>   It certainly wasn't meant to be.  It was meant to be a dispassionate
> description of the state of facts.  Software that violates the C standard
> just *is* "buggy" or "incorrect",

Not if a GCC extension makes it legal code.  And actually, I believe a
GCC extension which basically boils down to -fwrapv by default makes sense
because so much existing code the free software community has written
(including critical code paths which fix security bugs) implicitly
relies on -fwrapv.


Re: signed is undefined and has been since 1992 (in GCC)

2005-07-02 Thread Florian Weimer
* Robert Dewar:

> I am puzzled, why would *ANYONE* who knows C use int
> rather than unsigned if they want wrap around semantics?

Both OpenSSL and Apache programmers did this, in carefully reviewed
code which was written in response to a security report.  They simply
didn't know that there is a potential problem.  The reason for this
gap in knowledge isn't quite clear to me.

Probably it's hard to accept for hard-core C coders that a program
which generates correct machine code with all GCC versions released so
far (modulo bugs in GCC) can still be illegal C and exhibit undefined
behavior.  IIRC, I needed quite some time to realize the full impact
of this distinction.


Re: signed is undefined and has been since 1992 (in GCC)

2005-07-03 Thread Florian Weimer
* Robert Dewar:

> Making programs bug free has more to it than understanding the language
> you are writing in, but it is a useful step forward to avoid problems
> that come from simply not knowing the rules of the language you are
> writing in (I can't guarantee that GNAT is bug free in that regard,
> but I can't remember a case where a bug stemmed from this source).

There was some dependency on argument order evaluation in GNAT, but
this was part of GIGI, so it's not the best example.


Re: Calling a pure virtual function

2005-07-09 Thread Florian Weimer
* Adam Nielsen:

> class Base {
>   public:
> Base()
> {
>   cout << "This is class " << this->number();
> }
>
> virtual int number() = 0;
> };

Roughly speaking, when number() is invoked, the object still has type
Base (with a corresponding vtable).  The derived class's constructor
changes the dynamic type once the Base part has been constructed.

The following FAQ entry covers this:

http://www.parashift.com/c++-faq-lite/strange-inheritance.html#faq-23.3


Re: Calling a pure virtual function

2005-07-09 Thread Florian Weimer
* Adam Nielsen:

> It still makes me wonder whether GCC is reporting the correct error for
> this mistake though, I would've expected a compiler error (something
> along the lines of 'you can't call a pure virtual function') rather than
> a linker error.  Especially as GCC should be able to tell at compile
> time the base constructor is calling a pure virtual function.  I guess
> it's treating the constructor like any other function, where this
> behaviour would be permitted.

I think C++ allows for a definition for a purely abstract function
(which would be called in this case).


Re: Large, modular C++ application performance ...

2005-07-29 Thread Florian Weimer
* michael meeks:

>   I've been doing a little thinking about how to improve OO.o startup
> performance recently; and - well, relocation processing happens to be
> the single, biggest thing that most tools flag.

Have you tried prelinking?


Re: PR 23046. Folding predicates involving TYPE_MAX_VALUE/TYPE_MIN_VALUE (Ada RFC)

2005-08-05 Thread Florian Weimer
* Richard Henderson:

> For the record, I believe we've addressed these issues sometime
> within the last year or two.  The TYPE_MIN/MAX_VALUE for an enum
> should be set to the range truely required by the relevant language
> standards (different between C and C++).
>
> I don't know for a fact that Ada has been adjusted for this though.

From GIGI (around line 4115 in decl.c):

  if ((kind == E_Enumeration_Type && Present (First_Literal (gnat_entity)))
  || (kind == E_Floating_Point_Type && !Vax_Float (gnat_entity)))
{
  tree gnu_scalar_type = gnu_type;

  [...]

  TYPE_MIN_VALUE (gnu_scalar_type)
= gnat_to_gnu (Type_Low_Bound (gnat_entity));
  TYPE_MAX_VALUE (gnu_scalar_type)
= gnat_to_gnu (Type_High_Bound (gnat_entity));

This is wrong (as discussed before) and is likely the cause of PR21573
(not VRP-related, the expanders for SWITCH_EXPR look at these
attributes, too).  I'm not sure if it is safe to delete these
assignment statements because TYPE_MIN_VALUE/TYPE_MAX_VALUE are used
quite extensively throughout GIGI.


Re: PR 23046. Folding predicates involving TYPE_MAX_VALUE/TYPE_MIN_VALUE (Ada RFC)

2005-08-05 Thread Florian Weimer
* Richard Kenner:

>  This is wrong (as discussed before) and is likely the cause of PR21573
>  (not VRP-related, the expanders for SWITCH_EXPR look at these
>  attributes, too).  I'm not sure if it is safe to delete these
>  assignment statmeents because TYPE_MIN_VALUE/TYPE_MAX_VALUE are used
>  quite extensively throught GIGI.
>
> Well, what *should* they be set to?  That is indeed setting them to the
> minimum and maximum values as defined by the language.

No, the language (or, more precisely, GNAT) defines them as 0 and
2**size - 1.  Otherwise the 'Valid attribute doesn't work.  Necessary
range checks will be optimized away, too.


Re: PR 23046. Folding predicates involving TYPE_MAX_VALUE/TYPE_MIN_VALUE (Ada RFC)

2005-08-05 Thread Florian Weimer
* Richard Kenner:

>  No, the language (or, more precisely, GNAT) defines them as 0 and
>  2**size - 1.  Otherwise the 'Valid attribute doesn't work.  Necessary
>  range checks will be optimized away, too.
>
> No, enumeration types are defined as having precisely the set of
> values specifically listed.

This is simply not true for Ada.  Look at the definition of the 'Valid
attribute in the standard:

  3. X'Valid

  Yields True if and only if the object denoted by X is normal
  and has a valid representation. The value of this attribute
  is of the predefined type Boolean.

If your claim were true, 'Valid could never return False for
enumeration types.


Re: PR 23046. Folding predicates involving TYPE_MAX_VALUE/TYPE_MIN_VALUE (Ada RFC)

2005-08-05 Thread Florian Weimer
* Richard Henderson:

> On Fri, Aug 05, 2005 at 10:15:04PM +0200, Florian Weimer wrote:
>>   TYPE_MIN_VALUE (gnu_scalar_type)
>>  = gnat_to_gnu (Type_Low_Bound (gnat_entity));
>>   TYPE_MAX_VALUE (gnu_scalar_type)
>>  = gnat_to_gnu (Type_High_Bound (gnat_entity));
>> 
>> This is wrong (as discussed before) and is likely the cause of PR21573
>> (not VRP-related, the expanders for SWITCH_EXPR look at these
>> attributes, too).  I'm not sure if it is safe to delete these
>> assignment statmeents because TYPE_MIN_VALUE/TYPE_MAX_VALUE are used
>> quite extensively throught GIGI.
>
> Well, perhaps yes, perhaps no.  What I don't know is if it is
> actively illegal to assign 0 to an enumeration that doesn't
> contain 0 as a member.

Illegal from which viewpoint?  Language definition or GCC optimizers?

> It's clear that if it is in fact illegal, that the Ada front
> end has to use some other type than the enumeration to validate
> the values going into the enumeration.

In the Ada case, all the necessary compile-time checks should be
performed by the front end anyway.


Re: PR 23046. Folding predicates involving TYPE_MAX_VALUE/TYPE_MIN_VALUE (Ada RFC)

2005-08-05 Thread Florian Weimer
* Richard Kenner:

>  This is simply not true for Ada.  Look at the definition of the 'Valid
>  attribute in the standard:
>
>3. X'Valid
>
>   Yields True if and only if the object denoted by X is normal
>   and has a valid representation. The value of this attribute
>   is of the predefined type Boolean.
>
> Right.  That says what a "valid representation" is.  Except for the
> result of an unchecked_conversion being given as the operand of 'Valid,
> any other value in that type is erroneous.

Both ARM 13.9.1 and the GNAT User Guide (in Section 3.2.4 Validity
Checking) require that such reads are NOT erroneous.


Re: PR 23046. Folding predicates involving TYPE_MAX_VALUE/TYPE_MIN_VALUE (Ada RFC)

2005-08-12 Thread Florian Weimer
* Richard Kenner:

>  Both ARM 13.9.1 and the GNAT User Guide (in Section 3.2.4 Validity
>  Checking) require that such reads are NOT erroneous.
>
> It depends what "such reads" mean.  13.9.1(12) clearly says that the
> result of an Unchecked_Conversion is erroneous if it isn't a valid
> representation.  There are some cases, however, where an out-of-range
> value is a bounded error instead of being erroneous.
>
> However, note 20 (13.9.2) says that 'Valid is not considered a "read"
> and hence its use is not erroneous.

I'm sorry for my rude discussion style.  I was a bit frustrated
because of some unrelated matters.

I think the GNAT documentation makes additional guarantees.  If you
think this is wrong, the documentation can be fixed, of course.  In
addition, the first example in PR21573 follows your advice and applies
'Valid to the result of an instantiation of Ada.Unchecked_Conversion.
This still doesn't work.

If this still doesn't convince you, here's an example which doesn't
use Ada.Unchecked_Conversion at all.

--  Another test case for PR21573.  Note that if PR23354 is fixed and
--  X is initialized to a different value, this test case might no
--  longer check the same bug (but it should still print SUCCESS).
--  (The Bug3_P package is necessary to prevent compile-time
--  evaluation.)

pragma Normalize_Scalars;

with Bug3_P; use Bug3_P;

procedure Bug3 is

   X : U;
   --  The subtype causes X to be initialized with 0, according to the 
   --  current Normalize_Scalars rules.

begin
   Test (X);
end Bug3;

with Ada.Text_IO; use Ada.Text_IO;

package Bug3_P is

   type T is (A, B, C, D);
   for T'Size use 8;
   for T use (A => 2, B => 3, C => 5, D => 7);

   subtype U is T range B .. D;

   procedure Test (X : T);

end Bug3_P;

package body Bug3_P is

   procedure Test (X : T) is
   begin
      --  Check with a debugger that X is zero at this point.
      if X'Valid then
         Put_Line ("FAIL");
      else
         Put_Line ("SUCCESS");
      end if;
   end Test;
end Bug3_P;


Re: PR 23046. Folding predicates involving TYPE_MAX_VALUE/TYPE_MIN_VALUE (Ada RFC)

2005-08-12 Thread Florian Weimer
* Robert Dewar:

> Florian Weimer wrote:
>> If this still doesn't convince you, here's an example which doesn't
>> use Ada.Unchecked_Conversion at all.
>
> this example must print Success, that is guaranteed by the RM

Yes, I think so.

What about the first one in PR21573?  IMHO, the GNAT Reference Manual
makes a guarantee that it prints SUCCESS, too, but I could be
misreading the documentation.

> it is definitely critical that 'Valid not make "in-range"
> assumptions.

> the actual problem is optimization of this
> routine presumably:

>   function bug3_p__tRP (A : bug3_p__t; F : boolean) return integer is

Indeed.  In this case, bug3_p__t has TYPE_MIN_VALUE and TYPE_MAX_VALUE
set according to T'First'Enum_Rep and T'Last'Enum_Rep.  Even without
VRP, add_case_node and node_has_high_bound in stmt.c check these
attributes and use them in optimizations.

> the unchecked conversion to unsigned must prevent any optimization.
> the optimizer must not be able to "see through" an unchecked conversion!

I don't think we currently have a convenient way to express such an
optimization barrier in the tree language.

I fear that such barriers are also needed for all checks on scalars,
by the way, not just 'Valid.


Re: PR 23046. Folding predicates involving TYPE_MAX_VALUE/TYPE_MIN_VALUE (Ada RFC)

2005-08-12 Thread Florian Weimer
* Robert Dewar:

> Florian Weimer wrote:
>
>> I fear that such barriers are also needed for all checks on scalars,
>> by the way, not just 'Valid.
>
> indded, and we do unchecked conversions to the base type in these
> cases. i guess we could fix the enum case by using unsigned as the
> arg type, but that would not help the general case.

It would also fail when the 'Valid code is inlined and the tree
optimizers propagate the range information present in the enumeration
type.

> why can't we just completely turn off this optimization for Ada
> since it is wrong!

If I read the documentation correctly, you have to set
TYPE_MAX_VALUE/TYPE_MIN_VALUE to the values for the base type.
However, GIGI uses these attributes for other things besides
passing data to the middle end, so it's not an easy change.


Re: PR 23046. Folding predicates involving TYPE_MAX_VALUE/TYPE_MIN_VALUE (Ada RFC)

2005-08-12 Thread Florian Weimer
* Richard Kenner:

> If this still doesn't convince you, here's an example which doesn't
> use Ada.Unchecked_Conversion at all.
>
> Well sure, reading an uninitialized value is erroneous except for the use
> of 'Valid.

No, it's not, as Ada is not C.  And please note the presence of pragma
Normalize_Scalars.

> I'm not saying that things aren't broken, just being very careful in the
> definition of what a "valid" value in an object is.

To be honest, I think your definitions don't match what is described
in the ARM and the GNAT RM.

> The point is that these values are not "valid" (which is why 'Valid
> returns FALSE) and that the compiler (specifically VRP)

This is not a VRP bug.  It also happens with GCC 4.0.1.


Re: PR 23046. Folding predicates involving TYPE_MAX_VALUE/TYPE_MIN_VALUE (Ada RFC)

2005-08-12 Thread Florian Weimer
* Richard Kenner:

> > Well sure, reading an uninitialized value is erroneous except for the 
> use
> > of 'Valid.
>
> No, it's not, as Ada is not C.  
>
> What's "not"?  My statement is based on the Ada RM.

Quote from section 13.9.1 follows.  Note the "but does not by itself
lead to erroneous or unpredictable execution" part.

 Bounded (Run-Time) Errors

  9. If the representation of a scalar object does not represent a
 value of the object's subtype (perhaps because the object was not
 initialized), the object is said to have an invalid
 representation. It is a bounded error to evaluate the value of
 such an object. If the error is detected, either Constraint_Error
 or Program_Error is raised. Otherwise, execution continues using
 the invalid representation. The rules of the language outside this
 subclause assume that all objects have valid representations. The
 semantics of operations on invalid representations are as follows:

  10. If the representation of the object represents a value of the
  object's type, the value of the type is used.

  11. If the representation of the object does not represent a
  value of the object's type, the semantics of operations on
  such representations is implementation-defined, but does not
  by itself lead to erroneous or unpredictable execution, or to
  other objects becoming abnormal.

> And please note the presence of pragma Normalize_Scalars.
>
> That doesn't affect validity or erroneousness.

It affects predictability.  As Robert wrote, the test case must print
SUCCESS.


Re: PR 23046. Folding predicates involving TYPE_MAX_VALUE/TYPE_MIN_VALUE (Ada RFC)

2005-08-12 Thread Florian Weimer
* Laurent GUERBY:

> An implementation model could be for the front-end to generate for each
> family of scalar type T a function Base_Type_Internal_Valid (X, Min,
> Max : in Base_Type_Of_T) return Boolean, generate a call to it at all
> 'Valid uses and then tell the compiler to never do any inlining at all
> on such generated function.

I think you could have a similar effect with an empty machine code
insertion.  But it's still a kludge.


Re: [PATCH]: Proof-of-concept for dynamic format checking

2005-08-17 Thread Florian Weimer
* Ian Lance Taylor:

> I haven't tried to flesh this out any further.  I'd be curious to hear
> how people react to it.

Can't we just use some inline function written in plain C to check the
arguments and execute it at compile time using constant folding etc.?


Re: [PATCH]: Proof-of-concept for dynamic format checking

2005-08-17 Thread Florian Weimer
* Ian Lance Taylor:

> Florian Weimer <[EMAIL PROTECTED]> writes:
>
>> * Ian Lance Taylor:
>> 
>> > I haven't tried to flesh this out any further.  I'd be curious to hear
>> > how people react to it.
>> 
>> Can't we just use some inline function written in plain C to check the
>> arguments and execute it at compile time using constant folding etc.?
>
> I don't really see how that could work and still do what we want it to
> do.  Could you give an example of what it would look like?

If I understand your %A/%B example correctly, it would look like this:

/* FORMAT is the complete format string, POS the offset of the current %
   directive.  Returns a C type specifier as a string.  NULL means: do not
   consume any argument.  */
static inline const char *
printf_checker_bfd (const char *format, size_t pos)
{
  if (strncmp (format + pos, "%A", 2) == 0)
    {
      if (pos != 0)
        {
          __builtin_warn ("`%A' must occur at the start of the format string");
          return "void *"; // accept anything
        }
      return "asection *";
    }
  if (strncmp (format + pos, "%B", 2) == 0)
    {
      if (pos != 0)
        {
          __builtin_warn ("`%B' must occur at the start of the format string");
          return "void *"; // accept anything
        }
      return "bfd *";
    }
  return __builtin_printf_checker (format, pos); // handle printf format string
}

#pragma GCC format "bfd" "invoke printf_checker_bfd"

The interface still needs some polishing; it might be desirable to be
able to pass along some kind of flag.  Perhaps it's more obvious to
express the scanning loop in the checking code and explicitly compare
the type using some builtin, but this is probably even more
challenging for the optimizers.


Re: [PATCH]: Proof-of-concept for dynamic format checking

2005-08-17 Thread Florian Weimer
* Ian Lance Taylor:

> Florian Weimer <[EMAIL PROTECTED]> writes:
>
>> If I understand your %A/%B example correctly, it would look like this:
>
> OK, I can see how that might work in a simple case.  Now, can you give
> me an example of matching %d with the various flags?  In particular,
> are you going to write a loop, and is gcc going to somehow fully
> unroll that loop at compile time?

This is indeed a problem (with GCC 4.0 at least).  A regexp builtin
which returns the length of the matched string could probably solve
this.  Managing state so that you can still compose multiple checkers
is the harder part, I think.


Re: [PATCH]: Proof-of-concept for dynamic format checking

2005-08-18 Thread Florian Weimer
* Giovanni Bajo:

> Do we have a sane way to (partially) execute optimizers at -O0
> without screwing up with the pass manager too much?

Do we have to provide user-defined format string warnings at -O0?


Re: [PATCH]: Proof-of-concept for dynamic format checking

2005-08-18 Thread Florian Weimer
* Dave Korn:

>   PMFBI, but how is all this going to work on a cross compiler?

Constant folding works in a cross-compiler, too. 8-)


Re: please update the gcj main page

2005-08-23 Thread Florian Weimer
* Gerald Pfeifer:

> On Sun, 31 Jul 2005, Daniel Berlin wrote:
>> For code.
>> I have never seen such claims made for documentation, since it's much
>> easier to remove and deal with infringing docs than code.
>
> I have seen such statements, by RMS himself.

The official position might have changed (e.g. copyright assignments
and documentation).


Re: [GCC 4.x][AMD64 ABI] variadic function

2005-08-23 Thread Florian Weimer
* Matteo Emanuele:

> Is it possible to  find the register save area and the
> overflowing arguments within the called function
> without using %ebp (that means with
> -fomit-frame-pointer set) and knowing nothing of the
> caller?

You mean, if the caller called the function as if it were a
non-variadic function?


Re: 4.2 Project: "@file" support

2005-08-25 Thread Florian Weimer
* Andi Kleen:

> Linux has a similar limit which comes from the OS (normally around 32k) 
> So it would be useful there for extreme cases too.

IIRC, FreeBSD has a rather low limit, too.  And there were discussions
about command line length problems in the GCC build process on VMS.


Re: RFC: bug in combine

2005-08-25 Thread Florian Weimer
* Dale Johannesen:

> The test of f->b comes out as
>
>   testl  $1048512, 73(%eax)
>
> This is wrong, because 4 bytes starting at 73 goes outside the
> original object and can cause a page fault.

sizeof (struct Flags) is 76, so this isn't a bug, IMHO.


Re: RFC: bug in combine

2005-08-25 Thread Florian Weimer
* Florian Weimer:

> * Dale Johannesen:
>
>> The test of f->b comes out as
>>
>>   testl  $1048512, 73(%eax)
>>
>> This is wrong, because 4 bytes starting at 73 goes outside the
>> original object and can cause a page fault.
>
> sizeof (struct Flags) is 76, so this isn't a bug, IMHO.

Scratch that, I obviously cannot count. 8-(

Anyway, why is GCC emitting an unaligned load?  This is very strange
indeed, and fixing this should both increase code quality and
eliminate the page fault problem.


Re: Warning C vs C++

2005-09-18 Thread Florian Weimer
* Tommy Vercetti:

>> The warning is controlled by -Wsign-compare, which is turned on by
>> -Wextra (also known as -W) but not by -Wall.  It's not turned on by
>> -Wall because it is not normally a problem.

> That's strange, all users I know expected it to turn ALL warnings,
> hence name.

Some people claim it's a homage to Larry Wall, inventor of Perl.

Back in the 90s, there used to be a joke among Python developers that
a "uido" debugging format should be added to GCC.


Re: pointer checking run time code

2005-09-18 Thread Florian Weimer
* Robert Dewar:

> shreyas krishnan wrote:
>
>> Ideas, other pointers would be great
>
> Note that of course this kind of check is standard in Ada
> and hence in GNAT, so you can get an idea from GNAT
> generated code how well the backend can eliminate
> such checks (answer: getting better with gcc 4).

Doesn't it eliminate too many checks, even? 8-/

And, to be absolutely honest, Ada only requires a small subset of all
the checks that are required to make pointers completely safe.  Once
you use 'Unchecked_Access, Unchecked_Deallocation, or GNAT's
'Unrestricted_Access, all bets are off.


Re: Undefined behavior in genautomata.c?

2005-09-19 Thread Florian Weimer
* Sebastian Pop:

> By the way, how is this different than detecting a bound on:
>
> {
>   int foo[1335];
>
>   for (i = 0; i < some_param; i++)
> foo[i];
> }
>
> vs.
>
> {
>   some_struct{ int foo[1335];} s;
>
>   for (i = 0; i < some_param; i++)
> s.foo[i];
> }

Nothing.  But in the genautomata case, the struct description is
allocated on the heap, with appropriate room for storing more elements.


Re: Final Subversion testing this week

2005-10-16 Thread Florian Weimer
* Daniel Berlin:

>> Is it okay to make an unreviewed test commit?

> Uh, commit all you want.

Permissions don't seem to be set correctly:

Sending        ChangeLog
Sending        libgcc2.h
Transmitting file data ..
svn: Commit failed (details follow):
svn: Can't create directory '/svn/gcc/db/transactions/105361-1.txn': Permission denied

(My user account is "fw", in case this matters.)


Re: error: forward declaration of `struct bit::bitObject'

2005-10-17 Thread Florian Weimer
* Roel Bindels:

> I posted this question on the GCC-help list but maybe someone here
> can give me some advice on how to proceed also.

The advice you'll get here is exactly the same: post a small example
which reproduces the error message which troubles you.

(Please continue the discussion on gcc-help; that list is the correct
one.)


Re: Moving to subversion?

2005-10-17 Thread Florian Weimer
* Steve Kargl:

>> Uh, since it appears you are logged in with a different name, you want
>> svn co svn+ssh://[EMAIL PROTECTED]/svn/gcc/trunk
>
> Odd, I don't need to do anything special with cvs.

Once you've checked out a tree, CVS stores the remote user in the
CVS/Root file.  Maybe you have only forgotten how you created that
tree?  Pretty easy if you switch branches using cp & cvs update.


Re: using multiple trees with subversion

2005-10-19 Thread Florian Weimer
* François-Xavier Coudert:

> Having 5 subversion trees will need much more space (for local
> pristine copies), which I don't really have. Is there any way to force
> subversion use one pristine tree for all modified trees, or is my way
> of handling things completely rotten?

You could try svk; it doesn't keep pristine copies.


Re: RFC: future gfortran development and subversion

2005-10-20 Thread Florian Weimer
* Daniel Berlin:

> You could simply do non-recursive checkouts (svn co -N) of the dirs you
> want.
> SVN doesn't care how you piece together the working copy.

Doesn't "commit -N" cause the working copy to become fragmented, so
that you cannot issue a working-copy-wide commit or diff anymore?


Re: RFC: IPO optimization framework for GCC

2005-10-23 Thread Florian Weimer
* Sebastian Pop:

> Steve Ellcey wrote:
>> 
>> In the meantime I would be interested in any opinions people have on
>> what level we should be writing things out at.  Generic?  Gimple?  RTL?
>
> Or just dumping plain C code?  This is almost what the pretty printers
> are doing, and the way back to the compiler is already there ;-)

Some front ends generate trees which cannot be generated by any C
program.  It might be possible to add some extensions (which would
also help to come up with C test cases for bugs which are currently
exposed by non-C front ends only), and this might even ease concerns
that an ISO C backend might make it possible to use GCC as a library.

But I think it's still better to use some binary serialized IL, just
to discourage external reuse and make clear that it's an internal
format, subject to frequent changes.


Re: backslash whitespace newline

2005-10-24 Thread Florian Weimer
* Mike Stump:

> On Oct 24, 2005, at 5:52 PM, Vincent Lefevre wrote:
>> But then, copy-paste would no longer always work since spaces are
>> sometimes added at the end of some lines (depending on the terminal
>> and the context).
>
> Please name such systems.  We can then know to not use them, and can  
> document in the manual they are broken if we wish.

Emacs in an xterm, from time to time.  I don't know if this is fixed
in Emacs 22, though.


Re: Question on proposed use of "svn switch"

2005-10-25 Thread Florian Weimer
* Richard Kenner:

> Here's what I need to do and I welcome suggestions: one of the working
> directories I have is the FSF GCC repository (from HEAD), but the
> gcc/ada subdirectory is the AdaCore repository.  For cvs, what I do in
> gcc/CVS/Entries is delete the line for "ada" and then checkout the AdaCore
> repository into there.  What's the way to do this in svn?

Just apply the "svn switch" trick, and check out that single directory
from the AdaCore CVS tree?  (Assuming that AdaCore continues to use
CVS.)

If this results in unwanted output from "svn status", we probably
should set the svn:ignore property on the dummy directory
svn+ssh://gcc.gnu.org/svn/gcc/emptydir to "*".


Re: backslash whitespace newline

2005-10-25 Thread Florian Weimer
* Mike Stump:

> On Oct 24, 2005, at 10:39 PM, Florian Weimer wrote:
>> Emacs in an xterm, from time to time.
>
> Yeah, I knew about that one, cutting and pasting from any full screen  
> program running in a terminal emulator tends to be wrong.  Tab  
> characters are usually the first casualties, along with long  
> lines.  :-(  I'd propose to encourage people submit bug reports  
> against xterm/emacs/terminfo/termcap/curses if they care much about  
> the issue.  I think it might be possible to improve the situation a lot.

There's a mitigating factor: if you paste it into another Emacs in an
xterm of the same width, all lines are wrapped.  This provides a
strong incentive to remove at least some of the trailing whitespace.


Re: Revisiting generalized lvalues

2005-10-29 Thread Florian Weimer
* Michael Krasnik:

> #ifdef  PRODUCTION
> #define X_ABC(x)   ( check( x ), x->abc )
> #else
> #define X_ABC(x)x->abc
> #endif
>
> which expands
>
> X_ABC(x) = y;
>
> to:
>
> ( check( x ), x->abc ) = y;

> Eliminating this construct makes macros much less flexible
> and requires much more work for creating self-verifying
> frameworks, which is a big issue for small companies with
> large codebase.

There seems to be a trivial fix: modify the check function to return
its argument.


Re: Update on GCC moving to svn

2005-10-31 Thread Florian Weimer
* Joe Buck:

> Well, maybe.  But what about a revision that modifies code and that
> also modifies the WWW to describe the code modification?  If everything
> were in the same subversion repository, it could be one change.

Only if you check out a common parent directory, which is probably not
a common configuration.  (At least that's my experience with previous
Subversion releases, don't know the current situation.)


Re: GPL question

2005-10-31 Thread Florian Weimer
* dfhgjwetgtry:

> If I compile source code using GCC, that does not require me to
> open-source the resulting program under the GPL, correct?

Compiling a program with GCC does not by itself cause the resulting
executable to be covered by the GNU General Public License.  This does
not however invalidate any other reasons why the executable file might
be covered by the GNU General Public License.


Re: insufficient inline optimisation?

2005-11-01 Thread Florian Weimer
* Steven Bosscher:

>> I think that the optimiser should get rid of the loop once it has got
>> rid of
>> the body!

> I don't think so.  This kind of thing is optimized away by gcc 4.1
> already.

Shouldn't this be listed in the changes.html file?

