Re: Question for removing trailing whitespaces (not vertical tab) from source

2007-03-13 Thread Pedro Alves

Andrew Haley wrote:

 > I think removing trailing whitespace would be OK,

Please don't check it in, though!  That would be really bad.  It would
do horrible things to diffs, particularly between branches.

  


Not if you run the script on all active branches.  Not to say it is 
worth it, though.


Cheers,
Pedro Alves




Re: Variable scope debug info

2007-04-06 Thread Pedro Alves

Joe Buck wrote:


It might be worth doing.  I think that, in addition to a patch,
I'd like to see measurements (maybe just the size increase in
libstdc++.{a,so}).  If the cost is small, I will not object.



If the cost turns out non-small, this could be enabled at -g3?

Cheers,
Pedro Alves




Re: gnu-gabi group

2016-02-12 Thread Pedro Alves
On 02/11/2016 06:20 PM, Mark Wielaard wrote:

> If we could ask overseers to setup a new group/list gnu-gabi on sourceware
> where binutils, gcc, gdb, glibc and other interested parties could join
> to maintain these extensions and ask for clarifications that would be
> wonderful. I am not a big fan of google groups mailinglists, they seem
> to make it hard to subscribe and don't have easy to access archives.
> Having a local gnu-gabi group on sourceware.org would be better IMHO.

+1

-- 
Pedro Alves



Re: gnu-gabi group

2016-02-15 Thread Pedro Alves
On 02/15/2016 08:17 PM, Florian Weimer wrote:
> * Frank Ch. Eigler:

>> Done [1] [2].  If y'all need a wiki too, just ask.
>>
>> [1] gnu-g...@sourceware.org
>> [2] https://sourceware.org/ml/gnu-gabi/
> 
> And to subscribe, send mail to .
> Somehow, this is missing on the web page above.
> 

One can also subscribe using the form at:

  https://sourceware.org/lists.html

(I used that.)

BTW, is the intention for the mailing lists list on that page to be
fully comprehensive?  Off the top of my head, I notice that at least
the infinity@ list is missing, as well as this new list.

Thanks,
Pedro Alves



Re: GCC GSOC 2016

2016-03-03 Thread Pedro Alves

On 03/03/2016 10:32 AM, Manuel López-Ibáñez wrote:



[*] Projects I would be willing to mentor:




* Revive the gdb compile project
(https://sourceware.org/gdb/wiki/GCCCompileAndExecute), which seems dead.


This one's very much alive, actually.  I've added a few folks who are
working on it (Keith, Alex, Phil), who could perhaps fill us in on
status.  It'd be great if more people got engaged in the project,
of course!

Thanks,
Pedro Alves



Re: Should we import gnulib under gcc/ or at the top-level like libiberty?

2016-06-23 Thread Pedro Alves
On 06/22/2016 07:17 PM, ayush goel wrote:
> 
> Hi, I am working on importing gnulib library inside the gcc tree.
> Should the library be imported in the top level directory along with
> other libraries (like libiberty, libatomic, liboffloadmic etc), or
> should it be imported inside gcc/ like it is done in the binutils-gdb
> tree. There they have a gnulib directory inside gdb/ in the top level
> directory.

I think that top level is better.

Let me touch a bit on why gdb doesn't put it at top level.
Follow the URL further below for more.

The way gdb's gnulib import is set up nowadays, we have a single
gnulib copy that is used by both gdb and gdbserver (two separate programs).

gdb imports gnulib in a way that makes it a separate library, configured
separately from the programs that use it (gdb and gdbserver), which is 
unlike the usual way gnulib is imported, but I think it's the
right thing to do.

gdb doesn't put that gnulib wrapper library at the top level, mainly
just because of history -- we didn't always have that wrapper
library -- and the fact that gdb/gdbserver/ itself is not at top
level either, even though it would be better moved to top level.

See this long email, explaining how the current gdb's gnulib import
is set up:

 https://sourceware.org/ml/gdb-patches/2012-04/msg00426.html

I suggest gcc reuses the whole of gdb's wrapper library and scripts:

 
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=tree;f=gdb/gnulib;h=cdf326774716ae427dc4fb47c9a410fcdf715563;hb=HEAD

... but put it in the top level instead.

A side effect of putting gnulib in a separately configured directory is
that you end up with a config.h in the gnulib build directory that needs
to be included by the programs that make use of gnulib.  Meaning,
gdb #includes _two_ "config.h" files.  See gdb/common/common-defs.h.
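
The pattern can be sketched as follows (modeled loosely on gdb/common/common-defs.h; the paths and guard layout here are illustrative, not the actual file contents):

```cpp
/* Sketch of the double-include pattern described above.  The program's
   own configure-generated config.h is included first...  */
#include "config.h"

/* ...and the separately configured gnulib has its own config.h, with a
   different include guard, so both can be included in one unit.  */
#include "gnulib/config.h"
```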

Thanks,
Pedro Alves


Re: Should we import gnulib under gcc/ or at the top-level like libiberty?

2016-06-23 Thread Pedro Alves
On 06/23/2016 03:54 PM, Szabolcs Nagy wrote:

> if both gcc and binutils used a toplevel gnulib directory
> then shared tree build would have the same problem as
> libiberty has now: gcc and binutils can depend on different
> versions of libiberty and then the build can fail.
> as far as i know the shared tree build is the only way to
> build a toolchain without install (using in tree binutils)
> and it would be nice to fix that use case.

Sharing/not-sharing vs top-level-or-not are orthogonal issues.

As you note, combined-tree conflicts are not a new issue introduced
by a potentially shared top-level gnulib.  One way to sort that out,
which immediately crosses my mind, would be to give gcc's and binutils'
copies different top-level directory names, say libiberty-gcc
and libiberty-binutils or some such.  You'd need to do the same
to shared files in the include/ directory, and probably others
that I'm not recalling off hand.  So if we wanted to aim for that, we
could call the new toplevel directory gnulib-gcc or some such.

IMO, combining gcc and binutils trees of different enough vintage
can't be guaranteed to work, so my suggestion is to treat that
as "it hurts when I do this; then don't do that".

Frankly, the idea of sharing a single gnulib import between
gcc and binutils-gdb scares me a bit, because on the gdb side
we're used to caring only about making sure gdb works when we
need to think about importing a new module.  Not that it
happens all that often though; maybe once a year.

But on the other hand, the idea of maintaining multiple gnulib
copies isn't that appealing either.  The long-term desired result
is a libiberty that is no longer a portability library, but only a
utilities library.  To get to that stage, the other programs in the
binutils-gdb repo that rely on libiberty too (binutils proper, gas,
ld, gold, etc.) need to be converted to use gnulib as well.  At that
point a single gnulib sounds even more appealing.

In any case, we don't really _need_ to consider sharing right now.
gcc can start slow, and import and convert to use gnulib modules 
incrementally, instead of having it import all the modules
gdb is importing from the get go.

Thanks,
Pedro Alves



Re: Weird behaviour with --target_board="unix{var1,var2}"

2016-08-22 Thread Pedro Alves
On 08/22/2016 03:40 PM, Jonathan Wakely wrote:

> What's going on?!
> 
> Have I fundamentally misunderstood something about how RUNTESTFLAGS or
> effective-target keywords work?
> 

Here's a wild guess.

In gdb's testsuite, I've seen odd problems like these being caused by some
tcl global getting set on the first run using one board; then the
second run using the second board finds the variable already set and
skips something.  E.g.:

  https://sourceware.org/ml/gdb-patches/2015-04/msg00261.html

I'd look at the code that decides whether to run a test in a particular
mode with the above in mind.  Given that reordering the boards makes
a difference, it kind of sounds like some "last checked mode" or some
such variable is bleeding between runs.

Thanks,
Pedro Alves



Re: Weird behaviour with --target_board="unix{var1,var2}"

2016-08-23 Thread Pedro Alves
On 08/23/2016 10:54 AM, Jonathan Wakely wrote:

>> That's being set by prettyprinters.exp and xmethods.exp (so it's GDB's
>> fault! ;-) 

:-)

> This seems to work. I'll do some more testing and commit later today.

LGTM.

Though IME, saving/restoring globals by hand is a constant source of
trouble, for occasionally someone adds an early return that
inadvertently skips the restore.  Of course in gdb every test must be
written using custom .exp code, so we're more prone to being bitten.
Still...

The way we solve that on gdb systematically is with convenience
wrappers that handle the save/restore.  E.g. see:

 $ grep "proc with_" *
 gdb.exp:proc with_test_prefix { prefix body } {
 gdb.exp:proc with_gdb_prompt { prompt body } {
 gdb.exp:proc with_target_charset { target_charset body } {
 gdb.exp:proc with_spawn_id { spawn_id body } {
 gdb.exp:proc with_timeout_factor { factor body } {

In this particular case, I'd add a wrapper method to
libstdc++-v3/testsuite/lib/gdb-test.exp, such as:

# Like dg-runtest but keep the .exe around.  dg-test has an option for
# this but there is no way to pass it through dg-runtest.

proc gdb-dg-runtest {args} {
  global dg-interpreter-batch-mode
  set saved-dg-interpreter-batch-mode ${dg-interpreter-batch-mode}  
  set dg-interpreter-batch-mode 1

  eval dg-runtest $args

  set dg-interpreter-batch-mode ${saved-dg-interpreter-batch-mode}
}

And then use gdb-dg-runtest instead of dg-runtest in 
prettyprinters.exp and xmethods.exp.

(Maybe even move more of the duplicate code around the
current dg-runtest calls into the wrapper, and then give the
wrapper named arguments.)



Re: style convention: /*foo_p=*/ to annotate bool arguments

2016-10-05 Thread Pedro Alves
On 10/04/2016 11:41 AM, Jonathan Wakely wrote:
> 
> IMHO even better is to not use bool and define an enumeration type, so
> the call site has something unambiguous like foo (1, 2, yes_bar) or
> foo (1, 2, no_bar).

Whole-heartedly agreed.  A quite recent example: in gdb-land, a
patch was proposing:

+  observer_notify_user_selected_thread_frame (1, 1);

(1 is true/boolean, this was a C-compatible patch.)

while the final landed version changed to:

+  observer_notify_user_selected_context_changed (USER_SELECTED_THREAD
+| USER_SELECTED_FRAME);

At least the original version provided a hint of what each
parameter meant in the function's name ("thread", "frame").  But
USER_SELECTED_INFERIOR will be added quite soon, and other
"user-selected" context scopes will likely be added later too.
With the boolean approach, we'd either end up with an increasingly long
function name:

 observer_notify_user_selected_inferior_thread_frame_what_not (1, 1, 1, 0, 1);

at which point we'd likely remove the "parameter self-description" from the
function name, and end up with _really_ unclear call sites:

 observer_notify_user_selected_context (1, 1, 1, 0, 1);

/*foo_p=*/ -style comments would help clarify the intent at the call
sites, but then the compiler won't catch mistakes for you.  Enums are
clearly superior, at least compared to multiple boolean parameters, IMO.

Thanks,
Pedro Alves



Re: style convention: /*foo_p=*/ to annotate bool arguments

2016-10-05 Thread Pedro Alves
On 10/05/2016 05:12 PM, Jeff Law wrote:
> On 10/04/2016 03:08 PM, Jason Merrill wrote:
>> On Tue, Oct 4, 2016 at 4:29 PM, Zan Lynx  wrote:
>>> On 10/04/2016 02:00 PM, Martin Sebor wrote:
 This would have been easier if C++ had allowed the same default
 value to
 be given in both the declaration and the definition:

 void foo(int x, int y, bool bar_p = false);

 void foo(int x, int y, bool bar_p = false)
 {
 }
>>>
>>> There is really no point to duplicating it. The default value goes into
>>> the headers which is what is read by users of the code.
>>
>> In GCC sources, I think users look at the function definition more
>> often than the declaration in the header, the latter of which
>> typically has neither comments nor parameter names.
> So true.  One could claim that our coding standards made a fundamental
> mistake -- by having all the API documentation at the implementation
> rather than at the declaration.  Sigh

It's never too late.  GDB used to be like that, and a few
years ago we decided to start putting comments in the headers instead.
We haven't done a mass conversion; new extern functions simply
get documented at the declaration instead of the implementation, and we
also generally move the comment when making a static function
extern.  Over time, old code is moved around or rewritten, and
comments migrate to API headers.  I think that has been a good idea.
Particularly more so for headers defining some module's public API.



Re: using C++ STL containers in GCC/gfortran source code

2016-12-16 Thread Pedro Alves
On 12/16/2016 05:33 PM, Janus Weil wrote:

> "You would need to make sure it uses a xmalloc based allocator first
> or at least calls xmalloc_failed upon allocation failure, otherwise it
> will be a serious regression."
> 
> I'm really not an expert on GCC's memory management principles and how
> it uses xmalloc over malloc. I'd love to hear further comments on the
> above sentence (e.g. whether that is really necessary, and if yes, how
> to accomplish it). 

I gave a suggestion in the PR.

Basically, you can replace the global operator new to call xmalloc
instead of malloc.  See the GDB url in the PR for an example.

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=78822#c19

> And in particular: How do the current uses of
> std::string in GCC deal with this problem? (Do they?)

Doesn't look like they do.

Thanks,
Pedro Alves



Re: using C++ STL containers in GCC/gfortran source code

2016-12-16 Thread Pedro Alves
On 12/16/2016 06:31 PM, Janus Weil wrote:
> 2016-12-16 18:53 GMT+01:00 Pedro Alves :
>> On 12/16/2016 05:33 PM, Janus Weil wrote:

>>> And in particular: How do the current uses of
>>> std::string in GCC deal with this problem? (Do they?)
>>
>> Doesn't look like they do.
> 
> Huh, that's a problem then, isn't it?

Right.  The easiest way to trigger it, I think, is if something
computes some size incorrectly and calls for example string::reserve(-1)
or string::resize(-1) by mistake (likewise for std::vector, etc.).
malloc will fail, new will throw bad_alloc, and GCC will abort and
maybe generate a core dump, instead of gracefully printing
something like:

   cc1: out of memory allocating NNNN bytes ...

and exiting with error status.

Thanks,
Pedro Alves



Re: using C++ STL containers in GCC/gfortran source code

2016-12-16 Thread Pedro Alves
On 12/16/2016 06:04 PM, Jakub Jelinek wrote:
> On Fri, Dec 16, 2016 at 06:55:12PM +0100, Janus Weil wrote:
>> To get to more specific questions ...
>>
>>> Basically the only STL construct used in the Fortran FE right now
>>> seems to be std::swap, and a single instance of std::map in
>>> trans-common.c.
>>
>> I see that fortran/trans-common.c has:
>>
>> #define INCLUDE_MAP
>>
>> and apparently there is also an INCLUDE_STRING macro. I guess if I want
>> to use std::string I don't #include <string>, but #define
>> INCLUDE_STRING, right? Why are those macros needed, exactly?
> 
> They are needed because system.h poisons lots of things, including malloc
> etc.  So including system headers after system.h is problematic.

IMO, GCC's poison (or a variant) should ignore system headers.  There's
nothing one can do with those.  It's _uses_ in one's code that generally
one wants to prevent with the poisoning.

> 
> That said, using std::string for what you talk in the PR would make it
> impossible to translate it, if you build a sentence as:
>   ss << "Argument " << something () << " and '" << something_else () << "'";
> then our framework can't deal with that, translating portions of a sentence
> is not going to be useful for many languages.
> Using *printf or similar formatting strings allows the translator to see
> the whole sentence with arguments, and e.g. when needed can swap
> some arguments using %4$s syntax etc.

The problem is not std::string here, but the stream operators.
And I agree.

GDB has a string_printf function that prints into a std::string, for
example.  Like:

  std::string hello = string_printf ("%s", "hello world");

That's a function that many C++ projects reinvent.



Re: using C++ STL containers in GCC/gfortran source code

2016-12-16 Thread Pedro Alves
On 12/16/2016 06:56 PM, Jakub Jelinek wrote:
> On Fri, Dec 16, 2016 at 06:52:03PM +0000, Pedro Alves wrote:
>> GDB has a string_printf function that prints into a std::string, for
>> example.  Like:
>>
>>   std::string hello = string_printf ("%s", "hello world");
>>
>> That's a function that many C++ projects reinvent.
> 
> If you then want to work with it as std::string object, sure, it makes
> sense.  But if all you want is to pass the string to %s of another
> formatting function and free it, then going through std::string
> doesn't add many benefits over just xasprintf + free.

It has all the usual advantages of RAII.  
It completely eliminates the "forgot to call free in this
exit path" bug by design.

And then there's exception safety, in case something throws
before you reach the "+ free".

(I know GCC doesn't use exceptions; GDB does.  But do note
https://gcc.gnu.org/codingrationale.html says:

 We would like the compiler to be exception safe, to permit
 reconsideration of the exception convention. This change
 would require a significant change in style,
 adopting "resource acquisition is initialization" (RAII). We would be
 using shared_ptr (from TR1's <memory>) or unique_ptr (from C++11).
)

We've started using RAII objects in GDB in the past couple
months, including C++11 unique_ptr, and that has already
simplified the codebase a good deal, and fixed many leaks
and bugs.

Thanks,
Pedro Alves



Re: using C++ STL containers in GCC/gfortran source code

2016-12-17 Thread Pedro Alves
On 12/17/2016 10:58 AM, Janus Weil wrote:
> 2016-12-16 19:46 GMT+01:00 Pedro Alves :
> So, it seems like it would be a good idea to follow your suggestion
> from PR 78822:
> 
> 
>> You can replace the global operator new/new[] to call xmalloc instead of 
>> malloc.
>> Then memory allocation by std::string etc. ends up in xmalloc -> 
>> xmalloc_failed.
>> That's what I did for GDB:
>>
>> https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=blob;f=gdb/common/new-op.c;h=c67239cbe87c1f7e22e234a1a2cc2920f9ed22a4;hb=HEAD
> 
> I'm certainly not the right person to implement this in GCC, though
> (and I'll probably discard my idea of using std::string for PR 78822
> and follow the alternative implementation from comment 14).
> 
> But I think that, given the amount of STL containers already used in
> GCC, it should definitely be clarified whether this is necessary ...

TBC, STL containers are a red herring here, and a bit orthogonal.

The root issue is that any "new" expression that calls the
global (non-placement-new, non-class-specific) operator new/new[]
in GCC is "unprotected" and inconsistent with xmalloc already,
because the default operator new/new[] calls malloc.

I.e., "new" expressions are already "unprotected" in exactly the
same way as allocations inside STL containers.

Some classes have class-specific 'operator new' implementations (grep
for "operator new"), but seems to me many don't.  E.g., grep
for " = new":

  ...
  auto-profile.c:autofdo_source_profile *map = new autofdo_source_profile ();
  auto-profile.c:  function_instance *s = new function_instance (name, head_count);
  auto-profile.c:  afdo_string_table = new string_table ();
  spellcheck.c:  edit_distance_t *v0 = new edit_distance_t[len_s + 1];
  ...

new[] calls I think are more likely to cause trouble due to run-time variable
size, though non-array new calls can certainly fail too.

You should be able to trigger/see the issue by just hacking some:

   char *p = new char [-1];

somewhere in the compiler.  The compiler will likely crash with something
like:

terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted (core dumped)

... instead of the xmalloc_failed message.

and then compare with a "xmalloc (-1)" call.

TBC, replacing global operator new is perfectly defined/valid C++.
The overloads in question are specified as "replaceable".  See: 

  http://en.cppreference.com/w/cpp/memory/new/operator_new

Thanks,
Pedro Alves



Re: using C++ STL containers in GCC/gfortran source code

2016-12-17 Thread Pedro Alves
On 12/17/2016 12:23 PM, Jonathan Wakely wrote:
> Instead of replacing the global new operator we could write a custom
> allocator that uses xmalloc, and we can have e.g. gcc::string as a
> typedef for std::basic_string<char, std::char_traits<char>,
> gcc::xmallocator<char>>
> 

That doesn't make sense to me, as it leaves the problem
with all "new" expressions that call global op new in
the codebase, like I mentioned?  (And you'd have to do that
for all containers that you'd want to use.)

Am I missing something?

Thanks,
Pedro Alves



Re: using C++ STL containers in GCC/gfortran source code

2016-12-17 Thread Pedro Alves
On 12/17/2016 01:07 PM, Jonathan Wakely wrote:
> On 17 December 2016 at 12:28, Pedro Alves wrote:
>> On 12/17/2016 12:23 PM, Jonathan Wakely wrote:
>>> Instead of replacing the global new operator we could write a custom
>>> allocator that uses xmalloc, and we can have e.g. gcc::string as a
>>> typedef for std::basic_string<char, std::char_traits<char>,
>>> gcc::xmallocator<char>>
>>>
>>
>> That doesn't make sense to me, as it leaves the problem
>> with all "new" expressions that call global op new in
>> the codebase, like I mentioned?  (And you'd have to do that
>> for all containers that you'd want to use.)
>>
>> Am I missing something?
> 
> Nope, it doesn't help that. I don't know if people use new in the GCC
> code base, or if we want them to be able to.

Seems to me they do:

 $ grep " = new " gcc/ -rn | grep -v "testsuite/" | wc -l
 1000

Several of those hit class-specific new, a few placement-new,
but seems to me most will be hitting global new.

Thanks,
Pedro Alves



Re: using C++ STL containers in GCC/gfortran source code

2016-12-17 Thread Pedro Alves
On 12/17/2016 05:24 PM, Jakub Jelinek wrote:
> On Sat, Dec 17, 2016 at 11:17:25AM -0500, Frank Ch. Eigler wrote:
>> Pedro Alves  writes:
>>
>>> [...]
>>> malloc will fail, new will throw bad_alloc, and GCC will abort and
>>> maybe generate a core dump, instead of gracefully printing
>>> something like:
>>>    cc1: out of memory allocating NNNN bytes ...
>>> and exiting with error status.
>>
>> Consider having the main() function catch bad_alloc etc. and print
>> prettier error messages than the builtin uncaught-exception aborter?
> 
> GCC is built with -fno-exceptions -fno-rtti, so no catching...

Right.  Let me expand on that:

 - GCC wants to build with -fno-exceptions, so that can't work.

 - Even if GCC was built with -fexceptions, I'd suspect that there'll
   be GCC code that could throw an exception that would try to cross
   some non-C++/C code that is not built with -fexceptions, via callbacks.
   (Think qsort, but also all the C libraries in the repo).  That problem
   tends to be missed on x86_64 GNU/Linux, since the ABI
   mandates -fexceptions.  But it's a real problem on other systems.
   That's fixable, but if you're not using exceptions for anything else,
   you'll usually only notice it when things are already pretty down south.

 - You'd have to do this in the entry point of all threads.  Not sure GCC
   ever spins threads today, but I'd avoid a design that prevents it,
   because who knows what support and run time libraries do internally.

 - Lastly, even if the technical details above were all resolved,
   I'd still recommend against the top level catch(bad_alloc) approach, since
   by the time the exception is caught, you've lost context of where the
   exception was originally thrown, i.e., where the bad allocation is
   coming from.  The operator new approach allows for example having
   gcc automatically print its own backtrace for bug reports,
   with e.g., glibc's backtrace() or libbacktrace or some such.

Thanks,
Pedro Alves



Re: [PATCH][RFC] Always require a 64bit HWI

2014-05-01 Thread Pedro Alves
On 04/30/2014 05:00 PM, Jeff Law wrote:
> On 04/30/14 02:16, Richard Biener wrote:
>>
>> Testing coverage for non-64bit hwi configs is really low these
>> days (I know of only 32bit hppa-*-* that is still built and
>> tested semi-regularly - Dave, I suppose the host compiler
>> has a 64bit long long type there, right?).
> My recollection is that HP aCC supports long long, but I don't recall 
> when support for that was introduced.  I'm really trying hard to forget 
> hpux-isms.

GCC (in libdecnumber) has been relying on long long / ll existing, and on
it being 64-bits wide, for more than 7 years now, and nobody seems to have
tripped on any host compiler that doesn't support it.

See libdecnumber/bid/bid-dpd.h / bid/bid2dpd_dpd2bid.h.  git blame shows:

10de71e1 (meissner 2007-03-24 17:04:47 +0000 25) typedef unsigned int UINT32;
                                                                  ^^^
10de71e1 (meissner 2007-03-24 17:04:47 +0000 26) typedef unsigned long long UINT64;
                                                                  ^^^^^^^^^
10de71e1 (meissner 2007-03-24 17:04:47 +0000 27) typedef struct { UINT64 w[2]; } UINT128;
...
10de71e1 (meissner 2007-03-24 17:04:47 +0000 28)   { { 0x3b645a1cac083127ull, 0x0083126e978d4fdfull } }, /* 3 extra digits */
10de71e1 (meissner 2007-03-24 17:04:47 +0000 29)   { { 0x4af4f0d844d013aaULL, 0x00346dc5d6388659ULL } }, /*  10^(-4) * 2^131 */
                                                                        ^^^
So the issue is moot.

> Plus, they can always start the bootstrapping process with GCC 4.9.

They'd have to go much further back than that.

-- 
Pedro Alves


Re: GSoc-2015: Modular GCC

2015-03-07 Thread Pedro Alves
On 03/05/2015 06:51 PM, Sidharth Chaturvedi wrote:
> I like the idea of making the intermediate representations more
> streamable. But I think this task should also involve creating
> separate front-end and middle-end modules, as then there can be a
> clear distinction of what IR is an input to a module and what IR is
> output from a module(including a more specific structure to each IR).
> This would give a pipeline structure to GCC. I don't know how much of
> this can be achieved via GSoc, but I would still like to give it a
> try. Any interested mentors? Thanks.

Would it make sense for someone to pick this

 https://gcc.gnu.org/wiki/GimpleFrontEnd

up where it was left off?

Thanks,
Pedro Alves



Re: Moving to C++11

2019-09-26 Thread Pedro Alves
On 9/26/19 9:08 AM, Richard Biener wrote:

> Note the main issue is host compiler support.  I'm not sure if C++11 would
> be the step we'd gain most - for some hashtable issues I'd have needed
> std::move support for example.  There's always the possibility to
> require an intermediate step (first build GCC 5, with that you can build
> trunk, etc.), a install.texi clarification could be useful here (or even
> some automation via a contrib/ script).
> 
> I'm not too worried about requiring even a C++14 compiler, for the
> set of products we still release latest compilers we have newer
> GCCs available we can use for building them (even if those are
> not our primary supported compilers which would limit us to
> GCC 4.8).

FWIW, GDB requires C++11 nowadays, and the baseline required
GCC version is GCC 4.8.1.  The current policy is here:

https://sourceware.org/gdb/wiki/Internals%20GDB-C-Coding-Standards#When_is_GDB_going_to_start_requiring_C.2B-.2B-NN_.3F

Pasted for convenience:

 
 When is GDB going to start requiring C++NN ?

 Our general policy is to wait until the oldest compiler that 
 supports C++NN is at least 3 years old.

 Rationale: We want to ensure reasonably widespread compiler 
 availability, to lower barrier of entry to GDB contributions, 
 and to make it easy for users to easily build new GDB on currently
 supported stable distributions themselves. 3 years should be sufficient
 for latest stable releases of distributions to include a compiler
 for the standard, and/or for new compilers to appear as easily
 installable optional packages. Requiring everyone to build a compiler
 first before building GDB, which would happen if we required a
 too-new compiler, would cause too much inconvenience. 
 

That was decided 3 years ago, so I guess we'd be good for a
reevaluation, though I don't particularly miss C++14 features
all that much, so I wouldn't mind staying with C++11 for a while
longer in GDB.  But if GCC jumps to C++14, I think GDB would
follow along.  Just FYI.

C++03 -> C++11 makes a great difference.  Particularly
std::move and rvalue references are a game changer.

Thanks,
Pedro Alves



Re: GCC selftest improvements

2019-10-31 Thread Pedro Alves
On 10/26/19 11:46 PM, Eric Gallager wrote:

> Nicholas Krause was also wanting to move to C++11 recently:
> https://gcc.gnu.org/ml/gcc/2019-10/msg00110.html (this month)
> https://gcc.gnu.org/ml/gcc/2019-09/msg00228.html (last month)
> As I said in that thread, I'd want to try just toggling -Wnarrowing
> from off to on first before going full C++11. 

Why?  GDB went the other way when it moved to C++11.  It switched
to C++11, and for several months, used -Wno-narrowing to quiet
the thousands of warnings.

https://gcc.gnu.org/wiki/FAQ#Wnarrowing

> So, GCC 10 would be
> C++98 + -Wnarrowing, and then GCC 11 could be full C++11. Plus then
> the GCC version numbers would also line up with the version of C++
> being used.

Thanks,
Pedro Alves


Re: GCC selftest improvements

2019-10-31 Thread Pedro Alves
On 10/29/19 8:40 AM, Richard Biener wrote:
> On Mon, Oct 28, 2019 at 10:47 PM Jakub Jelinek  wrote:
>>

>> As discussed earlier, we gain most through C++11 support, there is no need
>> to jump to C++17 or C++20 as requirement.
> 
> Yes, I've agreed to raise the requirement to GCC 4.8 which provides
> C++11 support.
> 
> For convenience we could also provide a configure-time hint if the host 
> compiler
> doesn't have C++11 support or is older than 4.8.2 (I think .1 has some 
> issues).
> Rather than only running into some obscure errors later on.

FWIW, GDB uses a slightly modified AX_CXX_COMPILE_STDCXX to check for C++11
support at configure time, adding -std=gnu++11 if necessary, and adding nothing
if the compiler supports C++11 or later out of the box (so that you can still
access C++14-or-later features and optimizations conditionally).

 
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=blob;f=gdb/ax_cxx_compile_stdcxx.m4

 https://www.gnu.org/software/autoconf-archive/ax_cxx_compile_stdcxx.html

 https://sourceware.org/ml/gdb-patches/2016-10/msg00775.html

In practice, that returns "supports" for GCC 4.8 and above, which is
GDB's minimum requirement.  I'm not sure about 4.8.x patch level.

Thanks,
Pedro Alves


Re: R: R: R: Plugin development under windows

2017-03-29 Thread Pedro Alves
On 03/29/2017 08:30 AM, Davide Piombo wrote:
> Hi Trevor, thanks for your hint.
> 
> Yesterday I made some other tests. I tried to use CygWin instead of
> MinGW and the POSIX missing references are now solved. Now the error
> have moved from the compiler to the linker and the build stops
> because of undefined references. The missing symbols are included in
> GCC executable and are declared as external symbols in GCC plugin
> header files.

Declared as external is not sufficient.  They also need to be
declared as exported in PE terms.

Usually, you do that by tagging exported symbols
with __declspec(dllexport) at the time the exe is built, and tagging
them as __declspec(dllimport) when the plugin is built.

I.e., you'd apply the guidelines described at:

  https://gcc.gnu.org/wiki/Visibility

to GCC itself.  E.g., add a macro like the DLL_PUBLIC described
there, something around:

#if defined _WIN32 || defined __CYGWIN__
# ifdef BUILDING_GCC
#   define GCC_PUBLIC __declspec(dllexport)
# else
#   define GCC_PUBLIC __declspec(dllimport)
# endif
#else
# define GCC_PUBLIC
#endif

And add GCC_PUBLIC to symbols that should be exported to plugins.

AFAIK, in plugin architectures on Windows, it's more common to
split the bulk of an exe into a dll that is then linked by both
a "shim" exe and the plugins, but exporting symbols from EXEs
should work fine too.  See e.g.:

  
http://stackoverflow.com/questions/3752634/dll-get-symbols-from-its-parent-loader/3756083#3756083
  
http://stackoverflow.com/questions/15454968/dll-plugin-that-uses-functions-defined-in-the-main-executable

The key search terms are "plugins on windows export exe symbols".

My Windows knowledge has been steadily fading over the years, and I'm
not sure whether GCC exports all symbols automatically using "-fvisibility"
on Windows (as a workaround) or whether you really need to go
the __declspec/dllexport route.

Also, see:

 https://gcc.gnu.org/onlinedocs/gcc/Microsoft-Windows-Function-Attributes.html

maybe you can also workaround it by using LD's --export-all.

This should give you some pointers to experiment.

> Anyway, before to change the compiler or library version I tried to
> dump symbols from libgcc.a in order to understand if missing symbols
> are really in this library and they are not there.

libgcc.a is not GCC itself.  See:

  https://gcc.gnu.org/onlinedocs/gccint/Libgcc.html

Thanks,
Pedro Alves


Re: Deprecating arithmetic on std::atomic

2017-04-20 Thread Pedro Alves
On 04/20/2017 10:39 AM, Jonathan Wakely wrote:
> 
> Or simply deprecate support for it in std::atomic. **If** the
> extension for built-in types is useful then I can imagine it might be
> useful to have it for std::atomic too, for a subset of the programs
> relying on the original extension. But I'm unconvinced how useful
> the original extension is. There are other ways to achieve it if
> absolutely necessary, e.g. convert the void* to uintptr_t and perform
> the arithmetic and use compare_exchange to store it back again.

This comes from a completely different angle, but, 
if P0146R1 (Regular void) [1] manages to follow through, then that
makes GCC's extension for built-in types standard, AFAICS.

AFAIK, the author is still pushing for the proposal aiming at C++20 [2].

[1] - http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0146r1.html

[2] - <https://channel9.msdn.com/Shows/CppCast/Episode-83-Regular-Void-with-Matt-Calabrese>,
  end of last Feb, at min 14 onward.

( Please don't shoot the messenger. :-) )

Thanks,
Pedro Alves



Re: dejagnu version update?

2017-05-15 Thread Pedro Alves
On 05/15/2017 04:54 PM, Martin Jambor wrote:
> Hi,
> 
> On Wed, Sep 16, 2015 at 01:25:18PM -0400, Trevor Saunders wrote:
>> On Wed, Sep 16, 2015 at 10:36:47AM -0600, Jeff Law wrote:
>>>
> 
> ...
> 
>>> I'd rather just move to 1.5 and get on with things.  If some systems don't
>>> have a new enough version, I'm comfortable telling developers on those
>>> platforms that they need to update.  It's not like every *user* needs
>>> dejagnu, it's just for the testing side of things.
>>
>> yeah, it seems like a poor idea to slow down progress we make for all
>> users to benefit a few people who want to develope on rather old
>> machines.
>>

Relying on the system dejagnu on old distributions has been a sure
recipe for people avoiding fixing / improving dejagnu.  I mean,
why bother, if your fix will only be generally available in 10 years
when older distros are finally phased out?

> Could we at least make sure that machines in the FSF compile farm have
> a new enough dejagnu before move to requiring at least 1.5?
> 
> I understand that may be a tall order, given that some machines
> (e.g. gcc117) do not have dejagnu at all and this was reported some
> time ago :-(
> 

Maybe download_prerequisites could be taught to handle downloading
and setting up dejagnu [1], making it mostly a non-issue?

[1] Last time I needed to do it, it was mostly plain old
configure/make/make install IIRC, assuming new-enough tcl/expect.
Building a newer tcl/expect would be a bit more work of course.

Thanks,
Pedro Alves



Re: gcc behavior on memory exhaustion

2017-08-10 Thread Pedro Alves
On 08/10/2017 10:22 PM, Florian Weimer wrote:
> * Andrew Haley:
> 
>> On 09/08/17 14:05, Andrew Roberts wrote:
>>> 2) It would be nice to see some sort of out of memory error, rather than 
>>> just an ICE.
>>
>> There's nothing we can do: the kernel killed us.  We can't emit any
>> message before we die.  (killed) tells you that we were killed, but
>> we don't know who done it.
> 
> The driver already prints a message.
> 
> The siginfo_t information should indicate that the signal originated
> from the kernel.  

OOC, where?  While a parent process can use "waitid" to get
a siginfo_t with information about the child exit, that siginfo_t
is not the same siginfo_t a signal handler would get as
argument if you could catch/intercept SIGKILL, which you can't
on Linux.  I.e., checking for e.g., si_code == SI_KERNEL in
the siginfo filled in by waitid won't work, because that
siginfo_t has si_code values for SIGCHLD [CLD_EXITED/CLD_KILLED/etc.],
not for the signal that actually killed the process.

Doesn't seem to give you any more useful information beyond
what you can already get using waitpid (which is what libiberty's
pex code in question uses) and WIFSIGNALED/WTERMSIG.

> It seems that for SIGKILL, there are currently three
> causes in the kernel: the OOM killer, some apparently unreachable code
> in ptrace, and something cgroups-related.  The latter would likely
> take down the driver process, too, so a kernel-originated SIGKILL
> strongly points to the OOM killer.
> 
> But the kernel could definitely do better and set a flag for SIGKILL.

Meanwhile, maybe just having the driver check for SIGKILL and
enumerate likely causes would be better than the status quo.

Pedro Alves


Re: Slides from GNU Tools Cauldron

2017-10-05 Thread Pedro Alves
On 10/04/2017 08:09 PM, Jan Hubicka wrote:
> Hello,
> all but one videos from this year Cauldron has been edited and are now linked
> from https://gcc.gnu.org/wiki/cauldron2017 (plugins BoF will appear till end
> of week).
> 
> I would also like to update the page with links to slides.  If someone beats 
> me
> on this and adds some or all of them as attachements to the page, I would be
> very happy :)

I like the table with direct links to talks/slides/videos on
last year's page, so I spent a while this morning adding a table
to this year's page.  Hope others find that useful too.

Thanks,
Pedro Alves



Re: Using gnu::unique_ptr to avoid manual cleanups (was Re: [PATCH 2/2] use unique_ptr some)

2017-10-17 Thread Pedro Alves
On 10/17/2017 03:57 PM, David Malcolm wrote:

> Given that we build with -fno-exceptions, what are we guaranteed about
> what happens when "new" fails?  (am I right in thinking that a failed
> allocation returns NULL in this case?).  Is XNEWVEC preferable here?

No, that's incorrect.  Even with -fno-exceptions, if new fails,
then an exception is thrown.  And then because the unwinder
doesn't find a frame that handles the exception, std::terminate
is called...

You can easily see it with this:

$ cat new-op.cc 
int main ()
{
  char * p = new char [-1];
  return 0;
}

$ g++ new-op.cc -o new-op -g3 -O0 -fno-exceptions
$ ./new-op 
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted (core dumped)

If you want new to return NULL on allocation failure, you either
have to call the std::nothrow_t overload of new:

  char* p = new (std::nothrow) char [-1];

or you have to replace [1] the global operator new to not throw.

> 
> i.e. do we prefer:
> 
> (A)
> 
>   gnu::unique_ptr<uint32_t[]> data (new uint32_t[size + 1]);
> 
> (implicitly delete-ing the data, with an access-through-NULL if the
> allocation fails)

That'd be my preference too [2].

Though I think that GCC should replace the global
operator new/new[] functions to call xmalloc instead of malloc.

Like:

void *
operator new (std::size_t sz)
{
  return xmalloc (sz);
}

void *
operator new (std::size_t sz, const std::nothrow_t&)
{
  /* malloc (0) is unpredictable; avoid it.  */
  if (sz == 0)
    sz = 1;
  return malloc (sz);
}

Then memory allocated by all new expressions, including those done
via the standard allocators within standard containers, std::vector,
std::string, etc., ends up in xmalloc, which calls xmalloc_failed on
allocation failure.  I.e., replace operator new to behave just
like xmalloc instead of letting it throw an exception that is
guaranteed to bring down gcc, badly.

Note there are already many "new" expressions/calls in GCC
today.  Any of those can bring down GCC.

This is very much like what I did for GDB.  See GDB's (latest)
version here:

  https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=blob;f=gdb/common/new-op.c;hb=bbe910e6e1140cb484a74911f3cea854cf9e7e2a
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=blob;f=gdb/common/new-op.c;hb=bbe910e6e1140cb484a74911f3cea854cf9e7e2a

GDB's version still throws, because well, GDB uses exceptions.

[1] Replacing global operator new is something that is totally
defined by the standard (unlike -fno-exceptions).
See "global replacements" 
at <http://en.cppreference.com/w/cpp/memory/new/operator_new>.

[2] - Or in many cases, use a std::vector instead.  And if
you care about not value-initializing elements, see gdb's
gdb/common/{def-vector, default-init-alloc}.h.

Thanks,
Pedro Alves



Re: [PING][PATCH] gdb/x86: Fix `-Wstrict-overflow' build error in `i387_collect_xsave'

2018-05-22 Thread Pedro Alves
On 05/22/2018 12:14 PM, Maciej W. Rozycki wrote:
> On Tue, 15 May 2018, Maciej W. Rozycki wrote:
> 
>>  gdb/
>>  * i387-tdep.c (i387_collect_xsave): Make `i' unsigned.
> 
>  Ping for: <https://patchwork.sourceware.org/patch/27269/>.

OK.

Thanks,
Pedro Alves


Re: -Wclass-memaccess warning should be in -Wextra, not -Wall

2018-07-06 Thread Pedro Alves
On 07/06/2018 12:14 AM, Soul Studios wrote:
> Having benchmarked the alternatives memcpy/memmove/memset definitely makes a 
> difference in various scenarios.

That sounds like a missing optimization in the compiler.  If you have valid
testcases, I think it would be a good idea to file them in bugzilla.

Thanks,
Pedro Alves


Re: [RFC] Update coding conventions to restrict use of non-const references

2018-07-12 Thread Pedro Alves
On 07/12/2018 12:40 PM, Jonathan Wakely wrote:
> On Thu, 12 Jul 2018 at 11:41, Richard Sandiford wrote:
>> +Only use non-constant references in the following situations:
>> +
>> +
>> +
>> +when they are necessary to conform to a standard interface, such as
>> +the first argument to a non-member operator+=
> 
> And the return value of such operators (which also applies to member
> operators, which is the more conventional way to write compound
> assignment operators).
> 
>> +in a return value, when providing access to something that is known
>> +to be nonnull
>> +
>> +
>> +
>> +In other situations the convention is to use pointers instead.
>> +
>> +
>> +
>> +HOST_WIDE_INT do_arith (..., bool *overflow);   // OK
>> +HOST_WIDE_INT do_arith (..., bool &overflow);   // Please avoid
> 
> I understand the objection to using references for out parameters 

FWIW, GDB's conventions (which import GCC's coding conventions) already
suggest avoiding non-const reference output parameters, so this won't
affect us.

But, FWIW, personally, while I used to be all for avoiding non-const
reference output parameters, I no longer am, at least so strongly,
nowadays.

The reason being that the visual distinction can be easily lost with
pointers too anyway:

 // the usual argument is that using pointers for output parameters shows
 // clearly that bar is going to be modified:
 function (foo, &bar);

 // but well, we often work with pointers, and if "bar" is a pointer,
 // we don't get any visual clue anyway either:
 function (foo, bar);

 // which suggests that what really helps is seeing the output
 // variable nearby, suggesting to define it right before the
 // function call that fills it in, and I would go as far
 // as saying that clearer symbol names help even more.  E.g.:
 B bar_out;
 fill_bar (foo, bar_out);

(an
> alternative to pointers is to return a struct with the wide int result
> and the overflow flag),

+1.  I've been pushing GDB in that direction whenever possible.

> but ...
> 
>> +int *elt = &array[i];  // OK
>> +int &elt = array[i];   // Please avoid
> 
> ... this seems unnecessary. If the function is so long that the fact
> elt is a reference can easily get lost, the problem is the length of
> the function, not the use of a reference.
> 

+1000.  This seems really unnecessary.  References have the advantage
of implicit never-null semantics, for instance.

Pedro Alves


Re: [RFC] Update coding conventions to restrict use of non-const references

2018-07-12 Thread Pedro Alves
On 07/12/2018 05:17 PM, Richard Sandiford wrote:
> Pedro Alves  writes:

>> (an
>>> alternative to pointers is to return a struct with the wide int result
>>> and the overflow flag),
>>
>> +1.  I've been pushing GDB in that direction whenever possible.
> 
> I agree that can sometimes be better.  I guess it depends on the
> context.  If a function returns a bool and some other data that has no
> obvious "failure" value, it can be easier to write chained conditions if
> the function returns the bool and provides the other data via an output
> parameter.

I agree it depends on context, though your example sounds like a
case for std::optional.  (We have gdb::optional in gdb, since
std::optional is C++17 and gdb requires C++11.  LLVM has a similar
type, I believe.)

> 
>>> but ...
>>>
>>>> +int *elt = &array[i];  // OK
>>>> +int &elt = array[i];   // Please avoid
>>>
>>> ... this seems unnecessary. If the function is so long that the fact
>>> elt is a reference can easily get lost, the problem is the length of
>>> the function, not the use of a reference.
>>>
>>
>> +1000.  This seems really unnecessary.  References have the advantage
>> of implicit never-null semantics, for instance.
> 
> The nonnullness is there either way in the above example though.
> 
> I don't feel as strongly about the non-const-reference variables, but for:
> 
>  int &elt = array[i];
>  ...
>  elt = 1;
> 
> it can be easy to misread that "elt = 1" is changing more than just
> a local variable.

I think that might be just the case for people who haven't used
references before, and once you get some exposure, the effect
goes away.  See examples below.

> 
> I take Jonathan's point that it shouldn't be much of a problem if
> functions are a reasonable size, but we've not tended to be very
> good at keeping function sizes down.  I guess part of the problem
> is that even functions that start off small tend to grow over time.
> 
> We have been better at enforcing more specific rules like the ones
> in the patch.  And that's easier to do because a patch either adds
> references or it doesn't.  It's harder to force (or remember to force)
> someone to split up a function if they're adding 5 lines to one that is
> already 20 lines long.  Then for the next person it's 25 lines long, etc.
> 

> Also, the "int *elt" approach copes with cases in which "elt" might
> have to be redirected later, so we're going to need to use it in some
> cases.  

Sure, if you need to reseat, certainly use a pointer.  That actually
highlights an advantage of references -- the fact that you are sure
nothing could reseat it.  With a local pointer, you need to be mindful
of that.  "Does this loop change the pointer?"  Etc.  If the semantics
of the function call for having a variable that is not meant to be
reseated, why not allow expressing it with a reference.
"Write what you mean."  Of course, just like all code, there's going
to be a judgment call.

A place where references frequently appear is in C++11 range-for loops.
Like, random example from gdb's codebase:

 for (const symbol_search &p : symbols)

GCC doesn't yet require C++11, but it should be a matter
of time, one would hope.

Other places references appear often is in coming up with
an alias to avoid repeating long expressions, like
for example (again from gdb):

  auto &vec = objfile->per_bfd->demangled_hash_languages;
  auto it = std::lower_bound (vec.begin (), vec.end (),
  MSYMBOL_LANGUAGE (sym));
  if (it == vec.end () || *it != MSYMBOL_LANGUAGE (sym))
    vec.insert (it, MSYMBOL_LANGUAGE (sym));

I don't think the fact that "vec" is a reference here confuses
anyone.

That was literally a random sample (grepped for "&[a-z]* = ") so
I ended up picking one with C++11 "auto", but here's another
random example, spelling out the type name, similarly using
a reference as a shorthand alias:

  osdata_item &item = osdata->items.back ();

  item.columns.emplace_back (std::move (data->property_name),
 std::string (body_text));

> It's then a question of whether "int &elt" is useful enough that
> it's worth accepting it too, and having a mixture of styles in the codebase.

I don't see it as a mixture of styles, as I don't see
pointers and references as the exact same thing, but rather
see references as another tool in the box.

Thanks,
Pedro Alves


Re: ARM : code less efficient with gcc-trunk ?

2009-02-16 Thread Pedro Alves
On Monday 16 February 2009 11:19:52, Vincent R. wrote:
> I used to have .align 0 with gcc-4.1 and now I get a .align 4, how can I
> change that ?

It was a bug in the patches I had sent you months ago.  I've posted the
latest patch I had here at cegcc-devel@ --- it should fix this.

-- 
Pedro Alves


Re: Slight tree reorganization for cygming platforms

2009-04-24 Thread Pedro Alves
On Friday 24 April 2009 10:05:00, Vincent R. wrote:
> Once again we are  referencing i386/t-gthr-win32, i386/t-dw2-eh and
> i386/t-sjlj-eh 
> and this is stupid because those files have only one definition that is not
> i386 specific.

That's the definition of a ...

> +#hack! using i386 file directly...
   ^

... hack.  This is not stupid.  It was a deliberate decision ---
it makes it much easier to carry around small local changes in the original
files, than to keep around local patches moving a bunch of things around...

On Friday 24 April 2009 14:50:06, Dave Korn wrote:
>   I think there are more options than just those two, but certainly it is
> right to common out the shared functionality.  


> I believe it should just go one 
> level up in config/ rather than have a subdir; there's a ton of shared stuff
> in that directory already.

I completely agree.  This option has been my intent all along.  It just
needs someone to seriously propose doing it, and then, doing it.

How would people prefer something like this going forward?  Start with
extracting common stuff into config/, while making sure, e.g., at least
one of mingw or cygwin still bootstraps/tests OK; or, start by
updating config/arm/ with copies of what's under config/i386/, and once
that is in, work on extracting/abstracting common stuff out?

Vincent pointed out the obvious abuses that are 100% reusable, but,
I'd like to reuse e.g., config/i386/winnt.c, config/i386/winnt-cxx.c,
config/i386/winnt-stubs.c for ARM as well.  E.g, currently, I've copied
config/i386/winnt.c over the current config/arm/pe.c (completely replacing
it), and adjusted it a bit to make it work for ARM.  As you can see from
the hunk below, the main juice is shareable here (there's no fastcall
on ARM WinCE):

-- 
Pedro Alves

--- i386/winnt.c2009-04-19 21:56:24.0 +0100
+++ arm/pe.c2009-04-19 21:56:30.0 +0100
@@ -1,7 +1,6 @@
-/* Subroutines for insn-output.c for Windows NT.
-   Contributed by Douglas Rupp (dr...@cs.washington.edu)
-   Copyright (C) 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
-   2005, 2006, 2007, 2008 Free Software Foundation, Inc.
+/* Routines for GCC for ARM/pe.
+   Copyright (C) 1995, 1996, 2000, 2001, 2002, 2004, 2005, 2007, 2008
+   Free Software Foundation, Inc.
 
 This file is part of GCC.
 
@@ -32,13 +31,11 @@ along with GCC; see the file COPYING3.  
 #include "tm_p.h"
 #include "toplev.h"
 #include "hashtab.h"
-#include "langhooks.h"
 #include "ggc.h"
-#include "target.h"
 
-/* i386/PE specific attribute support.
+/* arm/PE specific attribute support.
 
-   i386/PE has two new attributes:
+   arm/PE has two new attributes:
dllexport - for exporting a function/variable that will live in a dll
dllimport - for importing a function/variable from a dll
 
@@ -50,7 +47,7 @@ along with GCC; see the file COPYING3.  
 /* Handle a "shared" attribute;
arguments as in struct attribute_spec.handler.  */
 tree
-ix86_handle_shared_attribute (tree *node, tree name,
+arm_pe_handle_shared_attribute (tree *node, tree name,
  tree args ATTRIBUTE_UNUSED,
  int flags ATTRIBUTE_UNUSED, bool *no_add_attrs)
 {
@@ -67,7 +64,7 @@ ix86_handle_shared_attribute (tree *node
 /* Handle a "selectany" attribute;
arguments as in struct attribute_spec.handler.  */
 tree
-ix86_handle_selectany_attribute (tree *node, tree name,
+arm_pe_handle_selectany_attribute (tree *node, tree name,
 tree args ATTRIBUTE_UNUSED,
 int flags ATTRIBUTE_UNUSED,
 bool *no_add_attrs)
@@ -77,7 +74,7 @@ ix86_handle_selectany_attribute (tree *n
  until the language frontend has processed the decl. We'll check for
  initialization later in encode_section_info.  */
   if (TREE_CODE (*node) != VAR_DECL || !TREE_PUBLIC (*node))
-{  
+{
   error ("%qs attribute applies only to initialized variables"
 " with external linkage",  IDENTIFIER_POINTER (name));
   *no_add_attrs = true;
@@ -100,7 +97,7 @@ associated_type (tree decl)
 /* Return true if DECL should be a dllexport'd object.  */
 
 static bool
-i386_pe_determine_dllexport_p (tree decl)
+arm_pe_determine_dllexport_p (tree decl)
 {
   tree assoc;
 
@@ -113,7 +110,7 @@ i386_pe_determine_dllexport_p (tree decl
   /* Also mark class members of exported classes with dllexport.  */
   assoc = associated_type (decl);
   if (assoc && lookup_attribute ("dllexport", TYPE_ATTRIBUTES (assoc)))
-return i386_pe_type_dllexport_p (decl);
+return arm_pe_type_dllexport_p (decl);
 
   return false;
 }
@@ -121,7 +118,7 @@ i386_pe_determine_dllexport_p (tree decl
 /* Return true if DECL should be a dllimport'd object.  */
 
 static

Re: RFD: hookizing BITS_PER_UNIT in tree optimizers / frontends

2010-11-24 Thread Pedro Alves
On Tuesday 23 November 2010 20:09:52, Joern Rennecke wrote:
> If we changed BITS_PER_UNIT into an ordinary piece-of-data 'hook', this
> would not only cost a data load from the target vector, but would also
> inhibit optimizations that replace division / modulo / multiply with shift
> or mask operations.

Have you done any sort of measurement, to see if what is lost
is actually noticeable in practice?

> So maybe we should look into having a few functional hooks that do  
> common operations, i.e.
> bits_in_unitsx / BITS_PER_UNIT
> bits_in_units_ceil   (x + BITS_PER_UNIT - 1) / BITS_PER_UNIT
> bit_unit_remainder   x % BITS_PER_UNIT
> units_in_bits    x * BITS_PER_UNIT

-- 
Pedro Alves


Re: RFD: hookizing BITS_PER_UNIT in tree optimizers / frontends

2010-11-24 Thread Pedro Alves
On Wednesday 24 November 2010 13:45:40, Joern Rennecke wrote:
> Quoting Pedro Alves :
> 
> > On Tuesday 23 November 2010 20:09:52, Joern Rennecke wrote:
> >> If we changed BITS_PER_UNIT into an ordinary piece-of-data 'hook', this
> >> would not only cost a data load from the target vector, but would also
> >> inhibit optimizations that replace division / modulo / multiply with shift
> >> or mask operations.
> >
> > Have you done any sort of measurement, to see if what is lost
> > is actually noticeable in practice?
> 
> No, I haven't.
> On an i686 it's probably not measurable.  On a host with a slow software
> divide it might be, if the code paths that require these operations are
> exercised a lot - that would also depend on the source code being compiled.

And I imagine that it should be possible to factor many
of the slow divides out of hot loops, if the compiler doesn't
manage to do that already.

> Also, these separate hooks for common operations can make the code more
> readable, particularly in the bits_in_units_ceil case.
> I.e.
>  foo_var = ((bitsize + targetm.bits_per_unit () - 1)
> / targetm.bits_per_unit ());
> vs.
>  foo_var = targetm.bits_in_units_ceil (bitsize);
> 

bits_in_units_ceil could well be a macro or helper function
implemented on top of targetm.bits_per_unit (which itself could
be a data field instead of a function call), that only accessed
bits_per_unit once.  It could even be implemented as a helper
macro / function today, on top of BITS_PER_UNIT.

Making design decisions like this based on supposedly
missed optimizations _alone_, without knowing how much
overhead we're talking about is really the wrong way to
do things.

-- 
Pedro Alves


GSoC, Make cp-demangle non-recursive and async-signal safety

2022-04-08 Thread Pedro Alves
Hi!

I noticed the discussions about making cp-demangle use malloc/free instead of
recursion, and I wondered about signal handlers; I don't see those mentioned in
https://gcc.gnu.org/wiki/SummerOfCode's description of the project.

See my question to Ian a few years back, here, and his answer:

https://gcc.gnu.org/legacy-ml/gcc-patches/2018-12/msg00696.html

~~~
 Ian says:
 > Pedro says:
 > Ian earlier mentioned that we've wanted to avoid malloc because some
 > programs call the demangler from a signal handler, but it seems like
 > we already do, these functions already aren't safe to use from
 > signal handlers as is.  Where does the "we can't use malloc" idea
 > come from?  Is there some entry point that avoids
 > the malloc/realloc/free calls?

 cplus_demangle_v3_callback and cplus_demangle_print_callback.
~~~

Grepping the gcc tree, I see that libsanitizer uses those entry points.

Is async-signal safety no longer a consideration/concern?  Or will those
entry points continue to work without calling malloc/free somehow?


Re: [RFC] Using std::unique_ptr and std::make_unique in our code

2022-07-11 Thread Pedro Alves
Hi!

On 2022-07-08 9:46 p.m., David Malcolm via Gcc wrote:
> - pending_diagnostic *d,
> + std::unique_ptr<pending_diagnostic> d,

I see that you didn't add any typedef for std::unique_ptr<pending_diagnostic>
in this patch.  It will be inevitable that people will start adding them, for
conciseness, IME, though.  To avoid diverging naming styles for such typedefs
in the codebase, GDB settled on using the "_up" suffix (for Unique Pointer)
quite early in the C++11 conversion, and we use such typedefs pervasively
nowadays.  For example, for the type above, we'd have:

  typedef std::unique_ptr<pending_diagnostic> pending_diagnostic_up;

and then:

 -  pending_diagnostic *d,
 +  pending_diagnostic_up d,

I would suggest GCC have a similar guideline, before people start using foo_ptr,
bar_unp, quux_p, whatnot diverging styles.

And it would be nice if GCC followed the same nomenclature style as GDB, so
we could have one single guideline for the whole GNU toolchain, so people
moving between codebases only had to learn one guideline.

Pedro Alves


Re: [RFC] Using std::unique_ptr and std::make_unique in our code

2022-07-12 Thread Pedro Alves
On 2022-07-12 11:21 a.m., Florian Weimer wrote:
> * Pedro Alves:
> 
>> For example, for the type above, we'd have:
>>
>>   typedef std::unique_ptr<pending_diagnostic> pending_diagnostic_up;
>>
>> and then:
>>
>>  -   pending_diagnostic *d,
>>  +   pending_diagnostic_up d,
>>
>> I would suggest GCC have a similar guideline, before people start
>> using foo_ptr, bar_unp, quux_p, whatnot diverging styles.
> 
> This doesn't seem to provide much benefit over writing
> 
>   uP<pending_diagnostic> d;
> 
> and with that construct, you don't need to worry about the actual
> relationship between pending_diagnostic and pending_diagnostic_up.

Given the guideline, nobody ever worries about that.  When you see "_up",
you just know it's a unique pointer.

And as you point out, there's the custom deleters case to consider too.

> 
> I think the GDB situation is different because many of the types do not
> have proper destructors, so std::unique_ptr needs a custom deleter.

Yes, there are a few cases like that, but it's not "many" as you suggest,
and most are types over which we have no control, like 3rd party library
types such as debuginfod, curses, python.  Most of the rest of the custom
deleter cases are instead because of intrusive refcounting.  I.e., the
deleter decrements the object's refcount, instead of deleting the object
straight away.

These are valid cases, not "GDB is doing it wrong, so GCC won't have to
bother".  I would suspect that GCC will end up with a good number of
custom deleters as well.


Re: [RFC] Using std::unique_ptr and std::make_unique in our code

2022-07-12 Thread Pedro Alves
On 2022-07-12 11:45 a.m., Jonathan Wakely wrote:
> On Tue, 12 Jul 2022 at 11:22, Florian Weimer via Gcc  wrote:
>>
>> * Pedro Alves:
>>
>>> For example, for the type above, we'd have:
>>>
>>>   typedef std::unique_ptr<pending_diagnostic> pending_diagnostic_up;
>>>
>>> and then:
>>>
>>>  -pending_diagnostic *d,
>>>  +pending_diagnostic_up d,
>>>
>>> I would suggest GCC have a similar guideline, before people start
>>> using foo_ptr, bar_unp, quux_p, whatnot diverging styles.
>>
>> This doesn't seem to provide much benefit over writing
>>
>>   uP<pending_diagnostic> d;
>>
>> and with that construct, you don't need to worry about the actual
>> relationship between pending_diagnostic and pending_diagnostic_up.
>>
>> I think the GDB situation is different because many of the types do not
>> have proper destructors, so std::unique_ptr needs a custom deleter.
> 
> 
> A fairly common idiom is for the type to define the typedef itself:
> 
> struct pending_diagnostic {
>   using ptr = std::unique_ptr<pending_diagnostic>;
>   // ...
> };
> 
> Then you use pending_diagnostic::ptr. If you want a custom deleter for
> the type, you add it to the typedef.
> 
> Use a more descriptive name like uptr or uniq_ptr instead of "ptr" if
> you prefer.
> 

Only works if you can change the type, though.  Sometimes you can't,
as it comes from a library.


Re: More C type errors by default for GCC 14

2023-05-12 Thread Pedro Alves
On 2023-05-12 7:01 a.m., Po Lu via Gcc wrote:
> Jason Merrill  writes:
> 
>> You shouldn't have to change any of those, just configure with CC="gcc
>> -fwhatever".
> 
> If it were so simple...
> 
> Many Makefiles come with a habit of using not CC_FOR_BUILD, but just
> `cc', to build programs which are run on the build machine.

Then write a wrapper script named "cc", and put that in the PATH:

 $ cat cc
 #!/bin/sh

 exec /my/real/gcc -fwhatever "$@"

Done.


Re: Updated Sourceware infrastructure plans

2024-05-02 Thread Pedro Alves
On 2024-05-01 22:26, Mark Wielaard wrote:
> For now I am cleaning up Sergio's gerrit setup and upgrading it to the
> latest version, so people can at least try it out. Although I must
> admit that I seem to be the only Sourcewware PLC member that believes
> this is very useful use of our resources. Even the biggest proponents
> of gerrit seem to believe no project will actually adopt it. And on
> irc there were some people really critical of the effort. It seems you
> either love or really hate gerrit...

When GDB upstream tried to use gerrit, I found it basically impossible to
follow development, given the volume...  The great thing with email is the
threading of discussions.  A discussion can fork into its own subthread, and any
sane email client will display the discussion tree.  Email archives also let
you follow the discussion subthreads.  That is great for archaeology too.
With Gerrit that was basically lost, everything is super flat.  And then
following development via the gerrit instance website alone is just basically
impossible too.  I mean, gerrit is great to track your own patches, and for
the actual review and diffing between versions.  But for a maintainer who
wants to stay on top of a project, then it's severely lacking, IME and IMO.

(Note: I've been using Gerrit for a few years at AMD internally.)



Re: Updated Sourceware infrastructure plans

2024-05-02 Thread Pedro Alves
On 2024-05-01 22:04, Simon Marchi wrote:
> The Change-Id trailer works very well for Gerrit: once you have the hook
> installed you basically never have to think about it again, and Gerrit
> is able to track patch versions perfectly accurately.  A while ago, I
> asked patchwork developers if they would be open to support something
> like that to track patches, and they said they wouldn't be against it
> (provided it's not mandatory) [1].  But somebody would have to implement
> it.
> 
> Simon
> 
> [1] https://github.com/getpatchwork/patchwork/issues/327

+1000.  It's mind boggling to me that people would accept Gerrit, which
means that they'd accept Change-Id:, but then they wouldn't accept 
Change-Id: with a different system...  :-)
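
[Editorial note: the Change-Id tracking discussed above is normally done by a
commit-msg hook.  The following is a hypothetical, heavily simplified sketch of
what Gerrit's real hook does -- the function name and the hashing scheme here
are illustrative, not Gerrit's actual implementation: append a stable
"Change-Id: I<sha1>" trailer to the commit message unless one is already there,
so every later upload of the same patch carries the same identifier.]

```shell
# Simplified sketch (hypothetical) of a Gerrit-style commit-msg hook:
# add a Change-Id trailer once, and never touch it again on re-runs.
add_change_id() {
  msg_file="$1"
  if ! grep -q '^Change-Id: I' "$msg_file"; then
    # Derive a stable-looking ID from the message contents (illustrative;
    # the real hook uses git hash-object over more inputs).
    id=$(sha1sum "$msg_file" | cut -c1-40)
    printf '\nChange-Id: I%s\n' "$id" >> "$msg_file"
  fi
}

printf 'Fix frobnication\n\nLonger description here.\n' > /tmp/commit_msg.txt
add_change_id /tmp/commit_msg.txt
add_change_id /tmp/commit_msg.txt   # idempotent: a second run changes nothing
grep -c '^Change-Id: I' /tmp/commit_msg.txt   # prints 1
```

Because the trailer is generated once and then rides along through rebases and
amends, the server can match each new upload to the same logical change --
which is exactly the property a patchwork-style tracker would need.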



Re: gcc extensibility

2012-03-29 Thread Pedro Alves
On 03/29/2012 05:52 PM, Basile Starynkevitch wrote:

> On Thu, 29 Mar 2012 11:42:30 -0500
> Gabriel Dos Reis  wrote:
>> I suspect that if plugins people want to make progress on this
>> recurring theme, they
>> will have to come up with a specification and an API.  Otherwise, they
>> have only themselves to blame if their plugins break from release to release.
> 
> 
> They blame nobody if their plugins break from one release to the next.
> They take this incompatibility of GCC as part of their work as plugin
> developers.
> 
> Again, a plugin writer by definition uses whatever interface is given to him.

IMO, the right way to approach this is instead:

#1 - I, a so called "plugin writer", have this use I could give to GCC,
 but it wouldn't make sense to include that code in the GCC sources/executable
 itself.  In fact, the maintainers would reject it, rightly.

#2 - However, if I could just add a little bit of glue interface to GCC
 that exposes just enough GCC internal bits that I could write my plugin
 against, in a way that is not invasive to the rest of the compiler,
 I know that would be accepted by the maintainers.  It's a compromise the
 GCC maintainers are willing to make.  They are aware that there's potential
 for other people to come up with other uses for the same minimal interfaces,
 so they accept this.  In the future, it's likely that other plugin authors
 will be satisfied by the interfaces I and other previous plugin authors have
 already added to GCC by then.

But, note it's clearly the plugin author that needs to write #2.  #1 too, obviously.  :-)

-- 
Pedro Alves


Re: bug#11034: Binutils, GDB, GCC and Automake's 'cygnus' option

2012-04-03 Thread Pedro Alves
On 04/03/2012 09:04 PM, Stefano Lattarini wrote:

> OK, you've all made clear you have your sensible reasons to have the '.info'

...
> it available only through the new, undocumented option named (literally)
> "hack!info-in-builddir".  I hope this is acceptable to you.
...
> *undocumented* option '!hack!info-in-builddir' (whose name should
> make it clear that it is not meant for public consumption).

So will this be called a hack forever, or will the naming be revisited
before a release?  IMO, either the feature is sensible, and there doesn't
seem to be a good reason other users couldn't also use it, and hence it
should get a non-hackish name and be documented; or it isn't sensible, and
then it shouldn't exist.  Why the second-class treatment?

-- 
Pedro Alves


Re: bug#11034: Binutils, GDB, GCC and Automake's 'cygnus' option

2012-04-04 Thread Pedro Alves
On 04/04/2012 12:53 AM, Miles Bader wrote:

> I suspect there are better, cleaner, ways to accomplish the underlying
> goal, but I suppose the gcc maintainers don't want to spend the time
> fiddling around with their build infrastructure for such a minor
> issue...


Why speculate?  I haven't seen any hint on what the better, cleaner,
way to accomplish this is.

-- 
Pedro Alves


Re: RFC: -Wall by default

2012-04-05 Thread Pedro Alves
On 04/04/2012 10:44 AM, Gabriel Dos Reis wrote:

> Hi,
> 
> For GCC-4.8, I would like to turn on -Wall by default.
> Comments?


I'd just like to explicitly mention (the obvious fact that)
that this has the effect of breaking builds of projects that carefully
craft their warning set to be able to use -Werror, such as e.g., GDB.

Certainly not insurmountable (just add a -Wno-all), but does
require actively tweaking the build system.  I'm sure there
are many projects affected similarly.

-- 
Pedro Alves


Re: RFC: -Wall by default

2012-04-05 Thread Pedro Alves
On 04/05/2012 10:39 AM, Richard Guenther wrote:

... [-Wall + -Werror] ...

> Btw, it would be more reasonable to enable a subset of warnings that
> we enable at -Wall by default.  Notably those that if they were not
> false positives, would lead to undefined behavior at runtime.  Specifically
> I don't think we should warn about unused static functions or variables
> by default.


Yes, I would agree more with something like that.

-- 
Pedro Alves


Re: Switching to C++ by default in 4.8

2012-04-11 Thread Pedro Alves
On 04/11/2012 07:26 PM, Jonathan Wakely wrote:

> GCC's diagnostics have got a lot better recently.
> 
> The http://clang.llvm.org/diagnostics.html page compares clang's
> diagnostics to GCC 4.2, which was outdated long before that page was
> written.
> 
> It doesn't help GCC's cause when people keep repeating that outdated info :-)


Spelling out the obvious, IWBVN if someone from the gcc camp did a
similar comparison using a current gcc.  Is there such a page somewhere?

-- 
Pedro Alves


Re: Updated GCC vs Clang diagnostics [Was: Switching to C++ by default in 4.8]

2012-04-12 Thread Pedro Alves
On 04/12/2012 11:01 AM, Jonathan Wakely wrote:

> Manu has filed lots of bugs in bugzilla with specific comparisons of
> GCC's diagnostics to Clang's.
> 
> I'll start a page on the GCC wiki but I hope others will add to it.
> The people asking to see results should be the ones doing the
> comparisons really  ;-)


Excellent, thank you!

-- 
Pedro Alves


Re: RFC: -Wall by default

2012-04-12 Thread Pedro Alves
On 04/12/2012 04:23 PM, Gabriel Dos Reis wrote:

> because -Os says it optimizes for size, the expectation is clear.
> -O3 does not necessarily give better optimization than -O2.


No, but it does mean that GCC turns on more optimization options.

"Optimize yet more. -O3 turns on all optimizations specified by -O2 and also
turns on the -finline-functions, -funswitch-loops, -fpredictive-commoning,
-fgcse-after-reload, -ftree-vectorize and -fipa-cp-clone options."

Just like -W3 wouldn't necessarily generate more warnings on
your code than -W1, perhaps because your code is
already "clean" enough.  It would simply be documented as:

"-W3: Warn yet more.  -W3 turns on all warnings specified by -W2 and also ...".

I'll also note the parallel with -glevel, not just -O.

So, 'gcc -glevel -Wlevel -Olevel' feels quite natural to me.

-- 
Pedro Alves


Re: RFC: -Wall by default

2012-04-12 Thread Pedro Alves
On 04/12/2012 04:52 PM, Gabriel Dos Reis wrote:

> On Thu, Apr 12, 2012 at 10:43 AM, Pedro Alves  wrote:
>> On 04/12/2012 04:23 PM, Gabriel Dos Reis wrote:
>>
>>> because -Os says it optimizes for size, the expectation is clear.
>>> -O3 does not necessarily give better optimization than -O2.
>>
>>
>> No, but it does mean that GCC turns on more optimization options.
>>
>> "Optimize yet more. -O3 turns on all optimizations specified by -O2 and
>> also turns on the -finline-functions, -funswitch-loops,
>> -fpredictive-commoning, -fgcse-after-reload, -ftree-vectorize and
>> -fipa-cp-clone options."
> 
> I think we have perverted the meaning of "optimize yet more", and optimize
> yet more does not yield better/faster code :-)


Sure, so that phrase in the documentation could be improved/replaced, or
even removed.
The rest of the paragraph looks quite clear enough.

> Yes, I understand the transformations; that does not justify for the awkward
> user-interface.


So stop thinking in terms of -O, if it helps.  Maybe think in terms of -glevel?

 "Request debugging information and also use level to specify how much 
information. The default level is 2."

or just consider it on its own merits:

  -W0, no warning options enabled.  -W1, more warning options enabled than
  -W0.  -W2, more warning options enabled than -W1.  -WN, more warning
  options enabled than -WN-1.

I fail to see why is that awkward?

-- 
Pedro Alves


Re: [RFD+PATCH] ISA bit treatment on the MIPS platform

2012-06-12 Thread Pedro Alves
On 06/11/2012 07:20 PM, Joel Brobecker wrote:

> From the ChangeLog entry, it seems like Pedro was involved in the making
> of that patch, so perhaps he could be a good reviewer?


All involvement I recall was updating a couple lines to
new interfaces in the context of a merge from upstream.  All else
is pretty much as good as new to me.  :-)

-- 
Pedro Alves