Re: Defining a common plugin machinery

2008-09-19 Thread Brendon Costa
Basile STARYNKEVITCH wrote:
> But what about the trunk, which uses tuples?
I know nothing about the tuples and the trunk and whether tuples might
affect the way plugins work. Maybe others can comment...

To be honest I was more worried about how plugins will work with the
garbage collector and pre-compiled headers... I assume there is no
problem with this though, since there are already a number of existing
plugin frameworks that people report are working.

> Are the below hooks the name of dlsym-ed symbols inside the plugins,
> or the name of some (new or existing) functions?
>> - c_common_init(),  c_common_finish(), finalize(),
>> gimplify_function_tree() 
They are the names of functions in GCC 4.0.4 that I currently modify to
call my code (I am not using a plugin framework at the moment but
distributing a patched GCC, so I have statically linked my code into GCC
and call it from these places). It was earlier recognised that we will
need hooks into the pass management; I was just adding the areas outside
the pass manager that I have used. If we have an idea of the "hook
locations" everyone uses, it may help get an idea of what may be required
of the plugin framework in terms of locations in GCC where plugins can
hook in. Those were the locations I used.

> How do we interact with the pass manager? Some plugins definitely want
> to add new passes... How does that happen concretely?
I imagine others who have attempted this will comment.


> Agreed, but perhaps this autoload feature is not the first priority.
> Making plugins work inside the trunk seems more important to me than
> the autoload thing. Lastly, autoload plugins might slow down quick
> compilation of very small files (but do we care?).
Well, Ian just provided a different mechanism for projects like mine to
achieve the result I was after without automatically loaded plugins. So
I will assume that this feature is canned for the moment unless there
are other needs for it.

> A related issue (which does not seem a priority to me) is: do we want
> to make it easy to move a plugin inside the core GCC code,
> precisely to avoid that (i.e. having significant optimisations as plugins).
To start with I can imagine that no core features will be moved to
plugins; maybe in later versions that might happen, but I can't imagine
doing so would be overly difficult.

> And there is one significant point we did not discuss yet. Compiling a
> plugin should if possible be doable outside of GCC source code; this
> means at least that the gcc/Makefile.in should have stuff to copy all
> the relevant *.h files usable by plugins outside. 
Yes. A location where we can place the headers/libs was mentioned by
Joseph ($libsubdir) along with suggesting we create an autoconf macro
that external projects can use to locate and use those headers and the
library.

> Maybe this is not the business of GCC, and only an issue for GCC
> packagers. In general, we should have a convention for the file path of
> installed plugins: do we want them only in system places like
> /usr/lib or also in a per-user directory like
> $HOME/.gcc-plugin-dir/4.5.0/; we also need a convention for the
> location of all the related *.h files needed to compile a plugin
> outside the GCC source tree. (Or is this a packaging issue only, i.e.
> have not only a gcc-4.5 package in Debian, but also a
> gcc-4.5-plugins-dev package? - but that would mean additional install*
> targets in our Makefile.in.)
I would assume the following (Say we make a libgccimpl that contains the
code for GCC except the stub main() that calls gcc_main() from the library):

Install libgccimpl to: $(libsubdir)/libgccimpl.so.4.3.2
Install the headers to: $(includesubdir)/*.h
Install plugins to: $(libsubdir)/plugins/*.so

GCC would install these things every time it is installed, and it is the
job of packagers for the various distributions to split it into
different "subsets" if they so desire.

So searching for plugins (using libltdl) would look first in:
$(libsubdir)/plugins, followed by $LTDL_LIBRARY_PATH, and
$LD_LIBRARY_PATH. The issue with searching anywhere other than
$(libsubdir)/plugins is knowing whether the plugin that is found is
compatible with the given version of GCC.
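To make that concrete, a minimal sketch of such a lookup using libltdl
(the hard-coded directory below is a made-up stand-in for
$(libsubdir)/plugins, not an agreed location):

#include <ltdl.h>
#include <stdio.h>

static lt_dlhandle
load_plugin (const char *name)
{
  lt_dlhandle handle;

  if (lt_dlinit () != 0)
    return NULL;

  /* Consulted first; lt_dlopenext() then falls back to the
     LTDL_LIBRARY_PATH and system (LD_LIBRARY_PATH) search paths.  */
  lt_dlsetsearchpath ("/usr/lib/gcc/i686-pc-linux-gnu/4.3.2/plugins");

  handle = lt_dlopenext (name);   /* tries name.la, name.so, ...  */
  if (handle == NULL)
    fprintf (stderr, "cannot load plugin %s: %s\n", name, lt_dlerror ());
  return handle;
}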

For example:
Say a system has two versions of GCC installed: 4.3.2 and 4.4.3. A user
then wants to install a plugin but does not have the privileges to place
it in $(libsubdir)/plugins/*.so for the specific version of the GCC
compiler. So they build the plugin, say for version 4.3.2, and install it
in their home directory somewhere.

If they then invoke GCC 4.4.3 with -fplugin=edoc, there is a possibility
that GCC 4.4.3 will find the plugin the user built for 4.3.2 and try to
use it (if the user is not careful with their environment, i.e. setting
LTDL_LIBRARY_PATH). This will cause undefined behaviour.

We have a few options:
1) Don't allow additional search paths for plugins
2) Ignore the problem and put it down to the user needing to understand
what they are doing
3) Somehow embed something in all plugins that can safely be queried,
which indicates the version/build of GCC the plugin belongs to, and only
load plugins that match.
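To make option 3 concrete, a hedged sketch (every name below is
hypothetical, not an agreed interface): the plugin bakes in the GCC
version it was built against, and the loader refuses anything else.

#include <ltdl.h>
#include <string.h>

/* In the plugin, generated at build time from GCC's version string: */
const char edoc_built_for_gcc[] = "4.3.2";

/* In GCC, right after lt_dlopenext() succeeds and before any plugin
   code is run: */
static int
plugin_version_ok (lt_dlhandle handle, const char *this_gcc_version)
{
  const char *built_for
    = (const char *) lt_dlsym (handle, "edoc_built_for_gcc");
  return built_for != NULL && strcmp (built_for, this_gcc_version) == 0;
}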

Re: Defining a common plugin machinery

2008-09-19 Thread Ralf Wildenhues
* Brendon Costa wrote on Fri, Sep 19, 2008 at 02:42:19AM CEST:
> What platforms do we want to support? I can think of the following
> categories:

> * Windows (cygwin/mingw)
> As I understand the issue (I am not very familiar with this) you can't
> have unresolved references in a plugin back to the GCC executable, i.e.
> building GCC with -rdynamic will not work on this platform. Do we move
> most of the GCC implementation into a "library/DLL" with a "stub"
> main() that just calls the library implementation? Then both the
> application AND the plugins can link with this library/DLL in a way that
> will work on Windows.
> Or are we going for the quick solution of using -rdynamic and not
> supporting this platform (not my preferred option)?

AFAIK you can fix w32 issues with DEF files.

I would guess putting the bulk of GCC in a library, thus compiling it
with PIC code on most systems, will hurt compiler performance rather
significantly.  Haven't tried it though.

Cheers,
Ralf


volatile structures: Is that a bug?

2008-09-19 Thread Etienne Lorrain
 Hello,

On C structures, for attributes like "const", it is enough to consider
that each field inherits the attribute of the structure.
But for the volatile attribute, is it valid to treat each field as
volatile, as GCC does now?
I mean:
volatile struct { unsigned char a,b,c,d; } volstruct;
void fct (void) { volstruct.a ++; }
Is it valid to optimise to a byte read/increment/write, i.e. not
read and rewrite b, c and d?

I know that it has no consequences on ia32/64 kinds of hardware, where
most of the devices are in the I/O space, but for PPC it is usual
to have devices in memory space - or, for instance, variables which can
only be accessed by 32-bit reads/writes (even if the field is itself
8 bits).
To force 32-bit access on an 8-bit I/O field, is it valid to do:
volatile struct { unsigned char field; unsigned char pad[3]; } io;
void fct (void) { io.field = 43; }

Also, the same question for unions: do the fields of the union
inherit the volatile attribute - or is the union itself volatile,
so that to force 32-bit access you can write:
volatile union { unsigned access32bits; unsigned char field; } io;
void fct (void) { io.field = 43; }

And what happens in C++, where you can write:
struct { volatile unsigned char field; unsigned char pad[3]; } io;
and:
volatile struct { unsigned char field; unsigned char pad[3]; } io;

 Sorry if that question has already been posted; I did not find
the official answer. I was thinking GCC behaved differently a long time
ago. I do not like having to write a lot of #defines to cast everything
to a "volatile int *"...

 Thanks,
 Etienne.





Re: volatile structures: Is that a bug?

2008-09-19 Thread Andrew Haley
Etienne Lorrain wrote:

> On C structures, for attributes like "const", it is enough to consider
> that each field inherits the attribute of the structure.
> But for the volatile attribute, is it valid to treat each field as
> volatile, as GCC does now?

"An object that has volatile-qualified type may be modified in ways
unknown to the implementation or have other unknown side
effects. Therefore any expression referring to such an object shall be
evaluated strictly according to the rules of the abstract machine, as
described in 5.1.2.3."

So, any reference to the object must treat the object as volatile, and
that includes any reference to any part of the object.

Andrew.





Re: volatile structures: Is that a bug?

2008-09-19 Thread Etienne Lorrain
> > On C structures, for attributes like "const", it is enough to
> > consider that each field inherits the attribute of the structure.
> > But for the volatile attribute, is it valid to treat each field as
> > volatile, as GCC does now?
> 
> "An object that has volatile-qualified type may be
> modified in ways unknown to the implementation or
> have other unknown side effects. Therefore any
> expression referring to such an object shall be
> evaluated strictly according to the rules of the abstract
> machine, as described in 5.1.2.3."
> 
> So, any reference to the object must treat the object as
> volatile, and that includes any reference to any part of
> the object.

 If I correctly understand you, GCC is wrong to read a single byte when
the byte is part of a volatile structure - GCC needs to read the complete
structure first and then extract the byte.
 Shall I create another entry in Bugzilla, or is the 3rd comment of
GCC Bugzilla Bug 37135 sufficient?
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37135

 Etienne.






Re: volatile structures: Is that a bug?

2008-09-19 Thread Andrew Haley
Etienne Lorrain wrote:
>>> On C structures, for attributes like "const", it is enough to
>>> consider that each field inherits the attribute of the structure.
>>> But for the volatile attribute, is it valid to treat each field as
>>> volatile, as GCC does now?
>> "An object that has volatile-qualified type may be
>> modified in ways unknown to the implementation or
>> have other unknown side effects. Therefore any
>> expression referring to such an object shall be
>> evaluated strictly according to the rules of the abstract
>> machine, as described in 5.1.2.3."
>>
>> So, any reference to the object must treat the object as
>> volatile, and that includes any reference to any part of
>> the object.
> 
>  If I correctly understand you, GCC is wrong to read a single byte when
> the byte is part of a volatile structure - GCC needs to read the complete
> structure first and then extract the byte.

I didn't say that.  Neither did the standard, by my reckoning.

Please read the standard, in particular the part that follows.

Andrew.


New branch, alias-improvements, to improve representation and optimizers

2008-09-19 Thread Richard Guenther

I have created a new branch, alias-improvements, which will host work
to transition us to a leaner virtual-operand representation and to
fix the fallout in the optimizers.

The branch is maintained by me, patches should be marked with [alias].

Thanks,
Richard.


Re: cpp found limits.h in FIXED_INCLUDE_DIR, but not in STANDARD_INCLUDE_DIR

2008-09-19 Thread Eus
Hi Ho!

On Tue, 9/16/08, "Ian Lance Taylor" <[EMAIL PROTECTED]> wrote:

> HOST-x-TARGET == cross-compiler
> native-HOST == native compiler
> BUILD-build native-HOST == native compiler built by cross-compiler
> BUILD-build HOST-x-TARGET == cross-compiler built by cross-compiler
> 
> The BUILD-build system is of course only relevant when discussing
> building the compiler, and becomes irrelevant once the compiler
> exists.

Thank you very much for the clear explanation.
I was looking for this kind of explanation previously :)

> In this case the end result is an x86-build native-MIPS compiler.
> This requires first building an x86-x-MIPS compiler.  Of course in
> practice it matters whether x86 here is Windows or GNU/Linux; I can't
> remember whether the OP said.

Are you saying that there are two steps involved?

I mean:
1. Building x86-x-MIPS Compiler
2. Building native-MIPS Compiler with the previously built x86-x-MIPS Compiler

> Ian

Best regards,
Eus


  


Re: Defining a common plugin machinery

2008-09-19 Thread Brian Dessent
Ralf Wildenhues wrote:

> > * Windows (cygwin/mingw)
> > As I understand the issue (I am not very familiar with this) you can't
> > have unresolved references in a plugin back to the GCC executable, i.e.
> > building GCC with -rdynamic will not work on this platform. Do we move
> > most of the GCC implementation into a "library/DLL" with a "stub"
> > main() that just calls the library implementation? Then both the
> > application AND the plugins can link with this library/DLL in a way that
> > will work on Windows.
> > Or are we going for the quick solution of using -rdynamic and not
> > supporting this platform (not my preferred option)?
> 
> AFAIK you can fix w32 issues with DEF files.

Well, that is another way to do it.  You don't have to use a .def file
or __declspec(dllexport) markup, you can also use
-Wl,--export-all-symbols which doesn't require any source code
modifications.

But there is another more sinister problem on PE hosts with having the
plugin link to symbols from the executable: this hard codes the name of
the executable in the plugin, as essentially the executable is now a
library to which the plugin is linked.  Obviously this is a deal-breaker
because the plugin needs to be loadable from cc1, cc1plus, f951, etc.
unless you want to link a separate copy of the plugin for each language.

An alternative is to have the plugin resolve the symbols in the
executable at runtime, using the moral equivalent of dlsym.  This also
makes linking the plugin a lot easier because you don't have to worry
about resolving the executable's symbols at link-time (which you would
otherwise have to do with an import library for the executable.) 
However, this makes life more difficult for the writer of the plugin as
they have to deal with all those function pointers.  It is possible to
ameliorate this with macros that set up each function pointer to a
wrapper that looks up the actual function the first time it's called and
then sets the pointer to that address, essentially a poor-man's PLT. 
But with this method you still have to maintain a list of every
function/variable used in order to instantiate a copy of the wrapper
macro for it.
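For what it is worth, a sketch of that wrapper idea (the macro is
invented for illustration and is not an existing GCC mechanism; on a PE
host, GetProcAddress() on the executable's module handle would replace
dlsym()):

#define _GNU_SOURCE             /* for RTLD_DEFAULT */
#include <dlfcn.h>

#define GCC_IMPORT(ret, name, params, args)                         \
  static ret (*name##_ptr) params;                                  \
  static ret name params                                            \
  {                                                                 \
    if (!name##_ptr)  /* first call: look the symbol up, cache it */\
      name##_ptr = (ret (*) params) dlsym (RTLD_DEFAULT, #name);    \
    return name##_ptr args;                                         \
  }

/* One instantiation per imported symbol, e.g. a stand-in for fold(): */
GCC_IMPORT (void *, fold, (void *expr), (expr))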

Moving the compiler itself into a shared library of course avoids all of
the above, but I'm not sure I understand how this would work w.r.t. the
different language backends.  Would the shared library just contain
everything, and somehow just determine its behavior based on whether it
was loaded by cc1 or cc1plus or f951 etc.  Or would there be a separate
shared library for each?  (Which would really be useless and not solve
anything for PE hosts because it would be back to requiring a number of
copies of the plugin, one for each language backend.)

Is it really that far fetched to have the plugin not directly access
anything from the executable's symbol table but instead be passed a
structure that contains a defined set of interfaces and callbacks?  By
using this strategy you also insulate the plugin from the gcc internals,
such that you can define this struct in a separate header that goes in
the plugin SDK, without requiring that the SDK drags in a bunch of
internal crap from the build directory.  With judicious use of callbacks
you can even achieve some degree of plugin binary compatibility.  For
example you have gcc provide function pointers for
getters/setters/walkers/enumerators for internal structures in this
struct that is passed to the plugin.  The plugin is then insulated from
internal representation changes since the callback lives in gcc too and
gets rebuilt when the internal representation changes.
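As a hedged sketch of what such a structure could look like (every name
here is invented; this is not a proposed ABI):

struct gcc_plugin_api
{
  int abi_version;              /* bumped on any incompatible change */

  /* Getters/walkers so the plugin never touches internals directly;
     `void *' stands in for tree to keep the sketch self-contained.  */
  void *(*first_field) (void *record_type);
  void *(*next_field) (void *field);
  const char *(*decl_name) (void *decl);

  /* Pass registration callback.  */
  void (*register_pass_after) (const char *ref_pass_name,
                               void (*execute) (void *fndecl));
};

/* Entry point each plugin exports; GCC calls it with the API table. */
int plugin_init (const struct gcc_plugin_api *api);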

Now, I understand that maintaining a stable binary API for plugins is
probably not the number one priority, i.e. past discussions have
concluded that this is too much work and pain to maintain.  And maybe
that's true.  But I'm just suggesting that you can in fact kill two
birds with one stone here: insulate the plugin from internal
representation details *and* overcome portability problems with non-ELF
hosts.  

Brian


Re: Defining a common plugin machinery

2008-09-19 Thread Kai Henningsen
On Fri, Sep 19, 2008 at 15:30, Brian Dessent <[EMAIL PROTECTED]> wrote:
> Ralf Wildenhues wrote:

> Is it really that far fetched to have the plugin not directly access
> anything from the executable's symbol table but instead be passed a
> structure that contains a defined set of interfaces and callbacks?  By
> using this strategy you also insulate the plugin from the gcc internals,

Too much so. This goes again back to needing to decide in advance what
will be interesting; that's pretty much a deal-breaker, I think.

No, it sounds to me as if linking at runtime is the way to go for PE.
However, that's not as hard as it sounds.

Essentially, *for each plugin*, you compile a list of all the gcc
symbols that *this plugin* needs. You convert this list into a
structure with a long list of pointers and names, which you hand to a
small routine that resolves all of them (via dlsym(),
GetProcAddress(), or whatever); and the plugin uses the stuff via the
pointers in the structure. As the list of symbols needed for that
plugin is both known and relatively stable, the burden from that isn't
too large.

In gcc, you just make all symbols resolvable.

Gcc then loads the plugin, gets a pointer to that structure (via a
well-known symbol name), resolves all symbols therein, and calls some
initialization function so the plugin can register in all relevant
hooks - or one could do even that by putting additional data into that
structure.

And one can also put versioning data therein, so that gcc can check
compatibility before calling the first plugin routine.

Just a quick&dirty mock-up:

struct s_gcc_plugin_info gcc_plugin_info = {
  .version = "4.4.0",
  .hook = {
    { .name = "hook_parse_params", .proc = &my_parse_params },
    [...]
  },
  .link = {
    { .name = "fold", .ptr = 0 },
    [...]
  },
};

and possibly, for convenience,

#define gcc_fold ((prototype_for_fold)(gcc_plugin_info.link[0].ptr))

... gcc_fold(...) ...

All those .name fields are presumably const char *.

And one could presumably write a small utility that, given a list of
names, goes through gcc's headers and constructs that struct and the
convenience defines automatically. If (as is the case here) the
declarations follow a fairly consistent style, this doesn't need a
full C parser.

It seems to me that the overhead, both in runtime and in maintenance,
isn't too bad here.
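The resolver itself could be this small (a sketch only; the struct
mirrors the .link entries of the mock-up above, with a null name
terminating the list):

#define _GNU_SOURCE             /* for RTLD_DEFAULT */
#include <dlfcn.h>
#include <stddef.h>

struct gcc_plugin_link { const char *name; void *ptr; };

static int
resolve_links (struct gcc_plugin_link *link)
{
  int i;
  for (i = 0; link[i].name != NULL; i++)
    {
      link[i].ptr = dlsym (RTLD_DEFAULT, link[i].name);
      if (link[i].ptr == NULL)
        return -1;              /* unresolved symbol: reject the plugin */
    }
  return 0;
}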


Re: Defining a common plugin machinery

2008-09-19 Thread Basile STARYNKEVITCH

Hello All,

(a meta question: do we need to reply-to all, or should the gcc@ list be 
enough to discuss plugins, and the only destination of this interesting 
discussion?).


Brendon Costa wrote:
> Basile STARYNKEVITCH wrote:
>> But what about the trunk, which uses tuples?
>
> I know nothing about the tuples and the trunk and whether tuples might
> affect the way plugins work. Maybe others can comment...
>
> To be honest I was more worried about how plugins will work with the
> garbage collector and pre-compiled headers... I assume there is no
> problem with this though, since there are already a number of existing
> plugin frameworks that people report are working.


The MELT plugin is a bit specific about GGC, since in practice all the
gengtype-d code is inside gcc/basilys.c, which is not a plugin itself
(but which, among other important things, deals with garbage collection
& dlopen-ing). I could imagine that the other plugin experiments did
similar stuff (meaning that they did not touch gcc/ggc-common.h or the
gengtype generator).


I'm not sure that pre-compiled headers (PCH) should even work with
plugins. The reasonable solution is to disable them when any plugin is
used. (What should happen if plugins are not used when the PCH is
generated but used when it is reloaded, or vice versa? I don't know if
this situation is hard to detect. Likewise, what should happen when the
compilation of the header & the compilation of the source including the
PCH each dlopen-ed a different variant of the "same" plugin?) At least,
having PCH work with plugins should be postponed until after the first
approved (i.e. merged into the trunk) plugin implementation.


A generic plugin framework should certainly cooperate with GGC (the
garbage collector inside GCC). From what I understand (but I didn't try
it and might have forgotten important details) it should not be very
hard (assuming we do not care about dlclose-ing plugins); one should:
 1. enhance the function ggc_mark_roots in gcc/ggc-common.c so that it
marks the GGC roots defined in the plugin. If the plugin is itself a
single PLUGIN.c source processed by gengtype, this means calling the
marking routines inside the gt_ggc_r_gt_PLUGIN_h array of ggc_root_tab
structures generated by gengtype & implemented in the generated
gt-PLUGIN.h file. Perhaps the plugin initialization routine (inside the
dlopen-ed plugin) should explicitly register some stuff with GGC (a
sketch follows this list).
 2. more generally, enhance the code so that the equivalent of 
gtype-c.h works with the plugins.
 3. enhance the gengtype generator itself to be able to process a 
plugin itself. This is perhaps more tricky since gengtype currently gets 
its list of files to be processed in a very special (brittle & built-in) 
way - it is not simple to pass the list of files as a program argument 
to gengtype, currently it is generated at gengtype build time.
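A sketch of point 1, under two loud assumptions - that gengtype has been
taught to emit gt-PLUGIN.h for the plugin, and that GCC grows a root
registration entry point; neither exists today:

#include "gt-PLUGIN.h"          /* would define gt_ggc_r_gt_PLUGIN_h */

/* Hypothetical addition to gcc/ggc-common.c, appending the table to
   the list that ggc_mark_roots walks: */
extern void ggc_register_plugin_roots (const struct ggc_root_tab *tab);

/* Called from the plugin's initialization routine after dlopen: */
void
plugin_register_ggc_roots (void)
{
  ggc_register_plugin_roots (gt_ggc_r_gt_PLUGIN_h);
}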


We should be careful about what happens when several plugins are loaded.

So there are some issues with GGC & plugins, but they should not be
very hard to solve (even if I don't claim it is very easy). I find the
issue of pass management & plugins more serious (because there is more
consensus to reach on what should be done).


Regards.
--
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basile<at>starynkevitch<dot>net mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mines, sont seulement les miennes} ***


gcc 4.3.0, -Wconversion against -O1

2008-09-19 Thread Andriy Gapon


There is a very simple function:
/***** func.c *****/
#include <netinet/in.h>

uint16_t my_htons(uint16_t hostshort)
{
return htons(hostshort);
}
/******************/

$ cc func.c -Wconversion -Werror -c -o func.o
Exit 0

$ cc func.c -O1 -Wconversion -Werror -c -o func.o
cc1: warnings being treated as errors
func.c: In function ‘my_htons’:
func.c:5: error: conversion to ‘short unsigned int’ from ‘int’ may alter 
its value

Exit 1

Adding the -E flag I get the following code:
...
# 2 "func.c" 2

uint16_t my_htons(uint16_t hostshort)
{
 return (__extension__ ({ register unsigned short int __v, __x =
(hostshort); if (__builtin_constant_p (__x)) __v = ((((__x) >> 8) &
0xff) | (((__x) & 0xff) << 8)); else __asm__ ("rorw $8, %w0" : "=r"
(__v) : "0" (__x) : "cc"); __v; }));

}


Is there anything (smart) that either I or gcc or glibc/linux can do 
about this?

No type-casts in *my* code help.

The warning seems to be uncalled for in this case, because I don't see 
explicit 'int' type anywhere in the expanded code.


P.S. the code seems to be coming from /usr/include/bits/byteswap.h, 
__bswap_16() macro in this way:


=== netinet/in.h ===
#ifdef __OPTIMIZE__
/* We can optimize calls to the conversion functions.  Either nothing has
   to be done or we are using directly the byte-swapping functions which
   often can be inlined.  */
# if __BYTE_ORDER == __BIG_ENDIAN
/* The host byte order is the same as network byte order,
   so these functions are all just identity.  */
# define ntohl(x)   (x)
# define ntohs(x)   (x)
# define htonl(x)   (x)
# define htons(x)   (x)
# else
#  if __BYTE_ORDER == __LITTLE_ENDIAN
#   define ntohl(x) __bswap_32 (x)
#   define ntohs(x) __bswap_16 (x)
#   define htonl(x) __bswap_32 (x)
#   define htons(x) __bswap_16 (x)
#  endif
# endif
#endif

--
Andriy Gapon


Re: gcc 4.3.0, -Wconversion against -O1

2008-09-19 Thread Richard Guenther
On Fri, Sep 19, 2008 at 5:03 PM, Andriy Gapon <[EMAIL PROTECTED]> wrote:
>
> There is a very simple function:
> /***** func.c *****/
> #include <netinet/in.h>
>
> uint16_t my_htons(uint16_t hostshort)
> {
>return htons(hostshort);
> }
> /******************/
>
> $ cc func.c -Wconversion -Werror -c -o func.o
> Exit 0
>
> $ cc func.c -O1 -Wconversion -Werror -c -o func.o
> cc1: warnings being treated as errors
> func.c: In function 'my_htons':
> func.c:5: error: conversion to 'short unsigned int' from 'int' may alter its
> value
> Exit 1
>
> Adding the -E flag I get the following code:
> ...
> # 2 "func.c" 2
>
> uint16_t my_htons(uint16_t hostshort)
> {
>  return (__extension__ ({ register unsigned short int __v, __x =
> (hostshort); if (__builtin_constant_p (__x)) __v = ((((__x) >> 8) & 0xff) |
> (((__x) & 0xff) << 8)); else __asm__ ("rorw $8, %w0" : "=r" (__v) : "0"
> (__x) : "cc"); __v; }));

The computation ((((__x) >> 8) & 0xff) | (((__x) & 0xff) << 8)) is promoted
to int under the C standard's promotion rules, and the store to __v
truncates it.

The warning isn't clever enough to see that this doesn't happen due to
the masking applied, and in general there are always cases that would
slip through.
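A stripped-down illustration of the same effect (mine, not from the
original mail; GCC 4.3's -Wconversion flags it even though the masking
keeps the value in range):

/* The shift/mask expression is computed in int after the usual
   arithmetic conversions; the assignment back to a 16-bit variable is
   the int -> unsigned short truncation being warned about.  */
unsigned short
swap16 (unsigned short x)
{
  unsigned short v;
  v = ((x >> 8) & 0xff) | ((x & 0xff) << 8);    /* RHS has type int */
  return v;
}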

Richard.


gcc 4.3.0, -Wconversion and conditional operator

2008-09-19 Thread Andriy Gapon


void f(int x)
{
char c = x ? '|' : '/';
}


$ cc1: warnings being treated as errors
char.c: In function 'f':
char.c:3: error: conversion to 'char' from 'int' may alter its value
Exit 1

If I replace 'x' with a constant (0 or 1) in the condition, then the 
code compiles.

I think gcc should be smarter here.

--
Andriy Gapon


Re: Defining a common plugin machinery

2008-09-19 Thread Ian Lance Taylor
Brian Dessent <[EMAIL PROTECTED]> writes:

> Is it really that far fetched to have the plugin not directly access
> anything from the executable's symbol table but instead be passed a
> structure that contains a defined set of interfaces and callbacks?

Yes, that is pretty far fetched.  Simply writing the list of callbacks
would be very painful.  Keeping that list of callbacks working for
more than one release would be unbearable pain.

Perhaps someday, with much more experience of plugins, this will be
possible, but I don't see any real possibility of starting out that
way.

Ian


Re: Defining a common plugin machinery

2008-09-19 Thread Brian Dessent
Basile STARYNKEVITCH wrote:

> (a meta question: do we need to reply-to all, or should the gcc@ list be
> enough to discuss plugins, and the only destination of this interesting
> discussion?).

Reply-to-all is the common standard on the list.  If you don't want a
personal copy then set the "Reply-to:" header in your message to the
list address and you won't be included in CC by most email clients when
someone does reply-to-all.

> I'm not sure that pre-compiled headers (pch) should even work with
> plugins. The reasonable solution is to disable them when any plugin is
> used (what should happen if plugins are not used when the pch is
> generated, but used when it is reloaded or viceversa; I don't know if
> this situation is hard to detect. Likewise, what should happen when the
> compilation of the header & the compilation of the source including the
> pch both dlopen-ed a different variant of the "same" plugin?). At least,
> having pch working with plugins should be postponed after the first
> approved (i.e. merged into the trunk) plugin imlementation.

There's already an existing requirement that the compiler options used
when precompiling the .gch must match or be compatible with those used
when using the .gch, so in essence the above is already forbidden by the
fact that adding or removing -fplugin= would render the .gch unusable. 
I seem to recall that the .gch file records the entire set of options
used when it was built and gcc refuses to consider the file if it does
not exactly match, however the documentation seems to imply that only
certain options are automatically checked/rejected while others are the
responsibility of the user to enforce:
.

Brian


Re: gcc 4.3.0, -Wconversion against -O1

2008-09-19 Thread Manuel López-Ibáñez
2008/9/19 Richard Guenther <[EMAIL PROTECTED]>:
>
> The computation __x) >> 8) & 0xff) |  (((__x) & 0xff) << 8)) is promoted
> to int as of C language standards rules and the store to __v truncates it.
>
> The warning isn't clever enough to see that this doesn't happen due to
> the masking applied, and in general there are always cases that would
> slip through.

-Wconversion had some fixes in this area during GCC 4.4 (and I think
there is still at least one pending patch). Could you re-test with a
recent trunk snapshot? I cannot do it myself.

Cheers,

Manuel.


Re: gcc 4.3.0, -Wconversion and conditional operator

2008-09-19 Thread Manuel López-Ibáñez
2008/9/19 Andriy Gapon <[EMAIL PROTECTED]>:
>
> If I replace 'x' with a constant (0 or 1) in the condition, then the code
> compiles.
> I think gcc should be smarter here.

This should be fixed in any recent GCC 4.4. snapshot. But I cannot
test it myself to be sure, if someone would be so kind to check this
for me, I would appreciate it.

Cheers,

Manuel.


C/C++ FEs: Do we really need three char_type_nodes?

2008-09-19 Thread Diego Novillo
When we instantiate char_type_node in tree.c:build_common_tree_nodes
we very explicitly create a char_type_node that is signed or unsigned
based on the value of -funsigned-char, but instead of making
char_type_node point to signed_char_type_node or
unsigned_char_type_node, we explicitly instantiate a different type.

This is causing trouble in lto1 because when we stream out gimple,
assignments between char and unsigned char go through without a cast.
However, the LTO front end has no knowledge of the C/C++ FE flags, so
it creates a signed char_type_node.  This causes a SEGV in PRE because
the assignment between char and unsigned char confuses it.

This is trivially fixable by making char_type_node be 'unsigned char',
so that when we stream out the types for the variables the types are
explicitly signed or unsigned, instead of taking its sign implicitly
from an FE flag.

@@ -7674,12 +7713,8 @@ build_common_tree_nodes (bool signed_cha
   unsigned_char_type_node = make_unsigned_type (CHAR_TYPE_SIZE);
   TYPE_STRING_FLAG (unsigned_char_type_node) = 1;

-  /* Define `char', which is like either `signed char' or `unsigned char'
-     but not the same as either.  */
-  char_type_node
-    = (signed_char
-       ? make_signed_type (CHAR_TYPE_SIZE)
-       : make_unsigned_type (CHAR_TYPE_SIZE));
+  /* Define `char', which is like either `signed char' or `unsigned char'.  */
+  char_type_node
+    = signed_char ? signed_char_type_node : unsigned_char_type_node;
   TYPE_STRING_FLAG (char_type_node) = 1;

   short_integer_type_node = make_signed_type (SHORT_TYPE_SIZE);

However, this has other side effects in the FE, as we now can't build
libstdc++ (a PCH failure).  So, do we really need a third, distinct,
char_type_node?  Is there some language or FE rule that causes us to
do this?  The comment in build_common_tree_nodes is not very
illuminating.

I could fix this in a slower way by crawling through the whole
callgraph and changing 'char' to 'unsigned char' everywhere once the
whole compilation unit is in GIMPLE form, but I'd like to consider
doing that only if it's really necessary.


Thanks.  Diego.


Re: improving testsuite runtime

2008-09-19 Thread Tom Tromey
> "Ben" == Ben Elliston <[EMAIL PROTECTED]> writes:

Ben> Do you think that the current order of .exps should be preserved
Ben> in the resultant .sum and .logs?

I personally don't have a use for this.

I just think that the order ought to be stable across checks of the
same build.

Tom


Re: C/C++ FEs: Do we really need three char_type_nodes?

2008-09-19 Thread Andrew Thomas Pinski
Yes, char, unsigned char and signed char are three distinct types,
unlike the other integer types in C/C++. That is, they are
incompatible types.


Thanks,
Andrew Pinski

Sent from my iPhone



Re: C/C++ FEs: Do we really need three char_type_nodes?

2008-09-19 Thread Jakub Jelinek
On Fri, Sep 19, 2008 at 12:36:12PM -0400, Diego Novillo wrote:
> When we instantiate char_type_node in tree.c:build_common_tree_nodes
> we very explicitly create a char_type_node that is signed or unsigned
> based on the value of -funsigned-char, but instead of making
> char_type_node point to signed_char_type_node or
> unsigned_char_type_node, we explicitly instantiate a different type.

C++ e.g. requires that char (c) is mangled differently from unsigned char
(h) and signed char (a), it is a distinct type.
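A two-line illustration of that distinctness in plain C:

/* char is a third type, distinct from both signed char and unsigned
   char, so both initializations below draw pointer-compatibility
   warnings whatever the signedness of char is on the target.  */
char c = 0;
signed char *sp = &c;           /* incompatible pointer target type */
unsigned char *up = &c;         /* incompatible pointer target type */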

Jakub


Re: C/C++ FEs: Do we really need three char_type_nodes?

2008-09-19 Thread Diego Novillo
On Fri, Sep 19, 2008 at 12:55, Jakub Jelinek <[EMAIL PROTECTED]> wrote:
> On Fri, Sep 19, 2008 at 12:36:12PM -0400, Diego Novillo wrote:
>> When we instantiate char_type_node in tree.c:build_common_tree_nodes
>> we very explicitly create a char_type_node that is signed or unsigned
>> based on the value of -funsigned-char, but instead of making
>> char_type_node point to signed_char_type_node or
>> unsigned_char_type_node, we explicitly instantiate a different type.
>
> C++ e.g. requires that char (c) is mangled differently from unsigned char
> (h) and signed char (a), it is a distinct type.

Thanks, that answers my question.


Diego.


Re: Defining a common plugin machinery

2008-09-19 Thread Basile STARYNKEVITCH

Brian Dessent wrote:
> Basile STARYNKEVITCH wrote:
>> I'm not sure that pre-compiled headers (pch) should even work with
>> plugins. The reasonable solution is to disable them when any plugin is
>> used [...]
>
> There's already an existing requirement that the compiler options used
> when precompiling the .gch must match or be compatible with those used
> when using the .gch, so in essence the above is already forbidden by the
> fact that adding or removing -fplugin= would render the .gch unusable.
> I seem to recall that the .gch file records the entire set of options
> used when it was built and gcc refuses to consider the file if it does
> not exactly match, however the documentation seems to imply that only
> certain options are automatically checked/rejected while others are the
> responsibility of the user to enforce:
> .


In my opinion we should, in the first version of plugin code inside the
trunk, disable pre-compiled headers when using any plugin. Notice that
PCH requires specific support by GGC (probably in gcc/ggc-common.c),
hence additional processing for plugins, more than what I mentioned in
http://gcc.gnu.org/ml/gcc/2008-09/msg00342.html


Only when plugins are actually usable, and when we get real practical 
experience on them, should we tackle PCH. We should not bother before.


And my understanding was that PCH is essentially an experimental, not
very satisfactory (but definitely useful to users) feature. I even
thought that some people are interested in working on a way to replace
it with something neater (IIRC, this was informally discussed at the 2008
GCC summit; I forget by whom...).




I am much more worried about passes and plugins (and I am very surprised
to be almost the only one mentioning passes in plugin discussions). I
feel it is a major issue (not a matter of coding, much more a matter of
convention & understanding). So far, I have (in the MELT branch) a very
unsatisfactory solution: I defined a few passes, which may (or may not)
be enabled by some plugins. What I would dream of is some machinery to be
able to have the plugin ask 'insert my pass foo_pass after the last pass
which constructs the IPA SSA tuple representation', and this is not
achievable today.


From what I understand, the ICI http://gcc-ici.sourceforge.net/ effort
is tackling related issues.



Actually, I believe that passes should be described in *much* more detail, and I mean:

  * each pass should be documented by a paragraph like the beginning
comment of alias.c, cse.c, df-core.c, dse.c, gcse.c, ipa-cp.c, ipa-inline.c,
ipa-type-escape.c, omp-low.c, tree-cfg.c, tree-inline.c, tree-mudflap.c,
tree-nrv.c or tree-optimize.c... I was suggesting in
http://gcc.gnu.org/ml/gcc/2008-09/msg00177.html that each pass *should*
have such a description (less than a hundred English words each), and was
even dreaming of some simplistic comment-marking convention (perhaps
texinfo markup inside a comment) and some simple (e.g. awk) script to
extract these documentary comments from the source code and merge them
into a generated pass-internal-doc.texi file. The issue here is not
technical [writing such an awk script is simple] but social [have each
pass developer document his/her pass a little in some 'standard' format].
And such a crude device (a script merging comments) would help us have
more documentation.


  * each pass should have a unique name. Note that I submitted a patch
http://gcc.gnu.org/ml/gcc-patches/2008-08/msg00057.html which has been
accepted & committed to the trunk (on August 1st 2008) and which permits
passes to be named without having any dump. A possible minor issue is
that several test suites (I have no idea which ones) probably depend on
pass names. This is much more a social issue (have each pass developer
be concerned with naming his pass uniquely) than a technical one (giving
a unique name to every pass is realistically possible only for the elite
few who understand nearly all of GCC; they are very few, and they don't
need any documentation anymore). Each pass developer should, without
major effort, be able to document his pass in one paragraph and name it
uniquely. In contrast, having a single (or a few) person doing all this
work is not realistic.


  * each major internal GCC representation should be more or less
documented (this is probably the case today: tuples, GENERIC trees and
RTL have documentation, even if it is imperfect), and each pass should
tell what representation it changes, invalidates, destroys, or provides
(some but not all of this is already available in struct opt_pass through
flags, e.g. properties_required, todo_flags_start, ...).


I believe that a major (more social than technical) issue about plugins
(assuming almost all of them implement some kind of pass) is:


  * where should this plugin be inserted (after which pass, before
which pass)
  * what kind of representations this plugin consumes, provides,
enhances, invalidates

Re: Defining a common plugin machinery

2008-09-19 Thread Joseph S. Myers
On Fri, 19 Sep 2008, Brendon Costa wrote:

> 1) Dont allow additional search paths for plugins

I think we do want to allow the user to specify such a path on the 
compiler command line, in addition to searching the standard libsubdir or 
libexecsubdir location.

> 3) Somehow embed something in all plugins that can safely be queried,
> which indicates the version/build of GCC the plugin belongs to and only
> load plugins that match.

I think the required GCC version should be exported by the plugin - in 
addition to the plugin's own version and bug-reporting URL which should go 
in the --version and --help output if those options are used together with 
plugin-loading options.  It should also export the configured target 
triplet since plugins built for different targets on the same host can't 
be expected to be compatible (they may use target macros internally).
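For concreteness, a sketch of what such an exported block might contain
(the struct and all values are illustrative only; no such interface
exists):

struct plugin_identity
{
  const char *built_for_gcc;    /* full version, e.g. "4.5.0" */
  const char *target_triplet;   /* e.g. "x86_64-unknown-linux-gnu" */
  const char *plugin_version;   /* for --version output */
  const char *bug_report_url;   /* for --help output */
};

/* Well-known symbol the compiler would look up and check before
   running any other plugin code: */
const struct plugin_identity plugin_identity_block =
{
  "4.5.0", "x86_64-unknown-linux-gnu", "0.1", "http://example.org/bugs"
};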

We should make it clear that e.g. 4.5.0 and 4.5.1 are not necessarily 
compatible for plugins; that's an inevitable consequence of exporting all 
the GCC-internal interfaces.  Thus the full version number is what should 
be checked, rather than trying to keep interfaces stable within e.g. the 
4.5 series.  But we do want plugins to be usable from both cc1 and 
cc1plus, etc., unless they use front-end-internal interfaces (which some 
may wish to do, at the expense of only working with one language).

> Do we say for the first version we use -rdynamic and don't support
> windows (will need build machinery to disable all this on windows if we
> do) or do we add the export macros and library and support the windows
> platform from the beginning (Are there any other platforms with quirks
> not covered here)?

There are many supported non-ELF hosts with their own peculiarities - not 
just Windows - Darwin (Mach-O), HP-UX on 32-bit PA (SOM), older BSD 
versions (a.out), Tru64, VMS, DJGPP, AIX.  I think it's very likely the 
support will initially be optional and only supported on a subset of hosts 
- but with nothing in the core compiler moved to plugins this should not 
cause any problems with the invariant of generating the same code for a 
target independent of host.

-- 
Joseph S. Myers
[EMAIL PROTECTED]


Re: Defining a common plugin machinery

2008-09-19 Thread Taras

Brendon Costa wrote:
> Joseph S. Myers wrote:
>> I think this is a bad idea on grounds of predictability.
>
> I can understand this view and was initially reluctant to suggest the
> auto-load feature for this same reason. However I can not see another
> solution that can be used instead of this to achieve simple usability
> for a small subset of plugin types (described below).
>
>> Separately developed plugins have many uses for doing things specific to
>> particular projects
>
> This is the case for the Dehydra, which is specific to the Mozilla
> projects. However the plugin I would like to develop is NOT designed
> to be used with a specific project, but to be used while compiling any
> project.
Sorry to nitpick, but there is nothing Mozilla-specific in 
dehydra/treehydra. There are users outside of Mozilla.
However, I do think it's awesome to be able to store plugin results in 
the resulting binary to do LTO-style analyses. How well is that working 
for you?


Taras


Re: Defining a common plugin machinery

2008-09-19 Thread Basile STARYNKEVITCH

Taras wrote:
> However, I do think it's awesome to be able to store plugin results in
> the resulting binary to do LTO-style analyses. How well is that working
> for you?


I would be delighted if that were easily possible, but I remember
having asked at some LTO-related session or meeting whether LTO could be
extended to add some additional data, and the answer was "we don't want
to do that".




--
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basile<at>starynkevitch<dot>net mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mines, sont seulement les miennes} ***


Re: Defining a common plugin machinery

2008-09-19 Thread Taras

Brendon Costa wrote:
> * Automatically loaded plugins as well as explicit user requested plugins
> I would like to propose that we allow automatic loading of certain
> plugins in addition to the explicit request for loading of plugins using
> the -fplugin= command line option.

I think we agree that autoloading plugins is not an immediate need and
can be worked out at a later stage.

> --
> What should we export?

I think exporting everything is the way to go. We sidestepped the win32
issue by using a mingw cross-compiler.
There are a couple of static functions in GCC that are useful for
plugins; it would be nice to make them global symbols so each plugin
doesn't have to copy them.

> --
> What should be the user interface to incorporate plugin code?
>
> It has been proposed that we use:
> -fplugin=<name> -fplugin-arg=<arg>
>
> I propose that we use a slightly different format:
> g++ -fplugin=<name> -f<name>-<option>[=value]
>
> for example:
> g++ -fplugin=edoc -fedoc-file=blah.edc
>
> My reasons for this come from the following cases:
> 1) multiple plugins with a single invocation
> 2) key/value arguments required for plugins
> 3) plugins that may want to be loaded automatically
>
> 1) This is not a big issue; I just want clarity in the behavior.
> Say we want to use two plugins in the same invocation. This might look like:
>
> Original: g++ -fplugin=edoc -fplugin-arg=embed -fplugin=optim
> -fplugin-arg=max
> New: g++ -fplugin=edoc -fedoc-embed -fplugin=optim -foptim-max
>
> If using the original method, the order of the specification matters;
> with the new method it does not.

I don't feel that it's worth the hassle for GCC to support multiple
plugins. The thought of GCC loading multiple plugins that would step
on each other's toes, leading to all kinds of unexpected ICEs, is
unpleasant.
My proposal is to write a multiplexor plugin to make it possible to load
subplugins. This would help keep the plugin part of GCC minimal. Having
said that, I initially came up with the idea of a multiplexor because I
wasn't sure how to deal with command-line arguments cleanly. I like your
proposal of -fplugin/-fplugin-arg pairs. It is also possible to
introduce elaborate command-line argument extensions via plugins :).

> 2) With my EDoc++ project, one of the arguments I look for has a format
> like: -fedoc-file=blah.edc. What would this look like with the
> -fplugin-arg= format?
>
> Possibilities with the original format:
> g++ -fplugin=edoc -fplugin-arg=file:blah.edc
> g++ -fplugin=edoc -fplugin-arg-file=blah.edc
> g++ -fplugin=edoc -fplugin-arg=file=blah.edc
> g++ -fplugin=edoc -fplugin-key=file -fplugin-value=blah.edc

In Mozilla we have a getopt-style parser in the javascript part of the
plugin that parses -fplugin-arg="--long bunches -of arguments"
getopt-style. That seems to be working well.

> --
> At what point in the compilation stage should it kick in?
>
> I think plugins should be loaded as soon as possible. I would propose
> loading plugins immediately after the processing of command line
> arguments, or maybe even at the same time as command line argument
> processing, depending on what might be easier/cleaner.

Loading plugins ASAP is the way to go.

> --
> Some extra questions.
> --
> What platforms do we want to support? I can think of the following
> categories:

I think it's fine for the initial revision to only support "easy"
platforms, i.e. ones with -ldl/-rdynamic.

> --
> How do we search for plugins and make sure we don't load incompatible ones?

We force users to use absolute paths. It's not ideal.

> --
> How/where do we install headers and the library for external plugin
> projects to use?

Currently my plugins are built by specifying the location of the GCC
source tree via ./configure. This might be a little inconvenient, but if
people are going to be developing plugins, they had better be able to
build gcc. In fact I think it would be helpful to not bother with plugin
headers/etc., so plugin developers would be encouraged to expand their
scope to hacking gcc as well.
In our case, plugin users do not need the sources; they just need a
binary for the compiler and the plugin, which can be provided by their
package manager.


Cheers,
Taras


Re: Defining a common plugin machinery

2008-09-19 Thread Diego Novillo
On Fri, Sep 19, 2008 at 13:16, Basile STARYNKEVITCH
<[EMAIL PROTECTED]> wrote:

> I would be delighted if that would be easily possible, but I remember having
> asked at some LTO related session or meeting if LTO can be extended to add
> some additional data, and the answer was "we don't want to do that".

I have no memory of that.  But in principle, I don't see why this
couldn't be done by either storing extended statement attributes or a
new section in the LTO output.  Of course, the plugin would have to be
responsible for producing and consuming this data.


Diego.


Re: improving testsuite runtime

2008-09-19 Thread Joern Rennecke
> I think 'make -j' is the way to go, since it lets the user easily
> control the amount of parallelism.

As I said before, make -j is a complete non-starter for me, as it restricts
the parallelism to a single machine and thus would actually reduce the
parallelism from what I have now with multilibs.

Perhaps we can accommodate the different requirements by having a
set of independent make targets which are suitable for simultaneous
execution on different hosts using the same file system, and one rule
to bind them all together and do the reporting for the make -j adherents.
For independent execution, the reporting job would have to be run explicitly
after the individual test jobs have all completed.

What is important for independent execution is that there is no hidden
dependency.  E.g. when starting to use the multilibbed targets, as in

  for check in check-g{cc,++}//arc700-sim/; do
..
  for tar in 
{-mARC600/-mmul64/-mnorm,-mARC600,-mARC700,-mARC600/-mmul32x16/-mnorm}{,/-EB}; 
do
lsrun -m \$1 make $check\$tar &
shift
  done
..
I found that it is essential that, after building gcc and the libraries,
I also explicitly build site.exp before I farm out the tests, as
otherwise up to 17 make processes simultaneously start to write site.exp,
which generally does not produce a viable site.exp file.


Re: gcc 4.3.0, -Wconversion and conditional operator

2008-09-19 Thread Gerald Pfeifer
On Fri, 19 Sep 2008, Manuel López-Ibáñez wrote:
> This should be fixed in any recent GCC 4.4. snapshot. But I cannot
> test it myself to be sure, if someone would be so kind to check this
> for me, I would appreciate it.

Confirmed on i386-unknown-freebsd6.3 using sources as of 20 hours ago.

Gerald

ia64/ia32 Instruction Description

2008-09-19 Thread JD

Hello all,
  I am looking for a boiled down version of the ia64/32 instruction set 
which I could extract instruction/flag dependencies from. As a last 
resort I will use the Intel Software Developer PDFs. Based on my 
understanding, the ia64.md and i386.md don't exactly give this information.


As an example, I would like to obtain the following information:

instruction format, input condition codes/flags, altered codes/flags

Cheers,
JD


Re: Defining a common plugin machinery

2008-09-19 Thread Ian Lance Taylor
Basile STARYNKEVITCH <[EMAIL PROTECTED]> writes:

> I am much more worried about passes and plugins (and I am very
> surprised to be almost the only one mentioning passes in plugin
> discussions). I feel it is a major issue (not a matter of coding, much
> more a matter of convention & understanding). So far, I have (in MELT
> branch) a very unsatisfactory bad solution. I defined a few passes,
> which may (or may not) be enabled by some plugins. What I would dream of
> is some machinery to be able to have the plugin ask 'insert my pass
> foo_pass after the last pass which constructs the IPA SSA tuple
> representation' and this is not achievable today.

I think we also need that for general backend use, not only for
plugins.  E.g., bt-load.c should move to config/sh, and at startup
time the SH backend should register the pass with the pass manager.

Ian


Re: improving testsuite runtime

2008-09-19 Thread Janis Johnson
On Fri, 2008-09-19 at 18:32 +0100, Joern Rennecke wrote:
> > I think 'make -j' is the way to go, since it lets the user easily
> > control the amount of parallelism.
> 
> As I said before, make -j is a complete non-starter for me, as it restricts
> the paralelism to a single machine and thus would actually reduce the
> parallelism from what I have now with multilibs.

Would it work for you to have a check-init target to set up site.exp
and whatever else might be needed, a check-fini target to wrap up
the results, and multiple targets that you can invoke separately in
between those?  A top-level "make check" would do the start-up stuff,
run lots of check targets in parallel, and then do the wrap-up at the
end, perhaps after running the resource hogs sequentially.

Janis



m32c: pointer math vs sizetype again

2008-09-19 Thread DJ Delorie

m32c-elf-gcc -mcpu=m32c (16 bit ints, 24 bit pointers) miscompiles
this:

int *foo (int *a, int b)
{
  return a-b;
}

as this:

_foo:
	enter	#0		; 30	prologue_enter_24
	pushm	r1,r3,a0	; 31	pushm
	; end of prologue	; 32	prologue_end
	mov.w	12[fb],r0	; 27	movhi_op/1
	sha.w	#1,r0		; 7	ashlhi3_i/1
	neg.w	r0		; 8	neghi2/1
	mov.w	r0,a0		; 10	zero_extendhipsi2
	mov.l	8[fb],r3r1	; 28	movpsi_op/2
	add.l	a0,r3r1		; 16	addpsi3/3
	mov.l	r3r1,mem0	; 29	movpsi_op/2
	; start of epilogue	; 35	epilogue_start
	popm	r1,r3,a0	; 36	popm
	exitd			; 37	epilogue_exitd_24

The key instructions are - neg, zero_extend, add.  This breaks if the
original value is, say, 2.  Neg gives 0xfffe, zero_extend gives
0x00fffe, and you end up adding 65534 to the pointer.
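The wrong extension is easy to reproduce in plain C (host-side
arithmetic mimicking the 16-bit offset / 24-bit pointer situation, not
m32c code):

#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  uint16_t b = 2;
  uint16_t neg = (uint16_t) -b;             /* 0xfffe */
  uint32_t zext = neg;                      /* 0x0000fffe: offset lost */
  uint32_t sext = (uint32_t) (int16_t) neg; /* 0xfffffffe: what's needed */
  printf ("%#x %#x\n", zext, sext);
  return 0;
}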

If I change sizetype to "long unsigned int", it's bigger than a
pointer, and the front end leaves an unexpected nop_convert in various
expressions, which causes ICEs.

There is no standard integer type the same size as pointers (24 bit,
PSImode).  IIRC, last time I tried to shoehorn in a PSImode sizetype,
gcc wanted a full set of math operators for it, which the chip doesn't
have (they have to be done in HI or SI mode).

I tried making sizetype signed; that ICEd too.

What's the right thing to do?

If the front end either sign extended, or had a POINTER_MINUS_EXPR,
things would just work.

DJ


Re: (Side topic EDoc++ binary embedding)

2008-09-19 Thread Brendon Costa
> Sorry to nitpick, but there is nothing Mozilla-specific in
> dehydra/treehydra. There are users outside of Mozilla.

Sorry, I didn't realise this.

> However, I do think it's awesome to be able to store plugin results in
> the resulting binary to do LTO-style analyses. How well is that working
> for you?

It makes life a WHOLE lot easier.

For example, say a project compiles a lot of object files into a few
libraries:
libstuff1.a contains: O01.o O02.o O03.o O04.o ...
libstuff2.so contains: O11.o O12.o O13.o O14.o ...
then main.o is linked with libstuff1.a and libstuff2.so into main.exe

The result of the way the linking works is that main.exe contains
all the embedded data from the objects that were used from libstuff1
(O01.o ...) and from main.o.

My application then uses ldd to find any shared libs (libstuff2.so) and
adds the data from those too.

So regardless of how complex the build/link procedure is to generate an
application binary: main.exe. To look at the embedded data for that
main.exe is simply a matter of:

edoc main.exe --format simple

It saves having to manage extra files in the build system. The other
option I have is to define --edoc-dir=/home/me/blah, where /home/me/blah
is some absolute directory outside the build. The data is then placed in
separate files, one per object file, in that directory for everything
that is built. Again this results in usage similar to:

edoc /home/me/blah/ --format simple

I am currently not using the modified GCC to do the LTO-like analysis. I
have brought that out into a separate post-compilation tool, as it was
just easier to code that way.

To be honest the idea of embedding the data into the binary came from
someone else on this list.

Brendon.


Re: no symbol in current context problem when debug the program in gdb

2008-09-19 Thread Peng Yu
On Mon, Sep 15, 2008 at 2:54 PM, Peng Yu <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I have the following program. When I step into test's constructor, I
> should be able to print the variable three. It says
> (gdb) n
> 7 T three = 3;
> (gdb) n
> 8 std::cout << three << std::endl;
> (gdb) p three
> No symbol "three" in current context.
>
> According to gdb mailing list, this is a bug in GCC. I'm wondering if
> this issue has been resolved in the later versions of GCC.
>
> Thanks,
> Peng
>
> #include <iostream>
>
> template <typename T>
> class test {
>  public:
>   test(const T &a) : _a(a) {
> T three = 3;
> std::cout << three << std::endl;
>   }
>  private:
>   T _a;
> };
>
> int main() {
>  test<int> p(10);
> }


Can somebody take a look at this issue? As installing a new compiler
takes a lot of effort, I'd like to know if this has been solved in the
newer version of gcc. If this has not been solved in the newer version
of gcc, can somebody put this thing in the schedule?

Thanks,
Peng


Re: Defining a common plugin machinery

2008-09-19 Thread Chris Lattner


On Sep 19, 2008, at 3:25 PM, Ian Lance Taylor wrote:
> Basile STARYNKEVITCH <[EMAIL PROTECTED]> writes:
>
>> I am much more worried about passes and plugins (and I am very
>> surprised to be almost the only one mentioning passes in plugin
>> discussions). I feel it is a major issue (not a matter of coding, much
>> more a matter of convention & understanding). So far, I have (in the
>> MELT branch) a very unsatisfactory solution. I defined a few passes,
>> which may (or may not) be enabled by some plugins. What I would dream
>> of is some machinery to be able to have the plugin ask 'insert my pass
>> foo_pass after the last pass which constructs the IPA SSA tuple
>> representation' and this is not achievable today.
>
> I think we also need that for general backend use, not only for
> plugins.  E.g., bt-load.c should move to config/sh, and at startup
> time the SH backend should register the pass with the pass manager.

Is the plugin machinery intended to eventually allow new (GPL
compatible) backends to be used?  It would be nice to make llvm-gcc be
a plugin.


-Chris