Re: GCC Plugins (again)

2008-09-04 Thread brendon
> Plugins features.  This addresses Richard Stallman's concerns, so he no
> longer objects to a Plugins feature.

That is GREAT news!!!


> We, the GCC community, are waiting for the advocates of Plugins to reach
> a consensus on a single plugins architecture and implement it.  When are
> the people interested in this going to finish a robust implementation
> that can be merged into GCC?

Any idea where I can go to get involved in this? It was a while ago that I
checked out the plugin branch to look at, and I am sure some things may have
changed since then. I might have another look this weekend...

Thanks,
Brendon.




Custom __attribute__ like functionality

2007-08-06 Thread brendon
Hi all,

I am interested in being able to "mark up" C++ code with special
meta-information. This is similar to the existing __attribute__ mechanism in
GCC, but the semantics are quite different (i.e. statement-level meta-data,
not just function/type level). I would like to ask if anyone knows of anything
existing that can achieve what I want, and if nothing exists, what others
think the difficulty would be of modifying the existing __attribute__
semantics or creating something new. Note: this is for a non-mainstream
patch, and I will not be requesting that it be added to mainstream GCC unless
people can see a reason to do so.

Basically I would like to mark up arbitrary segments of C++ code like:

#define EDOC_NOTHROW(Code) __attribute__ ((nothrow)) { Code }
#define EDOC_THROW(Code, Types) __attribute__ ((throw(Types))) { Code }

for (Type i = begin(); EDOC_NOTHROW( i < end() ); i++)
{
}

or:

EDOC_NOTHROW(vec.push_back(blah));

I would probably want meta-information for things like: nothrow, throws (with
additional type arguments), and a doc one that can be used for attaching
documentation meta-information to a "throw x;" statement.

The idea is that a patched GCC (I already have a patched GCC for gathering
other exception-based data: http://edoc.sourceforge.net/ ) would then be able
to query tree nodes for some sort of "attached" meta-data that includes this
information. I could then use this with the EDoc++ program to allow users to
mark code segments that require particular types of exception guarantees,
which are then enforced at a later time.

Any thoughts or ideas are welcome.
Thanks,
Brendon.









Tree node questions

2006-10-12 Thread Brendon Costa
Hi all,

If I have a FUNCTION_DECL node that returns non-null for
DECL_TEMPLATE_INFO(), then it is a template or a template instantiation.
How can I tell if it is a full instantiation (i.e. not a general
template or a partial specialisation)?

Does this also apply to nodes that represent template types
(structs/classes)?


Also, up to now I have been using my own "tree walker", as I have been
unable to get walk_tree to work as I need. I have come across a few
problems in my own tree walker and need to change it, but before I do,
I thought I would check whether something that works similarly already
exists.

Basically I would like to know the current "tree depth" and context
for the parent nodes in the tree as it is walked down.

Is there any way of getting the current depth and node context using
walk_tree or a similar function?


One final question: are there any functions, and if so what kinds, that
are skipped by gimplify_function_tree, defined in gimplify.c?


Thanks,
Brendon.


Additional tree node questions.

2006-10-12 Thread Brendon Costa
Sorry about the separate email.

In addition to the previous questions, I want to ask if there is a
better way of finding the type nodes for a function's exception
specification list.

For each FUNCTION_DECL node I find, I want to determine its exception
specification list, i.e. the throw() clause in its prototype.

For example:

void Function1();
void Function2() throw();
void Function3() throw(int, float);

Here Function1 has no specification list, Function2 has an empty list, and
Function3 has a list that allows int and float exceptions to pass through.

What is the best way to find the type nodes for this exception spec
list? I am doing it in a very cumbersome way at the moment and was
wondering if a more elegant way exists.

I currently assume that ANY EH_FILTER_EXPR node I find while walking the
contents of a FUNCTION_DECL defines a spec list (and if I don't find one,
then all exceptions are allowed to pass through). I don't think this is
correct, though. Is it? I then use EH_FILTER_MUST_NOT_THROW() and
EH_FILTER_TYPES() to gather my information from this node.

Is there a better way of achieving this?

Thanks,
Brendon.





Re: building gcc

2006-10-13 Thread Brendon Costa
Bob Rossi wrote:
> Hi Ian,
>
> Basically, I want to use GCC with C,C++. I want to walk a tree that GCC
> creates for the translation units. I would like to know if for these two
> languages if I should use a language dependent tree, the generic tree or
> the gimple tree. In general, I would like to use the tree that most
> closely resembles the source language, and that is documented best.
>
> For starters, can you recommend which tree structure I should use in
> GCC? If so, would it be to much to ask to point me to the object in the
> source code that represents the tree after the tree has been populated?
>
> If I should be using gimple, I found this paper.
> ftp://gcc.gnu.org/pub/gcc/summit/2003/GENERIC%20and%20GIMPLE.pdf
> Is there any other good documentation on this?

I can't help much, as I have only been fumbling around in the GCC source
for a short time now and still have no idea about a lot of stuff.
However, I also wanted to look at the full tree for the C++ front end.
I did this from the parser.c: cp_parser_translation_unit() function, just
after the call to finish_translation_unit(), and I was looking at the
tree defined globally elsewhere in the variable global_namespace.

This is probably not the best spot to look at the tree, but it seemed to
work for me. However, I have recently been changing the way I do things,
as working with the full tree was very inefficient for my task. I now
look at the trees for individual functions as they pass through
gimplify.c: gimplify_function_tree().

It sped my code up by over 100x. I guess it really depends on what you
wish to do, and I am sure someone else on this list can help a lot more
than I can.

Brendon.







Getting type used in THROW_EXPR

2006-10-14 Thread Brendon Costa
Hi all,

I have yet another question that has arisen as I have started testing my
code. Basically, I am trying to get the type that is being used in
throwing an exception.


Is there a simple macro I can use to get the type of an exception from a
THROW_EXPR? I think this is a matter of getting the TREE_TYPE of the
value passed into the function except.c: build_throw(tree exp).


I have been printing out the TREE_CODE of the node that comes from
calling TREE_TYPE(exp) inside that function, and from what I can tell
this gives the correct value for the type of the exception being thrown
(maybe I should also pass it through the prepare_eh_type() function
too). The problem is:

How do I get a reference to the expression tree passed into the
build_throw() function from a THROW_EXPR node, or at least its type?

OR

How do I get a reference to the node with the type of the exception for
a THROW_EXPR?



Note: I also tried changing the THROW_EXPR node to have 2 operands
instead of 1 (in cp-tree.def), using build2() with the original exp node
instead of the existing build1() so that the original exp node was
attached to it. This had other disastrous consequences, which I had half
expected. I was not sure if I could just add an additional operand to
the THROW_EXPR node. I figured that all other code would just use
operand 0 and ignore the new operand, but it fails with an ICE, so I
gave up on that line of thinking...


Thanks for any help/info on this.
Brendon.





Re: Getting type used in THROW_EXPR

2006-10-14 Thread Brendon Costa
Richard Guenther wrote:
> On 10/14/06, Brendon Costa <[EMAIL PROTECTED]> wrote:
>> Hi all,
>>
>> I have yet another question that has arisen as i have started testing my
>> code. Basically I am trying to get the type that is being used in
>> throwing an exception.
>>
>>
>> Is there a simple macro i can use to get the type of an exception from a
>> THROW_EXPR? I think this is a matter of getting the TREE_TYPE for the
>> value passed into the function: except.c: build_throw(tree exp)
>
> If you look at cp/cp-tree.def you will see
>
> /* A throw expression.  operand 0 is the expression, if there was one,
>   else it is NULL_TREE.  */
> DEFTREECODE (THROW_EXPR, "throw_expr", tcc_expression, 1)
>
> which means that TREE_OPERAND (t, 0) is the expression thrown.  Based
> on whether that is a reference already or not, you need to create a
> reference by your own using build1 (ADDR_EXPR, ...) with a properly
> constructed reference type (I guess there's some helper for that in the
> C++ frontend).

Thanks for the fast reply. I have read that documentation before. The
problem is that the expression you get from TREE_OPERAND(t, 0) is not
the same as the one that is passed into the except.c: build_throw(tree
exp) function, which is the actual expression used to determine the
exception type.

Looking through the code for build_throw(), it adds NOP_EXPR nodes,
numerous compound expressions, cleanup nodes, etc. to the original
expression, and this jumble of additional expressions is what is
considered the "expression" in the above comment (it contains a link to
the original expression somewhere inside). So getting the tree type of
that expression will not give me the type of the exception being thrown,
unlike getting the type of the expression that is initially passed into
that function.


For example, for the code below, getting the TREE_TYPE of the "exp" node
passed into build_throw() gives a node type of:
RECORD_TYPE

whereas doing the same on TREE_OPERAND(t, 0) gives:
VOID_TYPE


class C
{
};

int main()
{
throw C();
return 0;
}


Also, my terminology was not quite correct: when I said "reference" I
meant that I need to get a pointer to the tree node that has the
information, i.e. a reference to the tree node with the information, not
an ADDR_EXPR. Sorry for the confusing use of terminology.


Thanks,
Brendon.




Re: Getting type used in THROW_EXPR

2006-10-14 Thread Brendon Costa
I don't think my GCC extension will ever be merged into the GCC source;
it will be distributed as a patch for GCC. With that in mind, do you
think there would be any functional issues if I set the TREE_TYPE of all
THROW_EXPR nodes to the type of the exception they are throwing, or void
(as it is currently) when the type is not known (as for a rethrow)?


Currently the throw expr nodes are constructed with the line:
exp = build1 (THROW_EXPR, void_type_node, exp);

I was going to change that line to:
exp = build1 (THROW_EXPR, resulting_type, exp);

where resulting_type is the type of the exception being thrown (stripped
of CV qualifiers + references by calling prepare_eh_type to get the
correct type).

I have tried it, and there don't seem to be any problems with the small
tests I have performed, but I thought I would check whether others know
of any issues that may arise from doing this.

Thanks,
Brendon.


Re: building gcc

2006-10-14 Thread Brendon Costa
Bob Rossi wrote:
> 
> Thanks Brendon, that was really helpful. I'm very new at this, and may 
> have some seemingly rather odd questions. I see that global_namespace is
> of type 'union tree_node'. Is this the C++ language dependent AST?

Yes, this is the C++ AST. I actually think it is just a superset of
GENERIC, i.e. it uses a lot of stuff defined in the tree.def file but
also has some stuff specific to C++ that is defined in cp-tree.def.

From what I have gathered, ALL tree types use this "union tree_node"
type to represent a node in the tree. A good place to start is the GCC
internals documentation:

http://gcc.gnu.org/onlinedocs/gccint/

There is a section on the intermediate representations used by the C and
C++ front ends that describes a lot of useful information about the C++
front-end tree. This C++ tree includes the nodes for the C tree as well,
so look in the files tree.def and cp-tree.def for documentation on the
different kinds of nodes that can be found in this tree.

In addition, the file tree.h contains a number of functions and macros
that can be used for manipulating and checking the contents of tree
nodes. A lot of this information is in the above documentation, but some
things you just need to search the sources for.

I find that if something doesn't have any documentation, I search the .c
files for uses of it with grep and see how it is used to get some idea
of what it does.

Finally, there is a lot of documentation in the source files themselves.
I just find it difficult to search for the correct macro/function to do
what I want. Once I have found it, using it is not as big an issue.

Brendon.




Re: Additional tree node questions.

2006-10-14 Thread Brendon Costa
Ian Lance Taylor wrote:
> Brendon Costa <[EMAIL PROTECTED]> writes:
> 
>> For each FUNCTION_DECL node I find, I want to determine what its
>> exception specification list is. I.e. the throws() statement in its
>> prototype.
> 
> Look at TYPE_RAISES_EXCEPTIONS (FNDECL).
> 
> Ian

This macro does not seem to work. Either that, or I am doing something wrong.

I have some code that looks a little like:


temp_tree = TYPE_RAISES_EXCEPTIONS(node);
if (temp_tree)
{
   fprintf(stderr, "Has an exception spec.\n");
   for (list = temp_tree; list; list = TREE_CHAIN(list))
   {
      /* Get the type for the specification. */
      temp_tree = TREE_VALUE(list);

...

If I use the test code below:

void Function1() throw()
{
}

void Function2() throw(float)
{
throw 1.0f;
}

int main()
{
   Function1();
   Function2();
   return 0;
}

I never see the print statement. I am doing this with the FUNCTION_DECL
nodes that pass through the gimplify_function_tree() function.

Is there something incorrect about my usage of this macro?

I am using gcc-4.0.1

Thanks,
Brendon.






Re: Additional tree node questions.

2006-10-15 Thread Brendon Costa
Brendon Costa wrote:
> Ian Lance Taylor wrote:
>> Brendon Costa <[EMAIL PROTECTED]> writes:
>>
>>> For each FUNCTION_DECL node I find, I want to determine what its
>>> exception specification list is. I.e. the throws() statement in its
>>> prototype.
>> Look at TYPE_RAISES_EXCEPTIONS (FNDECL).
>>
>> Ian
> 

Aahh, I should have read the documentation more closely.
TYPE_RAISES_EXCEPTIONS() requires a FUNCTION_TYPE node, NOT a
FUNCTION_DECL node.
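For the archives, an untested sketch (assuming the macro names as of GCC
4.x) of what the lookup might look like once you start from the type rather
than the decl:

```c
/* Sketch only: TYPE_RAISES_EXCEPTIONS wants the FUNCTION_TYPE, which is
   the TREE_TYPE of the FUNCTION_DECL.  */
tree spec = TYPE_RAISES_EXCEPTIONS (TREE_TYPE (fndecl));
tree iter;

if (spec)
  for (iter = spec; iter; iter = TREE_CHAIN (iter))
    {
      tree type = TREE_VALUE (iter);  /* one type from the throw(...) list */
      /* ... */
    }
```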

Thanks again for the pointer to the correct macro.

Brendon.




__comp_ctor() functions

2006-10-15 Thread Brendon Costa
Hi again,

I have noticed in the C++ front end that classes have a few
__comp_ctor () functions. These functions do not have an
implementation that can be obtained with DECL_SAVED_TREE.

Looking further into it, there are a number of identifiers for
functions like this added to cp_global_trees. I have searched the code
but can't find where the code for these functions is. Does anyone know
where I can get more information on the "implementation" of these
functions?

I can make some assumptions about what they do, e.g.
Attribute::__comp_ctor () would call Attribute::Attribute(), etc. But
is there somewhere I can get more definitive information on these
functions?

Thanks,
Brendon.




 Some examples of the sort of data I see are shown below 


Attribute::Attribute()
   Calls: None

Attribute::Attribute(Attribute const&)
   Calls: None

Attribute::__comp_ctor(Attribute const&)
   Calls: UNKNOWN

Attribute::__comp_ctor()
   Calls: UNKNOWN

main()
   Calls: Container::__comp_ctor()
   Calls: Container::__comp_ctor(Container const&)

Container::Container()
   Calls: Attribute::__comp_ctor()

Container::__comp_ctor()
   Calls: UNKNOWN

Container::Container(Container const&)
   Calls: Attribute::__comp_ctor(Attribute const&)

Container::__comp_ctor(Container const&)
   Calls: UNKNOWN


-
class Attribute
{
public:
   Attribute() {}
   Attribute(const Attribute& right){}
};

class Container
{
public:
   Attribute a;
};


int main()
{
   Container c1;
   Container c2(c1);
   return 0;
}




Getting a virtual functions VT lookup index

2006-10-17 Thread Brendon Costa
Hi all,

How can I find a FUNCTION_DECL node from a CALL_EXPR node for virtual
function calls?

Note: I am not after the node for the function that will be executed
at runtime as I know this is not possible to determine in most
situations.


Thanks for any help in advance,
Brendon.



---
I have tried the following:

When I encounter a CALL_EXPR for a virtual function call, I can get the
index into the virtual function table from the OBJ_TYPE_REF object:
operand 2 is an INTEGER_CST that seems to have a different index for
the various virtual functions.

However I need to find the FUNCTION_DECL node that is associated with
a particular type with the given virtual function table index.

For example, if I have an index value of 1 obtained from the
OBJ_TYPE_REF node operand 2, I can also get the METHOD_TYPE node
and thus the RECORD_TYPE node for the object.

How can I find the FUNCTION_DECL node given the virtual table index of
the function and the class's RECORD_TYPE node?

* I have tried using class.c: get_vcall_index(), but it seems to return
negative integers.

* I have also looked in the TYPE_BINFO at the BINFO_VIRTUALS for the
BV_CALL_INDEX, but that is NULL.

* I have also tried using CLASSTYPE_VCALL_INDICES; all of these have
failed to give me a value that seems correct.




Re: Getting a virtual functions VT lookup index

2006-10-18 Thread Brendon Costa
Brendon Costa wrote:
> Hi all,
> 
> How can I find a FUNCTION_DECL node from a CALL_EXPR node for virtual
> function calls?
> 

Well, I have managed to achieve this, though I don't know if it is the
best way to do so. For the sake of people who may find this question in
the list archives, I will show how I did it.

From a CALL_EXPR node, I get TREE_OPERAND(call_expr, 0). If this is an
OBJ_TYPE_REF, then this is either a virtual function call or a function
pointer call to a class member (maybe something else too, but I can't
think what).





ref = TREE_OPERAND(call_expr, 0);
/* Get the INTEGER_CST node that contains the index into the virtual
   function lookup table. */
virt_index = TREE_OPERAND(ref, 0);

/* Get the type of the class to which this method belongs. */
pointer_type = TREE_TYPE(TREE_OPERAND(call_expr, 0));
method_type = TREE_TYPE(pointer_type);
class_type = TYPE_METHOD_BASETYPE(method_type);

virt_list = BINFO_VIRTUALS(TYPE_BINFO(class_type));
for (virt = virt_list; virt; virt = TREE_CHAIN(virt))
{
   fndecl = BV_FN(virt);
   index = DECL_VINDEX(fndecl);

   /* If index and virt_index are equal, then fndecl is the
      FUNCTION_DECL node for the virtual function being called. */
}


I hope this might help someone. Also, if anyone can see a major problem
with this code or a much better way of doing it, I would love to hear it.

I discovered this mostly through trial and error, so there is a chance
that it could be completely incorrect.

Brendon.


__comp_ctor and std::ofstream default constructor

2006-10-19 Thread Brendon Costa

Hi all,

I am having trouble finding which method of a class a __comp_ctor ()
would call. I have been assuming that it calls the constructor
method that has the same parameter list as the __comp_ctor () function,
but this does not seem to be working.


This happens particularly when compiling code that uses std::ofstream. At
some point the std::basic_ofstream<char, std::char_traits<char> >::__comp_ctor ()
function is being called, and I am trying to find the FUNCTION_DECL node
for the associated std::basic_ofstream<char, std::char_traits<char>
>::basic_ofstream () constructor method.


This seems to be a problem. I can see the method in the fstream source
header file, but the FUNCTION_DECL node for the constructor has strange
arguments which do not seem to be in the source file.

In particular, calling DECL_ARGUMENTS() on the function decl and
iterating over its parameters, I get the following parameters for what I
think is the default constructor of the basic_ofstream class.


(I EXPECT THIS ARGUMENT AND NO OTHERS)
arg: 0
   name: this
   type: NULL

(I DON'T EXPECT THE FOLLOWING ARGS)
arg: 1
   name: __in_chrg
   type: int

arg: 2
   name: __vtt_parm
   type: NULL


So somehow the code:
basic_ofstream::basic_ofstream()

is producing a FUNCTION_DECL with args that suggest it is declared:
basic_ofstream::basic_ofstream(int __in_chrg, void* __vtt_parm)

Can someone please help me understand why this happens?

Also, when given a FUNCTION_DECL node for a __comp_ctor () function of a
class, how can I find the FUNCTION_DECL node that would be called by it
in order to perform the initialisation of the class data?


Thanks,
Brendon.



Re: __comp_ctor and std::ofstream default constructor

2006-10-19 Thread Brendon Costa

Brendon Costa wrote:
> basic_ofstream::basic_ofstream(int __in_chrg, void* __vtt_parm)
>
> Can someone please help me understand why this happens?

Looking more into the source, I have found that this happens when
virtual inheritance is in play: it is used to determine which
constructor will construct the single base object.



The other question, however, is still valid. If anyone knows a simple way
to achieve this, I would love to hear it:

> Also, when given a FUNCTION_DECL node for a __comp_ctor () function of
> a class, how can I find the FUNCTION_DECL node that would be called by
> it in order to perform the initialisation of the class data?




Thanks,
Brendon.



More __comp_ctor () woes

2006-10-23 Thread Brendon Costa

Hi again,

I am having issues with the __comp_ctor (), __base_ctor (), etc. functions
that I encounter in the C++ front-end tree (just before gimplification).

If I compile some code that looks like:

#include <string>

int main()
{
  std::allocator<char> alloc;
  const char* str1 = "Hello";
  const char* str2 = "Hello";
  std::basic_string<char, std::char_traits<char>, std::allocator<char> >
      str(str1, str2, alloc);

  return 0;
}

Note: I think this code is incorrect, however it compiles fine.

I am trying to figure out which of the constructors of std::basic_string
GCC would use for this code. Is it the following constructor?

template<typename _InputIterator>
basic_string(_InputIterator __beg, _InputIterator __end,
             const _Alloc& __a = _Alloc());



The main problem I am having is that GCC generates a FUNCTION_DECL for:

__comp_ctor (::char const*, ::char const*, ::std::allocator<char> const&)

I am trying to find the corresponding constructor from the basic_string
class that should be called in place of the __comp_ctor function. There
seems to be no FUNCTION_DECL node for the constructor:

basic_string(::char const*, ::char const*, ::std::allocator<char> const&)

which I would expect if there is a __comp_ctor () with those parameters.


My questions are:

1) Which of the basic_string constructors would be called in this
situation?

2) Do __comp_ctor and __base_ctor functions just call (or get substituted
with) equivalent "user constructors" with exactly the same arguments, or
do the arguments of the __comp_ctor sometimes differ from those of the
"user constructor" it is associated with?

3) Again, is there a simple way of finding the constructor method that
would be called, given a corresponding __comp_ctor () method?


Thanks,
Brendon.



Re: More __comp_ctor () woes

2006-10-23 Thread Brendon Costa
Sorry that my previous email was unclear. I have tried to clarify what I
meant in this email by answering your questions.



Andrew Pinski wrote:
> On Tue, 2006-10-24 at 02:30 +, Brendon Costa wrote:
>> I am trying to find the corresponding constructor from the basic_string
>> class that should be called in place of the __comp_ctor function. There
>> seems to be no FUNCTION_DECL node for the constructor:
>>
>> basic_string(::char const*, ::char const*, ::std::allocator<char> const&)
>>
>> which I would expect if there is a __comp_ctor () with those parameters.
>
> Wait, you say you have a function decl but cannot find the function decl
> for the constructor?  I don't understand what you are getting at here.
> Do you understand how templates work, because that seems like where you
> are getting lost.  Templates are instantiated with different types, in
> this case with _InputIterator being "const char*".

If there is a simple class like:

class MyClass
{
   MyClass()
   {}
};

int main()
{
   MyClass c;
   return 0;
}

This will produce at least 3 FUNCTION_DECL nodes:
1) main()
2) MyClass::MyClass(MyClass* this)
3) MyClass::__comp_ctor (MyClass* this)

I call (2) a "user constructor" for want of a better name. But note that
BOTH (2) and (3) are considered "constructors", as reported by
DECL_CONSTRUCTOR_P(fndecl).



I have the FUNCTION_DECL node for (3) and want to find (2) from it.
Basically, as I understand it, main() will call (3) using a CALL_EXPR
when it constructs the instance of MyClass. This __comp_ctor (MyClass*
this) function (which does not have an implementation I can find with
DECL_SAVED_TREE) should call (2).

I have noticed that all __comp_ctor () and __base_ctor () FUNCTION_DECL
nodes seem to correspond to a "user constructor" in the same class as
the __comp_ctor (), with exactly the same parameter list as the
__comp_ctor () has.


For example:
SomeClass::__comp_ctor (const SomeClass&)

will somehow call:
SomeClass::SomeClass(const SomeClass&)

and:
SomeClass::__comp_ctor (int)

will somehow call:
SomeClass::SomeClass(int)



So hopefully that helps explain what I mean by the two FUNCTION_DECL
nodes.

The problem is that, with the code in the previous email, I can't seem
to find the associated "user constructor" FUNCTION_DECL node given the
__comp_ctor FUNCTION_DECL node. In particular:

std::basic_string::__comp_ctor (::char const*, ::char const*,
::std::allocator<char> const&)

exists, but I can't seem to find an associated:

std::basic_string::basic_string(::char const*, ::char const*,
::std::allocator<char> const&)

NOTE: I have left the template parameters out to make this easier to read.




>> My questions are:
>>
>> 1) Which of the basic_string constructors would be being called in this
>> situation?
>
> template<typename _InputIterator>
> basic_string(_InputIterator, _InputIterator, const _Alloc&);
>
> Instantiated with _InputIterator being "const char*".

Ignore this question; I think I know the answer, and yes, I do understand
how template instantiation works. I just wanted someone to say yes or no
as to whether the above constructor from std::basic_string is the one
being used when compiling the code I provided.

In particular, is the code I provided calling a different constructor
from within the std::basic_string class, or is it instantiating this
templated constructor and using the instantiation with "const char*" as
the template parameter?

I think the answer is yes, it is using an instantiation of this
constructor, but I wanted clarification.


I was interested in the answer because I am trying to reproduce the
problem with the __comp_ctor () FUNCTION_DECL in a small test case (not
using std::basic_string). I have tried, but so far have been unable to
reproduce the problem in a smaller test case. An answer to the question
above might help me find what exactly is causing the problem I have
encountered.



Thanks,
Brendon.


Re: More __comp_ctor () woes

2006-10-23 Thread Brendon Costa

Andrew Pinski wrote:
> Why do you need to find (2)?  It is not the function which is actually
> called.  DECL_SAVED_TREE might not be set, but that is because it has
> already been gimplified and lowered to CFG at the time you are looking
> through the calls.
>
> Why do you need to know the constructor anyways?


Thanks for the reply.

I am creating a tool that allows users to perform static analysis of C++
exception propagation. Basically, while GCC compiles source, I gather
certain information that is then embedded as data in a separate section
of the asm file. This produces an object file in which my data is placed
into a separate section called ".edoc". A post-processing tool then
extracts this information to construct a complete "pessimistic" call
graph of the source; it also gathers other information about exception
try/catch blocks, where the function calls are made, etc.


I won't go into details about the rest of it, because that is not the
issue here, but in order to construct a call graph, one of the things I
need to do is follow the CALL_EXPRs from, say, the main() function
mentioned in my last email, which actually calls MyClass::__comp_ctor (),
to the fact that the user-defined MyClass::MyClass(MyClass* this)
function is being called.


So I need to know that when, in the main function for example, I come
across a CALL_EXPR whose function target is a FUNCTION_DECL node for
MyClass::__comp_ctor (MyClass* this), this will actually call
MyClass::MyClass(MyClass* this). This allows me to construct the call
graph; otherwise I have a broken link that ends at the __comp_ctor().


Since in the tree there is no implementation for any of the __comp_ctor
(), __base_ctor (), ... (and similarly for the dtor) functions, in my
data file I generate a "fake" call-graph link which says that the
__comp_ctor () calls the MyClass() method.


If there is another way of achieving this, I would love to hear it, but
I have a mostly working implementation already; this is one of the few
problems I have left to iron out.


Thanks,
Brendon.




Re: More __comp_ctor () woes

2006-10-23 Thread Brendon Costa
To help, I am posting the code I use to try to get a "user constructor"
FUNCTION_DECL node from a "__comp_ctor ()" FUNCTION_DECL node, as
mentioned in previous emails. The code can be found at the end of this
email.


Also...

>> class MyClass
>> {
>>    MyClass()
>>    {}
>> };
>>
>> int main()
>> {
>>    MyClass c;
>>    return 0;
>> }
>>
>> This will produce at least 3 FUNCTION_DECL nodes:
>> 1) main()
>> 2) MyClass::MyClass(MyClass* this)
>> 3) MyClass::__comp_ctor (MyClass* this)
>>
>> I call (2) a "User Constructor" for want of a better name. But note that
>> BOTH (2) and (3) are considered "constructors" as returned by
>> DECL_CONSTRUCTOR_P(fndecl)
>
> This comes from the fact that the ABI has two constructors, an in-charge
> and a not-in-charge one, and the original constructor is still there,
> though (3) is a clone of (2). (2) is never compiled or called.

As I understand it, (2) is compiled, as that is the function that
contains the user's code. I could be wrong here, though, as I have very
little experience hacking GCC.




The code I am using to obtain a "user constructor" FUNCTION_DECL from a
"__comp_ctor ()" FUNCTION_DECL is shown below. I really don't like this
code, as it currently uses a strcmp to find what it is after, and it
also does not work completely. At the very end of this email I have
placed a small test case of code that this fails for.



Thanks,
Brendon





I call the function GetCtorEquivilent(tree fndecl) with the fndecl node
for the "__comp_ctor ()" that I am looking for the associated "user
constructor" of.
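Before resorting to the string-matching approach below, it may be worth
checking whether the clone machinery already records the mapping. If I
remember the cp front end correctly (an assumption worth verifying against
cp/cp-tree.h in your GCC version), constructor clones carry a link back to
the function they were cloned from:

```c
/* Sketch, untested: __comp_ctor/__base_ctor are clones, and
   DECL_CLONED_FUNCTION should give back the constructor they were
   cloned from (see cp/cp-tree.h, and FOR_EACH_CLONE for the reverse
   direction).  */
if (DECL_CLONED_FUNCTION_P (fndecl))
  return DECL_CLONED_FUNCTION (fndecl);
```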




static tree GetCtorEqvProcMethod(tree fndecl, tree method,
  const char* context_string, int context_len);

/*==*/
tree GetCtorEquivilent(tree fndecl)
{
  tree context = NULL_TREE;
  tree method = NULL_TREE;
  int context_len = 0;
  const char* context_string = NULL;
  tree ret = NULL_TREE;
 
  context = DECL_CONTEXT(fndecl);

  BJC_ASSERT(TYPE_P(context), "Should be a TYPE object");
  BJC_ASSERT(CLASS_TYPE_P(context), "Should be a CLASS TYPE object");

  /* Get list of classes methods that we will search for a
  constructor in. */
  method = TYPE_METHODS(context);
 
  context_string = IDENTIFIER_POINTER(

 DECL_NAME(TYPE_NAME(context)));
  context_len = strlen(context_string);
 
  while (method)
  {
 ret = GetCtorEqvProcMethod(fndecl, method,
context_string, context_len);
 if (ret)
 {
return ret;
 }

 method = TREE_CHAIN(method);
  }

  return NULL_TREE;
}


/*==*/
static tree GetCtorEqvProcMethod(tree fndecl, tree method,
  const char* context_string, int context_len)
{
  tree ret = NULL_TREE;
  tree list = NULL_TREE;
  int decl_len = 0;
  const char* decl_string = NULL;

  /* We are only interested in constructors */
  if (!DECL_CONSTRUCTOR_P(method))
  {
 return NULL_TREE;
  }

  /* Ensure we do not look at ourself */
  if (method == fndecl)
  {
 return NULL_TREE;
  }

  /* If it is a template decl then process all instantiations */
  if (TREE_CODE(method) == TEMPLATE_DECL)
  {
 list = DECL_TEMPLATE_SPECIALIZATIONS(method);
 while (list)
 {
/* The TREE_VALUE should be either a FUNCTION_DECL
   or a TEMPLATE_DECL; process it recursively. */
ret = GetCtorEqvProcMethod(fndecl, TREE_VALUE(list),
   context_string, context_len);
if (ret)
{
   return ret;
}
list = TREE_CHAIN(list);
 }

 return NULL_TREE;

  }

  /* Template decls are handled above, at this point we
  should only ever see function decls. */
  BJC_ASSERT(TREE_CODE(method) == FUNCTION_DECL, "");

  /* This is dodgy; there has to be a way to do this
  without using a string compare of the first part. I.e.
  we are comparing without looking at the template data. */

  /* I.e. ::Container<::int>::Container() is the correct
  constructor, NOT
  ::Container<::int>::Container<::int>() which will never occur.
  */
  decl_string = IDENTIFIER_POINTER(DECL_NAME(method));
  decl_len = strlen(decl_string);
  if (strncmp(context_string, decl_string, decl_len))
  {
 /*decl has a different name than the context. */
 return NULL_TREE;
  }

  if (context_len != decl_len)
  {
 if (context_string[decl_len] != '<')
 {
/*decl has a different name than the context. */
return NULL_TREE;
 }
  }

  /* Got a method with the same name as the class that
  encapsulates it. Now checking parameter types to see if
  they are the same */
 
  /* This function is defined elsewhere and will compare
  parameters of functions/methods, ignoring the
  in_charge_identifier, vtt_parm_identifier nodes
  if the 3rd arg is true */
  if (!FunctionsHaveSameArgs(fndecl, method, true))
  {
 return NULL_TREE;
  }

  return method;
}

Re: More __comp_ctor () woes

2006-10-24 Thread Brendon Costa
For the code shown below, if I get the type node for the class 
"MyClassT" and then call TYPE_METHODS() on it and iterate over the 
nodes, there is no FUNCTION_DECL node for the function:


MyClassT::MyClassT<::int, ::char const*>(::char const*)

From what I understand from the documentation, since this is a member 
function of the class like any other constructor, it should exist. There 
is a TEMPLATE_DECL node in the list that represents this, but it 
returns NULL_TREE when used with either of the macros 
DECL_TEMPLATE_SPECIALIZATIONS(tmpdecl) or 
DECL_TEMPLATE_INSTANTIATIONS(tmpdecl). I would expect that it should 
return non-NULL_TREE for DECL_TEMPLATE_SPECIALIZATIONS(tmpdecl)


I know the FUNCTION_DECL node for the function:
MyClassT::MyClassT<::int, ::char const*>(::char const*)

exists as the node passes through the: gimplify.c: 
gimplify_function_tree() method.


Is this a bug or am I doing/expecting something that is incorrect?

Thanks,
Brendon.




template <typename T1, typename T2>
class MyClassT
{
public:
  template <typename _InputIterator>
  MyClassT(_InputIterator __beg) {}
};

typedef MyClassT<int, const char*> MyClass;
int main()
{
  const char* str1 = "Hello1";
  MyClass mc(str1);

  return 0;
}




Re: More __comp_ctor () woes

2006-10-25 Thread Brendon Costa

I have been looking at the source in

class.c:
   clone_function_decl()
   clone_constructors_and_destructors()

pt.c:
   check_explicit_specialization()

In pt.c: check_explicit_specialization() it specifically requests that 
the clone function of a specialised constructor NOT add the new clone to 
the class's method vector.



Does this have something to do with the comment in the code, in that by 
not adding a method to the type's method vector it somehow defines the 
constructor as being "not in charge"?


I thought that an "in charge" or not "in charge" constructor was defined 
by some means other than whether or not the constructor is in the 
type's method vector.



I can now go ahead and hook into the clone method to gather the 
information I need; however, it is a bit hackish, and I am still trying to 
understand why the specialisation of a template constructor does not get 
added to the methods vector.



Thanks for any information in advance.
Brendon.



The appropriate few lines of code in pt.c: 
check_explicit_specialization() look as shown below:



else if (DECL_CONSTRUCTOR_P (decl) || DECL_DESTRUCTOR_P (decl))
   /* This is indeed a specialization.  In case of constructors
  and destructors, we need in-charge and not-in-charge
  versions in V3 ABI.  */
   clone_function_decl (decl, /*update_method_vec_p=*/0);

 /* Register this specialization so that we can find it
again.  */
 decl = register_specialization (decl, gen_tmpl, targs);




Re: More __comp_ctor () woes

2006-10-25 Thread Brendon Costa

Hi all,

Well after trying numerous different approaches to find the 
FUNCTION_DECL node for a constructor like MyClass::MyClass(int) from a 
FUNCTION_DECL node for one of the constructors: MyClass::__comp_ctor 
(int) or similar, I have found that there is a VERY simple way to do 
this using DECL_CLONED_FUNCTION()



I am posting my understanding here in case others are searching for the 
same information in the future.


The __comp_ctor (), __base_ctor (), and similarly __comp_dtor () ... 
like functions are created as clones of what I call "user constructors".


I.e. for the code:

class MyClass
{
public:
   MyClass(int i) {}
};

GCC will generate:

1) MyClass::__comp_ctor (int)
2) MyClass::__base_ctor (int)
3) MyClass::MyClass(int)

Item (3) contains the user's code for the constructor. At some point GCC 
gets the FUNCTION_DECL node for (3) and creates two "clones" of it, 
which are (1) and (2); I think these are referred to as the in-charge and 
not-in-charge constructors. From what I understand, these clones are just 
aliases for the original function. However, you are unable to get 
DECL_SAVED_TREE() from them, which gives the actual code for the 
constructor.


All code that uses constructors will make use of these __comp_ctor () 
and __base_ctor () like functions and NOT the user-defined MyClass() 
constructor; I think this is done because it is simpler than the 
alternative of trying to use the MyClass::MyClass(int) function directly.


Anyhow, the simple way of finding the original function (3) from any 
clone (1) or (2) is to use the macro DECL_CLONED_FUNCTION(fndecl). For 
a function that is a clone of another function, this will return the 
original function that was cloned, in our case (3).
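In code, the mapping back to the user-written declaration is a one-liner. The fragment below is an untested sketch against the GCC 4.x C++ front-end tree API (it needs GCC's internal headers and is not self-contained); get_user_written_decl is a hypothetical helper name:

```c
/* Untested sketch against GCC's internal C++ front-end API:
   given any constructor/destructor FUNCTION_DECL, return the
   user-written declaration it was cloned from (or the decl
   itself if it is not a clone, e.g. for __comp_ctor).  */
static tree
get_user_written_decl (tree fndecl)
{
  if (DECL_CLONED_FUNCTION_P (fndecl))
    return DECL_CLONED_FUNCTION (fndecl);
  return fndecl;
}
```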


I failed to understand what is meant when a function is a clone, and 
so assumed that the "implementation" of the __comp_ctor () function 
(which is a clone) would have a CALL_EXPR node in it somewhere for the 
(3) FUNCTION_DECL, whereas from what I understand now it seems to just be 
an alias for it.


Part of what I have found confusing is that there are "deleting" 
destructors, which made me think that there was some extra code as part 
of this __deleting_dtor () that would free the memory or something like 
that. Maybe this is the case, but I do not need to know that for my 
situation.


Hope this can help someone else.

Thanks,
Brendon.



DECL_TEMPLATE_INFO and TYPE_DECL nodes

2006-10-28 Thread Brendon Costa
In what situations is it valid to call DECL_TEMPLATE_INFO() on a
TYPE_DECL node?

I am using DECL_TEMPLATE_INFO in order to see if a DECL node has any
template information associated with it. The documentation says:

For a VAR_DECL, FUNCTION_DECL, TYPE_DECL or TEMPLATE_DECL
template-specific information.

but each time I try to call it with a TYPE_DECL, it segfaults.

Thanks,
Brendon.


Bug? Exceptions and implicit constructors.

2006-10-29 Thread Brendon Costa
Hi all,

Is this a bug in GCC or does the code below incorrectly use exceptions
and virtual inheritance?


I expect the code below to display:

Constructing child2.
Caught exception: 3

However, it causes an abort after displaying the first line. Looking
further into this, I found that when GCC creates the implicit default
constructor for the GrandChild class, it gives it a no-throw specifier:
throw()



Usually when GCC creates an implicit default constructor, it gives it a
no-throw specifier if that class and all its parents only construct
primitive data without calling user-defined constructors. In the case
below, GCC seems to ignore that it uses the user-defined constructor in
Child2(), I assume because of the virtual inheritance.

To work around the problem, it is a simple matter of defining a default
constructor in GrandChild OR even adding an attribute to the GrandChild
class that has complex construction, like std::string. E.g. change the
GrandChild class to look like:

class GrandChild : public Child1, public Child2
{
public:
   std::string  s;
};

Thanks,
Brendon.





#include <iostream>

class Parent
{
public:
   int data;
};

class Child1 : public virtual Parent
{
};

class Child2 : public virtual Parent
{
public:
   Child2()
   {
  std::cout << "Constructing child2." << std::endl;
  throw 3;
   }
};

class GrandChild : public Child1, public Child2
{
};

int main()
{
   try
   {
  GrandChild c;
   }
   catch(int i)
   {
  std::cout << "Caught exception: " << i << std::endl;
   }
   return 0;
}



Obtaining builtin function list.

2006-11-08 Thread Brendon Costa
How can I get a full list of all GCC C++ built-in functions that may be
used on a given platform or GCC build?

For example, __cxa_begin_catch(), __cxa_end_catch(), __builtin_memset ...

I am currently working with GCC 4.0.1 source base.

Thanks,
Brendon.




Re: Obtaining builtin function list.

2006-11-08 Thread Brendon Costa

Thanks for the information. It was very helpful.

I have now written some code (separate to GCC) that makes use of 
builtins.def and the associated .def files to generate a list of all 
builtin functions as I require, with the full prototypes as would be 
declared in, say, a header file. Are there also frontend-specific 
builtins that I will need to handle in addition to those builtins defined 
in builtins.def?


I am using the C++ front end and could not find any def files for C++ 
frontend builtins but your comment seemed to imply that front-ends could 
define their own set of builtin functions.


As for the libsupc++.a and libgcc*.* libraries: are they compiled with 
the newly generated gcc/g++ compilers, or with the compiler used to 
build gcc and g++?


Thanks,
Brendon.


Well, there is the documentation, e.g.:
   http://gcc.gnu.org/onlinedocs/gcc-4.0.3/gcc/Other-Builtins.html

Or if you want to look at the source code, look at builtins.def and
look for calls to add_builtin_function or lang_hooks.builtin_function
in config/CPU/*.c.

Well, that will tell you about functions like __builtin_memset:
functions which are specially recognized and handled by the compiler.
__cxa_begin_catch is a different sort of function, it is a support
function which is called by the compiler.  For the complete set of
those functions, look at libgcc*.* and libsupc++.a in the installed
tree (generally under lib/gcc/TARGET/VERSION).  Those functions
generally have no user-level documentation.

Ian
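As an aside, one way to enumerate the __cxa_* support functions mentioned above is to inspect the installed archive directly. This is only a sketch, assuming g++ is on PATH; the exact symbol list varies by target and GCC version:

```shell
# List the __cxa_* support routines defined in libsupc++.a.
# g++ -print-file-name resolves the library inside the installed tree
# (generally under lib/gcc/TARGET/VERSION).
nm -C "$(g++ -print-file-name=libsupc++.a)" | grep ' T __cxa' | sort -u
```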
 



Obtaining type equivalence in C front end

2006-11-08 Thread Brendon Costa
How do I determine if two type nodes in the C front end are equivalent? 
In C++ I use same_type_p() but do not see an equivalent for the C front end.


Thanks,
Brendon.


Re: Obtaining type equivalence in C front end

2006-11-09 Thread Brendon Costa



The function you want is comptypes.
 


Thanks. That is working well.



Hi Brendon,

Wouldn't the C++ one (mostly) be a superset of the C?



Types are reasonably different between the C and C++ front ends, though you do 
have the common ones because, as you said, C++ is a superset of C. The C++ front 
end has a similar comptypes function, which is called by same_type_p; however, it 
is not the same as the C one. Rather than trying to write my own based on the 
C++ one, I was sure there would already exist a function to do it somewhere in 
the C front end...

Thanks for the help.
Brendon.





Getting "char" from INTEGER_TYPE node

2006-11-09 Thread Brendon Costa
I am having some trouble with getting type names as declared by the user 
in source. In particular, if I have two functions:


void Function(int i);
void Function(char c);

when processing the parameters I get an INTEGER_TYPE node in the 
parameter list for both functions as expected; however, 
IDENTIFIER_POINTER(DECL_NAME(TYPE_NAME(node))) returns the string "int" 
for both nodes. I would have expected one to be "int" and the other to 
be "char". Looking at the TYPE_PRECISION for these nodes I get correct 
values, i.e. one is 8-bit precision, the other is 32-bit.


How can I get the "char" string when a user uses char types instead of 
"int" strings?


Thanks,
Brendon.



Re: Getting "char" from INTEGER_TYPE node

2006-11-10 Thread Brendon Costa
> > I am having some trouble with getting type names as declared by the user
> > in source. In particular if i have two functions:
> >
> > void Function(int i);
> > void Function(char c);
> >
> > when processing the parameters i get an INTEGER_TYPE node in the
> > parameter list for both function as expected, however
> > IDENTIFIER_POINTER(DECL_NAME(TYPE_NAME(node))) returns the string "int"
> > for both nodes. I would have expected one to be "int" and the other to
> > be "char". Looking at the TYPE_PRECISION for these nodes i get correct
> > values though, i.e. one is 8 bit precision, the other is 32 bit.
> >
> > How can i get the "char" string when a user uses char types instead of
> > "int" strings?


After more debugging, the problem was with the type I was obtaining the
name of. I was using DECL_ARG_TYPE() to obtain it and not TREE_TYPE() on
the function parameter node. This was giving me a wider integer type
parameter instead of the type that the user declared.

Brendon.



GCC Garbage Collection

2006-11-12 Thread Brendon Costa

Hi All,

   I think I am having trouble with the garbage collector deleting the 
memory for tree nodes that I am still using.


   In my code I gather all sorts of information from FUNCTION_DECL 
nodes as they pass through the gimplify_function_tree() function. I 
gather info about types and functions and store that information in my 
own data structures. Alongside this data I also store the original tree 
node for the FUNCTION_DECL or type, and later perform some post 
processing using this node to gather additional information before 
saving this data to a file.


I think that the garbage collector is cleaning up some of the nodes that 
I have stored in my structures.


How can I determine if it is deleting the memory for this node?



I have read the GCC Internals manual on garbage collection, and am not 
sure how I should use it in my situation. I have a dynamic array of my 
own structs like below:


struct TypeInfoStruct
{
  tree node;
  /* ... my data ... */
};
typedef TypeInfoStruct TypeInfo;


and I have an array of these declared in a .c file:

static TypeInfo** registered_types = NULL;

... initialise registered types array someplace in my code


I manage the memory for the registered_types array and the TypeInfo 
structure instances and cannot hand this over to the garbage collector. 
However, the node element in the structure seems to become invalid.


To declare the node as a root for garbage collection, so that it 
should not be freed, should I declare my structure like:


struct TypeInfoStruct
{
  GTY(()) tree node;
  /* ... my data ... */
};


I don't think this will work; from what I have read, the garbage 
collection only seems to work for single globals. How can I achieve this?


Thanks,
Brendon.



Re: GCC Garbage Collection

2006-11-13 Thread Brendon Costa
> The wiki page 
>http://gcc.gnu.org/wiki/Memory_management
> might help you
> 
> I had a quick glance at your mail, so I may be wrong, but I am not sure that
> you configured correctly the build system so that thet GTY(()) macros get
> processed correctly. Sadly, the gengtype generator does not recieve the list
> of files to process thru the command line, but only in internal data.
> 

Thanks for the reply. I followed the steps for setting up the build
system according to:

http://gcc.gnu.org/onlinedocs/gccint/Files.html

I have my code in a header file as part of the C++ front end, so I
followed those instructions accordingly. I was wondering if my usage of
the GTY(()) macros is incorrect.

Is there any way of finding out what memory the garbage collector is
freeing? If I print the address of the node when I know it exists, and am
then able to match that with a reference that I see the GC deleting, at
least I can determine whether this problem is even related to the GC.

Thanks,
Brendon.





Re: GCC Garbage Collection

2006-11-13 Thread Brendon Costa

Mike Stump wrote:


On Nov 12, 2006, at 10:47 PM, Brendon Costa wrote:

   I think i am having trouble with the garbage collector deleting  
the memory for tree nodes that i am still using.



You must have a reference to that data from gc managed memory. If 
you don't use gc to allocate the data structures, it just won't 
work. In addition, the roots of all such data structures have to be 
findable from the gc roots. The compiler is littered with examples 
of how to do this, as an example:


  static GTY(()) tree block_clear_fn;

is findable by the gc system. All data findable from this tree will  
be findable by the GC system.


If you _must_ have references outside gc, you can do this, if there 
is at least 1 reference within the gc system. For example, if you do:


static GTY(()) tree my_references;

void note_reference(tree decl) {
  my_references = tree_cons (NULL_TREE, decl, my_references);
}

and call note_reference (decl) for every single bit you save a 
reference to, it'll work. Actually, no, it won't work: precompiled 
headers will fail to work, because you'd need to modify the PCH writer 
to write your data, because you didn't mark your data structures with 
GTY and didn't use gc to allocate them. See the preprocessor for an 
example of code that doesn't use GTY and yet writes the data out for 
PCH files.





Thanks for the help.

I used the idea you showed above and it seems to work (I don't understand 
enough to know why you say it won't work, hence this email). I did this 
by adding the note_reference() function to except.c as a quick hack so I 
did not have to modify the build files at all.


Eventually I plan on updating my code to perform all calculation 
at the time it finds a given node. This will reduce memory usage, as the 
nodes will not have to remain around after this. For now, as a hack, I 
might keep this code in my patched except.c file.


I don't think I understand the relationship between the PCH files and the 
garbage collector. I thought that the PCH files were the generated 
gt-*.h files, though I have no idea what is placed in them. I think it 
somehow does some black magic to let the garbage collector know what 
does and does not need to be cleaned up, but in reality I have no idea.


None of my data structures are being managed by the garbage collector; I 
manually malloc and free them, so I figured that I did not need to worry 
about modifying the PCH writer to cater for them. The only things being 
used with the garbage collector are existing tree nodes, which from what 
I understand should already have all the infrastructure to achieve this...


So now when I create an instance of TypeInfo

struct TypeInfoStruct
{
   tree node;
   /* ... my data ... */
};
typedef TypeInfoStruct TypeInfo;

I also now call note_reference() with the node that I store in the type 
info struct which ensures that the garbage collector knows there is at 
least 1 reference to this tree node and so does not destroy it.


Thanks again for the help. Sorry that I don't understand how this works. 
I am still trying to learn more about GCC hacking.


Brendon.






Re: GCC Garbage Collection

2006-11-13 Thread Brendon Costa

Mike Stump wrote:

It is the difference between all features of gcc working, or just 
most of the features working. If you want pch to work, you have to 
think about the issue and do up the appropriate code. However, I bet 
you don't need pch to work. If you are doing real stuff for a real 
production compiler, well, I retract that.


If you want it to work, the rules are simple: all data must be 
allocated and managed by gc and have GTY(()) markers. You can escape 
the simplicity of this, if you want, but that requires way more 
thought and more code, slightly beyond the scope of this email.


So are you saying that the quick hack I did will not work for 
fixing the memory problem I have, and that it will probably raise its 
ugly head again, or just that PCH will not work?


I am not interested in PCH for the moment, just in ensuring that the 
data I am using is not deleted.


This is not a permanent solution. I wish to get a prototype of my 
extension up and running soon, and changing all my code to use the GCC 
garbage collector is a huge task, one which I am currently ill 
equipped to do.


Are there any advantages to using PCH besides the fact that it may make 
compiling the GCC compiler a little faster? In the case of many header 
files I can imagine it makes a large difference (for example, tree.h is 
included almost everywhere and is quite a large file). The code in my 
header files is very small and these files are included in at most 5 C 
files, so from what I understand there is little speed advantage in 
precompiling these headers. There are at most about 40 lines of code in 
each of them.


Thanks,
Brendon.


EXPR_HAS_LOCATION seems to always return false

2006-11-16 Thread Brendon Costa

Hi all,

I am trying to obtain location information (file, line) for a number of 
expr nodes (CALL_EXPR, THROW_EXPR and ADDR_EXPR), and it seems that every 
expression node I call EXPR_HAS_LOCATION on returns false.


If it returns true, I then use EXPR_LINENO() and EXPR_FILENAME() to obtain 
the required data, but that never seems to occur.


Is there something I should be doing before using EXPR_HAS_LOCATION()?

Thanks,
Brendon.


Re: EXPR_HAS_LOCATION seems to always return false

2006-11-16 Thread Brendon Costa

Steven Bosscher wrote:


On 11/17/06, Brendon Costa <[EMAIL PROTECTED]> wrote:


Is there something i should be doing before using EXPR_HAS_LOCATION() ?



Compile with -g, perhaps?


I tried that and it didn't seem to make any difference.


Re: EXPR_HAS_LOCATION seems to always return false

2006-11-16 Thread Brendon Costa

Brendon Costa wrote:


Hi all,

I am trying to obtain location information (file, line) for a number 
of expr nodes (CALL_EXPR, THROW_EXPR and ADDR_EXPR) and it seems that 
every expression node i call EXPR_HAS_LOCATION on returns false.


If it returns true i then use: EXPR_LINENO(), EXPR_FILENAME() to 
obtain the required data but that never seems to occur.


Is there something i should be doing before using EXPR_HAS_LOCATION() ?



An additional note: I seem to correctly get the location info 
from a HANDLER node for a catch statement using the same code.


Brendon.


Differences in c and c++ anon typedefs

2006-11-26 Thread Brendon Costa

Hi all,

I have just come across a small difference in the way the C and C++ front 
ends handle anonymous struct types, which is causing me some grief. In 
particular, for the following code:


typedef struct
{
   int b1;
   int b2;
} Blah;

void Function(Blah* b) {}

When I get the Blah type in the function above (after removing the 
pointer), I then use TYPE_MAIN_VARIANT on it and do some special 
processing on the resulting main variant type. In the C++ front end 
TYPE_MAIN_VARIANT returns the type "Blah", whereas in the C front end 
it returns an anonymous RECORD_TYPE node.


If I then change the code to look like:

typedef struct
{
   int b1;
   int b2;
} Blah, Another;


And applying TYPE_MAIN_VARIANT to "Another", C++ returns Blah and C 
again returns the anonymous RECORD_TYPE. In my situation this is causing 
some grief, as I need a consistent name for the main variant type across 
translation units. The C++ front end allows me to do this by the 
fact that it returns "Blah" as the main variant; however, the C front end 
does not.


Is there some way in the C front end of obtaining "Blah" from the 
anonymous RECORD_TYPE node?


I was thinking of looking at the first TYPE_NEXT_VARIANT of the 
anonymous RECORD_TYPE node, which may give me "Blah", but I am not 
certain it will do so all the time... Does anyone know if this will work, 
or if there is a better way?
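For what it is worth, the variant walk I have in mind looks roughly like this. It is an untested sketch against GCC's internal tree API (find_named_variant is a hypothetical name, and whether a named variant always appears in the chain is exactly the open question):

```c
/* Untested sketch: walk the variant chain of an anonymous
   RECORD_TYPE looking for a variant that carries a TYPE_DECL
   name such as "Blah".  Returns NULL_TREE if none is found. */
static tree
find_named_variant (tree type)
{
  tree v;
  for (v = TYPE_MAIN_VARIANT (type); v; v = TYPE_NEXT_VARIANT (v))
    if (TYPE_NAME (v)
        && TREE_CODE (TYPE_NAME (v)) == TYPE_DECL
        && DECL_NAME (TYPE_NAME (v)))
      return v;
  return NULL_TREE;
}
```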



Thanks,
Brendon.


Re: Differences in c and c++ anon typedefs

2006-11-27 Thread Brendon Costa
Gabriel Dos Reis wrote:
> C++ defines a notion of "class name for linkage purpose" -- that is a
> notion used to define the One Definition Rule. 
> In general the TYPE_NAME of TYPE_MAIN_VARIANT is the class name for
> linkage purpose.  
> The behaviour you reported on implements the rule 7.1.3/5:
> 
>If the typedef declaration defines an unnamed class (or enum), the
>first typedef-name declared by the declaration
>to be that class type (or enum type) is used to denote the class type
>(or enum type) for linkage purposes only (3.5).
> 

As a result of C types not having a "class name for linkage purposes", I
am finding it difficult to define a "normalised" string for
FUNCTION_TYPE nodes that represents the type in the same way across
C/C++ for compatible function types.

Basically I need to save to a file a string that represents a
FUNCTION_TYPE node that can be compared against other strings that also
represent FUNCTION_TYPE nodes and if the two functions are compatible as
would be returned by: function_types_compatible_p() then the strings
should be equal.

In C++ I create the string by using TYPE_MAIN_VARIANT for the various
parts of the FUNCTION_TYPE node (return type and parameters). Does
anyone have ANY idea of a way I could do something similar in the C
front end?

Thanks,
Brendon.


Re: Differences in c and c++ anon typedefs

2006-11-27 Thread Brendon Costa
Andrew Pinski wrote:
> Again C has different rules from C++.
> In C, the following two TUs combined together are still valid code while in 
> C++,
> they are invalid.
> 
> tu1.c:
> 
> struct a
> {
>   int t;
> };
> void f(struct a);
> 
>  cut -
> tu2.c:
> 
> typedef struct
> {
>   int t;
> }b;
> void f(b a);
>  cut -
> 

I think I am going to have to use a non-optimal solution for function
pointer type matches. I am creating a pessimistic/over-expanded
callgraph by saying that a function pointer call MAY call any function
whose function pointer type matches the call and whose address has been
taken somewhere in the source code. So maybe I can make it even more
pessimistic in the C case and match all structs/unions as being the
"same" for the purposes of testing function pointer equality.

Maybe it is also possible to somehow generate a "type string" based on
the contents of the struct/union in C, however I am not sure how I would
get that to inter-operate with C++ since C++ may treat two structures
with the same contents as different types. I might just go for the
pessimistic match first and look at improving things later on once the
rest of it is all up and going.

Thanks for the input,
Brendon.





Multiple FUNCTION_DECLS for __cxa_begin_catch

2006-11-29 Thread Brendon Costa

Hi again,

Getting further along with my project, I have come across yet another 
thing that I don't understand. While compiling:


libstdc++-v3/libsupc++/vec.cc

My GCC extension comes across two FUNCTION_DECL nodes that both have 
DECL_ASSEMBLER_NAME of __cxa_begin_catch


After reading some past posts it seems that the standard allows for 
multiple C functions defined extern "C" in different namespaces.


As an example of code which might do this see below:

extern "C"
{
  void Function(void) {}
}
namespace NS
{
  extern "C"
  {
 void Function(void);
  }
}

Is it safe to assume in the C++ front end that two functions declared in 
such a manner will always share the same implementation in which case it 
is kind-of like a "using" statement?


If so, is there any reason why the following code does not emit an error 
in the compiler but only in the assembler?


extern "C"
{
  void Function(void) {}
}
namespace NS
{
  extern "C"
  {
 void Function(void) {}
  }
}


Thanks,
Brendon.


Re: Multiple FUNCTION_DECLS for __cxa_begin_catch

2006-11-30 Thread Brendon Costa

Andrew Pinski wrote:


On Thu, 2006-11-30 at 16:08 +1100, Brendon Costa wrote:
 


Hi again,
Is it safe to assume in the C++ front end that two functions declared in 
such a manner will always share the same implementation in which case it 
is kind-of like a "using" statement?
   



The C++ front-end is broken and needs to be fixed ...

Any front-end that produces two FUNCTION_DECLs (or any kind of decl)
that point to the same function (or decl) is now broken.

 



Seems this is already a known bug in the bug tracker. Thanks for the 
clarification on this.


Brendon.


GCC Internals Documentation

2006-11-30 Thread Brendon Costa
I am getting a bit closer to finishing an alpha release of my project, 
which makes use of a modified version of GCC 4.0.1 to collect 
data about the source code being compiled. In developing it I have come 
across a number of things that I think would be helpful additions to the 
GCC Internals documentation, in particular to section 9:


"9 Trees: The intermediate representation used by the C and C++ front ends"



I have read some posts that have come up recently on modifying 
documentation but I just want to ask a few questions to clarify...


Do I need to have any sort of agreement with the FSF in order to submit 
documentation changes?


Should I update the Texinfo sources for the docs, or do it on the wiki?

After making the changes, do I submit them to the patches list to be 
reviewed, or do they go somewhere else first?



I will not be able to get around to doing this until I have at least 
made a first release of my project (hopefully within the next month), 
but I think I might start taking notes now of some things I would like 
to add.


Thanks,
Brendon.


Determining if a function has vague linkage

2006-12-01 Thread Brendon Costa
Hi all,

I understand that all template functions in GCC should have vague
linkage and thus may be emitted into numerous translation units where
they are used. I have been attempting to use a few different macros on
both an instantiated template function's FUNCTION_DECL node and a normal
function's FUNCTION_DECL node to see what the results are. I have tried:

DECL_WEAK(fndecl)
DECL_ONE_ONLY(fndecl)
DECL_COMDAT(fndecl)
DECL_DEFER_OUTPUT(fndecl)
DECL_REPO_AVAILABLE_P(fndecl)
IDENTIFIER_REPO_CHOSEN(DECL_ASSEMBLER_NAME(fndecl))

and ALL of these macros are returning false for both FUNCTION_DECL
nodes. Is there any macro I can use to determine if a FUNCTION_DECL node
has vague linkage? Or do I need to just assume that it is the case for a
template function?

Also what are some other examples of functions that should return true
for any of the above macros (I assume inlines do sometimes according to
the vague linkage GCC page)?

Thanks,
Brendon.


Re: Determining if a function has vague linkage

2006-12-02 Thread Brendon Costa
Brendon Costa wrote:
> Hi all,
> 
> I understand that all template functions in GCC should have vague
> linkage and thus may be exported into numerous translation units where
> they are used. I have been attempting to use a few different macros on
> both an instanciated template functions FUNCTION_DECL node and a normal
> functions FUNCTION_DECL node to see what the results are. I have tried:
> 
> DECL_WEAK(fndecl)
> DECL_ONE_ONLY(fndecl)
> DECL_COMDAT(fndecl)
> DECL_DEFER_OUTPUT(fndecl)
> DECL_REPO_AVAILABLE_P(fndecl)
> IDENTIFIER_REPO_CHOSEN(DECL_ASSEMBLER_NAME(fndecl))
> 
> and ALL of these macros are returning false for both the FUNCTION_DECL
> nodes. Is there any macro i can use to determine if a FUNCTION_DECL node
> has vague linkage? Or do i need to just assume that it is the sace for a
> template function?

I forgot to mention that I also looked at the DECL_LINKONCE_P() macro, as
mentioned in the GCC internals documentation. However, it has not yet
been implemented. This macro could help me, I think.

I don't know enough about GCC to write this macro myself. Is there anyone
willing to implement it? I am willing to help in some way; I just don't
know where to start.

Thanks,
Brendon.





Understanding some EXPR nodes.

2006-12-07 Thread Brendon Costa
Hi All,

I am trying to understand certain EXPR nodes, when they are generated
and how code generated from them behaves in the resulting program.

The nodes that have me a little confused are:

TRY_CATCH_EXPR
TRY_FINALLY_EXPR
MUST_NOT_THROW_EXPR
EH_FILTER_EXPR

Note: I have read the GCC Internals documentation and the documentation
in the appropriate .def files for these tree codes. It does not mention
the information I am after.


TRY_CATCH_EXPR/TRY_FINALLY_EXPR
When code generated from these nodes encounters an exception while
executing the code from operand 0, is there an implicit rethrow of that
exception at the end of the block of code given by operand 1, or does it
"sink" the exception and only rethrow it if the user specifically
requests it (in C++, anyway)?

In what situations are these nodes generated?


MUST_NOT_THROW_EXPR
What sort of code produces one of these nodes? They do not seem to be
used for the throw() specifiers of a function (at least in C++), as I
would have expected.

EH_FILTER_EXPR
In what situations are these nodes generated? I assume that the code
these filters apply to is external to the node, and that if an
exception occurring in this external code does not match any of the
types in the EH_FILTER_TYPES list (do they have to be exact matches?
how is type matching done here?), it calls EH_FILTER_FAILURE, which
could be, for example, a call to terminate()?

How does the EH_FILTER_MUST_NOT_THROW() macro work? If it returns true,
does the filter allow NO exceptions, and if false, only
exceptions whose types are in this list? Is it possible for the
EH_FILTER_TYPES list to be empty and EH_FILTER_MUST_NOT_THROW() to
return false?



Thanks for any information on these questions. It will help me a great deal.
Brendon.


Re: Understanding some EXPR nodes.

2006-12-07 Thread Brendon Costa

Thanks for the reply. One thing that I didn't quite get...


Ian Lance Taylor wrote:


TRY_CATCH_EXPR/TRY_FINALLY_EXPR
   


If operand 0 throws an exception, there is an implicit rethrow after
executing operand 1.  (Of course, operand 1 can prevent that rethrow
by doing its own throw, or by calling a function which does not
return, etc.).
 


TRY_CATCH_EXPR is generated for C++
   try { ... } catch (...) { }

TRY_FINALLY_EXPR is generated for
   class C { ~C(); }; ... { C c; ... }
to ensure that C's destructor is run.

And of course they can be generated for other languages as well.
 



For the C++ code shown above, try { ... } catch(...) {},
from memory I get a TRY_BLOCK node and not a TRY_CATCH_EXPR.

Also, is the implicit rethrow just for TRY_FINALLY_EXPR and not for
TRY_CATCH_EXPR, or is it for both of them?


Thanks,
Brendon.


Pre Compiled Headers

2007-02-11 Thread Brendon Costa
Hi All,

I am coding an extension for GCC and am having some difficulty with
pre-compiled headers. I don't know whether my understanding of how they
work is completely correct, and so my code is getting a segfault.

I have a hook into gimplify_function_tree() and I process functions as
they pass through that function.

When compiling a PCH file, the functions being compiled come
through gimplify_function_tree(); I process them for callgraph and
exception information (which is what my program does) and then store
references to the tree nodes I use into a GTY-rooted node as a tree
chain list. This should prevent the nodes from being garbage collected,
which is something I want at the moment because I re-visit all the nodes
after the compilation process has completed. (This is not the best
solution, but the easiest for my situation at the moment.)

Besides preventing garbage collection of the nodes I am interested in:
from what I understand of the PCH mechanism, when compiling a file that
uses a PCH, GCC should magically re-construct the chain of tree nodes
rooted at the GTY base. What I need to do is look over those nodes
again after they have been reconstructed and re-process them for
callgraph and exception information. (Yes, I know this is not good, as it
means duplicate processing of nodes, which is what PCH files are
trying to avoid, but please ignore this fact for the moment.)

So I also hook into c_common_no_more_pch(), and when it is called I
look at the tree chain and "re-calculate" the data from the nodes that
should have been recreated by the PCH engine. There do seem to be tree
nodes re-created in the tree chain list; however, upon trying to process
these nodes I get a segfault, almost as though the pointers have been
restored by the PCH engine but the memory they point to has not.

I am using gcc 4.0.1 on a i386 NetBSD 3.0 system.

Questions:

1) Is there a better place to hook into to know when the PCH engine has
completed reconstructing the tree nodes?

2) Is there something wrong with the way I understand the reconstruction
of data by the PCH engine? I assumed it would re-construct a full tree
hierarchy if I rooted a tree node.

Thanks,
Brendon.



Re: Pre Compiled Headers

2007-02-13 Thread Brendon Costa
Sorry for the long email; I find that I need to provide a whole lot of
history as to why I am doing things the way I am so people understand
what I am trying to ask...

If you want to know why I was doing things this way, read on; otherwise
I have some simple questions at the end, since I have discovered that the
way I was going about it was going to fail.


Mike Stump wrote:
> On Feb 11, 2007, at 1:17 PM, Brendon Costa wrote:
>> I am coding an extension for GCC and am having some difficulty with
>> pre-compiled headers. I dont know if my understanding of how they work
>> is completely correct and so my code is getting a segfault.
> 
> You _must_ have clean data structures and complete type safety.  You
> cannot have any pointers to GCed memory any place but GCed memory, or
> file data marked with GTY(()) markers.  The last rule is a good
> approximation, cpp for examples deviates from this general rule.
> 
>> So i also hook into c_common_no_more_pch()
> 
> Ick.  You seem to have thought too much about pch.  Instead, pretend it
> isn't there, except for doing the GTY(()) thing on all static scope data.
> 
I understand that for data managed in GCC you shouldn't have to think
about PCH, as the dumping of data to the PCH file and the re-construction
of the data upon importing a PCH (mmap) should all be done for you. My
problem is that I am trying to do a "quick hack" to get my code to work
alongside the PCH mechanism, where my code does not make any use of the
GCC garbage collector. After thinking about it more, the approach I was
going to try as a quick hack will not work anyway.


> c_common_no_more_pch seems like the wrong place.  You calculate things
> when you have the data to calculate from and someplace to put it, but
> the code there.  You want to do it at the end of file processing, put it
> there.  You want to do it from the parser, or optimizer, put it there.
> 

I calculate my data just before a function is gimplified. The problem is
that the resulting data structures I produce from analyzing the
body of the function are not managed by the GCC garbage collector, and
so do not get dumped to the PCH file and then "reconstructed"/mmap'ed
back into the GCC process upon importing a precompiled header. So my
plan was to place a list of FUNCTION_DECL nodes under a garbage
collected static root, which will be dumped and reconstructed by the PCH
process as normal, and then "recalculate" my data structures from the
reconstructed FUNCTION_DECL nodes.

It kinda works. (The segfault was because I forgot to use TREE_VALUE on
the chained items to get the actual FUNCTION_DECL nodes.) However, the
problem I mentioned before about the lowering of the FUNCTION_DECL has
occurred, and so in the end I can't successfully recalculate my data
structures from the FUNCTION_DECL nodes.

It was a quick hack to try to get around the big task of moving
all my data structures to be managed by the GCC garbage collector.

This was based on an idea from Mike in Nov 2006, when I needed to ensure
that the FUNCTION_DECL nodes were not collected by the garbage collector
while I was still using them. You mentioned that I could do the above to
make the garbage collector think that the nodes are still being used.
I also assumed that it could be used for working around the PCH engine.


The problem is not only that this is a bad way of going about it (I
should just re-write my code to use the garbage collector to manage my
data structures, though it is a big job), but also that the FUNCTION_DECL
nodes that would be dumped to the PCH and then reconstructed by the PCH
mechanism will have been lowered to GIMPLE or even RTL. I need to
process the bodies of the FUNCTION_DECL nodes before they are lowered.


>> 1) Is there a better place to hook into to know when the PCH engine
>> has completed reconstructing the tree nodes?
> 
> See why this is the wrong question?  The question is, I am doing this
> calculation, where should I put it?  The answer to that is, when you
> have all the data from which you need to do the calculation and a place
> to store it.
> 
>> 2) Is there something wrong in the way i understand the reconstruction
>> of data by the PCH engine? I assumed it would re-construct a full tree
>> hierarchy if i rooted a tree node.
> 
> Yes.
> 
> The mental model you should use for PCH is this, when processing a
> header, the compiler does a complete memory walk and dumps it to a
> file.  Upon `reading' the pch file, it mmaps all that memory back in,
> throwing out all previously allocated memory, and continues just after
> the #include.
> 
> If you have any code that knows about pch (other than GTY(())), you're
> probably trying to be too smart about pchness.  I can't tell if you are
> or not, unless you tell me why you want to be.
>

Re: Pre Compiled Headers

2007-02-13 Thread Brendon Costa
Thanks for the response.

>> * Is it possible to explicitly free garbage collected memory if i know i
>> will not be needing it any more
> 
> Yes, ggc_free.[1]
> 
> 1 - Some people think that all the savings you'd get from this aren't
> work the pain of doing it, if you have to track down any over zealous
> ggc_free calls.
> 

There is no "additional" pain in doing this, as I have already
developed my code using manual malloc/free in such a way that I am
reasonably sure there are no leaks, double-free calls, or the like.

In my code I have been allocating memory with macros BJC_MALLOC() and
BJC_FREE(), whose wrapper code makes sure I have no
memory leaks at the end of the program and also ensures that I don't try
to free un-allocated memory, free something more than once, etc. I can
also use this system to track all memory allocations/deallocations in
a log file. If there are any allocation/deallocation problems, it tells
me the file/line where they occurred. Most of my memory allocation
is done in that "Array Class" anyway, so most of it is fairly
localized.

I think I will try to continue using the explicit free; I can then
simply define out the calls to ggc_free if I decide to do so at
a later date.

Anyhow, I guess it is time to get my hands dirty with the GCC Garbage
Collector...


Thanks for the advice,
Brendon.


Re: Pre Compiled Headers

2007-02-13 Thread Brendon Costa
Mike Stump wrote:
> On Feb 13, 2007, at 3:16 PM, Brendon Costa wrote:
>> There is no "additional" pain in doing this as I have already
>> developed my code using manual malloc/free in such a way that i am
>> reasonably sure there are no leaks, or double free calls or the like.
> 
> No, the pain would be a dangling pointer.  You cannot call ggc_free
> unless there are no references, anywhere in live memory.
> 
> 

Ahh. Yep, I didn't quite understand what you meant by over-zealous
freeing before :-) I can see the problem.

I guess I'll leave the freeing of memory to the garbage collector unless
I find that the memory usage is too high, in which case I guess
I could manually invoke the collector; there are also a number of places
where it will be safe to explicitly free memory, as it will only ever
be referenced in a single location. But no need for premature
optimization...

Thanks,
Brendon.


Obtaining FUNCTION_DECL arglist as defined in source

2007-03-24 Thread Brendon Costa
Hi All,

I am writing to find out whether there is any method of obtaining or
constructing a function parameter-list string as it would have been
defined in the source code.

For example for the function:
int Function(std::string v1, std::string v2) {return F(v1, v2);}

I would like to obtain a string that looks like:
"std::string v1, std::string v2"

OR:
"std::string, std::string"


I am currently doing something like this using DECL_ARGUMENTS(fndecl)
and then for each of the arguments in the list (skipping the "this" arg
for member functions and the occasional in_charge_identifier or
vtt_parm_identifier args) I obtain a string representation of the type
and use this to construct the function parameter string.

Anyhow, there are a number of "special" cases that I need to handle
using this method, such as DECL_ARGUMENTS returning a NULL_TREE, the
... parameter type, etc. Even with all this, I still do not quite get the
results I need.


What I find is that the resulting parameter string is often not exactly,
or even functionally, the same as what is specified in the source code.
For example, some functions that receive a std::string parameter by value
are modified by the compiler, for optimization reasons, to pass the
parameter by reference instead, and this shows up in the resulting
parameter string I construct.

So for the example before, I would sometimes find that the string contains:
"std::string&, std::string&"


What i need to achieve:

BEST OPTION:
I need to get a string exactly the same as defined in the source code.
It is fine for this to include parameter names and default argument
values if that is how it has to be.

SECOND BEST OPTION:
Otherwise I need to at least obtain names for the types as they were
specified, i.e. without the optimizations applied, the "this" parameter
added, typedefs substituted, etc.


Is this possible? And if so, where is the best place to look?

By the way, I am using the source for GCC 4.0.1, in case that makes a
difference.

Thanks,
Brendon.




Re: Extension for a throw-like C++ qualifier

2007-04-02 Thread Brendon Costa
I have for a while been working on a tool that performs static analysis
of exception propagation through C/C++ code. It is very close to
complete (I estimate the first release within the month).

Implementing static analysis of C++ exception propagation in g++ alone
is not really possible, or at least not really feasible. The tool I
have been developing uses g++/gcc to obtain information about the code
so I can construct a complete, pessimistic callgraph along with info
about exception usage, but it does not do so at compile time; rather, it
has to perform the analysis at the equivalent of the linking stage to
get ALL the necessary information to construct this callgraph accurately.

The problem is that for things like function pointers and virtual
functions, until you know exactly what code goes into a particular
application, you cannot be sure exactly what to expand the calls to.
I.e., you will only get a subset of the possible calls from a single
translation unit. To get the full set of possible calls you need to look
at ALL code that goes into a particular application. (This becomes a
nightmare to manage, especially with plugins and the like.)

If ALL C++ code implemented a thing like in Java (which I think is what
you are describing), where you can be guaranteed that a function only
throws certain exceptions and the compiler mandates this, then you could
achieve the same sort of results in a single translation unit. However,
getting every project that uses C++ to adopt such techniques is just not
going to happen, and without having all projects adopt it, usage
will become very difficult: you will have to wrap
function calls inside try/catch blocks to meet the strict requirements.

Using something like that could be a nightmare... There would be catch
blocks all over the place, used to filter out possible
exceptions from other people's code that you just don't know whether
they will occur. And what do you do if an exception does occur that you
were not expecting?

I think the idea is a great one; it works very well in Java. I still
don't understand why the C++ standard adopted the current throw()
mechanism. (Who thought that terminating a program on an exception
condition that the compiler can't check for you was a good idea? Hmm.
I don't know anyone who makes use of throw() clauses very often; those
who do usually write throw(), do a catch-all, and are not necessarily
sure what exactly to do with what they caught anyway.) I think the whole
mechanism comes from the history of the language, so it has only a
semi-useful technique, unlike Java, where the throws clause works well.

I have written this application so that I can use the standard C++
mechanisms for indicating the exceptions that can occur; I can then run
my code through this tool, and it will notify me of places where an
exception may occur that is contrary to what I have specified. This
means I can use the standard mechanisms and get the equivalent of a
"compile error" from my application when my assumptions are incorrect.
It basically assures me that I won't have my program terminate because of
this problem.


Anyhow, I could talk for a while about this... But if you are interested
in this application, there is a partially complete website:

http://edoc.sourceforge.net/

As I said, it is not yet complete, but it is getting close. I will be
updating the website and the documentation again once I have completed
modifying the code. The only way to download the code at the moment is
from CVS, and I would not recommend that just now. So it might be best to
wait before giving it a try. You can read the "rough" manual for now to
get an idea of what EDoc++ can and can't do and how to use it.

I will hopefully be posting to this list later with a shameless plug for
this application when it is all ready (I was going to ask the maintainer
first). I am interested in feedback on how it can be improved, but this
thread came up and I just could not pass up the opportunity...

Brendon.

Sergio Giro wrote:
> Maybe that the option you suggest
> 
>> This is best
>> done with something like -fstatic-exception-specifications or maybe -
>> Wexception-specifications -Werror.
> 
> is ideal, but it seems to me not practical at all. Every stuff using
> the throw qualifier as specified in the standards will not work. If an
> inline method in a standard header is
>   theInlineMethod (void) throw () { throw theException(); };
> this will not even compile...
>   Of course, using an extension as the one I propose, some of the
> standard methods must be wrapped if you want to use the new qualifier
> in order to track the possible exceptions that the methods may arise,
> but I think it is quite more practical to wrap such methods than to
> modify the headers...
>   In addition, the semantic meaning is not the same: a
> throw (a,b,c)  qualifier indicates that you ar

Re: Extension for a throw-like C++ qualifier

2007-04-04 Thread Brendon Costa
I don't have a lot of experience with GCC development; I know enough to
have done what I needed to do.

As for a place to start, I would read the GCC internals documentation as
a first step. There is also a lot of info on the wiki. However, a lot
of the documentation is specific to either creating a new front end or
porting GCC to a new platform. The mistake I made at first was to only
read the documentation sections I thought were relevant to me and
ignore the rest. This turned out to be an issue, as I skipped some things
which have come back to bite me.

I would then download a version of GCC, either from CVS or a past stable
release, and get it to compile, install, and run successfully. I found
learning this stuff difficult; I ended up doing a lot by trial and error,
as I just could not find all the information I needed. The GCC internals
manual will be your friend, but even more useful were the comments
in the various header files like tree.h. There were numerous macros in
there that I found useful that are not mentioned in the documentation.

As for where to hook into the code for your project, others will have to
help you. I did not need to modify the parser in any way, and I also
did not have to generate any tree nodes, only analyze what existed, so I
leave this open for others to comment on.

I agree with your comment that having the code for EDoc++ outside GCC
is a problem. My patch is for GCC 4.0.1, which had just been released
when I started writing it. I am not yet sure whether it will be a big
task to port it to the newer version of GCC.

I think your extension could be useful, but it would be MORE useful if
it were adopted by the wider community. You may find that if the code
cannot compile with other compilers that don't have
the extension, people will be less likely to adopt it. So think
about how people could use macros to keep the source unchanged, without
too much verbosity, and have it compile fine on older GCC versions, e.g.
renaming _throws to the standard throw with a macro if the compiler
does not support the extension. Simple, but something to think about.

If I had my preferred option, I would get people to change the C++
standard to mandate that compilers check exceptions at compile
time, just like Java, and deal later with the screaming masses whose
code no longer compiles...

I am curious, though: when you wrap other functions in try/catch
blocks, what will you do if you receive an exception that does not
match the specifier?

Sorry if my comments were discouraging. I didn't mean them to be.

I hope this is a start. I guess you may have already read that
documentation, in which case you need to start looking at how to
add things to the parser, about which I know absolutely nothing.


Finally, Mike mentioned using LTO. This is not necessary for your
modification idea, but with LTO, what I have implemented in EDoc++ could
be integrated into GCC and run at link time to achieve a
somewhat similar result. I might look into this at a later time.

Brendon.




Sergio Giro wrote:
> On Apr 2, 2007, at 2:32 AM, Brendon Costa wrote:
>> I have for a while been working on a tool that performs static
>> analysis
>   I agree that Brendon's project is a very good idea, but I still
> have an argument against it: having such an analysis into gcc forces
> the gcc development community to maintain the code performing the
> analysis. Having this analysis outside gcc makes it less likely to
> remain in time. If I start a huge project, I would prefer to wrap
> external functions into try { } catch(...) blocks (in a huge project,
> the time spent by this task is negligeable) instead of relying on an
> external tool...
>   I agree that edoc++ is very useful (particularly, generating
> documentation for Doxygen is a very nice issue!), but I keep
> interested to implement this feature inside gcc.
>   I write here because I am looking forward for your discouraging
> comments! Maybe you can convince that this is not useful...
>   If you consider this analysis to be useful, I will be grateful for
> any ideas concerning where and how to start looking in order to
> implement the analysis.
> Cheers,
>  Sergio
> 
> On 4/2/07, Mike Stump <[EMAIL PROTECTED]> wrote:
> 
>> Ah, yeah, that I suspect would be a even better way to do this...
>> Itt would be nice if gcc/g++ had more support for static analysis
>> tools...  Maybe with LTO.
>>
> 
> 
> 
> 



Re: Inclusion in an official release of a new throw-like qualifier

2007-04-10 Thread Brendon Costa
I prefer the method Jason mentioned of including this functionality as
a form of more strict checking of -Wexception-specs (Or maybe defining
a new warning) as opposed to having an attribute that defines new
semantics.

In the end the two are practically identical. The semantics of the
existing "OLD" throw() you are describing as the exceptions that can
be caught, whereas the "NEW" semantics would be that the given
exceptions are the only ones that can propagate from calling the given
function.

These are for all intents and purposes the same. In the "OLD" style
the function still can only throw the given exceptions; it just so
happens that the program terminates if it tries to do otherwise. The
stricter checking just ensures that no other exceptions may arise, so
the program won't terminate as a result.

The only difference is that at compile time you either enforce,
by doing static analysis, that this throw() specifier is adhered to,
or you don't.

It avoids having to define special attributes in the code; the only
difference is the set of command-line flags you pass to the compiler.
It does, however, mean that you can't provide function-level
enable/disable of static checking. I.e., it will check all
functions whose implementations are found in the given translation unit.


Brendon.



Mike Stump wrote:
> On Apr 10, 2007, at 2:06 PM, Sergio Giro wrote:
>> Maybe I missed some point: why everything should be rewritten?
> 
> Let me try again.  The standard way to add a new qualifier in g++, is to
> add it in an attribute, please do that.  The possible responses are, no,
> I want to be different, or ok.  If you choose the former, you have to
> back your position.  For throw specs, that attribute is the standard one
> that goes on the function/method.  The name or the spec can be
> statically_check_eh_spec,  or something less verbose.
> 
>> The only bad thing here is that you have two qualifiers having similar
>> meanings...
> 
> Ding.
> 
>> But I think it must be that way
> 
> No, this is incorrect.
> 
>> in order to _avoid_ rewriting code.
> 
>> The point here is that, in order to do this, you need interprocedural
>> analysis.
> 
> You've not yet grasped they are isomorphic forms.  If the above is true,
> then then below is wrong.  If the below is not wrong, then the above
> must be wrong.  Your pick.
> 
>> If you have qualifiers as the one I describe, you can perform the
>> check by merely using the prototypes...
> 
>> So, what do you think now?
> 
> Unchanged.
> 
> 



Re: Inclusion in an official release of a new throw-like qualifier

2007-04-11 Thread Brendon Costa
ns
in different translation units (gee, that is poor coding style, but it
is possible using the preprocessor), code compiled with
-fno-exceptions linked with code that allows exceptions, the same with
C++ and C code intermixed, templates and vague linkage, differing
throw() specifiers for a function's prototype in different translation
units; and the list of complexities goes on...

Many issues, I believe, relate to poor usage of the C++ language or to
overlooked bugs caused by things like declaring prototypes in more than
one place and forgetting to copy across the throw() specifier. But all
these wonderful things are "possible" with C++. By generating the
information with the compiler, you know exactly what is being used, and
it does a lot of the hard work for you.

EDoc++ is implemented as a modification to gcc 4.0.1 plus
a post-processing tool that looks at the information generated
by the modified GCC once all the compilation is complete.

To make a feature such as Sergio is describing useful in g++, I would say
that the first stage is to get it working as described in previous
emails, where it would emit errors for throw specifiers that do not
include, say, std::bad_alloc. A second stage, using some form
of markup in the source or some other method like an external
suppressions file, would then make it capable of modifying that data
to suppress certain exception types, or of overriding compiler-generated
information for cases where it cannot accurately determine, say, the
callgraph.

There is a lot of work required to implement a feature like this well,
I think. I do also think, however, that it is worth the effort (which is
why I have been working on EDoc++) and that such a feature would be
valuable within gcc itself.

Anyhow, I think I have been rambling again... I am curious to see where
this thread leads.

Brendon.


Getting access to g++ tree from front end.

2005-03-31 Thread Brendon Costa
Hi,
   I am trying to make a small modification to a local copy of gcc (in
particular the g++ front end) that will help me document
exceptions that can be thrown by functions. I have had a look at most of
the gcc documentation I could find, and it has been helpful, but I am
currently stuck on where to look next. I know roughly how I want to
implement this code, but I need to find an appropriate place to hook
into the g++ front-end code. What I need at the hook point is just the
intermediate tree generated by the C++ front end: the one spoken of in
chapter 9 of the document "GNU Compiler Collection (GCC) Internals",
where it talks about the intermediate representation used by the C and
C++ front ends. I assume that this tree is generated and then converted
to a GIMPLE tree and later to RTL, but what happens with it after I have
used it is really none of my concern.
Questions:
1) Does gcc generate this full tree before it compiles the code (i.e.
generates RTL and then assembly, etc.)? If not, is there any point in
time at which this intermediate tree is complete, so that I can get
access to all the information parsed in the given g++ session?
2) If there is a full tree generated, where is the best place to get
that tree for my purposes? (I will not be modifying the tree, just
iterating through it, getting the information I need, and
saving it externally.)
3) If the tree is never generated in its entirety, I
noticed that in the gcc/toplev.c file there is a function called
rest_of_compilation, which is called for each function being compiled,
from what I can tell. Is there something similar inside the front end
I can hook into that will give the full tree/branch for each
function in the parsed file(s)?
Thanks for any help,
Brendon.



G++ Modification Question

2005-08-03 Thread Brendon Costa

Hi all,

   A while ago I attempted to make a modification to gcc 4.0 before it
was released. I attempted to create a modification that would allow me
to document all exceptions that are either thrown directly by a
function/method or that could propagate through one. I ran
into a few problems along the way, and this email is to ask a few
questions about gcc internals and better ways of coding in gcc so that I
can re-write this module to work correctly. (Sorry it is so long.)


The modification I made to the C++ front end would parse the 
global_namespace tree in the function cp_parser_translation_unit (found 
in cp/parser.c) and create a database that maps a function/method to all 
calls it made to other functions/methods and also a list of all 
exceptions thrown directly by it. This database file was appended to for 
each run of g++, and so after compiling a whole project the database 
would (should) contain the details for all methods in that project. I 
then wrote a post-processing tool that went through the database and 
generated a list of all exceptions that could propagate through a 
particular function/method, and also a separate list of all exceptions 
thrown directly by the method.


In concept it worked, but my implementation of the tree parser did not 
always work... A simple problem was that if I was to compile a small 
file like:


---main.cpp---
void Function1();
void Function2();

int main()
{
   Function1();
   return 0;
}

void Function1()
{
   throw 1;
}

void Function2()
{
   throw 2;
}




Then the database would contain the information as expected for 
Function1() but not for Function2(). The parser would come across a 
definition for Function2(), but there was no function body segment in the 
tree for me to process to find out if it threw exceptions or called 
other functions. This problem also seemed to occur for all library 
files I compiled. Is this some optimisation done by GCC, and is there 
another place where I can get access to a global tree that represents 
ALL source code that has been parsed?



Also I was wondering if I was parsing the best tree for this problem. 
The tree I was parsing was the tree found in the variable:


   global_namespace

in the file:

   cp/parser.c

in the function:

   static bool cp_parser_translation_unit(cp_parser* parser)

just after the call to:

   finish_translation_unit();

Another problem with the method I was using is that some of the tree 
nodes that I expected to find in the tree were not available; it was as 
if they had already been translated to more primitive node types. For 
example I did not see any tree nodes of type:


 EH_SPEC_BLOCK
 TRY_BLOCK
 THROW_EXPR
 EH_FILTER_EXPR
 CATCH_EXPR

(Note: I could be wrong about some of those expressions. I do, however, 
remember that instead of getting a THROW_EXPR I had to search for calls 
to the __cxa_throw() function to know when an exception was thrown. I 
can't remember how I handled the try/catch blocks, though I did handle them.)



I would like to re-write this code at some point and I am looking for 
some pointers on ways of doing things better/differently. In particular 
I would like to know if the method I was using as described above is 
fine, or if there is a better tree to parse to get the information I 
need or a better place to process the tree from.


If anyone is able to help me or even point me to the place where I can 
get more help on this then I would greatly appreciate it.


I feel that a tool which enables me to see what exceptions can possibly 
propagate through what methods could be very useful. It could also warn 
of exceptions that could possibly propagate through methods whose 
exception specification blocks do not permit them. I was also looking at 
modifying doxygen to automatically import this data into its 
documentation, but that is another topic.


Thanks,
Brendon.




Re: Defining a common plugin machinery

2008-10-01 Thread Brendon Costa
I have notes inline below. Following is my summary of libplugin from
what I understand of your posts:
* It exists as a framework that works with GCC now
* It uses XML files to define plugins (allows making new plugins as
combinations of others without making a new shared library, i.e. just
create an XML file that describes the plugin)
* It handles issues with inter-dependencies between plugins
* It uses a "push" framework, where function pointers are
replaced/chained in the original application rather than explicit calls
to plugins (provides more extensibility in an application that makes
heavy use of function pointers, but produces a less explicit set of
entry points or hooks for plugins)
* Currently it provides automatic loading of plugins without specific
user request
* It already has a framework for allowing plugins to interact with the
pass manager

If you can think of any other points to summarize the features it might
be helpful as you are closer to it.

The issues I see with this framework:
   * It seems to provide a lot of features that we may not necessarily
need (that should be up for discussion)
   * Plugin entry points are not well defined, but can be "any function
pointer call"

Some questions:
* How does the framework interact with the compiler command line arguments?
* Does this work on platforms that don't support -rdynamic, or can it be
modified to do so in the future?




Hugh Leather wrote:
>*Separating Plugin system from appliction*
>Libplugin ships as a library.  Apart from a few lines of code in
>toplev.c, the only other changes to GCC will be refactorings and
>maybe calling a few functions through pointers.
As I understand the difference between pull vs push, a plugin will
load, and then modify existing function pointers in GCC to insert its
own code, chaining the existing code to be called after it. Is this correct?

This approach can make use of existing function pointers as
plugin hook locations, but some hooks we may want are not already called
via function pointers and so would need to be changed. This means that
plugin hook locations are not explicitly defined; rather, any place
where a function pointer is used can be modified. Personally I prefer
explicit specification of plugin hook locations.

>I think it's important to separate the plugin system from the
>application.  Doing plugins well, IMO, requires a lot of code.  It
>shouldn't be spread through the app.  It also cleanly separates
>plugin mechanism from the actual extensions the app wants.
>Finally, plugins have to be extensible too. They should really be on
>a nearly equal footing with the app.  Otherwise plugin developers
>who want the plugins to be extensible will need to reimplement there
>own extensibility system.
Without the use of plugin meta-data in XML files, auto-loading, and
many of the things discussed, I am not so sure that plugins will be such
a large body of code. It is really a matter of deciding whether the features
that libplugin provides are desirable for GCC. If so, then there is a
lot of code required for plugins and libplugin becomes a good idea IMO.
If not, then libplugin may be more than we need. It really depends
on what "doing plugins well" means for the specific application.


>*Scalable and Granularity*
>The system is very scalable.  Really this is due to the push
>architecture.
The granularity, as I understand it, is only as fine/coarse as the number
of function pointers in the system that can be overwritten. This is no
different from the pull method (i.e. the granularity depends on where
you put the hook locations) except that function pointers "may already
exist". Though I may have misunderstood something...

I.e. for the "pull" method you can either add a "pull" for
firePluginEvent(), or add a "pull" inside each existing
event handler. Whereas the push method requires that the existing event
handlers are called via function pointers, and the "push" chains itself
to that.

I have used a similar method for the "push" style of plugin in Python. The
advantage there is that basically "anything" can be pushed in Python, so
the system becomes very flexible to extend via "plugins". In C/C++ the
areas that can be extended need to be defined and turned into function
pointers for the push method to work.

Again, assuming I have understood how it works.

>*Mutliple cooperating plugins
>*I think some of the proposals don't allow multiple plugins or
>plugins aren't able to be extended in the same way that the
>application is.  In libplugin you can have lots of plugins all
>depending on each other.  Plugins can provide extension points as
>well as the application - this means it isn't just a matter of the
>application deciding what's important and everyone else having to
>make do.
>
>In some senses, this is the difference between a plugin system and
>loading a few shared libraries.  A plugin system provides a

Re: Defining a common plugin machinery

2008-10-01 Thread Brendon Costa

> I believe we should first focus (when the runtime license will permit
> that) on making whatever plugin machinery available and merged into
> the trunk (when it comes back to stage one). This is not an easy task.
Isn't the point of this discussion to decide what features to put into a
plugin framework? I.e. we need a "whatever plugin machinery available"
to exist before we can even think about merging it into the trunk, and
defining what that looks like is the point of this discussion, I thought.

Possible steps for moving forward with this:
1) Define what features we need for the first release, and think about
what we may want in the future
2) See which frameworks currently exist and how each meets the necessary
features identified
3) Either use one of the above frameworks as a base or start a new
framework on the plugin branch
4) Work on the "base set of features" for a first release
5) Make sure the branch is up to date/tracking the trunk
6) Look at merging into the trunk when licensing is complete

We are still at 1 (and partially identifying projects for 2) as far as I
understand.

I was going to start itemizing the features we have discussed, and the
frameworks mentioned, on the wiki. But I am not going to have time to do
so for a number of weeks now. If someone else wants to do it, it may get
done a bit faster.

So far, I think libplugin seems to be the most "general" plugin
framework for GCC I have had a chance to look at (it was easy to
evaluate because it has some decent documentation online).

> In practice, I think that we should first try to get some code into
> the trunk which make some plugin work on some common easy host system
> (Linux), and only after try to generalize the work to harder hosts.
I agree that providing working code for only easy-to-implement
platforms (and basic plugin features) at first is a good idea (but do so
on a branch first, then merge that to the trunk once it is operational).
However, we do not want to start with a framework that will need to be
completely redesigned in the future to support other platforms or
usages. I.e. thinking ahead but not necessarily implementing ahead...

> My main concern is plugins & passes.
Yes. We have not really looked in much detail at this more important
aspect: how to manage passes with plugins. It looks like libplugin has
some ideas for pass management that may help? Any thoughts?





Re: Defining a common plugin machinery

2008-10-08 Thread Brendon Costa
  
> Personally I'm against the env var idea as it would make it harder to
> figure out what's going on. I think someone mentioned that the same
> effect could be achieved using spec files.
>
Ian mentioned the idea of creating small wrapper scripts with the names:
gcc/g++ etc which just call the real gcc/g++... adding the necessary
command line args. These can then just be put earlier in the search path.

I currently use the env var method in my project, but I personally think
the wrapper script idea is a bit nicer, so I will likely change to that
soon.

Brendon.


Re: Defining a common plugin machinery

2008-10-09 Thread Brendon Costa

>   Sounds like you're almost in need of a generic data marshalling interface
> here.
>   
Why do we need the complication of data marshaling?

I don't see why we need to define that all plugin hooks have the same
function interface as currently proposed, i.e. a single void*. This
creates a lot of work marshaling data, both for parameters and for return
values, which is already done for us by the language (though I may have
misunderstood the train of thought here).

I will propose the start of a new idea. This needs to be fleshed out a
lot but it would be good to get some feedback.

I will use the following terminology, borrowed from Qt:
signal: a uniquely identified "hook" to which zero or more slots are
added (i.e. the caller).
slot: a function implementation, say in a plugin, that is added to a
linked list for the specified signal (i.e. the callee).

The main concepts in this plugin hook definition are:
* Signals can define any type of function pointer so can return values
and accept any parameters without special data marshaling
* Each signal is uniquely identified as a member variable in a struct
called Hooks
* A signal is implemented as a linked list where each node has a
reference to a slot that has been connected to the signal
* A slot is a function pointer and a unique string identifier

This differs a bit from the Qt definition, but I find it helpful to
describe the entities.

Important things to note:
Multiple plugins are "chained" one after the other. I.e. It is the
responsibility of the plugin author to call any plugins that follow it
in the list. This gives the plugin authors a bit more control over how
their plugins inter-operate with other plugins, however it would be
STRONGLY recommended that they follow a standard procedure and just call
the next plugin after they have done their work.

Basically, the idea is to provide the following structure; most of the
work then involves manipulation of the linked lists, i.e. querying
existing items, inserting new items before/after existing items, and
removing items.

This is not a proposed end product; it is just to propose an idea. There
are a few disadvantages with the way it is implemented right now:
* Too much boilerplate code for each signal definition
* Chaining calls means the responsibility of calling the
next plugin ends up with the plugin developer, which could be bad if a
plugin developer does not take due care; however, it also provides them
with more flexibility (not sure if that is necessary).

Now, I have NO experience with the current pass manager in GCC, but
would the passes be able to be managed using this same framework,
assuming that each pass is given a unique identifier?

Thanks,
Brendon.

#include <stdio.h>
#include <stddef.h>

/* GCC : Code */
struct Hooks
{
   /* Define the blah signal. */
   struct BlahFPWrap
   {
  const char* name;
  int (*fp)(struct BlahFPWrap* self, int i, char c, float f);
  void* data;
 
  struct BlahFPWrap* next;
  struct BlahFPWrap* prev;
   }* blah;
  
   struct FooFPWrap
   {
  const char* name;
  void (*fp)(struct FooFPWrap* self);
  void* data;
 
  struct FooFPWrap* next;
  struct FooFPWrap* prev;
   }* foo;
};

/* Initialised by main */
struct Hooks hooks;

void SomeFunc1(void)
{
   /* Call plugin hook: blah */
   int result = (!hooks.blah ? 0 : hooks.blah->fp(hooks.blah, 3, 'c',
2.0f));
   /* ... do stuff with result ... */
   (void)result;
}

void SomeFunc2(void)
{
   /* Call plugin hook: foo */
   if (hooks.foo) hooks.foo->fp(hooks.foo);
}

void PlgInit(struct Hooks* h);

int main()
{
   hooks.blah = NULL;
   hooks.foo = NULL;
  
   PlgInit(&hooks);
   return 0;
}


/* Writeme... (stubs so the example compiles; PLUGIN_REMOVE is used below) */
#define PLUGIN_INSERT_BEFORE(Hooks, Struct, Hook, FuncPtr, Before, SlotName)
#define PLUGIN_REMOVE(Hooks, SlotName)


/* In plugin */
#define PLUGIN_NAME "myplg"

static void MyFoo(struct FooFPWrap* self)
{
   printf("MyFoo\n");
   if (self->next) self->next->fp(self->next);
}

static int MyBlah(struct BlahFPWrap* self, int i, char c, float f)
{
   printf("MyBlah\n");
   if (self->next) return self->next->fp(self->next, i, c, f);
   return 0;
}

void PlgInit(struct Hooks* h)
{
   PLUGIN_INSERT_BEFORE(h, struct BlahFPWrap, blah, &MyBlah, NULL,
PLUGIN_NAME "_MyBlah");
   PLUGIN_INSERT_BEFORE(h, struct FooFPWrap, foo, &MyFoo, NULL,
PLUGIN_NAME "_MyFoo");
}

void PlgShut(struct Hooks* h)
{
   PLUGIN_REMOVE(h, PLUGIN_NAME "_MyBlah");
   PLUGIN_REMOVE(h, PLUGIN_NAME "_MyFoo");
}



Functional Purity

2008-11-28 Thread Brendon Costa
Hi all,
I want to use GCC to categorise "functional purity" in C++. My
definition will differ from classic functional purity. In particular:

A function is considered pure if it makes no changes to existing
memory or program state. There may be a few exceptions to this rule,
such as new/malloc: they change program state (allocating
new memory) but will be manually marked as pure. However, free/delete
will be marked as impure (modifying a function parameter, but not
global state). This means that a pure function can create new
objects/memory and make changes to those new objects, but existing
memory must remain unchanged.

When categorising a function's purity I would also like to identify the
cause of the impurity. In particular, for a function that is impure I
want to categorise the impurity cause as:
* modifies global state
* modifies a function parameter
* modifies the object state (an extension of the function-parameter
case for the "this" parameter)

The reason for posting this is to ask: is there code in GCC that
already does something "similar", in say one of the optimisation passes,
so I can get a look at how to get started on this?

Thanks,
Brendon.


Fwd: Functional Purity

2008-11-29 Thread Brendon Costa
Forgot to reply all...

-- Forwarded message --
From: Brendon Costa <[EMAIL PROTECTED]>
Date: 2008/11/30
Subject: Re: Functional Purity
To: David Fang <[EMAIL PROTECTED]>


>Sounds like you want to (at least):
>
> 1) automatically qualify every declaration (including parameters, member
> declarations, and member function declarations) with 'const'.
>
> 2) forbid the use of the '=' operator family (including +=, etc...)
> (might be redundant with #1)
>
> Does this accurately summarize your proposed analysis? As a crude start,
> maybe you could alter the syntax trees, and let the rest of compilation
> catch any resulting violations?

You are right in that what I am trying to achieve is very similar to
the analysis of constness already performed by the compiler. Constness
and purity as I have defined them are very similar, except there is no
syntactic const annotation that can represent "global purity" (i.e.
marking up a given function to indicate that it does not modify any global
variables), and constness can be easily (and unintentionally) cast
away.

Part of the reason for this analysis is to identify parameters and
methods that could be declared const but are not and notify the user,
so as a result it cannot rely on the user's const declarations.

Not being able to rely on prototype definitions will make the task much
more difficult, as I will need to do my analysis at link time, though
I am already doing this sort of thing for my static analysis program
(EDoc++). The purpose of this analysis is to be part of a larger
analysis to automate classifying the exception safety guarantees of
functions. I am looking at the feasibility of automating something
similar to the process described at:
http://www.ddj.com/cpp/184401728

I am starting with the classification of a function's purity. I will also
look at the other suggestions, ipa-pure-const.c and
gimple_has_side_effects(), and see if they can help me out somehow.

Thanks,
Brendon.


Re: Feedback request.

2008-12-07 Thread Brendon Costa
2008/12/8 Simon Hill <[EMAIL PROTECTED]>:
> I'm curious as to why I didn't get any responses to my last posts here
> on 29 / 11 / 2008.
> http://gcc.gnu.org/ml/gcc/2008-11/
>

Hi Simon,

I have found in the past that larger posts do not get many, if any,
responses. One thing that might help is to ask one question at a time
and keep it as succinct as possible. That is just my experience and
may not be behind the lack of responses you have seen.

If a post is going to take longer than 5 minutes to respond to, then
unless there is some motivation to do so, people are not likely to
spend a large chunk of time reading/responding when they could spend
the same time doing their own important work.

Brendon.


Re: Plugin API Comments (was Re: GCC Plug-in Framework ready to port)

2009-02-04 Thread Brendon Costa
2009/2/1 Sean Callanan :
>
> (3) The -fplugin-arg argument is one way to do arguments.  We do it as
>
>  -ftree-plugin=/path/to/plugin.so:arg=value:arg=value:...
>

In the previous discussions we had on this whole thing
(http://gcc.gnu.org/ml/gcc/2008-09/msg00292.html), we were aiming
towards arguments like:

(Style 1)
-fplugin=<name> -f<name>-<arg>[=<value>]

so this might look like:
gcc -fplugin=edoc -fedoc-file=blah.edc -fedoc-embed

The idea was that GCC would then search for the plugin on some defined
search path (that will be specific for the build tuple and GCC
version) OR a user could specify the plugin including path on the
command line directly. So the above COULD look something like:

gcc -fplugin=/usr/local/lib/gcc-4.0.4/i386-unknown-netbsdelf3.0/edoc.so.1.0.0
-fedoc-file=blah.edc -fedoc-embed

I understand that currently we are looking at a new format like:

(Style 2)
-fplugin=<path>;<arg>[=<value>];<arg>[=<value>]...

example:
gcc 
-fplugin=/usr/local/lib/gcc-4.0.4/i386-unknown-netbsdelf3.0/edoc.so.1.0.0;file=blah.edc;embed

I personally prefer Style 1 for the plugin framework. My reasons include:

* It looks more like existing options and has a level of familiarity.
* Using ';' to separate args can be confusing to new people, especially
if they forget to quote them properly (I can see many questions on
gcc-help arising from this). Also, is ';' safe to use on all systems, or
can some filesystems include the ';' character in a file/path name?
* I like to have the option for the plugin to be searched for by GCC
on a pre-defined search path. For those that want to specify their
plugins explicitly, they can still do so.
* I know that "automatic loading" of plugins is not going to be in the
plugin framework, but if in the future we DID decide to add it (for
some unknown reason), then the plugin arguments would not need to be
changed. I.e. with the above example, if the edoc plugin were to be
automatically loaded, its arguments could be provided like:

gcc -fedoc-file=blah.edc -fedoc-embed

which is just the same as it was before, but the -fplugin=edoc was omitted.

Brendon.


C++ Frontend and combining files

2006-10-02 Thread Brendon Costa
Hi All,

I have been writing a small extension for GCC on and off for the last
year or two that performs static analysis of C++ exception
propagation. Basically I use the patched GCC to gather the information
I am after while it compiles files and then save that information to
files that are then sent through a post processing phase.

I have run into a small problem: I can't find where in the GCC source
it handles the logic of combining object files. I.e.

g++ test.cpp ext_test.cpp -o blah

Will run the C++ frontend to compile test.cpp into some assembly file
like /tmp/foo1.s and then again for ext_test.cpp into /tmp/foo2.s

Now I currently need to generate an associated file for each of these
files as well. So my patched gcc will produce:

from test.cpp
/tmp/foo1.s
/tmp/foo1.edc

from ext_test.cpp
/tmp/foo2.s
/tmp/foo2.edc


Now I assume GCC runs the assembler on /tmp/foo1.s and produces
something like /tmp/foo1.o and similarly for /tmp/foo2.s produces
/tmp/foo2.o

Then GCC (SOMEWHERE I CAN'T SEEM TO FIND THIS PART OF THE CODE) links
these objects by calling ld with /tmp/foo1.o and /tmp/foo2.o, combining
these into: blah

When this occurs I wish to also combine my /tmp/foo1.edc and
/tmp/foo2.edc into a file: blah.edc

Where is the best place in the GCC source to look at where this is
achieved? I need access to the output file name "blah" and the
assembly file names (or at least be able to construct my /tmp/foo1.edc
.. names from the object files) so I know what files to process for my
.edc files and how to combine them.


I am currently working with a gcc-4.0.1 source base. If someone can
point me as to where GCC links these files together I would greatly
appreciate it.

Thanks,
Brendon Costa.




Re: C++ Frontend and combining files

2006-10-02 Thread Brendon Costa
Ian Lance Taylor wrote:
> Brendon Costa <[EMAIL PROTECTED]> writes:
> 
>> Then GCC (SOMEWHERE I CANT SEEM TO FIND THIS PART OF THE CODE) links
>> these objects by calling ld with /tmp/foo1.o and /tmp/foo2.o combining
>> these into: blah
> 
> It's done in the driver, gcc.c.  Look for link_command_spec.
> 
> Ian
> 

Mike Stump wrote:
>
> Additionally, see -save-temps for additional hints.  This avoids
> /tmp/temp234.s as an intermediate file and generates ext_test.s
> instead.  Run with -v, and you can see how the compiler is called
> from
> the driver as well.  If the command line has all the information you
> need, you then can just change the compiler.  If it doesn't, change
> the
> specs to ensure that the information you need is on the command
> line.
> cp/lang-specs.h has those specs that you'd probably need to change.
>
>

Thanks for the info. I just read the comments in gcc.c on the spec
language and a few web-pages which basically have the same information
and it is all quite confusing...

Unfortunately the command line for cc1plus does not have all the
information I need. It has enough in order to create a .edc file for
each .cpp file compiled, but the problem comes when linking all these
results together as cc1plus is not used for this at all.

After looking at the spec files information I would probably change my
code so that the output would be like:

test.cpp : /tmp/foo1.s /tmp/foo1.s.edc
ext_test.cpp : /tmp/foo2.s /tmp/foo2.s.edc

and I would also change my planned approach so
that I would need to do the following:


* Modify the ASM spec used for compiling .s files into .o files so
that it will somehow rename the /tmp/foo1.s.edc files to
/tmp/gah1.o.edc files where /tmp/gah1.o is the name of the output .o
file that comes from the assembler.

* Write a wrapper for ld, that looks for .edc files for each .o file
given and merges them before calling the real ld. This would also look
for .edc files associated with any libraries being statically linked
with and merge those as well.


--- Question/Request ---

I have no idea how I could go about achieving this. Would someone be
able to help me with any information on how I could modify the ASM
command spec in the GCC driver to do this?


Some simple questions that might help me in doing it:

1) Is it possible for a spec to execute more than one program?
I.e. can I call as and rm with the same spec?

2) If so, how could I add a new command to the spec?



I assume I would be modifying the contents of the "invoke_as" string
in gcc.c. If I did so would this also work when compiling with debug
mode enabled? (It seems there is a special ASM case for compiling with
debug)


Again, thanks a lot for any help you can provide. I am really looking
forward to having this tool completed.

Brendon.


Re: C++ Frontend and combining files

2006-10-03 Thread Brendon Costa
Mike Stump wrote:
> Hum, on second thought, why not just encode the information you want
> into the .o file.  Just put it into a special section, in whatever
> format you like, the linker will combine them, no additional files, .a
> files work, ld -r foo.o bar.o -o new.o works and so on.  You can then
> fish the information back out from the .o files or the executable as you
> want.
> 

That sounds like a great idea! I guess I need to start researching how
to embed data into .o files... Is looking at debug data generated by
the C++ front-end a good place to start?

I assume this can be done by adding certain directives to the
assembler source file (.s), since the assembler generates the .o files.

Anyhow, I will continue to look further into this and may get back
with more questions :-) later. Thanks for the idea.


Thanks,
Brendon.



Re: C++ Frontend and combining files

2006-10-03 Thread Brendon Costa
Brendon Costa wrote:
> Mike Stump wrote:
>> Hum, on second thought, why not just encode the information you want
>> into the .o file.  Just put it into a special section, in whatever
>> format you like, the linker will combine them, no additional files, .a
>> files work, ld -r foo.o bar.o -o new.o works and so on.  You can then
>> fish the information back out from the .o files or the executable as you
>> want.
>>
> 
> That sounds like a great idea! I guess I need to start researching how
> to embed data into .o files... Is looking at debug data generated by
> the C++ front-end a good place to start?
> 
> I assume this can be done by adding certain directives to the
> assembler source file (.s), since the assembler generates the .o files.
> 
> Anyhow, I will continue to look further into this and may get back
> with more questions :-) later. Thanks for the idea.
> 
> 
> Thanks,
> Brendon.
> 



Alright, after briefly looking into some info on binutils and GNU
as, I have a few thoughts.

I think I could insert my data into the .s file in a particular
section. I am not sure if I should create my own named section used
for meta-data (only available using COFF, from what I understand of
the as .section directive), or maybe I could insert it into the .data
section with a particular identifier. I don't understand this
completely though.

If I place it into the .data section, would it then be possible to
have conflicts with code symbols? I am also not yet sure how I would
extract this data from the objects (using libbfd somehow; I will have
to look further into it).

What would be the best directive to use to insert binary data into the
section? I was thinking at first of using the .ascii directive as my
file is just a plain text file. However in the future I would like to
convert the file into a binary format and could not find a directive
that would allow me to insert binary data simply.

I would also need to figure out how to modify the front end to insert
this data into the .s file. I have not looked into that yet so it may
be as simple as inserting a node into the tree, but I don't know the
details yet.


-- Option 2 --
The other option I could think of is to generate a .edc file from the
front end as normal along with the .s file, then run the assembler to
produce the .o file from the .s file, and then get the driver to run
some "embedding compiler" program that uses libbfd to embed the .edc
file into the .o file before it is passed onto the linker. This could
be a more "generic" approach that could be used elsewhere for
embedding meta-data into object files using GCC, though I am not sure
of the details yet.

This "embedding compiler" would simply place meta-data into object
files using libbfd and would be driven by the gcc driver. I have never
used libbfd before, and from what I understand it would still have the
same problems of needing to decide which section to place the
meta-data into. Where would be the best section to place this sort of
data?


Also does using ar/ld on resulting .o files filter out any sections
they don't know about? Or do they always just include sections into
the resulting archive/executable even if the sections are non-standard
containing meta-data that they don't understand?


Sorry for the emails; I just find it helpful to discuss these issues
with others. It has already produced great results in the idea of
embedding the meta-data into the .o files, and I am fairly new to
compiler development and so don't understand this stuff completely.


Thanks,
Brendon.




Re: C++ Frontend and combining files

2006-10-03 Thread Brendon Costa
Thanks for all the help. I have tried a few things now and decided to
try creating a new section called .edoc. I tried using .comment on my
machine; however, there is already data in .comment, which would make
parsing the section to find my data a little more difficult as it
becomes intermixed.

At least for the moment I don't think it necessary to play with linker
scripts. I tried a few things on my machine:

* linking single .o file to create exe
* linking multiple .o files to create exe
* creating .a file from multiple .o files and linking that to create exe

where each .o file had a defined .edoc section with some data in it
and in all cases the resulting exe has a .edoc section with the data
from each .o file .edoc section appended to each other.

This works well for my purposes (though I guess I should try a few
different platforms, as I really would like to support at least NetBSD,
Linux and Win32 using MinGW; the last is most likely a problem
platform, but I'll see how I go). However when I tried using the
.comment section, I ended up with my data intermixed within a number
of other comments that I did not insert. I would need to find some way
of separating my data from the other inserted data which starts to get
more complex. By using my own section I don't think I need to define
any symbols or sync headers for my data or anything like that.
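Since the linker just appends each object's .edoc contents end to end, one way to keep the per-object records separable without symbols or sync headers is a small length-prefixed framing. A hypothetical sketch (the record layout here is my invention, not what EDoc++ actually emits):

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Each object file contributes one record: a 4-byte little-endian
// length followed by that many payload bytes. After linking, the
// section is just these records laid end to end, so a walker can
// recover each object's data with no symbols or sync headers.
std::vector<std::string> walk_records(const unsigned char* sec, std::size_t size) {
    std::vector<std::string> out;
    std::size_t pos = 0;
    while (pos + 4 <= size) {
        std::uint32_t len = std::uint32_t(sec[pos])
                          | std::uint32_t(sec[pos + 1]) << 8
                          | std::uint32_t(sec[pos + 2]) << 16
                          | std::uint32_t(sec[pos + 3]) << 24;
        pos += 4;
        if (len > size - pos) break;  // truncated/corrupt record: stop
        out.emplace_back(reinterpret_cast<const char*>(sec + pos), len);
        pos += len;
    }
    return out;
}

// The emitting side: what each compilation would append to its
// object's .edoc section.
void append_record(std::vector<unsigned char>& sec, const std::string& payload) {
    std::uint32_t len = payload.size();
    for (int i = 0; i < 4; ++i)
        sec.push_back(static_cast<unsigned char>((len >> (8 * i)) & 0xff));
    sec.insert(sec.end(), payload.begin(), payload.end());
}
```

Running the walker over the final executable's section bytes then yields one record per contributing object file.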

Thanks again for the help. I now have some things I can try.
Brendon.



Creating a VAR_DECL in a named section.

2006-10-05 Thread Brendon Costa
Hi all,

I have been trying to place some data into a named section of a .o
file. I can do it currently by hooking into various of the RTL to
assembly routines and emitting the asm code directly, however I am now
trying to do it from within the C++ front end by inserting a VAR_DECL
node and setting it as belonging into a named section.

I have some code that looks like:


void bjc_add_var_decl(tree node)
{
   tree identifier;
   tree section_name;
   tree var_decl;
   tree var_decl_type;
   size_t data_len;
   const char* data = "Some data to save to the named section.";
   const char* id_name = "BJC_SOME_ID";
   tree init;


   identifier = get_identifier(id_name);
   data_len = strlen(data);
   var_decl_type = build_array_type(char_type_node,
  build_index_type(size_int(data_len)));
   var_decl_type = c_build_qualified_type(var_decl_type,
  TYPE_QUAL_CONST);

   var_decl = build_decl(VAR_DECL, identifier, var_decl_type);

   TREE_STATIC(var_decl) = 1;
   TREE_READONLY(var_decl) = 1;
   DECL_ARTIFICIAL(var_decl) = 1;


   init = build_string(data_len + 1, data);
   TREE_TYPE(init) = var_decl_type;
   DECL_INITIAL(var_decl) = init;
   TREE_USED(var_decl) = 1;

   section_name = build_string(strlen(".edoc"), ".edoc");
   DECL_SECTION_NAME(var_decl) = section_name;

   LogFine("Need to attach it somewhere in the tree.");
   bind(identifier, var_decl, node, false, false);

   finish_decl(var_decl, init, NULL_TREE);
   pushdecl(var_decl);
}

I initially had it without the pushdecl() call and thought that the
bind call would bind the new var_decl to the node passed into the
function.

I got most of the code above from looking at a few different places.
One was some documentation for GEM, and another was from the code used
in creating var decls for the nodes created when encountering
__FUNCTION__ in the source.


I have tried calling this method using a few different nodes, one was
the global namespace node of type NAMESPACE_DECL just after
finish_translation_unit() (I think that was the entry point), and
another just after a call to finish_function() with the FUNCTION_DECL node.

In all situations there has been nothing emitted in the resulting
assembly source code.

Up until now my hacking of GCC has only been reading values from the
tree. I have not tried generating nodes and inserting them into the
tree so this is really a first for me.

Any ideas what I am doing wrong?

Thanks for any help.
Brendon.







Re: Creating a VAR_DECL in a named section.

2006-10-06 Thread Brendon Costa
Brendon Costa wrote:
> Hi all,
> 
> I have been trying to place some data into a named section of a .o
> file. I can do it currently by hooking into various of the RTL to
> assembly routines and emitting the asm code directly, however I am now
> trying to do it from within the C++ front end by inserting a VAR_DECL
> node and setting it as belonging into a named section.
> 

Well, I have been trying a few things and found that I can do it now by
making a call to assemble_variable(). I am also trying to remove a few
things from the code I posted that I don't think are necessary, like
the bind() call and the pushdecl() call, among a few other things.

Brendon.


Plugin Branch

2008-01-21 Thread Brendon Costa

Hi all,

I have been away from the GCC mailing list for a while. I searched the 
archives but could not find a resolution to the issue of inclusion of 
plugins in GCC.


Has it been decided if the GCC plugin branch will be added to GCC or not?

I am not after a discussion on the merits/issues of doing so as that 
has been covered previously, just wondering if an official decision 
has been made?


Thanks,
Brendon.


Re: Plugin Branch

2008-01-22 Thread Brendon Costa

Can we count on the fact that the SVN branch will be synced from time to
time to the mainline ?


synchronized ...from... the mainline...


Yes.



When you say it will be synchronized from time to time, is it possible 
to make it so that at least for each mainline GCC release we could 
produce a corresponding GCC plugin branch release or tag which is the 
same but with the plugin functionality?


Having a minimum tagging policy like this would be very helpful for 
projects wishing to make use of the plugin branch of GCC.


I know if there is at least a tag on the plugin branch for each GCC 
release, then I will use it for my open source project: EDoc++


If enough people end up using it, it might give some weight to the 
argument for merging it into the GCC mainline sometime in the future.



Thanks,
Brendon.



Re: How to get fndecl on C++ CALL_EXPR?

2008-01-30 Thread Brendon Costa

Andrew Pinski wrote:

On Jan 30, 2008 7:59 PM, H.J. Lu <[EMAIL PROTECTED]> wrote:

I am trying to get fndecl on a C++ CALL_EXPR with get_callee_fndecl.
But get_callee_fndecl returns NULL. What is the proper way to
get fndecl on a C++ CALL_EXPR in the middle end?


If it is returning NULL, then there is no function decl associated with
that call, meaning it is an indirect call.  There is nothing you can do then.



If it is an indirect call it is still possible to gather SOME useful 
information (depending on what you are trying to do). I have an 
application (EDoc++) where I find a list of "possible" functions that 
may be executed as the result of an indirect call.


There are two situations I check for:

virtual function calls
function pointer calls

Note: I am not overly familiar with things in GCC. This has worked for 
me so far with GCC 4.0.1.


For virtual functions it seems to be possible to obtain the fndecl for 
the virtual function that is being referenced. This is NOT the actual 
function that is called as that is determined at runtime.


Elsewhere I generate a list of functions that MAY be called as a 
result of a CALL_EXPR to a specific virtual function.



The other case is function pointer calls. In this case I get the type 
of the function pointer being called.


Elsewhere I maintain a list of functions which have had their 
addresses taken, and can then match all of these against the function 
pointer type to determine what "MAY" be called.
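The over-approximation can be pictured as two whole-program passes: record every function whose address is taken, keyed by its signature, then answer a call through a pointer of that signature with every recorded candidate. A toy model (the string-keyed signature and the class are illustrative only, not GCC data structures):

```cpp
#include <map>
#include <set>
#include <string>

// Pass 1: whenever a function's address is taken anywhere in the
// program, remember it under its type signature (here just a string
// like "void(int)").
// Pass 2: a call through a pointer of that signature MAY reach any of
// the recorded functions -- an over-approximation of the callgraph.
class CallGraphApprox {
public:
    void address_taken(const std::string& sig, const std::string& fn) {
        candidates_[sig].insert(fn);
    }
    std::set<std::string> possible_callees(const std::string& sig) const {
        std::map<std::string, std::set<std::string>>::const_iterator it =
            candidates_.find(sig);
        if (it == candidates_.end()) return std::set<std::string>();
        return it->second;
    }
private:
    std::map<std::string, std::set<std::string>> candidates_;
};
```

Because the address-taken facts may come from other translation units, the two passes only give correct answers once data for the whole program has been merged, which is exactly the limitation described above.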


Obviously there are problems with this approach: you don't get a 
restricted, more accurate callgraph but an overly expanded one. It also 
requires data from other translation units, i.e. the fndecls that MAY 
be called may live in different translation units. However it works fine 
for my purposes.


This was done for a project on sourceforge called EDoc++ that performs 
static analysis of C++ exception propagation.



I do all this in the C++ front end with the GENERIC tree. I am not sure 
if the data I use still exists at the GIMPLE level.


If you are interested in more information on how to do this, let me 
know and I will pull out the relevant code. But as Andrew said, 
usually this is only known at runtime and most applications have no 
use for this information.


Brendon.



GCC Plugins (again)

2008-09-04 Thread Brendon Costa
Hi all,

Every now and then I poke my head into this list to see if there is any
more progress on the GCC Plugin branch issue. In particular I don't want
to give up on this feature as it will be enormously useful for my open
source project EDoc++.

In the past, we have had a lot of discussion about the feature, but the
end result has been that RMS is opposed to it so nothing will be done
about it because he has the power.

Can anyone suggest where to go from here?

Preferably, I wish we could convince RMS that this is a good move
forward. Barring that the only solution I can think of is to create a
"fork" of GCC. Where for every GCC release, I provide a patched release
with plugin support. The issue then would be getting the various distros
to use the plugin variant rather than the "official" one (Which could be
quite difficult).


The following wiki page (not sure who created it) is a decent summary of
the past discussion on this issue:
http://gcc.gnu.org/wiki/GCC_Plugins

Thanks,
Brendon.



Re: Defining a common plugin machinery

2008-09-18 Thread Brendon Costa
Original format:
g++ -fplugin=edoc -fplugin-arg=file:blah.edc
g++ -fplugin=edoc -fplugin-arg-file=blah.edc
g++ -fplugin=edoc -fplugin-arg=file=blah.edc
g++ -fplugin=edoc -fplugin-key=file -fplugin-value=blah.edc

New:
g++ -fplugin=edoc -fedoc-file=blah.edc

Personally the "original" method seems to be a little awkward and "bulky".

3) Automatic loading of plugins

If we allow automatic loading of plugins, as I propose elsewhere in this
email, then passing arguments to those plugins might be a bit awkward.
In particular (assuming plugin edoc is loaded automatically):

Original method:
g++ -fplugin-arg=file:blah.edc (or one of the methods described above)

New method:
g++ -fedoc-file=blah.edc

In this case, with the original method, it is not obvious which plugin
the argument belongs to; in particular, if multiple plugins are loaded
automatically it is not possible to work out which of them should
receive the argument. In the method I am proposing it is.

There are some problems though with my proposal. In particular:
* A plugin's name cannot be the same as an existing -fxyz option.
* Implementing it may be a bit more work
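For what it's worth, recognising the proposed spelling is straightforward: split -f&lt;plugin&gt;-&lt;key&gt;=&lt;value&gt; at the first '-' after the -f prefix and at the first '='. A sketch, assuming plugin names contain no '-' (a real implementation would instead match the prefix against the set of loaded plugin names):

```cpp
#include <string>

struct PluginOpt { std::string plugin, key, value; bool ok; };

// Parse "-f<plugin>-<key>=<value>", e.g. "-fedoc-file=blah.edc".
// Assumption: the plugin name contains no '-'; GCC proper would
// resolve ambiguity by checking against its loaded plugin names.
PluginOpt parse_plugin_opt(const std::string& arg) {
    PluginOpt r; r.ok = false;
    if (arg.compare(0, 2, "-f") != 0) return r;
    std::string::size_type eq = arg.find('=');
    if (eq == std::string::npos) return r;
    std::string lhs = arg.substr(2, eq - 2);           // "edoc-file"
    std::string::size_type dash = lhs.find('-');
    if (dash == std::string::npos || dash == 0 || dash + 1 >= lhs.size()) return r;
    r.plugin = lhs.substr(0, dash);                    // "edoc"
    r.key = lhs.substr(dash + 1);                      // "file"
    r.value = arg.substr(eq + 1);                      // "blah.edc"
    r.ok = true;
    return r;
}
```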


--
At what point in the compilation stage should it kick in?

I think plugins should be loaded as soon as possible. I would propose
loading plugins immediately after the processing of command line
arguments or maybe even at the same time as command line argument
processing depending on what might be easier/cleaner.


--
Some extra questions.
--
What platforms do we want to support? I can think of the following
categories:

* Old platforms (No dlopen support)
Do we use the libltdl dlpre-opening to at least support "plugins" that
may be officially distributed with GCC?

* Windows (cygwin/mingw)
As I understand the issue (I am not very familiar with this) you can't
have unresolved references in a plugin back to the GCC executable. I.e.
Building GCC with -rdynamic will not work on this platform. Do we move
most of the GCC implementation into a "library/DLL" having a "stub"
main() that just calls the library implementation. Then both the
application AND the plugins can link with this library/DLL in a way that
will work on Windows.
Or are we going for the quick solution of using -rdynamic and not
supporting this platform (Not my preferred option)?

* "Newer" ELF platforms
These should be fine regardless of the method we use, I think.


--
How do we search for plugins and make sure we don't load incompatible ones?

We want to avoid loading plugins for different/incompatible
builds/versions of GCC. Things I think we need to take into account:
* Multiple installed versions of GCC
* Multiple installed builds for the same version of GCC (Is this worth
catering for?)
* Cross compilers
* Finding the directory in which to install a plugin should not be TOO
difficult for an external plugin project

A few methods I can think of for achieving this include:
* Enforced plugin naming conventions
* Searching for plugins only in a directory that expects plugins for a
particular version/build
* Embedding a version symbol in plugin binaries that can be queried
before dlopening the plugin

With that in mind, I assume the best option will be to define a specific
directory from where a specific build of GCC will search for plugins.
Would the following directory be one of the better locations to put
those plugins or is this a directory for objects that get linked into
generated binaries?
$libdir/gcc/i386-unknown-netbsdelf3.0/4.3.2/plugins/

Then comes the question of how an external project locates that
directory. Do we add an option to the GCC command line to print this
directory name?

--
How/where do we install headers and the library for external plugin
projects to use?


Thanks,
Brendon.


Re: Defining a common plugin machinery

2008-09-18 Thread Brendon Costa
Joseph S. Myers wrote:
> I think this is a bad idea on grounds of predictability.  
I can understand this view and was initially reluctant to suggest the
auto-load feature for this same reason. However, I cannot see another
solution that achieves simple usability for the small subset of plugin
types described below.

> Separately developed plugins have many uses for doing things specific to 
> particular projects 
This is the case for Dehydra, which is specific to the Mozilla
projects. However, the plugin I would like to develop is NOT designed
to be used with a specific project, but to be used while compiling any
project.

Just to provide an example:
Say I am developing a project called foo. Foo makes use of a library
that I am NOT involved in the maintenance of, such as libblah. In order
to get data on the complete callgraph and possible exception propagation
(this is what my plugin does), I need to build both libblah and foo with
my plugin to analyse the source for both. I.e. I can't give accurate
results without data for the complete callgraph.

In many cases this may be as simple as:
cd blah
./configure CXXFLAGS="-fplugin=edoc -edoc-embed"
make && make install

cd ../foo
./configure CXXFLAGS="-fplugin=edoc -edoc-embed"
make && make install

But this will not always be the case (mostly projects that don't use GNU
autotools for the build system, or do so "incorrectly").

I have control over my project, foo; however, I do not have control over
project blah. The problem is with badly defined build systems that do NOT
allow a user to pass the flags they want to the compiler. This will
likely mean editing the build system code for project blah just to build
with the compiler flags necessary to use the plugin.

To overcome this, my "plugin" looks for environment variables to enable
it. Note: initially I just used command line options to GCC, but I came
across this problem often enough that, to make it more usable, I needed
to find a different solution. As unconventional and possibly nasty as
the environment variable solution is, I cannot think of a better one.

Using environment variables the above would change to:
export EDOC_EMBED=yes
...build blah however it needs to be built ...

cd ../foo
./configure
make && make install

This will work regardless of the structure of either foo or blah's build
system. This problem is likely to exist not just for my project but for
a small subset of source analysis plugins that can not work with partial
data sets.
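The environment variable fallback boils down to one check at plugin start-up. A minimal sketch (the exact semantics of EDOC_EMBED here are my guess, not EDoc++'s documented behaviour):

```cpp
#include <cstdlib>
#include <string>

// A plugin can treat an environment variable as equivalent to its
// command-line switch, so builds whose flags cannot be changed still
// pick it up. Absent, empty, or "no" means off; anything else means on.
bool embed_enabled() {
    const char* v = std::getenv("EDOC_EMBED");
    return v != nullptr && *v != '\0' && std::string(v) != "no";
}
```

The plugin would consult this once during initialisation, in addition to (not instead of) its normal command-line options.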

In the end, I can understand if this is not a good enough reason to
outweigh the costs associated with automatically loading plugins (i.e.
predictability). I did however think it worth bringing up as an option.

I hope that describes the problem sufficiently. If you can think of any
other solutions to this problem, I would love to hear them.


Thanks,
Brendon.


Re: Defining a common plugin machinery

2008-09-18 Thread Brendon Costa
Ian Lance Taylor wrote:
> Write a one-line shell script to use as your compiler (that's what I
> would do), or define an environment variable which tells gcc which
> plugins to load (e.g., GCC_PLUGINS=/a/file:/another/file).
>
>   
Thanks for the input.

The one-liner shell script is a very good option.

The GCC_PLUGINS env var method is very similar to what I use currently,
though it would require GCC to look for such an environment variable. It
would have basically the same effect as searching a directory for a
list of plugins, but is more configurable and is another way of achieving
"automatically loaded" plugins. I think I will follow up the script idea.

Thanks,
Brendon.


Re: Defining a common plugin machinery

2008-09-19 Thread Brendon Costa
1) Use separate search paths for plugins
2) Ignore the problem and put it down to the user needs to understand
what they are doing
3) Somehow embed something in all plugins that can safely be queried,
which indicates the version/build of GCC the plugin belongs to and only
load plugins that match.

There may be some other options?
The above example is more likely to happen when versions of GCC are
upgraded but the plugins are not.
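Option 3 could be as simple as a well-known exported string in every plugin that the loader reads (via dlsym) and compares against its own build stamp before calling any other plugin entry point. A sketch with an invented symbol name and stamp format:

```cpp
#include <cstring>

// Convention: every plugin defines an exported symbol with a fixed name
// whose value identifies the GCC version/build it was compiled against.
// The loader dlsym()s it and refuses a mismatch before using the plugin.
// (The symbol name and stamp format are invented for illustration.)
extern "C" const char gcc_plugin_built_for[] =
    "4.3.2 i386-unknown-netbsdelf3.0";

bool plugin_compatible(const char* plugin_stamp, const char* gcc_stamp) {
    // A null stamp means the symbol was missing: reject rather than guess.
    return plugin_stamp != nullptr && std::strcmp(plugin_stamp, gcc_stamp) == 0;
}
```

The nice property is that the check happens after dlopen but before any plugin code runs, so a stale plugin fails cleanly instead of crashing the compiler.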

> Yes. The minor point is should we avoid loading plugins when they are
> not used? My view is that this is not a priority. But I am probably
> not caring enough about the time of small compilations (i.e. the time
> of gcc -O0 helloworld.c)
Plugins will only be loaded on request with the -fplugin= option.
So there is no overhead unless a plugin is requested (I.e. Autoload of
plugins is not required for now).

> We really should not care about plugins on plateforms without dlopen.
> What is more important is that GCC should be configurable to have
> plugins disabled and still (at least) be able to compile itself (or
> perhaps a reduced version of itself).
Agreed. Using libltdl we can achieve this, even if plugins are necessary
to build GCC itself, as it just links them in statically and "pretends"
they are dynamically loaded.

> I would prefer that (ie using -rdynamic). Perhaps a convention (i.e.
> adding a EXPORT_PLUGIN macro to mark exported symbols) should be defined.
The issue is that if we want to support Windows, we need to mark ALL
symbols that may be used externally with an export macro. Otherwise, we
can just use -rdynamic and avoid hidden visibility when building the GCC
application. Basically it is a matter of time/effort.

To support Windows we would need:
* EXPORT macro marking of all code used externally
* All code compiled into a separate library with a stub main() application

Otherwise, for most ELF platforms (with default visibility) we don't
have to mark anything exported and don't have to move the code into a
library like libgccimpl.so.
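The marking itself is one macro that expands differently per platform. A sketch of the convention (EXPORT_PLUGIN is the name from the quoted proposal; the expansions are my assumption):

```cpp
// On Windows every symbol a plugin may reference must be exported
// explicitly from the library/DLL; on ELF platforms it is enough to
// keep default visibility (or build with -rdynamic). One macro hides
// the difference.
#if defined(_WIN32) || defined(__CYGWIN__)
# define EXPORT_PLUGIN __declspec(dllexport)
#elif defined(__GNUC__)
# define EXPORT_PLUGIN __attribute__((visibility("default")))
#else
# define EXPORT_PLUGIN
#endif

// Hypothetical example of an internal entry point plugins may call.
EXPORT_PLUGIN int plugin_api_version() { return 1; }
```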

Do we say that for the first version we use -rdynamic and don't support
Windows (we will need build machinery to disable all this on Windows if
we do), or do we add the export macros and library and support the
Windows platform from the beginning? (Are there any other platforms with
quirks not covered here?)


I may not be able to work on the wiki page or reply to any messages for
about 3 days as it seems my phone line at home has gone down and thus so
has the internet. I will check back sometime after that and see how best
to summarise people's responses on the wiki.

Thanks,
Brendon.



Re: (Side topic EDoc++ binary embedding)

2008-09-19 Thread Brendon Costa
> Sorry to nitpick, but there is nothing Mozilla-specific in
> dehydra/treehydra. There are users outside of Mozilla.

Sorry, I didn't realise this.

> However, I do think it's awesome to be able to store plugin results in
> the resulting binary to do LTO-style analyses. How well is that working
> for you?

It makes life a WHOLE lot easier.

For example, say a project compiles a lot of object files into a few
libraries:
libstuff1.a contains: O01.o O02.o O03.o O04.o ...
libstuff2.so contains: O11.o O12.o O13.o O14.o ...
then main.o is linked with libstuff1.a and libstuff2.so into main.exe

The way the linking works, main.exe ends up containing the embedded
data from main.o and from whichever objects were actually pulled in
from libstuff1.a (O01.o ...).

My application then uses ldd to find any shared libs: libstuff2.so and
adds the data from those too.

So regardless of how complex the build/link procedure that generates the
application binary main.exe is, looking at the embedded data for that
main.exe is simply a matter of:

edoc main.exe --format simple

It saves having to manage extra files in the build system. The other
option I have is to define --edoc-dir=/home/me/blah, where /home/me/blah
is some absolute directory outside the build. The data is then placed in
a separate file per object file in that directory for everything that is
built. Again this results in usage similar to:

edoc /home/me/blah/ --format simple

I am currently not using the modified GCC to do the LTO-like analysis. I
have broken that out into a separate post-compilation tool, as it was
just easier to code that way.

To be honest the idea of embedding the data into the binary came from
someone else on this list.

Brendon.


Re: Adding to G++: Adding a warning on throwing unspecified exceptions.

2008-09-24 Thread Brendon Costa

> I agree that it won't be very useful initially due to lots of third
> party code like boost neither defining nor adhering exception
> restrictions 100% of the time (STL may be guilty also). However, this
> is a catch 22. Why not provide the mechanism for verifying exception
> specifications so that these libraries can, in future, become fully
> compliant? It won't take much work, especially when you can get a
> warning telling you exactly what you need to change, and the only
> thing you need to do 99% of the time is add throw() clauses to
> definitions and declarations. I bet you could get boost & STL
> compliant within a week even if they had 0 throw() clauses to begin
> with.
>
>   
One of the problems I see with this is generic template classes like the
STL containers. The author of the template class or container can't know
what types of exceptions will be thrown from them, so you must define
them as being able to throw all exceptions (which is how they are
currently). If those containers are then used all over the place, you
are back to square one. Personally I am starting to see that usage of
exception specifiers in C++ is "almost" useless (I find them a bit more
useful with EDoc++, but I still use them only very rarely).

I initially thought the same way as you are now, but in the end decided
it was necessary to construct a complete list of exceptions that can be
thrown before then applying such rules. This is what EDoc++ does. It
first calculates all possible exceptions, then it performs all sorts of
analysis to enforce things like exception specifiers (and much more).

After developing EDoc++, I have also realized how pervasive exceptions
can be in C++ code, which works against the sorts of checks that can be
done within a single translation unit. I had to go to some lengths to
provide a "suppressions" system for EDoc++, simply because almost all
code can throw all sorts of exceptions I had never heard of before. A
developer "generally" is not interested in exceptions like
__gnu_cxx::recursive_init, but after writing EDoc++ I found that
exceptions like this turn up all over the place, though they should not
occur in normal operation...

With all this said, though: if your purpose is to learn more about GCC
internals, then I think it would be a good project to undertake. You may
have to do so with the understanding that the maintainers may not choose
to accept your patch into the official GCC release. I don't know whether
they would or not, as I am not a GCC maintainer, but it would be worth
getting clarification before starting the task.



On a slight side note, there is a feature I have been postponing from
EDoc++ until "the distant future". It is to provide "meta-data" or code
"markup" which can be used to enforce certain restrictions on code being
used. This is primarily of use for defining a "contract" when writing
template code that wants to define a certain "exception guarantee".

For example, I assume you are familiar with the various "guarantees"
that have been described in the literature about exceptions in C++ (if
not, it is worth looking up; it sure opened my eyes to writing safer and
better code in the presence of exceptions). To make some generic
container satisfy the "strong guarantee" will require certain
restrictions on the type used to instantiate the template. A good
example of this is that the destructor for the type must not throw an
exception. For a given generic implementation, there may be additional
restrictions as well. If these restrictions are not met, then the
container will no longer meet the "strong guarantee". I wish to define a
method of marking up source code such that those requirements are
described inline in the template implementation, so that someone who
instantiates the template with a particular type can then run EDoc++
over their code and be warned when that particular instantiation may
break the requirements.
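The classic shape of such a contract is copy-and-swap: the strong guarantee of the sketch below holds only if T's destructor does not throw and the final swap is non-throwing, which is exactly the kind of per-instantiation requirement the proposed markup would state. A toy illustration, not EDoc++ syntax:

```cpp
#include <vector>

// strong_assign offers the strong guarantee: build the new state off
// to the side, then install it with operations that must not throw.
// If T's copy constructor throws while building tmp, dst is untouched.
// The implicit contract on T: its destructor must not throw (and the
// vector swap is non-throwing), otherwise the guarantee is lost.
template <typename T>
void strong_assign(std::vector<T>& dst, const std::vector<T>& src) {
    std::vector<T> tmp(src);   // may throw; dst still intact if it does
    dst.swap(tmp);             // non-throwing commit
}
```

The point of the markup idea above is that this contract on T currently lives only in comments; a tool could check it per instantiation.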

Anyhow, this is a preliminary idea and I just don't have the time now to
look at implementing it. It will likely require changes to the C++
parser and front end to insert extra nodes into the tree for this
markup. I am also not sure if others would accept such a modification
into the official GCC release unless it was generic enough to be used
for other purposes (maybe such as providing optimization hints for
certain segments of code or other forms of markup).

Thanks,
Brendon.



Re: Adding to G++: Adding a warning on throwing unspecified exceptions.

2008-09-24 Thread Brendon Costa
Simon Hill wrote:
> Brendon Costa said:
>> The author of the template class or container can't know
>> what types of exceptions will be thrown from them, so you must define
>> them as being able to throw all exceptions (which is how they are
>> currently).
> Ouch, you have a point. But couldn't you put this round the other way.
> Place the onus on the user of the template to comply with the
> exception guarantees inside the template. Unfortunately that would
> likely cause problems with some existing code.
>
> ...
>
> In other words, would you gain more from a tight exception
> specification than you'd lose by not being able to do this?
>
You as an author of a new template class "could" define it the other way.

The issue here is that doing so restricts usage of the generic
component. In specific cases this may be desirable, but not for generic
components like STL containers or those in boost. For generic components
you want to make them as useful as possible. Even if at the time of
writing the container you do not foresee that throwing any type of
exception is a good idea, users may come up with some way of using it
that you just never thought about.

I.e. I think the general philosophy is to define only what restrictions
are necessary to implement the template, not necessarily what we think
constitute good uses of the template at the time we created it.


Brendon.


Re: Adding to G++: Adding a warning on throwing unspecified exceptions.

2008-09-25 Thread Brendon Costa

> The above works on code::blocks, which uses some form of GCC, and
> looks OK to me.
> Of course this only works for exactly one exception type.
> You'd have to wait for C++0X variadic templates (and hope you can
> throw them) if you need zero or more than one.
> It's also very verbose, a little cryptic, and nested templates could
> make variadic versions horrifically long.
> However it's a start.
>   
All these are reasons why such a feature will not be used. Trying to
write code with C++ exception specifications has a LOT of problems; in
addition to the things you mentioned above, I can think of a few:

* It is VERY difficult to get correct (compounded across platforms and
versions of libraries/compilers, or with different compile time options
etc that may cause different exceptions to be thrown)

* It has the great feature that if you do get it wrong, it likes to
call that lovely terminate function (dripping with sarcasm...)

* Maintaining code with such specifications is cumbersome and a LOT of
work (I tried it).

* A lot of difficulties arise in the presence of templates in
determining what exceptions may be thrown (And solutions described are
cumbersome)

* There is no compile-time assertion of compliance (this is solved by
EDoc++, but may also be solved in part by the warning feature being
proposed).
This is where Java differs: it has a compile-time check that
"non-runtime" exceptions are specified/handled, but as I understand it
no runtime assertion (it is not needed), whereas C++ is the exact
opposite. This brings up the important point about "runtime"
exceptions. In Java they do not need to be listed in an exception
specifier; they will just propagate out of the function and may be
handled somewhere (or crash out of the main entry point). In C++, if you
add some exceptions to the specifier but forget/omit the runtime ones,
then your program is terminated early; there is no option for other code
to actually handle that exception (std::bad_exception, in my opinion, is
a bad substitute in this situation).


What benefit do exception specifiers provide that cannot be provided
with documentation? (I am sure there is some, or they wouldn't have been
added to the standard, but I have become disenchanted with exception
specifiers.)

The only thing I can think of is optimization, and that is discussed by
the Boost page on exception specifiers mentioned earlier.

It has been a while since I have looked at this topic in detail.
Initially EDoc++ was written so that I could go and write all my
C++ code with exception specifiers and then use EDoc++ to enforce them
(I wanted to reproduce for C++ what Java had). But even though EDoc++
can enforce such usage, actually writing code that lists all its
exception specifiers is just WAY too much work, and there is little or
no gain that I can see from doing so. As a result, EDoc++ has evolved to
cover different aspects of exception usage analysis, and I rarely use it
for its initial purpose.

What I mentioned in an earlier post is that rather than having a
function specify what it can throw, I propose that for the C++ community
a more useful compile-time assertion is to allow code that uses a
function to define what exceptions it can deal with that function
throwing, i.e. moving the contract from the callee to the caller. When I
started to become aware of writing exception-safe code, I noticed I
would be acutely aware that certain portions of code I was using needed
to meet particular exception requirements (such as the nothrow
destructor, or I use certain members of std::list or pointers because
they have a number of no-throw operations). I cannot enforce that, say,
some client code include exception specifiers, but I can modify my code
to define the contract that I require of it. Then I could use EDoc++ or
some other similar tool to tell me when those contracts have not been
met. If they are not, then I either request a change in the client code
or change the way I use it.

With all this in mind, I think that the proposed warning will not be
used by many and will not shape the future of C++, due primarily to the
issues inherent in the C++ language's definition of how exception
specifiers work. I do however think, as I mentioned before, that it
would make a great project for learning more about GCC internals and
would encourage giving it a try on those grounds.

Brendon.


Official Inclusion of GCC Extension Modules (Or similar)

2007-08-25 Thread Brendon Costa
Hi all,

I have a project that could benefit a lot from using something similar
to GEM (http://www.ecsl.cs.sunysb.edu/gem/). I have not used GEM (as
doing so is pointless currently, hence my email), but to summarise for
others not familiar with it, the following is an excerpt from their
website:

GEM is ... "a framework for writing compiler extensions as dynamically
loaded modules... similar to that of the Linux Security Modules project"

The problem with GEM is that any benefits gained from using GEM itself
are meaningless unless GEM is included as part of the official GCC
distribution to allow the official GCC to be extended with "plugins".

If people are interested in the reasons why this is the case, I can go
into that in more detail; however, for now I wanted to ask the following:

What kind of requirements would need to be met by such a "plugin
framework" in order to be included in the official GCC distribution?

Is it a long shot to even think that such a framework would ever be
included in GCC?

Thanks,
Brendon.



Re: GCC plugin - was: Official Inclusion of GCC Extension Modules (Or similar)

2007-08-26 Thread Brendon Costa
Ben Elliston wrote:
> I'm not sure how GEM (another Stony Brook University project) relates to
> the talk given at this year's GCC Summit, but there was a talk about a
> plug-in architecture for GCC to allow modules to operate on the GIMPLE
> IR ("Extending GCC with Modular GIMPLE Optimisations" by Sean Callanan).
> 
> This work was well received and is currently being prepared on a branch;
> I expect it will be introduced to the mainline at some point.  You might
> like to check out the proceedings of the summit for more details on it:
> 
> http://gcc.gnu.org/wiki/HomePage?action=AttachFile&do=get&target=GCC2007-Proceedings.pdf
> 
> Cheers, Ben

Thank you for the information. It sounds like this is almost what I am
looking for. I require hooks into the C GENERIC transformation stage
(before GENERIC is transformed into GIMPLE). The paper mentioned that
this is not yet done but may be added as a future improvement.

I might look a bit more into this branch. It looks great. Does anyone
know if this is the best place to direct any questions or thoughts I
have, or should I email Sean directly?

Thanks for the response.
Brendon.


GCC Plugin Branch

2007-09-01 Thread Brendon Costa
Hi all,

I have just recently had time to checkout and build the GCC plugin
branch. I am interested in building a simple plugin to give it a try.
After reading through the patches it seems simple enough: I just need to
create a shared library that defines the symbols:

pre_translation_unit
transform_ctrees
transform_gimple
transform_cgraph
transform_rtl
post_translation_unit

Does anyone have a template/example autoconf project that is already
setup with the needed gcc headers + build infrastructure to create a GCC
plugin?

Also, I noticed that it uses the -export-dynamic flag with libtool, which
means some platforms like Cygwin are unable to support plugins (noted in
the documentation). Are there any plans to support libtool's
dlpreopening so that plugins can still be used on such systems by
building them into GCC?

One other thing I noticed is that the configure check for plugin support
and inclusion of libltdl is $host-based, i.e. it pre-defines which hosts
support plugins instead of doing a generic check for plugin support. I
assume it should be possible to find out from libtool whether shared
library support is enabled or not, but then again the $host-based check
seems to be the way a lot of the other checks in the GCC configure.ac
files are done. I mention this because I often use NetBSD, which supports
shared libraries but is not listed for including libltdl or for enabling
plugin support.
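
To sketch the feature-based alternative I mean (assumed macro usage only;
the real GCC configure.ac is organized quite differently, and
ENABLE_PLUGINS is a made-up symbol), libtool already records whether
shared libraries work on the build host in $enable_shared:

```m4
# Sketch: query libtool's own result instead of matching $host.
LT_INIT([dlopen])

AS_IF([test "x$enable_shared" = xyes],
      [AC_DEFINE([ENABLE_PLUGINS], [1],
                 [Define if the host can load GCC plugins])],
      [AC_MSG_NOTICE([shared libraries unavailable; plugins disabled])])
```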

Otherwise seems like a nice simple implementation.

Thanks,
Brendon.



Re: GCC Plugin Branch

2007-09-05 Thread Brendon Costa
Ben Elliston wrote:
>> Does anyone have a template/example autoconf project that is already
>> setup with the needed gcc headers + build infrastructure to create a GCC
>> plugin?
> 
> The talk at the GCC Summit mentioned a handful of existing plug-ins and
> Sean spoke about them all being autoconfiscated.  I would recommend
> either finding those plug-ins (are they not on the branch) or contacting
> Sean to get hold of them.
> 


Thanks for the info. I could not see any plugin projects on the branch.
I have been told off-list that Sean Callanan has a few plugins, but they
are not public yet as he wants to work on them a bit first.

I am on holidays for a month as of tomorrow, so I won't get a chance to
look at it until I return anyway.

Brendon.


Re: Progress on GCC plugins ?

2007-11-07 Thread Brendon Costa
>> The concern is the many forms of shim layers that possibly could
>> be written more easily with a plug-in framework.
> 
> there is also a difference in these two scenarios:
> 
> 1. a) Company X writes a modification to GCC to generate special
> intermediate stuff with format Y.
> 
>   b) Company X writes propietary back end which can only
> read format Y stuff written by that modification.
> 
> 2. Same as 1, except that the GCC project does step a)
> thus semi-standardizing the output.
> 
> TO me it is pretty clear that these are very different
> situations from a license point of view.

I do exactly point 1 for my (Open Source) C++ exception static analysis
tool:
http://edoc.sourceforge.net/

I need a subset of the data in a format that is easy for me to process,
and it must stay around for a long period of time, so I don't want all the
information that GCC could generate in a more generic form. In this
case I feel a non-standardized format is best suited for my project.
This same argument would apply to other projects too, though I can see
how it can be helpful to have a single, more generic format; it just
might not be suitable in a lot of cases. If a plugin that writes in a
generic format were already distributed with GCC, then most people
would use that anyway if they could, rather than write and maintain their
own plugin.

This is a problem with or without the plugin framework. By adding
plugins you have the same issues as before, but make it easier for
all developers, whether open source or proprietary, to develop new GCC
functionality.

My project is an example of using a patch against GCC in order to
achieve the "shim layer". I would much prefer to do this with a plugin,
but the lack of a plugin framework is not going to stop me doing it
whatever way I can.

Brendon.


Re: Progress on GCC plugins ?

2007-11-07 Thread Brendon Costa
Robert Dewar wrote:
> Brendon Costa wrote:
>>>> The concern is the many forms of shim layers that possibly could
>>>> be written more easily with a plug-in framework.
>>> there is also a difference in these two scenarios:
>>>
>>> 1. a) Company X writes a modification to GCC to generate special
>>> intermediate stuff with format Y.
>>>
>>>   b) Company X writes propietary back end which can only
>>> read format Y stuff written by that modification.
>>>
>>> 2. Same as 1, except that the GCC project does step a)
>>> thus semi-standardizing the output.
>>>
>>> TO me it is pretty clear that these are very different
>>> situations from a license point of view.
>>
>> I do exactly point 1 for my (Open Source) C++ exception static analysis
>> tool:
>> http://edoc.sourceforge.net/
> 
> Well assuming that your tool is GPL'ed there is no issue and
> this is not a case of 1 above!
> 

The patch against GCC is GPL, the main library that is capable of
manipulating the data exported by the patched GCC is LGPL and could
theoretically be under any license.

What I was trying to point out is that proprietary projects can
already (without plugins) make exporters for GCC which are GPL, and
then create the majority of their code in other closed-source apps
that use that data.

I don't see plugins as changing this, except to make it easier, which
is really a major reason for providing plugins, isn't it? Making it
easier to provide additional GCC features?

My project was given as an example of how it could be done currently
without plugins, even though this project is open source.

Brendon.


Re: Progress on GCC plugins ?

2007-11-07 Thread Brendon Costa
Robert Dewar wrote:
> Brendon Costa wrote:
> 
>> The patch against GCC is GPL, the main library that is capable of
>> manipulating the data exported by the patched GCC is LGPL and could
>> theoretically be under any license.
> 
> Whose theory? You don't know that!

I thought it was obvious :-) My theory... A theory is not necessarily
true...

I will clarify where I am coming from. I am a software developer and
know very little about the legal side of things. What I have said is
really the way I understand things, as I have tried to have a basic
idea of what all this entails and how it affects my work.

If the license extends to the data generated by GPL apps (which is
really what I think we are talking about), then wouldn't the binaries,
or for that matter the intermediate source files generated by GCC (for
example using g++ -save-temps), also be covered under the GPL
regardless of the license of the source being compiled?

This data could include:
* object files
* intermediate source files (.i, .s)
* pre-compiled header files
* any other form of the original source serialized into some specific
format such as GIMPLE exports etc.

The problem with this is that each of these is really just a different
representation of the original source code. This complicates matters
even more if both the GPL of GCC and the license of the original
source code have an impact on what is generated. Wouldn't it mean
that only certain types of licensed source code would be allowed to be
compiled with GCC?


Where do you draw the limit of what is covered and what is not?

It seems to me, from what you have said, that nothing is safe from the GPL.
If an OS like BSD compiles itself with GCC and this mandates that
anything which uses that data must also be licensed under the GPL, then
they have no choice but to say only GPL code is allowed to run on that
OS. I don't see that as being the current common view. Most of the
BSDs have a non-GPL license and still use GCC to compile.

Or are you saying that the FORMAT of the data exported is covered by
the GPL, not the data itself? Does that mean that if you design a
particular data format in a GPL app, you are not allowed to write a
non-GPL application that uses data in that format? Does this also apply
the other way around?

I don't know if Microsoft has some sort of license over the PE
executable format for binaries, but if so, is GCC actually ALLOWED to
export data in that format? Look at GIF.

Then I guess it is worth asking: does the GPL applied to the code of
the plugin automatically apply to the format of the data it exports?

Sorry for the long email. I am just trying to understand the issues
involved.

>>
>> I don't see plugins as changing this except to make it easier. Which
>> is really a major reason for proving plugins isn't it? Making it
>> easier to provide additional GCC features?
> 
> No, it has other changes, since it establishes a standardized
> interface. In my view (from the point of view of an expert
> witness in copyright matters), this *does* make a big difference
> from a licensing point of view. Of course that's just my view,
> I don't know either, but I know that I don't know :-)

Are we talking about the standardized interface provided by GCC that
people can use to write plugins against? If so, I would agree that
anything which uses this interface must be GPL, so the code for a GCC
plugin must also be GPL.

If we are talking about the "interface" formed by the export of data
in a particular format, I still don't see how that affects the tools
that make use of that data format, unless the format itself has a
license imposed on it.

Again, this is all just my view, and I am *NOT* an expert in this
field. In fact, I have had almost no experience in legal areas. I am
curious to know where I have misunderstood the application of the GPL
to the GCC project and how that might apply to others like my EDoc++
project.

Thanks,
Brendon.




Re: Progress on GCC plugins ?

2007-11-07 Thread Brendon Costa
David Edelsohn wrote:
>   If you want to have a legal discussion, please take this
> conversation somewhere else.
> 
> Thanks, David
> 

Sorry, I just posted another email before I got this. Is there a
suitable place to move this discussion besides private emails?

Thanks,
Brendon.



Re: Progress on GCC plugins ?

2007-11-07 Thread Brendon Costa
Joe Buck wrote:
> On Wed, Nov 07, 2007 at 09:20:21AM +0100, Emmanuel Fleury wrote:
>> Is there any progress in the gcc-plugin project ?
> 
> Non-technical holdups.  RMS is worried that this will make it too easy
> to integrate proprietary code directly with GCC.
> 
> If proponents can come up with good arguments about how the plugin
> project can be structured to avoid this risk, that would help.
> 

Is there anything that still needs to be done on the technical side at
all, like testing or development? I am willing to help out a bit. I
have an interest in seeing this in a future release :-)

Thanks,
Brendon.


Re: Progress on GCC plugins ?

2007-11-18 Thread Brendon Costa
Tom Tromey wrote:
> Bernd> In my view, plugins will bitrot quickly as GCC's interface
> Bernd> changes; and they won't even help with the learning curve -
> Bernd> does anyone believe for a second you won't have to understand
> Bernd> compiler internals to write a plugin?
> 
> Plugins are about deployment, not development.  They don't make
> writing the code much simpler.  That is why we can argue that the risk
> they pose is small: they don't make it significantly simpler to make a
> proprietary GCC.
> 

In my situation this point is right on the mark. Let me follow with a
concrete example.

My project, EDoc++, requires a patched version of GCC in order to
perform static analysis of source code. I approached the Debian
maintainers list with a Debian package for this project to see if they
would include it in the official repositories. It was not accepted,
the reason being that it includes another patched version of GCC,
which takes up too much disk space. They don't want to accept these
sorts of projects because they all effectively require duplicates of
the same code (GCC). This problem of deployment could be solved with
plugins.

Development for my project would not be overly different with the
plugin framework included. I don't expect to be able to write plugins
without an understanding of the internals of the GCC version the
plugin is being written for. What I expect from the framework is a
simplified, standard method of deployment for additional GCC "features".


As an overall comment on the issues being discussed, I think we should
not deliberately omit features that will be useful to the open source
community just to deny those same features to proprietary developers.
That is just shooting ourselves in the foot in order to do the same to
others. It makes no sense, as we would be restricting our own progress
to prevent a possible misuse of the feature by others (one that is
already occurring anyway).


Brendon.