Related to Optimization Options in GCC

2006-05-24 Thread Inder

Hi All,

From the GCC manual, it's clear that optimization options from -O1 to
-O3 (or any greater level) emphasize performance, while -Os emphasizes
only code size; it (-Os) says nothing about performance (execution
time).

In our case, the size with -Os is smaller than with -O4, which agrees
with the manual, but the performance is also better with -Os. So can we
predict that performance with -Os will always be better than with -O4,
or is it just undefined (can be better or worse)?

Following is according to the manual:

-O2 :   GCC performs nearly all supported optimizations that do not
involve a space-speed tradeoff. It does not enable the
-finline-functions, -funswitch-loops and -frename-registers options
(which are basically function inlining, loop unswitching and
register renaming).

-O3 [-O4]: Turns on all optimizations specified by -O2, plus function
inlining, loop unswitching and register renaming.

-Os : Enables all -O2 optimizations that do not typically increase
code size, and also performs further optimizations designed to reduce
code size.

Any help is greatly appreciated.

--
Thanks,
Inder


Re: Related to Optimization Options in GCC

2006-05-24 Thread Paolo Bonzini

Inder wrote:

Hi All,

 From the GCC manual, it's clear that optimization options from -O1 to
-O3 (or any greater level) emphasize performance, while -Os emphasizes
only code size; it (-Os) says nothing about performance (execution
time).

In our case, the size with -Os is smaller than with -O4, which agrees
with the manual, but the performance is also better with -Os. So can we
predict that performance with -Os will always be better than with -O4,
or is it just undefined (can be better or worse)?


This is the case.  Actually, some optimizations at -Os definitely have a
negative effect on performance.


Also, -O3 to -O9 are the same set of options.

Paolo





Re: optimizing calling conventions for function returns

2006-05-24 Thread Etienne Lorrain
> Looking at assembly listings of the Linux kernel I see thousands of
> places where function returns are checked to be non-zero to indicate
> errors. For example something like this:
> 
> mov bx, 0
> .L1
>call foo
>test ax,ax
>jnz .Lerror

 Another calling convention could be to return not only the "return value"
in %eax (or %edx:%eax for long long returns) but also its comparison to
zero in the flags, so that you get:
call foo
jg  .Lwarning
jnz .Lerror

 The test is done in the called function, but it is often done there anyway;
for instance, when another internal function fails there is a chain of
returns and the same %eax value is tested over and over again, in each
function body, followed by a return.

  Etienne.








Re: optimizing calling conventions for function returns

2006-05-24 Thread Andrew Pinski


On May 24, 2006, at 2:54 AM, Etienne Lorrain wrote:
> Another calling convention could be to return not only the "return value"
> in %eax (or %edx:%eax for long long returns) but also its comparison to
> zero in the flags, so that you get:
> call foo
> jg  .Lwarning
> jnz .Lerror


And you think this will help?  It will save at most 1-10 cycles depending
on the processor.  And if you have a call in a hot loop, you are screwed
anyway, because you will have to deal with the overhead of the call.
So it will end up being about even.

-- Pinski   


Re: GCC 4.1.1 RC1

2006-05-24 Thread Martin Michlmayr
* Lars Sonchocky-Helldorf <[EMAIL PROTECTED]> [2006-05-24 01:32]:
> Could you please add http://gcc.gnu.org/ml/gcc-testresults/2006-05/ 
> msg01295.html and http://gcc.gnu.org/ml/gcc-testresults/2006-05/ 
> msg01296.html since I have no wiki account to do this myself.

I've done this now, thanks.  For the future, please note that you can
login to the wiki without a password.
-- 
Martin Michlmayr
[EMAIL PROTECTED]


Re: optimizing calling conventions for function returns

2006-05-24 Thread Etienne Lorrain
--- Andrew Pinski wrote:
> On May 24, 2006, at 2:54 AM, Etienne Lorrain wrote:
> >  Another calling convention could be to return not only the "return
> > value" in %eax (or %edx:%eax for long long returns) but also its
> > comparison to zero in the flags, so that you get:
> > call foo
> > jg  .Lwarning
> > jnz .Lerror
> 
> And you think this will help?  It will at most 1-10 cycles depending
> on the processor.

  The same can be said of passing arguments in registers: at least on ia32,
 passing a parameter in %edx/%eax instead of on the stack saves one or
 two loads, so a few cycles - and by increasing the register pressure you
 often get a slower and (a lot) bigger function (in what I have seen).
 But in some cases, a function attribute placed in the right position
 (usually on very small functions) can help.

> And if you have a call in the hot loop, you are screwed
> anyways because you will have to deal with the overhead of the call.

  I am not sure what you are referring to, but there are plenty of
 places where you are screwed - for instance, the stack readjustment is
 better done by "mov %ebp,%esp" than by "add $16,%esp".

> So it will end up being about even.

  I was thinking of very small functions, on the order of one instruction,
 something like:

asm ("atomic_dec: \n\t lock decl (%eax) \n\t ret");
  extern unsigned atomic_dec (unsigned *counter) __attribute__((return_flags));
  void fct (unsigned *counter) { while (atomic_dec (counter)) wait (); }

  But it was an intermediate solution to the problem of returning two
 values from a function - I am not sure it is worth the time to implement.

  Etienne.







GCC 4.1.1 prerelease ia32 extra assembly instructions (not a regression)

2006-05-24 Thread Etienne Lorrain
  I was just looking again at an assembly file generated by GCC, and noticed
 this pattern that I have already seen before - maybe beginning with 4.0.
 In short:

[EMAIL PROTECTED]:~/projet/gujin$ /home/etienne/projet/toolchain/bin/gcc -v
Using built-in specs.
Target: i686-pc-linux-gnu
Configured with: ../configure --prefix=/home/etienne/projet/toolchain
--enable-languages=c
Thread model: posix
gcc version 4.1.1 20060517 (prerelease)

[EMAIL PROTECTED]:~/projet/gujin$ cat tmp.c
unsigned short nbinch (unsigned short Hsize_cm)
  {
  return (3 * Hsize_cm / 4 + 2) / 3;
  }

[EMAIL PROTECTED]:~/projet/gujin$ /home/etienne/projet/toolchain/bin/gcc tmp.c -S -Os -o tmp.s -fomit-frame-pointer -fverbose-asm -march=i386

[EMAIL PROTECTED]:~/projet/gujin$ cat tmp.s
.file   "tmp.c"
# GNU C version 4.1.1 20060517 (prerelease) (i686-pc-linux-gnu)
#   compiled by GNU C version 4.1.1 20060517 (prerelease).
# GGC heuristics: --param ggc-min-expand=62 --param ggc-min-heapsize=60570
# options passed:  -iprefix -march=i386 -auxbase-strip -Os
# -fomit-frame-pointer -fverbose-asm
# options enabled:  -falign-loops -fargument-alias -fbranch-count-reg
# -fcaller-saves -fcommon -fcprop-registers -fcrossjumping
# -fcse-follow-jumps -fcse-skip-blocks -fdefer-pop
# -fdelete-null-pointer-checks -fearly-inlining
# -feliminate-unused-debug-types -fexpensive-optimizations -ffunction-cse
# -fgcse -fgcse-lm -fguess-branch-probability -fident -fif-conversion
# -fif-conversion2 -finline-functions -finline-functions-called-once
# -fipa-pure-const -fipa-reference -fipa-type-escape -fivopts
# -fkeep-static-consts -fleading-underscore -floop-optimize
# -floop-optimize2 -fmath-errno -fmerge-constants -fomit-frame-pointer
# -foptimize-register-move -foptimize-sibling-calls -fpcc-struct-return
# -fpeephole -fpeephole2 -fregmove -freorder-functions
# -frerun-cse-after-loop -frerun-loop-opt -fsched-interblock -fsched-spec
# -fsched-stalled-insns-dep -fshow-column -fsplit-ivs-in-unroller
# -fstrength-reduce -fstrict-aliasing -fthread-jumps -ftrapping-math
# -ftree-ccp -ftree-copy-prop -ftree-copyrename -ftree-dce
# -ftree-dominator-opts -ftree-dse -ftree-fre -ftree-loop-im
# -ftree-loop-ivcanon -ftree-loop-optimize -ftree-lrs -ftree-salias
# -ftree-sink -ftree-sra -ftree-store-ccp -ftree-store-copy-prop -ftree-ter
# -ftree-vect-loop-version -ftree-vrp -funit-at-a-time -fverbose-asm
# -fzero-initialized-in-bss -m32 -m80387 -m96bit-long-double
# -malign-stringops -mfancy-math-387 -mfp-ret-in-387 -mieee-fp
# -mno-red-zone -mpush-args -mtls-direct-seg-refs

# Compiler executable checksum: b39e0195e7cdfc211bb5a7d4c9f1eb60

.text
.globl nbinch
.type   nbinch, @function
nbinch:
movzwl  4(%esp), %eax   # Hsize_cm, Hsize_cm
leal(%eax,%eax,2), %eax #, tmp63
movl$4, %edx#, tmp67
movl%edx, %ecx  # tmp67,
cltd
idivl   %ecx#
addl$2, %eax#, tmp68
movl$3, %edx#, tmp72
movl%edx, %ecx  # tmp72,
cltd
idivl   %ecx#
movzwl  %ax, %eax   # tmp70, tmp61
ret
.size   nbinch, .-nbinch
.ident  "GCC: (GNU) 4.1.1 20060517 (prerelease)"
.section.note.GNU-stack,"",@progbits

  The extra options "-fomit-frame-pointer -fverbose-asm -march=i386" are just
 there to simplify the output; the temporary variables tmp67 and tmp72 are
 created even without them, and the %edx register is only used once, to
 initialise the %ecx register.
 The %edx register is cleared by the "cltd" instruction because %eax is positive.

 Is there any reason why:
movl$4, %edx#, tmp67
movl%edx, %ecx  # tmp67,
 is not replaced by:
movl$4, %ecx

 and:
movl$3, %edx#, tmp72
movl%edx, %ecx  # tmp72,
 is not replaced by:
movl$3, %ecx

  Etienne.








Re: SVN: Checksum mismatch problem

2006-05-24 Thread Kai Henningsen
[EMAIL PROTECTED] (Russ Allbery)  wrote on 22.05.06 in <[EMAIL PROTECTED]>:

> Bruce Korb <[EMAIL PROTECTED]> writes:
>
> > I do that also, but I am also careful to prune repository
> > directories (CVS, .svn or SCCS even).  I rather doubt it is my RAM,
> > BTW.  Perhaps a disk sector, but I'll never know now.  (Were it RAM,
> > the failure would be random and not just the one file.)  The original
> > data were rm-ed and replaced with a new pull of the Ada code.
>
> Yup, I've seen change of capitalization of a single letter in files due to
> bad disk sectors before, even on relatively modern hardware.  It's a
> single bit error, so it's an explainable failure mode.

And there's enough involved that a diagnosis is almost impossible until
you get a lot more errors.

Memory. Disk. Controller. CPU ...

... or it could just be a lone alpha particle hitting the bit in one of  
those places.

Could also be a software error. A stray pointer in the kernel, being used  
to set a flag, and happening to point into a buffer for that disk block.

Until it gets reproducible - or reproduces itself ... no way to tell.

MfG Kai


Thoughts on gcc maintainership and project merging

2006-05-24 Thread Zdenek Dvorak
Hello,

I have read the mail of Steven Bosscher
(http://gcc.gnu.org/ml/gcc-patches/2006-05/msg00272.html), and while IMO
he takes the situation too personally, he raises several interesting
points.  In particular, some of the statements remind me of my own
experiences with the work on the loop optimizer that I spent the last
few years on.  As you probably noticed, we finally managed to replace
loop.c; however, I feel that if we had not had to rely on the "luck" of
finding a willing reviewer for each patch, this work could have been
finished at least one release earlier, and additionally, the loop
optimizer would be in much better shape.

I have spent some time thinking about this situation, and I would like
to share some thoughts with you.  I am still quite far from concrete
proposals, but I would like to hear your ideas.

Consider a person or a group of persons without write privileges working
on some larger project.  At the moment, the only thing they can do is to
create a development branch, spend some time working there (possibly
posting some progress reports and/or asking other people for suggestions
and help), and finally get their work merged.  At this point, the
problems begin.  There are basically two ways how to get things done:

They may try to get everything merged at once.  This usually does not
work, as nobody is willing to review a nontrivial patch whose size
exceeds a few thousands of lines, especially if it has to touch several
separate parts of the compiler (and there are many other good reasons
for avoiding this).

So, they have to resort to splitting the merge into several parts.  This
however brings more problems.

-- it is quite time consuming.  There are usually some dependences
   between the patches.  It would be nice to be able to submit all the
   patches at once, however, creating and testing the patches that
   depend on other patches is quite nontrivial and with more patches, it
   gets close to impossible.  So one has to submit the patches step by
   step, and wait each time till someone reviews the patch; if one patch
   in long dependency chain gets stuck, it is a disaster.

-- the reviewer lacks the broad picture (does not exactly know what
   other patches depend on this one).  If one of the patches makes a
   change that by itself is not very appealing, it is difficult to find
   a reviewer for it.

-- the reviewer may request changes that are incompatible with the
   followup patches, or with the general idea of the project.  In such
   case, changes have to be made during the merge, which makes it much
   more likely to introduce bugs, and makes the previous testing less
   relevant.

-- the worst-case (that never happened to me, but I felt dangerously
   close to it several times) is that reviewer requests changes which
   are totally incompatible with the rest of the project, or which the
   authors of the project consider wrong, and he does not yield to
   argument (for example, because he believes a completely different
   approach should be taken).  In such case, it is nearly impossible to
   find a different reviewer for the patch and proceed in any way,
   except for yielding yourself and doing something one believes to be
   wrong.

-- the patches usually get reviewed by several reviewers.  Each of them
   has different opinions and the changes requested by them may be
   inconsistent.  Additionally, this is inefficient for the reviewers as
   well -- each of them should understand the project and the
   relationship of the patch to it.  IMO this gets neglected sometimes,
   the patches are considered in isolation and the reviewer simply
   trusts the author that the relationship of the patch to the rest of
   the project is sane (for example, that the implemented algorithms are
   appropriate for the task they are used for).

Obviously, there is no perfect solution for this problem.  However, here
are several ideas for proposals that I believe could help:

1) Every large optimizer or major piece of the compiler should have a
   dedicated maintainer.  At the moment, maintainers are assigned on an
   ad-hoc basis.  Some parts of the compiler have a proper maintainer.
   Some have an "implicit" maintainer (e.g., everyone knows that when
   something breaks in DOM, Jeff is the right person to speak with, but
   it is not written anywhere).  Some parts have an "implicit"
   maintainer that however does not have right to approve the changes to
   the code (e.g., if something breaks in ivopts or # of iterations
   analysis, everyone knows that I am to be blamed :-).  Some parts lack
   the maintainer completely (for example, the loop optimizer as whole
   does not have a dedicated person, and the patches there are approved
   by anyone who just happens to have time).

   Proposal:  Whenever a new pass or a major functionality is added to
  gcc, a maintainer for it must be found.  Preferably the
  author, or in case he for some reason is not considered
  suitable, some other person must be assigned.

Re: RFC cse weirdness

2006-05-24 Thread Bernd Schmidt

Andreas Krebbel wrote:


when cse replaces registers in an insn it tries to avoid calls to
validate_change, which causes trouble in some situations.

From validate_canon_reg:


 /* If replacing pseudo with hard reg or vice versa, ensure the
 insn remains valid.  Likewise if the insn has MATCH_DUPs.  */
  if (insn != 0 && new != 0
  && REG_P (new) && REG_P (*xloc)
  && (((REGNO (new) < FIRST_PSEUDO_REGISTER)
   != (REGNO (*xloc) < FIRST_PSEUDO_REGISTER))
  || GET_MODE (new) != GET_MODE (*xloc)
  || (insn_code = recog_memoized (insn)) < 0
  || insn_data[insn_code].n_dups > 0))
validate_change (insn, xloc, new, 1);
  else
*xloc = new;


Hard to say whether the backend's use of the insn condition is valid or 
not.  The documentation could be clearer.  I'm tempted to say that if 
two operands must be identical at all times, match_dup is the way to 
tell the compiler about it rather than sneaking the requirement in 
behind its back through an insn condition.  But then, this kind of thing 
does seem pervasive in the x86 backend, and the code above is an 
eyesore.  On those grounds, please submit a change to fix both places in 
cse.c.  If we run into further problems of this kind, we'll have to 
rethink whether such patterns are valid.



Bernd


Re: Thoughts on gcc maintainership and project merging

2006-05-24 Thread Andrew Pinski


On May 24, 2006, at 7:47 AM, Zdenek Dvorak wrote:
Obviously, there is no perfect solution for this problem.  However,  
here

are several ideas for proposals that I believe could help:

1) 

   Proposal:  Whenever a new pass or a major functionality is added to
  gcc, a maintainer for it must be found.  Preferably the
  author, or in case he from some reason is not considered
  suitable, some other person must be assigned.


This one seems to be already accepted as the correct practice:
http://gcc.gnu.org/ml/gcc/2006-04/msg00554.html

Thanks,
Andrew Pinski


Re: RFC cse weirdness

2006-05-24 Thread Andreas Krebbel
Hi,

> On those grounds, please submit a change to fix both places in 
> cse.c.  If we run into further problems of this kind, we'll have to 
> rethink whether such patterns are valid.

Ok. I'll do so as soon as possible. Unfortunately the trivial fix of
just removing the else branch doesn't work. There are callers of
canon_reg which seem to expect that the issue is solved without 
validate_change like:

cse.c:4927  src_eqv = fold_rtx (canon_reg (XEXP (tem, 0), NULL_RTX), insn);

Since fold_rtx uses validate_change without setting in_group, we get an
assertion failure if there is already something in the change group.

I hope that can easily be fixed by just calling apply_change_group after calling
canon_reg.

Bye,

-Andreas-


Great increase in memory usage by tree-SSA optimizers

2006-05-24 Thread Andrew Haley
Recently (I can't tell when this changed, exactly, but it's within the
last few weeks) I've been unable to compile a big Java program because
my computer runs out of memory.  gcj version 4.1 compiles this program
correctly, although it uses about a gigabyte of RAM.  gcj version 4.2
can't do it even with more than 2 gigabytes.

The explosion in memory usage seems to be in PRE.

This program is the testcase for
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27754.

I'd be grateful if someone who understands the tree-SSA optimizers
could have a look and find out why there has been this severe
regression in compile-time memory usage.

This requires a gcc later than revision 114042, which includes my
patch for PR 27754.

Thanks,
Andrew.


Use a STATEMENT_LIST rather than a COMPOUND_EXPR

2006-05-24 Thread Andrew Haley
With recent gcc we're blowing up in unshare because the use of
COMPOUND_EXPRs in Java leads to very deep recursion.  

The easiest thing seems to be to use a STATEMENT_LIST rather than a
COMPOUND_EXPR.

Andrew.


2006-05-24  Andrew Haley  <[EMAIL PROTECTED]>

* decl.c (java_add_stmt): Use a STATEMENT_LIST rather than a
COMPOUND_EXPR.

Index: gcc/java/decl.c
===
*** gcc/java/decl.c (revision 113722)
--- gcc/java/decl.c (working copy)
***
*** 49,52 
--- 49,53 
  #include "target.h"
  #include "version.h"
+ #include "tree-iterator.h"
  
  #if defined (DEBUG_JAVA_BINDING_LEVELS)
***
*** 2237,2252 
  }
  
! /* Add a statement to the compound_expr currently being
!constructed.  */
  
  tree
! java_add_stmt (tree stmt)
  {
if (input_filename)
! SET_EXPR_LOCATION (stmt, input_location);

!   return current_binding_level->stmts 
! = add_stmt_to_compound (current_binding_level->stmts, 
!   TREE_TYPE (stmt), stmt);
  }
  
--- 2238,2271 
  }
  
! /* Add a statement to the statement_list currently being constructed.
!If the statement_list is null, we don't create a singleton list.
!This is necessary because poplevel() assumes that adding a
!statement to a null statement_list returns the statement.  */
  
  tree
! java_add_stmt (tree new_stmt)
  {
+   tree stmts = current_binding_level->stmts;
+   tree_stmt_iterator i;
+ 
if (input_filename)
! SET_EXPR_LOCATION (new_stmt, input_location);

!   if (stmts == NULL)
! return current_binding_level->stmts = new_stmt;
! 
!   /* Force STMTS to be a statement_list.  */
!   if (TREE_CODE (stmts) != STATEMENT_LIST)
! {
!   tree t = make_node (STATEMENT_LIST);
!   i = tsi_last (t);
!   tsi_link_after (&i, stmts, TSI_CONTINUE_LINKING);
!   stmts = t;
! }  
!   
!   i = tsi_last (stmts);
!   tsi_link_after (&i, new_stmt, TSI_CONTINUE_LINKING);
! 
!   return current_binding_level->stmts = stmts;
  }
  


Re: optimizing calling conventions for function returns

2006-05-24 Thread Jon Smirl

On 5/23/06, Paul Brook <[EMAIL PROTECTED]> wrote:
> > Has work been done to evaluate a calling convention that takes error
> > checks like this into account? Are there size/performance wins? Or am
> > I just reinventing a variation on exception handling?
>
> This introduces an extra stack push and will confuse a call-stack branch
> predictor. If both the call stack and the test are normally predicted
> correctly I'd guess this would be a performance loss on modern cpus.


I just finished writing a bunch of test cases to explore the idea. My
conclusion is that if error returns are very infrequent (<<1%) then
this is a win, but if there are a significant number of error returns
it is a major loss.

These two instructions on the error return path are the killer:
addl$4, %esp
ret /* Return to error return */

Apparently the CPU has zero expectation that the address being jumped
to is code. In the calling routine I pushed the error return as data.

pushl   $.L11   /* push return address */

So for the non-error path there is a win by removing the error
test/jmp on the function return. But taking the error path is very
expensive.

I'm experimenting with 50-line assembly programs on a P4, and I do
wonder whether these micro results would apply in a macro program. My
test is losing because the return destination had been predicted and
the introduction of the addl messed up the prediction. But in a large
program with many levels of calls, would the return always be predicted
on the error path?


--
Jon Smirl
[EMAIL PROTECTED]


Re: Thoughts on gcc maintainership and project merging

2006-05-24 Thread Zdenek Dvorak
Hello,

> On May 24, 2006, at 7:47 AM, Zdenek Dvorak wrote:
> >Obviously, there is no perfect solution for this problem.  However,  
> >here
> >are several ideas for proposals that I believe could help:
> >
> >1) 
> >
> >   Proposal:  Whenever a new pass or a major functionality is added to
> >   gcc, a maintainer for it must be found.  Preferably the
> >   author, or in case he from some reason is not considered
> >   suitable, some other person must be assigned.
> 
> This one seems to be already accpeted as the correct practice
> http://gcc.gnu.org/ml/gcc/2006-04/msg00554.html

however, it seems not to work yet; who is the maintainer for see.c, for
example?

Zdenek


Bootstrap broken on trunk

2006-05-24 Thread Thomas Koenig
Hello world,

bootstrap appears to be totally broken on i686-pc-linux-gnu (see PR 27763).
Can somebody have a look?

Thomas


GCC 2006 Summer of Code accepted projects

2006-05-24 Thread lopezibanez

Hi,

I guess everybody is very busy. However, it would be nice to set up a
page in the GCC Wiki with the list of projects accepted for this year's
SoC and some links. If someone has this information, I would volunteer
to "wikify" it.

Cheers,
Manuel.

PS: yeah, I am also an interested party ;-) I would do it, though, even if
I am not accepted. Come on! It would take just 20 minutes...



use of changes in STL library

2006-05-24 Thread Marek Zuk
We are students at Warsaw University of Technology, in our final year.

We've just started working on our final project at our university.
We'd like to develop the STL library and enhance it with some features.

We checked out the GCC sources using the svn command:
svn -q checkout svn://gcc.gnu.org/svn/gcc/trunk gcc

Now, we'd like to modify the sources that we downloaded and test the
changes in our program. In other words, we don't want to recompile the
standard library that is installed on our computer; we'd like to make
some changes in the downloaded tree and check whether our programs work
with the changes.


Could you tell us what command we should use?
We'd like to emphasize that we don't want to recompile the whole of gcc
on our computer; we just want to make use of the changes we made in the
tree.


Thank you very much for any help,

Marek Zuk
Maciej Lozinski
Piotr Borkowski

Warsaw University of Technology,
Faculty of Mathematics & Computer Science



Re: use of changes in STL library

2006-05-24 Thread Paul Brook
> Now, we'd like to modify the sources that we downloaded and test the
> changes in our program. In other words, we don't want to recompile the
> standard library that is installed on our computer; we'd like to make
> some changes in the downloaded tree and check whether our programs work
> with the changes.
>
> Could you tell us what command we should use?
> We'd like to emphasize that we don't want to recompile the whole of gcc
> on our computer; we just want to make use of the changes we made in the
> tree.

Short answer is you can't. The gcc build system doesn't support building just 
the target libraries. You're going to have to build the whole thing.
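For completeness, a typical from-scratch build looks roughly like this (the prefix, language list and directory names are only examples; expect it to take a while):

```shell
# Out-of-tree build of the checked-out gcc sources.
mkdir objdir && cd objdir
../gcc/configure --prefix=$HOME/gcc-install --enable-languages=c,c++
make bootstrap
make install
# Then compile test programs against the modified libstdc++ with:
#   $HOME/gcc-install/bin/g++ myprog.cc
```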

Paul


Re: Wrong link?

2006-05-24 Thread Bill Gatliff

Dave:



> Gerald, you've jumped to a false conclusion there; "was hijacked" should
> read "has bitrotted".
>
> "Hijacked" is a pejorative term, and also historically and factually
> inaccurate.  Objsw.com maintained the FAQ initially, but some time ago (around
> 2001 off the top of my head) it became clear that it had fallen into
> disrepair, and Bill Gatliff, who was then and is now an active and valuable
> contributing member of the crossgcc community, volunteered to take it over.
> He then actively maintained it for several years and it was only when his
> website got hacked and wiped out sometime last year that the link became out
> of date.  He has been slow in getting his website rebuilt and hasn't put the
> FAQ back up yet; which is why I've Cc'd him in on this thread.

Indeed, "bitrotted" is in fact a better description of what is happening.

> Bill, you need to make your intentions clear as to whether you are able and
> willing to resume your maintenance duties.  Are you going to get the crossgcc
> FAQ back up there?  If not, probably the best thing to do would be to replace
> the paragraph with a reference to the mailing list ([EMAIL PROTECTED]) and
> to Dan Kegel's crosstool and the related website.

Thanks for the kind words, Dave.  I am still quite committed to the
crossgcc community, but I'm doing a lot of work behind the scenes as of
late.


It's ironic that the security breach came through the Wiki software I 
had set up as a supplement to the FAQ.  A wiki that _nobody_ seemed to 
pay any attention to.  Ever.  Even when it was clear that many of the 
information needs of the crossgcc community were not being well met by a 
FAQ-type document.  Even when I had posted tutorials and detailed build 
procedures in the Wiki, which were really too detailed for a FAQ.


I don't think that a blanket link to crosstool is what is needed, 
because there is a lot of information that crossgcc'ers need that 
crosstool doesn't address, for example how to integrate newlib into an 
embedded system.  Crosstool doesn't even do newlib, in fact.


I'm happy to resume hosting the crossgcc document, but I don't have the 
time at the moment to give it a major overhaul--- which is what it 
needs.  And I hesitate to restore a document that is out of date.  And I 
still think a Wiki is the way to go, and I'm willing to donate a 
dedicated machine and a more secure Wiki installation towards that 
goal.  But since nobody contributed before, I don't have any reason to 
believe anyone will contribute now.  Which makes me wonder if anyone is 
using it, and I don't have the time to maintain a document that nobody 
reads.  We couldn't even get anyone to change the URL in the mailing 
list to point to the right place.


To summarize, I'm happy to re-post the FAQ but it is out of date and has 
been for some time.  It needs someone with the interest and time to 
update it.  Furthermore, I'm willing to donate resources to provide a 
Wiki, which I think is a better way to provide the information that 
people might be looking for.  But in both cases only if someone will 
actually use it.   Suggestions welcome.


At any rate, I would prefer the term "hijacked" not be used, since it is 
historically and factually inaccurate.




b.g.

--
Bill Gatliff
[EMAIL PROTECTED]



Re: Wrong link?

2006-05-24 Thread Joe Buck
On Wed, May 24, 2006 at 05:17:03PM -0500, Bill Gatliff wrote:
> ...  I am still quite committed to the 
> crossgcc community, but I'm doing a lot of work behind the scenes as of 
> late

> I'm happy to resume hosting the crossgcc document, but I don't have the 
> time at the moment to give it a major overhaul--- which is what it 
> needs.  And I hesitate to restore a document that is out of date.  And I 
> still think a Wiki is the way to go, and I'm willing to donate a 
> dedicated machine and a more secure Wiki installation towards that 
> goal. 

Thanks for the offer.

But the GCC project already has a Wiki, at

http://gcc.gnu.org/wiki

which is actively maintained by the developers.  I think it would be 
best to use that wiki; we'd have better odds that active developers
would keep it current if it were in the wiki they use.


Re: Wrong link?

2006-05-24 Thread Bill Gatliff

Joe et al:



> But the GCC project already has a Wiki, at
>
> http://gcc.gnu.org/wiki
>
> which is actively maintained by the developers.  I think it would be
> best to use that wiki; we'd have better odds that active developers
> would keep it current if it were in the wiki they use.


I completely agree.

I recommend that we dispense with the FAQ altogether and put what we 
know into the gcc wiki.  The closer we work with the gcc team, the more 
likely it is that they will continue to support us.



b.g.

--
Bill Gatliff
[EMAIL PROTECTED]