expand_omp_parallel typo?

2006-10-17 Thread Marcin Dalecki
Looking at the function expand_omp_parallel in omp-low.c I found the following line of code:


  bsi_insert_after (&si, t, TSI_SAME_STMT);

Shouldn't this be

  bsi_insert_after (&si, t, BSI_SAME_STMT);

instead?

Marcin Dalecki




TARGET_SCHED_PROLOG defined twice

2006-10-18 Thread Marcin Dalecki
Looking at rs6000.opt I found that the variable behind the above command-line switch is defined TWICE:


msched-prolog
Target Report Var(TARGET_SCHED_PROLOG) Init(1)
Schedule the start and end of the procedure

msched-epilog
Target Undocumented Var(TARGET_SCHED_PROLOG) VarExists

This appears of course to be wrong.

Marcin Dalecki




Re: TARGET_SCHED_PROLOG defined twice

2006-10-18 Thread Marcin Dalecki


On 2006-10-18, at 12:15, Steven Bosscher wrote:


On 10/18/06, Marcin Dalecki <[EMAIL PROTECTED]> wrote:

Looking at rs6000.opt I have found that the above command line switch
variable is defined TWICE:

msched-prolog
Target Report Var(TARGET_SCHED_PROLOG) Init(1)
Schedule the start and end of the procedure

msched-epilog
Target Undocumented Var(TARGET_SCHED_PROLOG) VarExists

This appears of course to be wrong.


The latter probably ought to be TARGET_SCHED_EPILOG, if that exists, eh?


Actually the second of the two is marked as an Undocumented, hidden, silent option.
Most likely it was used during compiler back-end development for optimization testing.
The best solution appears to be simply deleting it, since that would give msched-prolog
its proper meaning; as it stands, the first option may well alias the second due to
different variable preinitialization and link order.


Apparently we also don't have test cases to actually verify that the
proper forms of these options are accepted and have the desired
effect...


Marcin Dalecki




Re: build failure, GMP not available

2006-10-30 Thread Marcin Dalecki


On 2006-10-30, at 21:37, Daniel Berlin wrote:

Honestly, I don't know any mac people who *don't* use either fink or
macports to install unix software when possible, because pretty much
everything has required some small patch or another.


I guess you are joking?

Marcin Dalecki




Re: build failure, GMP not available

2006-10-31 Thread Marcin Dalecki


On 2006-10-31, at 01:59, Daniel Berlin wrote:


On 10/30/06, Marcin Dalecki <[EMAIL PROTECTED]> wrote:


On 2006-10-30, at 21:37, Daniel Berlin wrote:
> Honestly, I don't know any mac people who *don't* use either fink or
> macports to install unix software when possible, because pretty much
> everything has required some small patch or another.

I guess you are joking?


I guess you think the vast majority of mac users who install unix
software do it manually?


This is not related to your first statement at all. I just wanted to make you
aware that there are actually people out there who don't use fink or macports
at all. They tend not to want to get into the hell of apt-get or yum cycles
starting every working day. They tend to like having a reproducible, dependable,
*stable* working environment, not one infected with unstable, buggy software,
which can be used as a shipment basis for customers too.


do you also think more than 1% of people still use lynx? (This is a
trick question, I know the actual stats for the top 10 websites :P)


I don't care. This question is not related at all to the apparent instability,
and thus low quality, of GMP/MPFR.

Marcin Dalecki




Re: Threading the compiler

2006-11-10 Thread Marcin Dalecki


On 2006-11-10, at 21:46, H. J. Lu wrote:


On Fri, Nov 10, 2006 at 12:38:07PM -0800, Mike Stump wrote:

How many hunks do we need, well, today I want 8 for 4.2 and 16 for
mainline, each release, just 2x more.  I'm assuming nice, equal sized
hunks.  For larger variations in hunk size, I'd need even more hunks.

Or, so that is just an off the cuff proposal to get the discussion
started.

Thoughts?


Will using C++ help or hurt compiler parallelism? Does it really matter?


It should be helpful, because it seriously helps in keeping the semantic scope
of data items at bay.

Marcin Dalecki




Re: Threading the compiler

2006-11-10 Thread Marcin Dalecki


On 2006-11-10, at 22:33, Sohail Somani wrote:


On Fri, 2006-11-10 at 12:46 -0800, H. J. Lu wrote:

On Fri, Nov 10, 2006 at 12:38:07PM -0800, Mike Stump wrote:

How many hunks do we need, well, today I want 8 for 4.2 and 16 for
mainline, each release, just 2x more.  I'm assuming nice, equal sized
hunks.  For larger variations in hunk size, I'd need even more hunks.

Or, so that is just an off the cuff proposal to get the discussion
started.

Thoughts?


Will using C++ help or hurt compiler parallelism? Does it really matter?


My 2c.

I don't think it can possibly hurt as long as people follow normal C++
coding rules.


Contrary to C, there is no single general coding style for C++. In fact, for a
project of this scale that may indeed be the most significant deployment
problem for C++.



Lots of threads communicating a lot would be bad.


This simply isn't true. The compiler would be fine having many threads handling
a lot of data between them in a pipelined way. In fact it already does just
that, however without using the opportunity for parallel execution.
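
A minimal sketch (mine, not from the thread) of such a pipelined arrangement in
plain C with POSIX threads: one producer stage hands work items to one consumer
stage through a bounded queue. All names are made up, and it needs -lpthread to
link.

#include <pthread.h>
#include <stdio.h>

#define QUEUE_SIZE 16
#define N_ITEMS    100

static int queue[QUEUE_SIZE];
static int head, tail, count;
static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;

/* Hand one work item from one pipeline stage to the next. */
static void put(int item)
{
    pthread_mutex_lock(&lock);
    while (count == QUEUE_SIZE)
        pthread_cond_wait(&not_full, &lock);
    queue[tail] = item;
    tail = (tail + 1) % QUEUE_SIZE;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

static int get(void)
{
    int item;
    pthread_mutex_lock(&lock);
    while (count == 0)
        pthread_cond_wait(&not_empty, &lock);
    item = queue[head];
    head = (head + 1) % QUEUE_SIZE;
    count--;
    pthread_cond_signal(&not_full);
    pthread_mutex_unlock(&lock);
    return item;
}

/* "Front end" stage: produces work items. */
static void *front_end(void *arg)
{
    int i;
    (void) arg;
    for (i = 0; i < N_ITEMS; i++)
        put(i);
    put(-1);                        /* end-of-stream marker */
    return NULL;
}

/* "Back end" stage: consumes and processes them concurrently. */
static void *back_end(void *arg)
{
    int item;
    long sum = 0;
    (void) arg;
    for (item = get(); item != -1; item = get())
        sum += item;
    printf("processed sum = %ld\n", sum);
    return NULL;
}

int main(void)
{
    pthread_t fe, be;
    pthread_create(&fe, NULL, front_end, NULL);
    pthread_create(&be, NULL, back_end, NULL);
    pthread_join(fe, NULL);
    pthread_join(be, NULL);
    return 0;
}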

Marcin Dalecki




Re: Threading the compiler

2006-11-10 Thread Marcin Dalecki


On 2006-11-11, at 06:08, Geert Bosch wrote:


Just compiling
  int main() { puts ("Hello, world!"); return 0; }
takes 342 system calls on my Linux box, most of them
related to creating processes, repeated dynamic linking,
and other initialization stuff, and reading and writing
temporary files for communication.


And 80% of it comes from the severe overuse of the notion of shared libraries
on Linux systems.


Marcin Dalecki




Re: Question on BOOT_CFLAGS vs. CFLAGS

2006-12-15 Thread Marcin Dalecki


On 2006-12-15, at 18:27, Mike Stump wrote:


On Dec 15, 2006, at 1:56 AM, Andrew Pinski wrote:
For BOOT_CFLAGS and STAGE1_CFLAGS, if we change them to be affected by
CFLAGS, we are going to run into issues where the compiler you are
building with understands an option but the bootstrapping one does not.
An example of this is building GCC with a non-GCC compiler.  So how do
we handle that case?  We split out STAGE1_CFLAGS and BOOT_CFLAGS.


This proves the necessity of two different controls, namely BOOT_CFLAGS and
STAGE1_CFLAGS.

For consistency I would propose using the name STAGE0_CFLAGS instead of
BOOT_CFLAGS, because the stages are actually enumerated in sequence anyway.


Marcin Dalecki




Re: GCC optimizes integer overflow: bug or feature? (was: avoid integer overflow in mktime.m4)

2006-12-20 Thread Marcin Dalecki


On 2006-12-20, at 00:10, Richard B. Kreckel wrote:



C89 did not refer to IEEE 754 / IEC 60559. Yet, as far as I am aware,
-ffast-math or the implied optimizations have never been turned on by GCC
unless explicitly requested. That was a wise decision.

By the same token it would be wise to refrain from turning on any
optimization that breaks programs which depend on wrapping signed
integers.


Numerical stability of incomplete floating-point representations is an entirely
different problem category than some simple integer tricks. In the first case
the difficulties are inherent in the incomplete representation of the
calculation domain. In the second case it's just some peculiarities of the
underlying machine, as well as the fact that the unsigned qualifier is not used
nearly frequently enough in common code. Or in other words: Z/2^32 resp. Z/2^64
IS AN ALGEBRA, but float isn't! Thus this argument by analogy simply isn't
valid.
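
A minimal C sketch (mine, not from the thread) of the wrapping-signed-integer
point under discussion: signed overflow is undefined in C, so an optimizer may
assume x + 1 < x is never true for signed int, while unsigned arithmetic is
defined to wrap; -fwrapv restores wrapping semantics for signed types.

#include <limits.h>
#include <stdio.h>

/* An overflow check written in the "wrapping" style.  For signed int this is
   undefined behaviour at INT_MAX and may be folded to 0 by an optimizing
   compiler; for unsigned it is well defined and really wraps. */
static int wraps_signed(int x)        { return x + 1 < x; }
static int wraps_unsigned(unsigned x) { return x + 1u < x; }

int main(void)
{
    /* The first line may print 0 or 1 depending on compiler, options and
       optimization level (-fwrapv forces wrapping); the second prints 1. */
    printf("signed:   %d\n", wraps_signed(INT_MAX));
    printf("unsigned: %d\n", wraps_unsigned(UINT_MAX));
    return 0;
}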

Marcin Dalecki




Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Marcin Dalecki


On 2006-12-20, at 22:48, Richard B. Kreckel wrote:

2) Signed types are not an algebra, they are not even a ring, at least when
their elements are interpreted in the canonical way as integer numbers. (Heck,
what are they?)


You are apparently using a different definition of an algebra or ring than the
common one.


Integral types are an incomplete representation of the calculation domain,
which is the natural numbers.


This is an arbitrary assumption. In fact most people are simply well aware of
the fact that computers don't do infinite arithmetic. You are apparently
confusing natural numbers, which don't include negatives, with integers.
However, it's quite a common mistake to forget how "bad" floats "model" real
numbers.


This corroborates the validity of the analogy with IEEE real arithmetic.


And wrong assumptions lead to wrong conclusions.

Marcin Dalecki




Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Marcin Dalecki
But the same applies to floating point numbers. There, the situation is even
better, because nowadays I can rely on a float or double being the
representation defined in IEEE 754 because there is such overwhelming hardware
support.


You'd better not. Really! Please just realize, for example, the impact of the
(in)famous 80-bit internal (over)precision of a very common IEEE 754
implementation...

#include <stdio.h>

volatile float b = 1.;

int main(void)
{
    if (1. / 3. == b / 3.) {
        printf("HALLO!\n");
    } else {
        printf("SURPRISE SURPRISE!\n");
    }
    return 0;
}

or just skim through http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323

However it's quite a common mistake to forget how "bad" floats "model" real
numbers.


And it's quite a common mistake to forget how "bad" finite ints "model" integer
numbers.


No it isn't. Most people don't think in terms of infinite arithmetic when
programming. And I hold that the difference between finite and infinite is
actually quite a fundamental concept. However, quite a lot of people expect
floating-point rounding to give them well-behaved results.

Marcin Dalecki




Re: GCC optimizes integer overflow: bug or feature?

2006-12-21 Thread Marcin Dalecki


On 2006-12-21, at 22:19, David Nicol wrote:



It has always seemed to me that floating point comparison could be standardized
to regularize the exponent and ignore the least significant few bits and doing
so would save a lot of headaches.


Well, actually it wouldn't "save the world". However, adding an op-code
implementing x eqeps y <=> |x - y| < epsilon would indeed be helpful. Maybe
some m-f has already patented it, and that's the reason we don't see it done
already. But that's of course only me speculating.
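
For illustration, a minimal C sketch (mine, not from the mail) of the proposed
operation done in software; the name eqeps and the epsilon value are made up:

#include <math.h>
#include <stdio.h>

/* eqeps: the |x - y| < epsilon test discussed above, as a plain function
   rather than a dedicated instruction. */
static int eqeps(double x, double y, double epsilon)
{
    return fabs(x - y) < epsilon;
}

int main(void)
{
    printf("%d\n", eqeps(0.1 + 0.2, 0.3, 1e-9));    /* prints 1 */
    return 0;
}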


Would it really save the headaches or would it just make the cases where
absolute comparisons of fp results break less often, making the error more
intermittent and thereby worse?  Could a compiler switch be added that would
alter fp equality?


However, in numerical computation there isn't really a silver bullet to be
found. If you are serious about it, you simply do the hard work, which is the
numerical analysis of your algorithms with respect to computational stability.
That's the real effort, and perhaps the reason why the peculiarities of the FP
implementations are not perceived as problematic.

I have argued for "precision" to be included in numeric types in other fora and
have been stunned that all except people with a background in Chemistry find
the suggestion bizarre and unnecessary; I realize that GCC is not really a good
place to try to shift norms; but on the other hand if a patch was to be
prepared that would add a command-line switch (perhaps -sloppy-fpe and
-no-sloppy-fpe) that would govern wrapping ((fptype) == (fptype)) with
something that threw away the least sig. GCC_SLOPPY_FPE_SLOP_SIZE bits in the
mantissa, would it get accepted or considered silly?


No, that's not sufficient. And a few bits of precision are really not the
central point of the numerical stability of the overall calculation. Please
look up, as an example, the numerical phenomenon usually called "cancellation"
to see immediately why.

Marcin Dalecki




Re: GCC optimizes integer overflow: bug or feature?

2006-12-21 Thread Marcin Dalecki


On 2006-12-21, at 23:17, Robert Dewar wrote:



> Marcin Dalecki:

Well, actually it wouldn't "save the world". However, adding an op-code
implementing x eqeps y <=> |x - y| < epsilon would indeed be helpful. Maybe
some m-f has already patented it, and that's the reason we don't see it done
already. But that's of course only me speculating.


There would be nothing wrong in having such an operation, but
it is not equality!


Of course I didn't think about a substitute for ==. Not! However, I think that
checks for |x - y| < epsilon could really be given a significant speed edge if
done in a single go in hardware.



Re: GCC optimizes integer overflow: bug or feature?

2006-12-21 Thread Marcin Dalecki


On 2006-12-21, at 23:42, Robert Dewar wrote:


Marcin Dalecki wrote:

Of course I didn't think about a substitute for ==. Not! However, I think that
checks for |x - y| < epsilon could really be given a significant speed edge if
done in a single go in hardware.


One thing to ponder here is that "thinks" like this are what lead
to CISC instruction messes. It just seems obvious to people that
it will help efficiency to have highly specialized instructions,
but in practice they gum up the architecture with junk, and a
careful analysis shows that the actual gain is small at best.
How many applications spend a significant amount of time doing
such an epsilon test -- best guess: none.


Well, actually you are reading too much weight into my speculations. I was just
curious whether similar analysis has already been done seriously.

And after all, the most commonly used instruction set architectures for
numerical calculations are not exactly what one would call "lean and mean"...

People simply use what's already there and what is cheap. Or could you imagine
something uglier than the stacked/MMX/XMM/SSE4 mess? After all, even the
supposedly competent instruction set designers admitted their previous fallacy
by the introduction of SSE2.

Marcin Dalecki




Suspicious expand_complex_division() in tree-complex.c

2007-01-04 Thread Marcin Dalecki

Peeking at the implementation of the expand_complex_division() function inside
tree-complex.c, I have compared the conditions for satisfying -fcx-limited-range
and flag_isoc99, and thus I have some questions about the logic of it.
In particular the following lines (starting at line 1222 inside the file):

case PAIR (ONLY_REAL, VARYING):
case PAIR (ONLY_IMAG, VARYING):
case PAIR (VARYING, VARYING):
  switch (flag_complex_method)
{
case 0:
==
This is the "quick and don't care about overflow case". This seems  
just fine.


  /* straightforward implementation of complex divide acceptable.  */
  expand_complex_div_straight (bsi, inner_type, ar, ai, br, bi, code);
  break;

case 2:
==
This is supposed to be the ISO C99 conforming full overflow robust case.
Fine.
  if (SCALAR_FLOAT_TYPE_P (inner_type))
{
  expand_complex_libcall (bsi, ar, ai, br, bi, code);
==
However, here the break causes update_complex_assignment() to be called TWICE
IN A ROW. I guess it should be a return statement instead, much like in the
case of multiplication by an intrinsic function. That's suspicious.

  break;
}
  /* FALLTHRU */
==
The intrinsic function got expanded...

case 1:
==
And now we are expanding the supposedly somewhat less precise version of the
division directly behind it? This doesn't seem to be the same kind of "overflow
robustness logic" as in the case of multiplication.

  /* wide ranges of inputs must work for complex divide.  */
  expand_complex_div_wide (bsi, inner_type, ar, ai, br, bi, code);
  break;

default:
  gcc_unreachable ();
}
  return;

default:
  gcc_unreachable ();

Thus my questions, to whoever has looked at this code closely enough, are:

1. Is the FALLTHRU really OK?
2. Shouldn't the logic that decides which kind of implementation to take look a
bit more like in the multiplication case?

Marcin Dalecki




Re: Suspicious expand_complex_division() in tree-complex.c

2007-01-04 Thread Marcin Dalecki


On 2007-01-05, at 08:36, Andrew Pinski wrote:




Thus my question to whoever looked at this code close enough is:

1. Is the FALLTHRU really OK?


yes, see above. Plus the fall through is only for non fp types.


Yes, I see now... quite a complicated way to express the choice logic (a small
stand-alone sketch of this dispatch follows below):

1. If -fcx-limited-range is set, go straight for the quick, overflowing version.
2. Be strict in the case of ISO C99.
3. Otherwise handle floating-point divisions more precisely than
multiplications.
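
A minimal stand-alone sketch (mine, not GCC source) of that choice logic; the
helper names are hypothetical stand-ins for GCC's expand_complex_div_straight,
expand_complex_libcall and expand_complex_div_wide:

#include <stdio.h>

static void div_straight(void) { puts("straight formula (may overflow)"); }
static void div_libcall(void)  { puts("C99 runtime libcall"); }
static void div_wide(void)     { puts("wide-range expansion"); }

static void expand_division(int method, int scalar_float_type)
{
    switch (method)
    {
    case 0:                      /* -fcx-limited-range */
        div_straight();
        break;
    case 2:                      /* ISO C99 conforming */
        if (scalar_float_type)
        {
            div_libcall();
            break;               /* the mail argues the real code should
                                    return here instead of breaking */
        }
        /* FALLTHRU for non-float types */
    case 1:
        div_wide();
        break;
    }
}

int main(void)
{
    expand_division(0, 1);
    expand_division(2, 1);
    expand_division(2, 0);       /* falls through to the wide expansion */
    expand_division(1, 1);
    return 0;
}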



2. Shouldn't the logic that decides which kind of implementation to take look a
bit more like in the multiplication case?


It is the same in general anyways but division is the one that is
different between flag_complex_method==0 and flag_complex_method==1.


OK. This confirms that the three-state flag_complex_method could be eliminated,
and fcx_limited_range used directly instead. (My final goal is of course
something in the way of #pragma STDC CX_LIMITED_RANGE...)


Marcin Dalecki




Re: raising minimum version of Flex

2007-01-22 Thread Marcin Dalecki


On 2007-01-22, at 06:49, Ben Elliston wrote:


I think it's worth raising the minimum required version from 2.5.4 to 2.5.31.
The latter version was released in March, 2003, so it is hardly bleeding edge.


Your definition of bleeding edge doesn't fit mine:

$ flex --version
flex version 2.5.4
$ uname -a
Darwin xxx 8.8.0 Darwin Kernel Version 8.8.0: Fri Sep  8 17:18:57 PDT  
2006; root:xnu-792.12.6.obj~1/RELEASE_PPC Power Macintosh powerpc


∎ Marcin Dalecki ∎




Re: [RFC] Our release cycles are getting longer

2007-01-23 Thread Marcin Dalecki


On 2007-01-23, at 23:54, Diego Novillo wrote:



So, I was doing some archeology on past releases and we seem to be getting into
longer release cycles.  With 4.2 we have already crossed the 1 year barrier.

For 4.3 we have already added quite a bit of infrastructure that is all good on
paper but still needs some amount of TLC.

There was some discussion on IRC that I would like to move to the mailing list
so that we get a wider discussion.  There's been thoughts about skipping 4.2
completely, or going to an extended Stage 3, etc.

Thoughts?


Just forget Ada and Java in the mainstream. Both of them are seriously impeding
casual contributions. The build setup through autoconf/automake/autogen/m4/...
has problems in this area as well.


Re: [RFC] Our release cycles are getting longer

2007-01-23 Thread Marcin Dalecki


On 2007-01-24, at 01:48, David Daney wrote:

I missed the discussion on IRC, but neither of those front-ends are release
blockers.

I cannot speak for ADA, but I am not aware that the Java front-end has caused
any release delays recently.  I am sure you will correct me if I have missed
something.


What's blocking is not the formal process per se, but the technical side of
things. And from a technical point of view, both seriously add impedance to the
overall package.


Re: [RFC] Our release cycles are getting longer

2007-01-23 Thread Marcin Dalecki


On 2007-01-24, at 02:30, David Carlton wrote:


For 4, you should probably spend some time figuring out why bugs are
being introduced into the code in the first place.  Is test coverage
not good enough?


It's "too good" to be usable. The time required for a full test suite
run can be measured by days not hours. The main reason is plain and
simple the use of an inadequate build infrastructure and not the pure
size of code compiled for coverage. Those things get completely  
ridiculous

for cross build targets.


If so, why - do people not write enough tests, is it
hard to write good enough tests, something else?  Is the review
process inadequate?  If so, why: are rules insufficiently stringent,
are reviewers sloppy, are there not enough reviewers, are patches too
hard to review?

My guess is that most or all of those are factors, but some are more
important than others.


No. The problems are entirely technical in nature. It's not purely a human
resources management issue.


My favorite tactic to decrease the number of
bugs is to set up a unit test framework for your code base (so you can
test changes to individual functions without having to run the whole
compiler), and to strongly encourage patches to be accompanied by unit
tests.


That's basically a pipe dream with the auto*-based build system. It's not even
trivial to identify dead code... A change trivial by nature, like the top-level
build of libgcc, actually took years to come by.


Re: [RFC] Our release cycles are getting longer

2007-01-23 Thread Marcin Dalecki


On 2007-01-24, at 04:32, Andrew Pinski wrote:



It's "too good" to be usable. The time required for a full test suite
run can be measured by days not hours.


Days, only for slow machines.  For our PS3 toolchain (which is really two
separate compilers), it takes 6 hours to run the testsuite, this is doing one
target with -fPIC.  So I don't see how you can say it takes days.


Quantitatively:

gcc/testsuite dalecki$ find ./ -name "*.[ch]" | wc
    6644    6644  213514
ejh216:~/gcc-osx/gcc/testsuite dalecki$ find ./ -name "*.[ch]" -exec cat {} \; | wc
  254741 1072431 6891636

That's just about a quarter million lines of code to process, and you think the
infrastructure around it isn't crap on the order of 100? Well... since one "can
drive a horse dead only once", the whole argument could actually stop here.



No, not really, it took me a day max to get a spu-elf cross compile building
and running with newlib and all.


Building and running fine, but testing??? And I guess of course that it wasn't
a true cross, since the SPUs are actually integrated into the same OS image as
the main CPU for that particular target.


My favorite tactic to decrease the number of bugs is to set up a unit test
framework for your code base (so you can test changes to individual functions
without having to run the whole compiler), and to strongly encourage patches to
be accompanied by unit tests.


That's basically a pipe dream with the auto based build system.


Actually the issues here are entirely unrelated to auto* and unit test
frameworks.


So what do the words "full bootstrap/testing" mean, which you hear when
providing any kind of tiny fix? What about the involvement of those utilities,
through zillions of command-line defines and embedded shell scripting for code
generation, in the ACTUAL code which makes up the gcc executables? Coverage?
Unit testing? How?! Heck, even just a full, reliable symbol index for an editor
isn't easy to come by...

Or are you just going to permute all possible configure options?


The real reason why toplevel libgcc took years to come by is because nobody
cared enough about libgcc to do any kind of clean up.


Because there are actually not that many people who love to delve inside the
whole .m4, .guess and so on... Actually, it's not that seldom that people are
incapable of reproducing the currently present build setup.


The attitude has changed recently (when I say recent I mean the last 3-4 years)
to all of these problems and in fact all major issues with GCC's build and
internals are changing for the better.


And now please compare this with the triviality of relocating source files in:

1. The FreeBSD bsdmake structure. (Which is pretty portable, BTW.)
2. The Solaris source tree.
3. A Visual Studio project.
4. An Xcode project.


PS auto* is not to blame for GCC's problems, GCC is older than auto*.


It sure isn't the biggest problem by far. However, it's the up-front one if you
start to look seriously into GCC. Remember, I'm the guy who compiles the whole
of GCC with C++, so it should be clear where I think the real issues are.


Re: [RFC] Our release cycles are getting longer

2007-01-24 Thread Marcin Dalecki


On 2007-01-24, at 10:12, Michael Veksler wrote:




Andrew, you are both correct and incorrect. Certainly, deterministic unit
testing is not very useful. However, random unit testing is priceless. I have
been doing pseudo-random unit tests for years, believe me it makes your code
practically bug free.


However, the problem with applying this kind of method to GCC would actually be
finding the units inside the spaghetti. Even a cursory look at, for example, how
-ffast-math works recently gave me a run from the compiler 'driver' down to the
c4x back-end. A trivial attempt at doing something about it resulted in a
request for a "full bootstrap without regressions" for a platform:

1. which can't be bootstrapped at all,
2. can't by nature run the test suite,
3. isn't in good shape overall.

Thus the small bolt against a layering violation will simply remain unplugged.

The intention was of course to look at how to move stuff which simply doesn't
belong in the scope of a compilation unit a bit more in the direction of the
#pragma level. There is, after all, enough stuff sitting inside Bugzilla that
actually pertains to a request for a #pragma STDC_C99 implementation.

That was a code-path example. I'm not going to start about the data. The
polymorphism-by-preprocessor-macro/GTY fun of some (all?) crucial data types
makes me think that the whole MFC stuff looks sleek and elegant...

∎ Marcin Dalecki ∎




Re: [RFC] Our release cycles are getting longer

2007-01-24 Thread Marcin Dalecki


On 2007-01-24, at 14:05, Michael Veksler wrote:


From my experience on my small constraint solver (80,000 LOC), by making stuff
more suitable for random unit testing you get:

  1. Maintainable+reusable code (with all its benefits).
  2. Faster code: Due to simplicity of each unit, it is easier to understand
     and implement algorithmic enhancements.



There is no need to argue that this kind of approach would make sense. For
three particular platforms I think the instrumentation of the code could even
be done using, for example, the DTrace facilities. My concern is about how
well-defined and reproducible such a setup would be in view of the way the
build setup goes. It's practically impossible to make any guarantees here.


From a general point of view, it appears that GCC already contains more than
one constraint solver itself...

If it will involve CP ( http://en.wikipedia.org/wiki/Constraint_programming )
or CSP solving then there is a slight chance that I will be able to squeeze
some of this into my schedule of 2008, not before. (I refer only to proof of
concept, at most a prototype not more).
--

Michael Veksler
http://tx.technion.ac.il/~mveksler



∎ Marcin Dalecki ∎




Re: [RFC] Our release cycles are getting longer

2007-01-24 Thread Marcin Dalecki


On 2007-01-24, at 19:53, Mike Stump wrote:


On Jan 23, 2007, at 11:03 PM, Marcin Dalecki wrote:
That's just about a quarter million lines of code to process and you think the
infrastructure around it isn't crap on the order of 100?


Standard answer, trivially, it is as good as you want it to be.  If you wanted
it to be better, you'd contribute fixes to make it better.


This argument fails (trivially) on the assumption that there is an incremental
way ("fixes") to improve it in a time not exceeding the expected remaining life
span of a developer.


∎ Marcin Dalecki ∎




Re: [RFC] Our release cycles are getting longer

2007-01-24 Thread Marcin Dalecki


On 2007-01-24, at 19:53, Mike Stump wrote:


On Jan 23, 2007, at 11:03 PM, Marcin Dalecki wrote:
That's just about a quarter million lines of code to process and you think the
infrastructure around it isn't crap on the order of 100?


Standard answer, trivially, it is as good as you want it to be.  If you wanted
it to be better, you'd contribute fixes to make it better.


Well... Let's get out of the somewhat futile abstract discussion level. One
thing that would certainly help as a foundation for possible further
improvements in performance in this area would be to have xgcc contain all the
front ends directly linked into it. Actually, the name libgcc is already taken,
but a "compiler as a library" would be useful in the first place. It could be a
starting point to help avoid quite a lot of the overhead needed to iterate over
command-line options, for example.

∎ Marcin Dalecki ∎




Re: [RFC] Our release cycles are getting longer

2007-01-24 Thread Marcin Dalecki


On 2007-01-24, at 23:26, Andrew Pinski wrote:




On Wed, 24 Jan 2007 03:02:19 +0100, Marcin Dalecki  
<[EMAIL PROTECTED]> said:


That's largely because individual tests in the test suite are too
long, which in turn is because the tests are testing code at a
per-binary granularity: you have to run all of gcc, or all of one
of the programs invoked by gcc, to do a single test.  (Is that true?
Please correct me if I'm wrong.)


No, they are almost all 1 or 2 functions long, nothing more than 20 lines.
The larger testcases are really testing the library parts rather than GCC
itself.


Huh? I guess this wasn't intentional, but please don't let abbreviating
sentences change the attribution. It makes me look more GCC-"innocent" than is
actually true ...

∎ Marcin Dalecki ∎




Re: [RFC] Our release cycles are getting longer

2007-01-24 Thread Marcin Dalecki


On 2007-01-24, at 23:52, Mike Stump wrote:


On Jan 24, 2007, at 1:12 PM, Marcin Dalecki wrote:
One thing that would certainly help as a foundation for possible further
improvement in performance in this area would be to have xgcc contain all the
front ends directly linked into it.


That does seem debatable.

It could be a starting point to help avoiding quite a lot of overhead needed to
iterate over command line options for example.


Odd.  You think that time is being spent iterating over the command line
options?


No, I think plenty of time goes into DejaGnu churning and iterative restarts of
new gcc process instances, as well as other system calls. That's at least what
a casual look at top and the sluggishness of the disk system I'm using suggest.
The option churning I had in mind was like

gcc -O0 test.c, gcc -O1 test.c, gcc -O2 test.c

runs in sequence, which are quite common.


Do you have any data points to back this up?  I suspect we're spending less
than 0.01% of the compilation time in duplicative argument processing.  After a
quick check, yup, 0 ms out of 1158 ms are spent in option processing.  11 ms in
xgcc, 1 ms in all of xgcc, and 10 ms in system calls.  So, I think due to
measurement error, I can say that no more than 0.17% of the time is spent in
duplicative option processing.


∎ Marcin Dalecki ∎




Re: Failure to build libjava on 512MB machine

2007-01-31 Thread Marcin Dalecki


On 2007-01-31, at 12:50, Andrew Haley wrote:


Benjamin Kosnik writes:


I am somewhat concerned with the response of the java maintainers
(and others) that it's OK to require >512MB to bootstrap gcc with
java, or that make times "WORKSFORME."


Well, I didn't say that, so I hope you aren't referring to me.  But
before we do anything we need to know more about the machine on which
it failed.

Ultimately, it's a question of what we consider to be a reasonable
machine on which to build libgcj.  I don't know the answer to that.


512MB is *certainly* reasonable. It's the most common amount of RAM shipping,
especially in notebooks, and it's what usually gets allocated to virtualization
solutions.


Re: GCC 4.2.0 Status Report (2007-02-19)

2007-02-19 Thread Marcin Dalecki


On 2007-02-20, at 01:27, Joseph S. Myers wrote:



This is *not* the only such prediction for a previous release, by far, just the
one I found quickest.  *All* releases seem to have the predictions that they
are useless, should be skipped because the next release will be so much better
in way X or Y, etc.; I think the question of how widely used a release series
turned out to be in practice may be relevant when deciding after how many
releases the branch is closed, but simply dropping a release series after the
branch is created is pretty much always a mistake.


Well... Please don't forget that the Linux distributors especially are actually
what one would call *revision junkies*. Not all decisions taken by them seem to
be based on technical merit alone. And then there are still those huge patches
they include before shipping, hiding behind a familiar-sounding version number.

∎ Marcin Dalecki ∎




Re: how to convince someone about migrating from gcc-2.95 to gcc-3.x

2007-04-01 Thread Marcin Dalecki


On 2007-04-01, at 13:58, Paul Brook wrote:

If you're already switching compilers, moving to an already obsolete release
(3.3) seems a strange choice. At this point I'd recommend skipping 3.x
altogether and going straight to gcc 4.1/4.2.

Many of the improvements in c++ code generation were a result of tree-ssa,
which you only get with 4.x.


I wouldn't recommend it. One has to adapt gradually to the patience required to
use the later compiler editions.

➧ Marcin Dalecki ❖




Re: Inlining and estimate_num_insns

2005-02-27 Thread Marcin Dalecki
On 2005-02-27, at 23:28, Richard Guenther wrote:
People already know to use __attribute__((always_inline)) (ugh!),

When they discover, after countless hours of debugging sessions during kernel
coding, that the compiler suddenly and unpredictably doesn't honor what they
told it explicitly to do, thus breaking expected behaviour on IO ...
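
For reference, a minimal sketch (mine, not from the mail) of the attribute in
question; GCC inlines an always_inline function regardless of its size
heuristics, or reports an error if inlining is impossible. The accessor name is
made up:

/* Illustrative only: a kernel-style accessor forced inline. */
static inline __attribute__((always_inline)) unsigned int
mmio_read(const volatile unsigned int *addr)
{
    return *addr;       /* stands in for a memory-mapped I/O read */
}

int main(void)
{
    volatile unsigned int reg = 42;
    return (int) mmio_read(&reg) - 42;      /* returns 0 */
}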



Re: Inlining and estimate_num_insns

2005-02-27 Thread Marcin Dalecki
On 2005-02-27, at 23:39, Andrew Pinski wrote:
On Feb 27, 2005, at 5:30 PM, Steven Bosscher wrote:
Interesting.  You of course know Gaby is always claiming the exact opposite:
that the compiler must *honor* the inline keyword (explicit or "implicit", i.e.
inline in class definitions), that inline is not a hint but an order that the
compiler must follow. And much to my own surprise, I'm actually beginning to
agree with him.

I always say that inline is like register, it is just a hint to the compiler
and nothing more (well, in C++ it changes the linkage). This same discussion
has in a way come up for register in the past, which is why I always compare it
to that keyword; if we did what you are suggesting for inline, we may as well
do the same for register. And now when someone compiles code made for ppc
(which has lots of registers available) on x86, you will get an ICE because the
code uses register a lot.

And I view the register keyword as a fine-tuning tool. In view of the fact that
this kind of work is a highly target-related task, you make it useless by your
definition. People, please: PREDICTABILITY of behaviour is a paradigm of
software development!



Re: Inlining and estimate_num_insns

2005-02-27 Thread Marcin Dalecki




Re: Pascal front-end integration

2005-03-01 Thread Marcin Dalecki
On 2005-03-02, at 03:22, Ed Smith-Rowland wrote:
On 1 Mar 2005 at 8:17, James A. Morrison wrote:
Hi,
  I've decided I'm going to try to take the time to clean up and update the
Pascal frontend for gcc and try to get it integrated into the upstream source.
I'm doing this because I wouldn't like to see GPC work with GCC 4+. I don't
care at all about supporting GPC on anything less than GCC 4.1, so I've started
by ripping out as much obvious compatibility code as I can and removing any
traces of GPC being a separate project.
My guess is that inclusion of Pascal into gcc would give that language
more exposure and would lead to faster development.
I object to it. There is no single application of importance in Pascal for me
other than TeX, which GPC doesn't handle anyway. It's not worth the bandwidth
for me, and like Java it's another candidate which will drag a ton of library
framework with it later on. Like Java it would significantly impede any attempt
to do full-coverage builds of the whole compiler tree.



Fortran libs.

2005-03-01 Thread Marcin Dalecki
After trying to build the Fortran compiler I'm convinced that a cut-down
version of the multi-precision libraries it requires should be included in the
compiler tree. The reasons are as follows:

1. They should be there for reference in bug checking.
2. This would make installation on systems which don't fall into the category
   of JBLD (Joe User's Bloated Linux Distro) much easier.
3. Stuff not required for the proper operation of the compiler could be taken
   out. It's actually just a tiny subset of the library which the compiler
   truly requires.
4. It would just seem consistent in the face of the included copy of the zip
   library.
5. It would make it easier to guarantee that the source code setup choices
   between what the Fortran compiler expects and how the library was built
   actually match.
6. Since there are multiple releases of the libraries in question, this would
   just reduce the combinatorial complexity of the maintenance issues.



Re: Pascal front-end integration

2005-03-01 Thread Marcin Dalecki
On 2005-03-02, at 03:36, Andrew Pinski wrote:
Actually I disagree with you, GPC is much smaller than Java,

If you have only UCSD Pascal in mind, yes. But not if you look at any of the
usable, aka Delphi, implementations of it. You always have to have runtime
libraries.

and doing full coverage for a large project like GCC is sometimes a hard thing
to do anyways.

So the reasoning is: "it is pain, give me more of it"?

In fact it is even harder than you think, especially with code added (in
reload) to do any full coverage.
Huh? I see the argument that another front end will exercise more of the back
end, since chances are that it will trigger code paths in it which other
languages don't use. However, I can hardly see any Pascal language
feature/construct which wouldn't already be covered by the C++ or Java ABI.

Who cares if it gets slower or more impractical to do what you are doing?
I care :-).


Re: Pascal front-end integration

2005-03-01 Thread Marcin Dalecki
On 2005-03-02, at 05:20, Ed Smith-Rowland wrote:
In fact, I'm somewhat curious what caused folks to jump into the 
breach with parsers.  From reading the lists it seems to be 
maintainability and stomping out corner case problems for the most 
part.

Perhaps a parser toolset is emerging that will decouple the front ends 
from the middle and back ends to a greater degree.  I think I will 
look at the new C/C++ parsers and see what's what.
You know this "shift/reduce conflict" stuff the former parsers where 
barking at you?
Contrary to C and C++ a LR-grammar parser generator like bison or yacc 
is a fully adequate
tool for pascal.



Mudflap disable doesn't work as expected.

2005-03-02 Thread Marcin Dalecki
Apparently, largely unnoticed when compiling with a C compiler, the
tree-mudflap.c and tree-nomudflap.c files are both used at the same time on my
system (powerpc-apple-darwin7.8.0):

c++   -O2 -fsigned-char -DIN_GCC   -W -Wall -Wwrite-strings 
-Wstrict-prototypes -Wmissing-prototypes  -fno-common   -DHAVE_CONFIG_H 
 -o cc1 \
c-lang.o stub-objc.o attribs.o c-errors.o c-lex.o c-pragma.o 
c-decl.o c-typeck.o c-convert.o c-aux-info.o c-common.o c-opts.o 
c-format.
o c-semantics.o c-incpath.o cppdefault.o c-ppoutput.o c-cppbuiltin.o 
prefix.o c-objc-common.o c-dump.o c-pch.o c-parser.o darwin-c.o 
rs6000-c.o
 c-gimplify.o tree-mudflap.o c-pretty-print.o main.o tree-browser.o 
libbackend.a ../libcpp/libcpp.a ../libcpp/libcpp.a ./../intl/libintl.a 
-lic
onv  ../libiberty/libiberty.a
ld: multiple definitions of symbol _Z19mudflap_finish_filev.eh
tree-mudflap.o definition of _Z19mudflap_finish_filev.eh in section 
(__TEXT,__eh_frame)
libbackend.a(tree-nomudflap.o) definition of absolute 
_Z19mudflap_finish_filev.eh (value 0x0)
ld: multiple definitions of symbol 
_Z20mudflap_enqueue_declP9tree_node.eh
tree-mudflap.o definition of _Z20mudflap_enqueue_declP9tree_node.eh in 
section (__TEXT,__eh_frame)
libbackend.a(tree-nomudflap.o) definition of absolute 
_Z20mudflap_enqueue_declP9tree_node.eh (value 0x0)
ld: multiple definitions of symbol 
_Z24mudflap_enqueue_constantP9tree_node.eh
tree-mudflap.o definition of _Z24mudflap_enqueue_constantP9tree_node.eh 
in section (__TEXT,__eh_frame)

Oy vey.


Re: __builtin_cpow((0,0),(0,0))

2005-03-07 Thread Marcin Dalecki
On 2005-03-07, at 17:09, Duncan Sands wrote:
Mathematically speaking zero^zero is undefined, so it should be NaN.
I don't see the implication here. Thus this certainly is no "mathematical"
speak.

This already clear for real numbers: consider x^0 where x decreases
to zero.  This is always 1, so you could deduce that 0^0 should be 1.
However, consider 0^x where x decreases to zero.  This is always 0, so
you could deduce that 0^0 should be 0.
There is no deduction involved here. If you want to speak "mathematically",
please use the following terms: a function's definition domain, continuous
function, and so on... You wouldn't then sound any longer like a pre-Newtonian
mathematician needing a lot of words to describe simple concepts like
smoothness or differentiability, at least.

 In fact the limit of x^y
where x and y decrease to 0 does not exist, even if you exclude the
degenerate cases where x=0 or y=0.  This is why there is no reasonable
mathematical value for 0^0.
There is no reason here and you presented no reasoning. But still there is a
*convenient* extension of the definition domain of the power function for the
zero exponent.
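
For reference, the limit argument the thread keeps circling around, restated
(my summary, not from the mail), in LaTeX:

\[
x^y = e^{y\ln x}\ (x>0),\qquad
\lim_{x\to 0^+} x^0 = 1,\qquad
\lim_{y\to 0^+} 0^y = 0,
\]

so the two iterated limits disagree and \(\lim_{(x,y)\to(0,0)} x^y\) does not
exist; defining \(0^0 = 1\) is a convention (the one the C standard and the Ada
RM adopt for pow), not a value forced by continuity.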



Re: __builtin_cpow((0,0),(0,0))

2005-03-07 Thread Marcin Dalecki
On 2005-03-07, at 17:31, Duncan Sands wrote:
I would agree with Paolo that the most imporant point is arguably
consistency, and it looks like that is pow(0.0,0.0)=1
just so long as everyone understands that they are upholding the
standard, not mathematics, then that is fine by me :)
All the best,
Duncan.
PS: There is always the question of which standard is being upheld,
since presumably both the Fortran and Ada standards have something
to say about this.
The "blind appeal" to higher authority isn't a proper argument as well.
The proper argument is just that: convenience in numerical computations 
during
the modeling of real numbers on finite devices. And just this 
convenience is
represented by what the "standard" says.
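
A small check of that convention as C99 Annex F specifies it for the real case
(a sketch of mine, not from the mail; cpow's special cases are left much looser
by the standard, which is why library results differ there). Link with -lm:

#include <complex.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double complex z = cpow(0.0, 0.0);
    /* C99 Annex F: pow(x, +/-0) returns 1 for any x, even a NaN. */
    printf("pow(0,0)  = %g\n", pow(0.0, 0.0));
    /* What cpow gives here is implementation-dependent. */
    printf("cpow(0,0) = %g%+gi\n", creal(z), cimag(z));
    return 0;
}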



Re: __builtin_cpow((0,0),(0,0))

2005-03-07 Thread Marcin Dalecki
On 2005-03-07, at 17:16, Chris Jefferson wrote:
| Mathematically speaking zero^zero is undefined, so it should be NaN.
| This is already clear for real numbers: consider x^0 where x decreases
| to zero.  This is always 1, so you could deduce that 0^0 should be 1.
| However, consider 0^x where x decreases to zero.  This is always 0, so
| you could deduce that 0^0 should be 0.  In fact the limit of x^y
| where x and y decrease to 0 does not exist, even if you exclude the
| degenerate cases where x=0 or y=0.  This is why there is no reasonable
| mathematical value for 0^0.
|

That is true.
It's not true, because it's neither true nor false. It's not a well-formulated
statement (mathematically).


Re: __builtin_cpow((0,0),(0,0))

2005-03-07 Thread Marcin Dalecki
On 2005-03-08, at 01:11, Robert Dewar wrote:
Marcin Dalecki wrote:
There is no reason here and you presented no reasoning. But still 
there is a
*convenient* extension of the definition domain for the power of 
function for the
zero exponent.
The trouble is that there are *two* possible convenient extensions.
Depending of course on the definition of convenience you apply ...
However, the one chosen by the C standard has indeed become pretty prevalent
(the Ada RM for instance specifies that 0**0 = 1; this applies to the complex
case as well).
Yes of course sure...


Re: __builtin_cpow((0,0),(0,0))

2005-03-07 Thread Marcin Dalecki
On 2005-03-08, at 01:14, Robert Dewar wrote:
Marcin Dalecki wrote:
On 2005-03-07, at 17:16, Chris Jefferson wrote:
| This is why there is no reasonable
| mathematical value for 0^0.
|
That is true.
It's not true, because it's neither true nor false. It's not a well-formulated
statement (mathematically).
I disagree with this, we certainly agree that 0.0 ** negative value
is undefined, i.e. that this is outside the domain of the ** function,
and I think normally in mathematics one would say the same thing,
and simply say that 0**0 is outside the domain of the function.
It's not a matter of reason, it's a matter of definition. Thus the statement
"there is no reasonable mathematical value for 0^0" is neither true nor false;
it has the same sense as saying, well, for example: "rose bears fly very high".
Just a random concatenation of terms, not a statement, and thus neither true
nor false, because the concept of decidability doesn't apply.



Re: __builtin_cpow((0,0),(0,0))

2005-03-07 Thread Marcin Dalecki
On 2005-03-08, at 01:21, Robert Dewar wrote:
Paolo Carlini wrote:
Interesting, thanks. The problem is, since the C standard is 
admittedly
rather vague in this area, some implementation of the C library simply
implement the basic formula (i.e., cexp(c*clog(z))) and don't deal 
*at all*
with special cases. This leads *naturally* to (nan, nan).
In other cases we are more "lucky", though.
Well this is a grotesquely inaccurate way of computing exponentiation
anyway unless you have a lot of extra guard bits for the intermediate
value, so any library that does this is brain dead anyway.
Amen.


Re: __builtin_cpow((0,0),(0,0))

2005-03-07 Thread Marcin Dalecki
On 2005-03-08, at 01:47, Ronny Peine wrote:
Hi again,
a small proof.
How cute.
if A and X are real numbers and A > 0 then
A^X := exp(X*ln(A))   (definition in analytical mathematics).
0^0 = lim_{A->0, A>0} exp(0*ln(A)) = 1, if exp(X*ln(A)) is continuously
continued.

The complex case can be derived from this (0^(0+ib) = 0^0 * 0^ib = 1 =
0^a * 0^(i*0)).
Well, I know only the German mathematical expressions, so maybe the
translations to English are not accurate, sorry for this :)
You managed to hide the proof very well. I can't find it.


Re: __builtin_cpow((0,0),(0,0))

2005-03-07 Thread Marcin Dalecki
On 2005-03-08, at 02:01, Robert Dewar wrote:
Yes, and usually by definition in mathematics 0**0 is outside the 
domain
of the exponentiation operator.
Usually the domain of the function in question, with possible extensions
thereof, is given explicitly where such a function is in use.
"There is no reasonable mathematical
value ..." is just another way of saying the same thing, and is a
perfectly reasonable statement.
Last time I checked, I missed the definition of what a mathematical value is.
I just know about numbers as values, you know...



Re: __builtin_cpow((0,0),(0,0))

2005-03-07 Thread Marcin Dalecki
On 2005-03-08, at 02:55, Ronny Peine wrote:
Maybe i found something:
http://www.cs.berkeley.edu/~wkahan/ieee754status/ieee754.ps
page 9 says:
Lots of opinions, few hard arguments... is what I see there.
I wouldn't consider the above-mentioned paper authoritative in any way.


Re: __builtin_cpow((0,0),(0,0))

2005-03-07 Thread Marcin Dalecki
On 2005-03-08, at 04:07, David Starner wrote:
On Tue, 8 Mar 2005 03:24:35 +0100, Marcin Dalecki <[EMAIL PROTECTED]> 
wrote:
On 2005-03-08, at 02:55, Ronny Peine wrote:
Maybe i found something:
http://www.cs.berkeley.edu/~wkahan/ieee754status/ieee754.ps
page 9 says:
Lot's of opinions few hard arguments... I see there.
I wouldn't consider the above mentioned paper authoritative in any 
way.
I guess just because someone wrote _the_ standard programs for testing
the quality of floating point support in C and Fortran, and has got a
ACM Turing Award for:
"For his fundamental contributions to numerical analysis. One of the
foremost experts on floating-point computations. Kahan has dedicated
himself to 'making the world safe for numerical computations.'"
doesn't mean you should actually pay attention to anything they have
to say on the subject.
Are we a bit too obedient today? Look, I was talking about the paper presented
above, not about the author thereof.



Re: __builtin_cpow((0,0),(0,0))

2005-03-07 Thread Marcin Dalecki
On 2005-03-08, at 05:06, David Starner wrote:
The author's opinion comes from experience in the field. When someone
with a lot of experience talks, wise people listen, if they don't
agree in the end. I see no reason to casually dismiss that article.
My point is that there are really few hard arguments in the publication, but
plenty of opinions instead. Thus there isn't much presented in it to agree
with, but plenty of things which you can feel free to obey instead. BTW, isn't
the author the same person we have to thank for the register stack and
hidden-bits design of the 8087? Hmm...



Re: Merging calls to `abort'

2005-03-13 Thread Marcin Dalecki
On 2005-03-14, at 05:20, Gary Funck wrote:
Richard Stallman wrote (in part):
What's the point of cross-jumping?  It saves a certain amount of
space; it has no other benefit.  All else being equal, there's no
reason not to do it.  But cross-jumping abort calls interferes with
debugging.  That's a good reason not to do it.
Let's get rid of this optimization.  Those who want to use a
fancy_abort function will still be able to do so, but this change will
be an improvement for the rest.
Would a new attribute be in order, that disables the optimization?
For example, __attribute__ ((unique_call))?  That way, the programmer
can designate other procedures than abort() as procedures which
should not be cross-jumped.
This may be generally useful. But in this particular case the meaning of the
function in question is already well defined by the standard. Thus it shouldn't
be required, especially in view of the fact that not everybody is going to use
a still non-existent version of glibc, which may catch up on this in an
estimated 4 years.



Re: Dumping Low Level Intermediate Representation for Another Compiler

2005-03-28 Thread Marcin Dalecki
On 2005-03-29, at 05:59, Wei Qin wrote:
Hello GCC developers,
I am avid user of gcc and have 5 cross-gcc's installed on my machine.
Thanks for your great work. Recently I want to do some compiler work
that involves analyzing low level intermediate representation. I
thought about using research compilers such as SUIF or SUIF II but
those lack maintenance and cannot even be built without much hacking.
So I am now considering using gcc. But since I am not familiar with
its source code structure, I am not comfortable to work directly on
it. Instead, I am thinking of dumping the MIPS RTL prior to the first
scheduling pass and then use an RTL parser to read it into another
compiler such as mach-SUIF. But the document says that RTL does not
contain all information of a source program, e.g. global variables. So
I wonder if there is a way for me to get complete low level MIPS IR
that is largely optimized but not register allocated. Or should I give
up this thought? Expert please advise.
I don't feel like an expert. However, you may consider looking a bit closer at
the fall-out of the LLVM keyword in Google. It may very well give you quite
what you want. If you are interested in compilers generally, TenDRA
(www.ten15.org) will fit too.


Re: PCH and moving gcc binaries after installation

2005-03-30 Thread Marcin Dalecki
On 2005-03-30, at 08:37, Dan Kegel wrote:
Since I need to handle old versions of gcc, I'm
going to code up a little program to fix all
the embedded paths anyway, but I was surprised
by the paths in the pch file.  Guess I shouldn't
have been, but now I'm a little less confident
that this will work.  Has anyone else tried it?
I did try it some time ago. Well, the fallout was that it's all really painful.
There seems to be different path handling code in each of
gcc.c
collect
cpp
and cc1
Then you have to take into account that the path handling will be subtly
different depending on whether configure felt like cross compiling or not.
There seems to be some special VMX (shrug) trickery sprinkled around.
One can see that the collect code contains some independent, unfinished attempt
to unify all of the above. Frequently enough spec files will come from behind
and bite you.

Look for example at the requirement to install the assembler twice, as both as
and i386-linux-as, and to have that as first in PATH in a cross build
environment. It's just accidental.

So the short story is: yes, it's a "huge" mess.
The solution involved basically just patching all of the above listed tools to
consult an additional mechanism first, before even starting to muddle through
the defaults.



Re: RFC: #pragma optimization_level

2005-04-01 Thread Marcin Dalecki
On 2005-04-01, at 23:17, Richard Guenther wrote:
On Apr 1, 2005 11:07 PM, Mark Mitchell <[EMAIL PROTECTED]> wrote:
Dale Johannesen wrote:
Agree.  (And documentation will be written.)
Yay.  It sounds like we're definitely on the same page.  I think that 
as
long as we keep the semantics clear, this will be useful 
functionality.

That's what I assumed.  Anything finer than that is insane. :-)

Actually there are cases where it makes sense:  you could ask that a
particular call be
inlined, or a particular loop be unrolled N times.
True.  Consider my remark regarding insanity qualified to 
whole-function
optimizations. :-)
But the question is, do we want all this sort of #pragmas?  It would
surely better to improve the compilers decisions on applying certain
optimizations.  As usual, in most of the cases the compiler will be
smarter than the user trying to override it (and hereby maybe only
working around bugs in a particular compiler release).
Compilers are good. But they still don't think. And they don't have a clue
about the input characteristics, and thus the resulting runtime behavior, of
the code.

Sometimes you definitely want to switch the optimizer's GOALS at the level of a
single file. Or in other words: you want to provide additional information
about the runtime of the code to the compiler for better optimization
decisions. Like, for example, optimizing the whole system for size but letting
it optimize some well-identified hot-spot functions for speed.
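
A minimal sketch of the kind of per-function control being asked for here; the
optimize attribute shown below is the form later GCC releases (4.4 and newer)
ended up providing, used purely as an illustration and not as part of this
proposal. The function names are made up:

/* Most of the file is built for size (say, -Os on the command line)... */
int bookkeeping(int x)
{
    return x + 1;
}

/* ...while one identified hot spot asks for speed and loop unrolling. */
__attribute__((optimize("O3", "unroll-loops")))
long hot_spot(const int *v, int n)
{
    long sum = 0;
    int i;
    for (i = 0; i < n; i++)
        sum += v[i];
    return sum;
}

In those releases the same effect over a region of a file can be had with
#pragma GCC push_options / #pragma GCC optimize / #pragma GCC pop_options.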


Re: RFC: #pragma optimization_level

2005-04-01 Thread Marcin Dalecki
On 2005-04-01, at 23:36, Mark Mitchell wrote:
Richard Guenther wrote:
But the question is, do we want all this sort of #pragmas?  It would
surely better to improve the compilers decisions on applying certain
optimizations.  As usual, in most of the cases the compiler will be
smarter than the user trying to override it (and hereby maybe only
working around bugs in a particular compiler release).  All opposition
that applied to stuff like attribute((leafify)) (hi Gaby!) applies 
here, too.
So what is your opinion to all this babysitting-the-compiler?
I agree, in general.
In fact, I've long said that GCC had too many knobs.
(For example, I just had a discussion with a customer where I 
explained that the various optimization passes, while theoretically 
orthogonal, are not entirely orthogonal in practice, and that truning 
on another pass (GCSE, in this caes) avoided other bugs.  For that 
reason, I'm not actually convinced that all the -f options for turning 
on and off passes are useful for end-users, although they are clearly 
useful for debugging the compiler itself.  I think we might have more 
satisfied users if we simply had -Os, -O0, ..., -O3.  However, many 
people in the GCC community itself, and in certain other vocal areas 
of the user base, do not agree.)
I think the point here is not quite the number of options but the character
they currently frequently have. Currently you specify the means, telling the
compiler what it should do:

-fdo-this-and-that-obscure-optimization

I think that most people would prefer to state goals:

-ftry-to-achieve-small-code-size (like -Os)
-fplease-try-to-be-nice-to-foo
-fplease-dont-care-about-aliasing
-fgive-me-well-debuggable-code-I-dont-care-now-for-execution-speed
-fdo-as-much-as-you-think-is-good-for-execution-speed (OK, that one exists: -O9)

Thus the -Ox family is what people like. More of it would be good, and the -f
family could indeed be degraded to a compiler-development aid.



Re: RFC: #pragma optimization_level

2005-04-04 Thread Marcin Dalecki
On 2005-04-04, at 19:46, Dale Johannesen wrote:
On Apr 3, 2005, at 5:31 PM, Geert Bosch wrote:
On Apr 1, 2005, at 16:36, Mark Mitchell wrote:
In fact, I've long said that GCC had too many knobs.
(For example, I just had a discussion with a customer where I 
explained that the various optimization passes, while theoretically 
orthogonal, are not entirely orthogonal in practice, and that 
truning on another pass (GCSE, in this caes) avoided other bugs.  
For that reason, I'm not actually convinced that all the -f options 
for turning on and off passes are useful for end-users, although 
they are clearly useful for debugging the compiler itself.  I think 
we might have more satisfied users if we simply had -Os, -O0, ..., 
-O3.  However, many people in the GCC community itself, and in 
certain other vocal areas of the user base, do not agree.)
Pragmas have even more potential for causing problems than 
command-line options.
People are generally persuaded more easily to change optimization 
options, than
to go through hundreds of source files fixing pragmas.
I would hope so.  But the reason I'm doing this is that we've got a 
lot of customer
requests for pragma-level control of optimization.
I don't agree with the argument presented by Geert Bosch. It's even more difficult to
muddle through the atrocities of autoconf/automake to find the places where compiler
switches get set in huge software projects than doing a grep -r \#pragma ./
over a source tree.



Re: RFC: #pragma optimization_level

2005-04-05 Thread Marcin Dalecki
On 2005-04-05, at 01:28, Nix wrote:
On 4 Apr 2005, Marcin Dalecki stipulated:
I don't agree with the argument presented by Geert Bosch. It's even 
more difficult to
muddle through the atrocities of autoconf/automake to find the places 
where compiler
switches get set in huge software projects
What's so hard about
find . \( -name 'configure.*' -o -name Makefile.am \) -print | xargs 
grep CFLAGS

anyway?
The fact that you actually *seldomly* have the precise version
of autoconf/automake/perl/gawk installed on the host where you want to
reproduce the THREE stages needed to get your hands on a Makefile.

I could turn the question back: What's so hard about grepping the 
source?

I think this is a straw man. Manipulating CFLAGS is just *not that
hard*.
In practice it IS.
 A few minutes will suffice for all but the most ludicrously
byzantine project (and I'm not talking `uses automake', here, I'm
talking `generates C code in order to compute CFLAGS to use when
compiling other code'.)
Thus by your definition most projects are byzantine: glibc, tetex and so on...



Re: RFC: #pragma optimization_level

2005-04-08 Thread Marcin Dalecki
On 2005-04-05, at 16:12, Nix wrote:

I could turn the question back: What's so hard about grepping the 
source?
Because one does not expect to find compilation flags embedded in the
source?
One does - unless one grew up in the limited gcc "community". All the large-scale
compilers out there DO IT.

 Because generated source is fairly common?
No it isn't.
 Because eventually
that runs you into `what's so hard about predicting the behaviour of
this code generator in order to...' and then you ram straight into
the halting problem, or, worse, Qt's qmake program. :)
(None of these are strong arguments, I'll admit, but if your argument
*for* is `it's convenient', then an argument *against* of `it's
unexpected' is stronger than it looks.
... Lots of perfectly valid descriptions of why autoconf/automake are a horror from
a software maintenance point of view deleted... Please look around the corner,
away from the JBLD (Joe's Bloated Linux Distro). It doesn't have to be all
that difficult to just control the compilation process ...


Actually, glibc is a good argument *against* the need for this feature:
it has a large number of places where unusual one-off CFLAGS are
required, and it manages to do them all via one-file overrides in the
makefiles. See e.g. linuxthreads/sysdeps/i386/Makefile for an extensive
use of this. csu/Makefile has an example of *completely* overriding the
CFLAGS for a single file that is especially delicate (initfini.s).
Please reread what you describe: You describe a package which needs a genuine
way to finely control the compilation options. I think it can't be questioned that
at a sufficiently low level of system programming there is such a need.
Another striking example is the need of the Linux kernel to disable the frame pointer
firmly in the schedule() function to maintain the stack layout for a context switch.
You see: The requirement is for a single function and not a file.
Same goes for glibc, where what you describe is less than esthetically neutral.

Since those packages don't have any other choice they resort to ugly manipulation of
a very, very fragile thing: environment variables.

Look, there *are* good reasons why so many high-grade professional compilers do it.
It's not something pretty. But the alternatives are just plain ugly (glibc/kernel).



Re: The Linux binutils 2.16.90.0.1 is released

2005-04-10 Thread Marcin Dalecki
On 2005-04-10, at 19:43, H. J. Lu wrote:
 Patches for 2.4 and 2.6 Linux kernels are
available at
http://www.kernel.org/pub/linux/devel/binutils/linux-2.4-seg-4.patch
http://www.kernel.org/pub/linux/devel/binutils/linux-2.6-seg-5.patch

The primary sites for the beta Linux binutils are:
1. http://www.kernel.org/pub/linux/devel/binutils/
The announced patch files are missing.


Re: Getting rid of -fno-unit-at-a-time [Was Re: RFC: Preserving order of functions and top-level asms via cgraph]

2005-04-11 Thread Marcin Dalecki
On 2005-04-11, at 14:01, Andrew Haley wrote:
Nathan Sidwell writes:
Andrew Haley wrote:
Nathan Sidwell writes:
Andrew Haley wrote:
Might it still be possible for a front end to force all pending 
code
to be generated, even with -fno-unit-at-a-time gone?
I think this is a bad idea.  You're essentially asking for the 
backend
to retain all the functionality of -fno-unit-at-a-time.
OK.  So, what else?
As steven asked, I'd like to understand why this is not a problem
for the C++ community.  There are several alternatives
1) The C++ programs are smaller than the java programs
That's my guess.  Usually, C++ users compile one source file at a
time, whereas Java users find it convenient to compile a whole
archive.
If it is an archive the answer would be to change gcc.c to recognize 
this
and to let it compile the archive one item at a time. (.jar).



Re: Heads-up: volatile and C++

2005-04-14 Thread Marcin Dalecki
On 2005-04-15, at 01:10, Richard Henderson wrote:
On Thu, Apr 14, 2005 at 11:30:20PM +0200, Jason Merrill wrote:
Consider Double-Checked Locking
(http://www.cs.umd.edu/~pugh/java/memoryModel/ 
DoubleCheckedLocking.html).
I used DCL with explicit memory barriers to implement thread-safe
initialization of function-local statics
(libstdc++-v3/libsupc++/guard.cc).  The proposed change to volatile
semantics would allow me to write it more simply, by just making the
initialized flag volatile.  Yes, volatile would be stronger than is
actually necessary for DCLP, but I don't have to use it if I want
finer-grained control over the synchronization.
Is there any reason to want to overload volatile for this, rather than
  template <typename T> T acquire (T *ptr);
  template <typename T> void release (T *ptr, T val);
where the functions do the indirection plus the memory ordering?
Templates are a no-go for a well known and well defined subset of C++
for embedded programming, commonly known as Embedded C++.
Actually, speaking about Embedded C++: it would be helpful
to have an option for the C++ frontend which would disallow
constructs prohibited by it. If only I could find the time right now...
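
For reference, a minimal sketch of the acquire/release helpers Richard
Henderson suggests above. The full-barrier builtin __sync_synchronize()
used here is an assumption (it only appeared in later GCC releases); real
code would use the target's cheaper acquire/release barriers:

  template <typename T>
  T acquire (T *ptr)
  {
    T val = *(volatile T *) ptr;   // read the guard/flag
    __sync_synchronize ();         // order all later accesses after the read
    return val;
  }

  template <typename T>
  void release (T *ptr, T val)
  {
    __sync_synchronize ();         // order all earlier accesses before the store
    *(volatile T *) ptr = val;     // publish the flag
  }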


Re: Heads-up: volatile and C++

2005-04-15 Thread Marcin Dalecki
On 2005-04-15, at 20:18, Mike Stump wrote:
On Thursday, April 14, 2005, at 08:48  PM, Marcin Dalecki wrote:
Templates are a no-go for a well known and well defined subset of C++
for embedded programming, commonly known as Embedded C++.
My god, you didn't actually buy into that, did you?  Hint: it was, is,
and always will be a joke.
You dare to explain what's so funny about it?


Re: Heads-up: volatile and C++

2005-04-15 Thread Marcin Dalecki
On 2005-04-15, at 19:58, Gabriel Dos Reis wrote:
| Templates are a no-go for a well known and well defined subset of C++
| for embedded programming, commonly known as Embedded C++.

You'd be surprised to learn that embedded systems people do use
templates for various things -- among which, maybe the most
"paradoxical" is to avoid code bloat.

Embedded C++ was a mistake, alas a mistake that seems to last.
The "seems to last" point is the significant one. Not your superstitions
about what I do or don't know about.


Re: Heads-up: volatile and C++

2005-04-15 Thread Marcin Dalecki
On 2005-04-15, at 23:59, Mike Stump wrote:
On Friday, April 15, 2005, at 02:52  PM, Marcin Dalecki wrote:
My god, you didn't actually buy into that, did you?  Hint: it was, is,
and always will be a joke.
You dare to explain what's so funny about it?
Oh, it wasn't funny.  Maybe the English is slightly too idiomatic?  
I'd need someone that understands the English and German to translate.

You can read it as, it was and will always be, just a bad idea.
When will a full and standard-conforming template implementation in GCC be finished then?



Re: Heads-up: volatile and C++

2005-04-15 Thread Marcin Dalecki
On 2005-04-16, at 00:38, Mike Stump wrote:
Seriously, what does that have to do with anything?
Well, perhaps my view is somewhat shaped by the project I'm currently sitting on.
It's actually kernel programming. And this strikes me as quite the kind of
stuff which may very well put this kind of object serialization to good use...

I know, let's not recommend C for device driver programming, because 
there doesn't exist a C compiler that doesn't have a bug in the world.
Actually I see your point and agree. However, what one really needs for device driver programming
is actually a super-set of C.

My claim would be, there is a reasonable portable subset of C, or C++ 
that one can use in their endeavors.  Further, while in 1992, this 
subset maybe didn't reasonably include templates, that in 2005, it 
does.

If for you, it doesn't, well, ok, fine.
OK agreed.
Go research why that thing that I refuse to even name, doesn't include 
templates, report back here, when you have the answer.
As you already pointed out, the reasons stated that long ago are of no particular argumentative
interest nowadays. However, the conclusions themselves may still be valid for other reasons.
Well, as a side note - my main problems with templates are not the ideas involved here, or the
coding style design paradigms they permit.
However, there are serious validation concerns in light of the seemingly always evolving
and very, very extensive specification. It looks like there is always a corner case in behavior
which has to be plugged in the next standard revision. Template-driven code also tends to be somehow
immune to even simpler things like independent peer review. There are pragmatic deficiencies in
*all* the implementations I had to deal with thus far. There are related maintenance problems over
time and when changing compilers. And so on. However I fully agree that those points don't apply
universally and there are a lot of valid uses of the whole C++ set. I recognize as well that
in the case of GCC itself the implementation of this facility got much, much better during recent years.

I just don't agree that something specified as a subset, guided effectively along the line of excluding
all language constructs which can be described as conceptually difficult, should be dismissed straight
away. It can be useful, if only for the case where you have some people with truly bad habits on
your team and want to put a stopgap on them. (Most of the template-driven code out there written
in-house sure as hell didn't look exactly pretty to me thus far...)

 Don't accept the marketing explanation either.
Agreed. Maybe the idea of a subset of C++ basically coming down to what would be a C with inheritance
was somehow over-hyped by too much marketing bragging as an excuse for some defective C++ compiler
implementation at its introduction? Thus you still have bitter memories about it?

  I can then check, and see if your answer matches mine.  We can then 
discuss if, at the time, it made for a good decision for the so named 
subset, it did not.
Well, I don't think it's a question of good or bad. It's more a question of whether it's sufficiently
useful.



Re: C++ ABI mismatch crashes

2005-04-18 Thread Marcin Dalecki
On 2005-04-18, at 04:22, Dan Kegel wrote:
Once the gcc C++ ABI stabilizes,
i.e. once all the remaining C++ ABI compliance bugs have
been flushed out of gcc, this requirement can be relaxed."
"Thus in esp. on Judgment Day we will relax this requirement".
The changes in CPU instrution sets surpasses the presumed ABI
stabilization in C++. And not only due to gcc "bugs".


Re: function name lookup within templates in gcc 4.1

2005-04-18 Thread Marcin Dalecki
On 2005-04-18, at 04:37, Gareth Pearce wrote:
So I just started trying out gcc 4.1 - with a program which compiles 
and
runs fine on gcc 3.3.

Attached is a reduced testcase which shows runtime segfault due to 
stack
overflow if compiled with 4.1 but does not with 3.3.  Trivial work 
around is
to move the specific declaration above the template definition.  Now I 
see
potential for this to be 'the way the standard wants it to be', but 
given I
don't have a copy of the standard I am unsure.

The type of 1 doesn't have anything to do with sstring_t. Thus the 4.1 behavior
is correct.



Re: GCC 4.1: Buildable on GHz machines only?

2005-04-27 Thread Marcin Dalecki
On 2005-04-27, at 21:57, Steven Bosscher wrote:
On Wednesday 27 April 2005 17:45, Matt Thomas wrote:
The features under discussion are new, they didn't exist before.
And because they never existed before, their cost for older platforms
may not have been correctly assessed.
If someone had cared about them, it would have been noticed earlier.
But since _nobody_ has complained before you, I guess we can conclude
that by far the majority of GCC users are quite happy with the cost
assessments that were made.
That is simply not true. I don't feel happy about the whole gcj thing.
Already libstdc++ being provided alongside the compiler is pain enough.
But the runtime support for Java is a disaster in the making, since it's
getting to be a truly huge behemoth which is constantly changing and "eating"
new APIs. The worst thing about it is the grief it causes with external libraries
like GTK+, for example.



Re: GCC 4.1: Buildable on GHz machines only?

2005-04-27 Thread Marcin Dalecki
On 2005-04-28, at 03:06, Peter Barada wrote:

Well, yes.  1 second/file is still slow!  I want "make" to complete
instantaneously!  Don't you?
Actually I want it to complete before I even start, but I don't want
to get too greedy. :)
What's really sad is that for cross-compilation of the toolchain, we
have to repeat a few steps (build gcc twice, build glibc twice)
because glibc and gcc assume that a near-complete environment is
available(such as gcc needing headers, and glibc needing -lgcc-eh), so
even really fast machines(2.4Ghz P4) take an hour to do a cross-build
from scratch.
Actually what GCC needs to know is the following:
1. What will the signal structure look like on the target system?
   In particular: What does the kernel think? What does the glibc think?
2. Do we do TLS?
3. Do we do linuxthreads or nptl?
glibc just wants:
1. Say hello to libgcc_s
2. Does the compiler support TLS?
And then don't forget that libgcc_s wants:
1. Say hello to C++, in a way requiring libc functions for exception
handling.
With a "tad bit" of work the double compilation can be avoided for the 
glibc.
You will have to build a GCC with static libgcc first and you will only 
need
the second gcc build cycle to get a dynamic libgcc_s as well as C++, 
since
C++ makes a shared libgcc mandatory.

The whole double build could be avoided if:
1. It were possible to build libgcc by itself without rebuilding the
whole compiler.

2. It were possible to build first the C compiler and then just the C++ compiler.

3. The information required by glibc could be provided statically to it.
All of the above are basically problems of the "configure" system - which isn't pretty.



Re: GCC 4.1: Buildable on GHz machines only?

2005-04-27 Thread Marcin Dalecki
On 2005-04-28, at 01:35, Joe Buck wrote:
I will agree with you on this point, but more than half of the time
to bootstrap consists of the time to build the Java library, and 
speeding
that up is a losing battle, as Sun keeps adding new stuff that
libgjc/classpath is then expected to clone, and the addition rate
seems to exceed Moore's Law.
Those are just symptoms of the fact that Java is an
emulated machine with its own OS, which just happened to forget
about any lesson in this area learned during the last 30 years.
Other than this:
please.please.bePolite.toThe.niceFine.javaApi(nowAndAlways);


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-27 Thread Marcin Dalecki
On 2005-04-27, at 22:54, Karel Gardas wrote:
Total Physical Source Lines of Code (SLOC)                 = 2,456,727
Development Effort Estimate, Person-Years (Person-Months)  = 725.95 (8,711.36)
 (Basic COCOMO model, Person-Months = 2.4 * (KSLOC**1.05))
Schedule Estimate, Years (Months)                           = 6.55 (78.55)
 (Basic COCOMO model, Months = 2.5 * (person-months**0.38))
Estimated Average Number of Developers (Effort/Schedule)    = 110.90
Total Estimated Cost to Develop                             = $98,065,527
 (average salary = $56,286/year, overhead = 2.40).
Please credit this data as "generated using 'SLOCCount' by David A. Wheeler."
One question remains open: Who is Mr. Person?


Re: GCC 4.0 build fails on Mac OS X 10.3.9/Darwin kernel 7.9

2005-04-27 Thread Marcin Dalecki
On 2005-04-22, at 16:30, Lars Sonchocky-Helldorf wrote:
 James E Wilson <[EMAIL PROTECTED]> wrote:
Andrew Pinski wrote:
Does anyone read the installation instructions?
Yes, but not everyone. And even people that read the docs can miss 
the info if they can't figure out which part of the docs they are 
supposed to be looking at.

If you don't want people sending bug reports like this, then you 
idiot proof it by adding a configure test. Maybe even something like 
the attached.
--
Jim Wilson, GNU Tools Support, http://www.SpecifixInc.com
Index: config.gcc
===
RCS file: /cvs/gcc/gcc/gcc/config.gcc,v
retrieving revision 1.529
diff -p -r1.529 config.gcc
*** config.gcc  4 Apr 2005 17:18:49 -   1.529
--- config.gcc  22 Apr 2005 07:44:06 -
*** case ${target} in
*** 366,371 
--- 366,377 
case ${enable_threads} in
  "" | yes | posix) thread_file='posix' ;;
esac
+   # Verify the cctools version number.
+   cctools=`cat /dev/null | as -v 2>&1 | grep cctools-576`
not quite right. It is completely sufficient to have something equal 
or above cctools-528 for building gcc-4.0:

http://gcc.gnu.org/ml/gcc-patches/2004-08/msg00104.html
cctools-576 are only needed to build HEAD:
http://gcc.gnu.org/ml/gcc/2005-03/msg01149.html :
"It's not necessary to upgrade cctools to use 4.0, since the features 
that need the cctools fixes aren't there."

And btw.: cctools-576 have a bug which causes regressions if you want 
to use the FSF Objective-C runtime:

http://gcc.gnu.org/ml/gcc-testresults/2005-04/msg00798.html
http://gcc.gnu.org/ml/gcc-testresults/2005-04/msg00949.html
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=20959
They make building vim fail too, at least.


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Marcin Dalecki
On 2005-04-28, at 12:03, Dave Korn wrote:
Original Message
From: Marcin Dalecki
Sent: 28 April 2005 02:58

On 2005-04-27, at 22:54, Karel Gardas wrote:
Total Physical Source Lines of Code (SLOC)                 = 2,456,727
Development Effort Estimate, Person-Years (Person-Months)  = 725.95 (8,711.36)
 (Basic COCOMO model, Person-Months = 2.4 * (KSLOC**1.05))
Schedule Estimate, Years (Months)                           = 6.55 (78.55)
 (Basic COCOMO model, Months = 2.5 * (person-months**0.38))
Estimated Average Number of Developers (Effort/Schedule)    = 110.90
Total Estimated Cost to Develop                             = $98,065,527
 (average salary = $56,286/year, overhead = 2.40).
Please credit this data as "generated using 'SLOCCount' by David A. Wheeler."
One question remains open: Who is Mr. Person?

  What makes you think it's not Ms. Person?  Chauvinist!
Oh indeed... Yes I missed it: It's Lady Wheeler!


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Marcin Dalecki
On 2005-04-28, at 16:26, Lars Segerlund wrote:
 I have never done any 'memory profiling' but I think it might be time 
to give it a
 shot, does anybody have any hints on how to go about something like 
this ?
The main performance problem for GCC as I see it is structural. GCC is emulating
the concept of highly polymorphic data structures in plain C. Well, actually it's
not even plain C any longer... GTY().

Using C++ with simple inheritance could help *a lot* here. (Duck...)
There is too much pointer indirection in the data structures too.
Look throughout the code and wonder how very few iterations run over plain,
simple, fast C arrays - arrays that are sparing on the register set and allow cache prefetch.
GCC is using linked lists to test the "efficiency" of the TLB and other mechanisms
of the CPU.
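
A hypothetical sketch of the "simple inheritance" idea; the type names below
are invented for illustration only and do not mirror the real tree/GTY
structures:

  struct node
  {
    int code;                  // what a tree code field encodes today
    virtual ~node () {}
  };

  struct int_cst : node
  {
    long value;                // flat payload instead of union juggling
  };

  struct call_expr : node
  {
    node *fn;                  // callee
    node **args;               // argument vector
    int nargs;
  };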



Re: FORTH frontend?

2005-05-05 Thread Marcin Dalecki
On 2005-05-06, at 04:04, Sam Lauber wrote:
There are a few difficulties here, particularly with addressing the
open stack in an efficient way.
This problem is probably going to get a little off-topic for this
group, and it may be better to discuss this on comp.lang.forth.
I wasn't asking about the langauge implementation.  What I want to  
know is
how to plug a frontend I write into GCC's code-generator and  
optimizer.
You should perhaps start by deriving from treelang, which is already there.
This should be on the same scale of implementation complexity as Forth. It's pretty
complete in interface terms, yet still the smallest of the languages currently
in the tree.


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-06 Thread Marcin Dalecki
On 2005-05-06, at 18:14, Andrew Haley wrote:
Rutger Ovidius writes:
Friday, May 6, 2005, 8:06:49 AM, you wrote:
AH> But Java isn't compatible with static linking.  Java is, by  
its very
AH> nature, a dynamic language, where classes invoke and even  
generate
AH> other classes on the fly.  There is no way when linking to  
determine
AH> what set of libraries is required.  This is a simple fact, and no
AH> amount of declaring " this is what users want!"  is going to  
change
AH> it.

I didn't know that java had a nature.
Now you do.

It has features. Some features will work when it is implemented in a
certain way and some won't.
The set of features that work when linking statically is unspecified
and changes over time, depending on implementation details within the
library.
If we wanted to come up with a new language subset compatible with
static linkage we could do that, but it would be a substantial design
effort and we'd need someone to do the work.  Personally speaking, I
don't think it's a very good idea, as a lot of the Java language as
specified depends on dynamic linking, but I wouldn't obstruct someone
who really wanted to do it.

You don't understand that it's perfectly valid to put PIC symbols inside an .a file.
From the user's perspective this may very well look like static linkage.
There are no principal reasons to shatter every single application into a heap
of .so files.



Re: GCC 4.1: Buildable on GHz machines only?

2005-05-17 Thread Marcin Dalecki
On 2005-05-17, at 11:14, Ralf Corsepius wrote:
On Tue, 2005-05-17 at 03:31 +0200, Steven Bosscher wrote:
On Tuesday 17 May 2005 03:16, Joe Buck wrote:
How is it helpful to not follow the rules when posting patches
Quite simple:
* I wasn't aware about this fortran specific patch posting policy. I
never have sent any gcc patch to any other list but gcc-patches for
approval before, so I also had not done so this time.
* How could I know that the responsible maintainers aren't  
listening to
bugzilla and gcc-patches, but are listening to a fortran specific  
list,
I even didn't know about until your posting?
If something goes into mainline GCC then it should be required
that its maintainers don't stay in their cave and do at least read gcc and gcc-patches.
This is what one can reasonably expect looking
from the outside, since there are no specific gcc-c-patches or gcc-c++-patches lists.



Re: GCC 4.1: Buildable on GHz machines only?

2005-05-17 Thread Marcin Dalecki
On 2005-05-17, at 11:29, Richard Earnshaw wrote:
On Tue, 2005-05-17 at 01:59, Steven Bosscher wrote:

No, I just don't build gfortran as a cross.  There are many reasons
why this is a bad idea anyway.
Such as?
The dependence on external packages which don't cross compile well  
for example.


Re: libgcc_s.so.1 exception handling behaviour depending on glibc version

2005-05-18 Thread Marcin Dalecki
On 2005-05-18, at 14:36, Mike Hearn wrote:
On Wed, 18 May 2005 11:35:30 +0200, Stephan Bergmann wrote:
If I build main with C1, and libf.so with C2, and execute the  
program so
that it uses C2's libgcc_s.so.1, it works.

Out of interest, what happens if you build main with C2 and libf  
with C1?
That would seem to be a more common situation for distributors of  
Linux
binaries than the other way around.

This policy of not supporting "build on newer, run on older" is a  
massive
pain for developers who distribute Linux binaries even though it's  
very
common: developers often use very new distributions but users often  
don't.
It requires all kinds of stupid hacks to work around.
Like building on the system you are targeting?
Like cross building for the target system? 


Re: libgcc_s.so.1 exception handling behaviour depending on glibc version

2005-05-19 Thread Marcin Dalecki
On 2005-05-19, at 15:18, Mike Hearn wrote:
On Wed, 18 May 2005 17:26:30 +0200, Marcin Dalecki wrote:
Like building on the system you are targeting?
Like cross building for the target system?
No, like messing around with headers and linkers and compilers, so  
if you
are targetting Linux/x86 your binaries can in fact run on Linux/x86.

I can't see why this is so controversial 
Ah, I see. You live under the perhaps marketing-inflicted self-delusion
that Linux/x86 denotes a single well-defined system.
Welcome to reality then.


Re: GCC 3.4.4 RC2

2005-05-19 Thread Marcin Dalecki
On 2005-05-16, at 22:03, Mark Mitchell wrote:
Georg Bauhaus wrote:

On Mac OX X 10.2, the results are slightly discomforting,
even though I do get a compiler with
--enable-languages=c,ada,f77,c++,objc.
gcc summary has
# of unexpected failures1080
First, I would suggest disabling Ada, in order to get further.
As for the GCC failures, 1080 is certainly enough to say that the  
compiler is not working very well.  It may be the case that GCC  
3.4.4 requires newer versions of Apple's "cctools" package than you  
have installed -- and that the newer cctools cannot be installed on  
your version of the OS.  If that's the case, there may be no very  
good solution.
I highly recommend NOT installing cctools casually. Ignore them.
They override other components of the main developer packages
and lack an uninstall script.


Re: Compiling GCC with g++: a report

2005-05-26 Thread Marcin Dalecki


On 2005-05-23, at 08:15, Gabriel Dos Reis wrote:



Sixth, there is a real "mess" about name spaces.  It is true that
every C programmers knows the rule saying tags inhabit different name
space than variable of functions.  However, all the C coding standards
I've read so far usually suggest

   typedef struct foo foo;

but *not*

   typedef struct foo *foo;

i.e. "bringing" the tag-name into normal name space to name the type
structure or enumeration is OK, but not naming a different type!  the
latter practice will be flagged by a C++ compiler.  I guess we may
need some discussion about the naming of structure (POSIX reserves
anything ending with "_t", so we might want to choose something so
that we don't run into problem.  However, I do not expect this issue
to dominate the discussion :-))



In 80% of the cases you are talking about the GCC source code already
follows the semi-convention of appending _s to the parent type.
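
A small, self-contained illustration of the two typedef styles discussed above,
together with the _s convention (the type names are made up):

  /* Fine in both C and C++: the typedef names the same type as the tag.  */
  struct foo_s { int x; };
  typedef struct foo_s foo;

  /* Valid C, but rejected by a C++ compiler, because the typedef names
     a different type (a pointer) than the tag:

       struct bar { int y; };
       typedef struct bar *bar;   // error in C++
  */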



Re: Compiling GCC with g++: a report

2005-05-26 Thread Marcin Dalecki


On 2005-05-24, at 09:09, Zack Weinberg wrote:


Gabriel Dos Reis <[EMAIL PROTECTED]> writes:

[dropping most of the message - if I haven't responded, assume I don't
agree but I also don't care enough to continue the argument.  Also,
rearranging paragraphs a bit so as not to have to repeat myself]



with the explicit call to malloc + explicit specification of sizeof,
I've found a number of wrong codes -- while replacing the existing
xmalloc/xcallo with XNEWVEC and friends (see previous patches and
messages) in libiberty, not counting the happy confusion about
xcalloc() in the current GCC codes.  Those are bugs we do not have
with the XNEWVEC and friends.  Not only, we do get readable code, we
also get right codes.


...


I don't think so.  These patches make it possible to compile the
source code with a C++ compiler.  We gain better checking by doing
that.



Have you found any places where the bugs you found could have resulted
in user-visible incorrect behavior (of any kind)?

If you have, I will drop all of my objections.


You could look at the linkage issues for Darwin I found several months
ago. They were *real*.


Re: Compiling GCC with g++: a report

2005-05-26 Thread Marcin Dalecki


On 2005-05-24, at 06:00, Andrew Pinski wrote:



On May 24, 2005, at 12:01 AM, Zack Weinberg wrote:


Use of bare 'inline' is just plain wrong in our source code; this has
nothing to do with C++, no two C compilers implement bare 'inline'
alike.  Patches to add 'static' to such functions (AND MAKING NO  
OTHER

CHANGES) are preapproved, post-slush.


That will not work for the cases where the bare 'inline' is used
because they are external also in this case.  Now this is where C99 and
C++ differ in what a bare 'inline' means, so I have no idea what to
do, except for removing the 'inline' in the first place.


This actually applies only to two function from libiberty:

 /* Return the current size of given hash table. */
-inline size_t
-htab_size (htab)
- htab_t htab;
+size_t
+htab_size (htab_t htab)
{
   return htab->size;
}
/* Return the current number of elements in given hash table. */
-inline size_t
-htab_elements (htab)
- htab_t htab;
+size_t
+htab_elements (htab_t htab)
{
   return htab->n_elements - htab->n_deleted;
}

It could be resolved easily by moving those "macro wrappers" into a header
and making them static inline there. Actually this could improve the GCC code
overall a bit.
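
A sketch of that header-only variant: keep the accessors as static inline
functions in hashtab.h. The struct layout below is only a placeholder standing
in for the real struct htab:

  #include <stddef.h>

  typedef struct htab { size_t size, n_elements, n_deleted; /* ... */ } *htab_t;

  /* Return the current size of given hash table. */
  static inline size_t
  htab_size (htab_t htab)
  {
    return htab->size;
  }

  /* Return the current number of elements in given hash table. */
  static inline size_t
  htab_elements (htab_t htab)
  {
    return htab->n_elements - htab->n_deleted;
  }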



Re: Compiling GCC with g++: a report

2005-05-26 Thread Marcin Dalecki


On 2005-05-24, at 18:06, Diego Novillo wrote:


On Mon, May 23, 2005 at 01:15:17AM -0500, Gabriel Dos Reis wrote:



So, if various components maintainers (e.g. C and C++, middle-end,
ports, etc.)  are willing to help quickly reviewing patches we can
have this done for this week (assuming mainline is unslushed soon).
And, of course, everybody can help :-)



If the final goal is to allow GCC components to be implemented in
C++, then I am all in favour of this project.  I'm pretty sick of
all this monkeying around we do with macros to make up for the
lack of abstraction.


Amen. GCC cries and wails through struct tree for polymorphism.



Re: Compiling GCC with g++: a report

2005-05-26 Thread Marcin Dalecki


On 2005-05-25, at 08:06, Christoph Hellwig wrote:


On Tue, May 24, 2005 at 05:14:42PM -0700, Zack Weinberg wrote:

I'm not sure what the above may imply for your ongoing  
discussion, tough...




Well, if I were running the show, the 'clock' would only start  
running

when it was consensus among the libstdc++ developers that the soname
would not be bumped again - that henceforth libstdc++ was  
committed to
binary compatibility as good as glibc's.  Or better, if y'all can  
manage

it.  It doesn't sound like we're there yet, to me.



Why can't libstdc++ use symbol versioning?  glibc has maintained  
the soname

and binary compatibility despite changing fundamental types like FILE


Please stop spreading rumors:

1. libgcc changes with every compiler release. glibc loves libgcc. Ergo:
   glibc has not maintained the soname and binary compatibility.

2. The linker tricks glibc plays to accomplish this
   are not portable and not applicable to C++ code.

3. Threads are the death of glibc backward compatibility.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Marcin Dalecki


On 2005-05-26, at 21:34, Scott Robert Ladd wrote:


For many practical problems, the world can be considered flat. And  
I do
plenty of spherical geometry (GPS navigation) without requiring the  
sin

of 2**90. ;)


Yes right. I guess your second name is ignorance.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Marcin Dalecki


On 2005-05-27, at 00:00, Gabriel Dos Reis wrote:

Yeah, the problem with people who work only with angles is that they
tend to forget that sin (and friends) are defined as functions on
*numbers*,



The problem with people who work only with angles is that they are  
without sin.




Re: Sine and Cosine Accuracy

2005-05-26 Thread Marcin Dalecki


On 2005-05-26, at 22:39, Gabriel Dos Reis wrote:


Scott Robert Ladd <[EMAIL PROTECTED]> writes:

| Richard Henderson wrote:
| > On Thu, May 26, 2005 at 10:34:14AM -0400, Scott Robert Ladd wrote:
| >
| >>static const double range = PI; // * 2.0;
| >>static const double incr  = PI / 100.0;
| >
| >
| > The trig insns fail with large numbers; an argument
| > reduction loop is required with their use.
|
| Yes, but within the defined mathematical ranges for sine and  
cosine --

| [0, 2 * PI) --

this is what they call "post-modern maths"?

[...]

| I've never quite understood the necessity for performing trig  
operations

| on excessively large values, but perhaps my problem domain hasn't
| included such applications.

The world is flat; I never quite understood the necessity of spherical
trigonometry.


I agree fully. And who was this Fourier anyway?


Re: Sine and Cosine Accuracy

2005-05-27 Thread Marcin Dalecki


On 2005-05-27, at 15:36, Olivier Galibert wrote:

Floating point values represent intervals,


This is mathematically wrong. The basic concept is that the
calculation domain given by floating point numbers is used
to *model* the real number calculus. Certain constraints apply, of course.
But there isn't any concept of representation here. Just a mapping.

and when the interval size is way bigger than 2pi any value in [-1,1]
is a perfectably acceptable answer for sin or cos.


???


Re: What is wrong with Bugzilla? [Was: Re: GCC and Floating-Point]

2005-05-31 Thread Marcin Dalecki


On 2005-05-31, at 19:14, Dave Korn wrote:



Speak up now, or we're going to send the firing squad.



  Just don't let them use x87 intrinsics to calculate the line of  
fire, or

we'd all better run!


Quite a remarkable time ago I was exposed to a 12-bit "RISC" CPU with two banks
of 4k ferrite core memory... We were able to hit with a precision of about
half a meter, yes just meters not km, from a distance of 45km!
No, no, there were no laser guided "funnies" involved, just pure boring ari.


Re: Tracking down source of libgcc_s.so compatibility?

2005-06-09 Thread Marcin Dalecki


On 2005-06-09, at 00:57, Daniel Kegel wrote:


Can somebody suggest a place to start looking for
why the libgcc_s.so built by crosstool's gcc-3.4 can't handle
exceptions from apps built by fc3's gcc-3.4?

The C++ program

#include <iostream>
void foo() throw (int) {
 std::cout << "In foo()" << std::endl;
 throw 1;
}
int main() {
 try {
   foo();
 } catch (int i) {
   std::cout << "Caught " << i << std::endl;
 }
 std::cout << "Finished" << std::endl;
 return 0;
}

works fine when built by FC3's gcc-3.4.
It also works fine when built by crosstool's gcc-3.4.

But when you add the libgcc_s.so built by crosstool into
ld.so.conf, the results are different; apps built
by fc3's gcc-3.4 die when they try to throw exceptions,
but apps built by crosstool's gcc-3.4 keep working.
Help!


NPTL versus non-NPTL signal handling differences.
The FC3 compiler contains some "backward compatibility" shims in
quite a few places, which allow old binaries to execute.
However, stuff compiled with the FC3 version of the compiler, which
contains some "features" not seen otherwise, will of course be
inconsistent with anything else.


Re: Porposal: Floating-Point Options

2005-06-14 Thread Marcin Dalecki

On 2005-06-14, at 16:32, Scott Robert Ladd wrote:


To support different expectations, I suggest defining the following
floating-point options for GCC. This is a conceptual overview; once
there's a consensus the categories, I'll propose something more  
formal.


-ffp-correct


Please define correct.


This option focuses code generation on mathematical correctness,
portability, and consistency.


Please define this. Please define mathematical correctness for an FPU implementation.
Preferably in formal terms. A Verilog file, for example, would be fine. Don't forget
to perform *full* verification of the design-to-be. I mean in mathematical terms.

Seriously, I don't mind if this will take a tad bit of time.
Please lay down, in particular, how GCC currently deviates from the principles in this statement.



No 80-bit long doubles, no fsin/fcos,
making certain that comparison operators work within reason.


Please give the definition of reason.

Note that this option can only go so far in ensuring portability,  
given that not

every system supports IEEE 754 floats.


Please give the distance for "so far". Preferably in metric terms.


-ffp-ieee754

To the best of our ability, assume and follow IEEE 754. Similar to the
above, but assuming IEEE floats and doubles. Note that IEEE 754 has  
been

undergoing some revision.


Note taken. I henceforth declare to have no abilities. Thus this goal is
immediately accomplished. Note that you can derive your own abilities from
this statement.


-ffp-balanced (default)


Yes. My iMac is already well balanced on the aluminum tadpole on  
which it sits.


Balance correctness with speed, enabling performance optimizations  
that

may reduce portability or consistency, but which do not alter the
overall quality of results.
Yeah, I know that's a bit fuzzy;


Yeah, just a tad little bit... we can stop considering minor stuff like cache sizes,
RAM latencies versus clock rate, and all that other minor irrelevant
stuff which doesn't allow us to define one single total optimum for the generated code.
You are right - this kind of nit-picking is really fully irrelevant for the
task at hand.


formal
definition of such an option depends on categorizing existing FP code
generation (something I'm working on).

-ffp-damn-the-torpedoes-full-speed-ahead


I just love cute references to military terms. I served myself, proudly but
involuntarily, in the ari. You know, buddy. They just make me feel like...
actually like... oh dear, memories.


Okay, maybe that should be something shorter,


How about:
-ffp-damn-I-have-no-clue

Nearly half the size - you see?


like -ffast-math or
-ffp-fast. This option enables dangerous hardware intrinsics,


Will it pose the danger of overheating my CPU's thermal throttling?
Perhaps it should be accompanied by an interactive asserting question
at compiler run time then:

gcc -ffp-fast helloworld.c
ALERT: The -ffp-fast option is dangerous. Are you sure you want to proceed [YES/NO]?:



and
eschews all concerns about portability and consistency in the quest  
for
speed. The behavior is likely to be the same as the current -ffast- 
math.


Actually, I like *-ffp-damn-the-torpedoes-full-speed-ahead*.


I love it too...

As Stroustrup once said, if you're going to do something ugly, it should
look ugly. ;)


Yes! The road to success is paved with imitation. Success is always
reproducible and not accidental. And I'm sure Stroustrup was a better philosopher
than coder, since competence in one area automatically projects itself on
everything...



Re: Porposal: Floating-Point Options

2005-06-14 Thread Marcin Dalecki


On 2005-06-14, at 19:29, Russell Shaw wrote:


The original bug was about testing the equality of doubles. I think  
that's

just plain mathematically bad. Error bands should be used to test for
"equality", using a band that is in accordance with the minimum  
precision

specified in the compiler documentation.



To be a bit more precise: don't use the precision specified in the compiler
but the precision you expect for the overall algorithm's numerical stability.



Re: Porposal: Floating-Point Options

2005-06-14 Thread Marcin Dalecki


On 2005-06-15, at 06:19, R Hill wrote:


Marcin Dalecki wrote:


[snip]


If you don't have anything constructive to contribute to the  
discussion then feel free to not participate.  If you have  
objections then voice them appropriately or risk them being  
dismissed as bullshit baiting.


Sorry, but I just got completely fed up by the references to "math" in the original post,
since the author's lack of basic experience in the area of numerical computation was
more than self-evident.

Writing number crunching code without concern for numerical stability
is simply naive. Doing it leads you immediately to the fact that
this engagement makes your code *highly* platform specific, even in the case of
mostly innocent looking operations (the "cancellation phenomenon" for example).
In view of those issues the problems discussed here - the supposed "invalidity" of
the == operator, excess precision in the Intel FPU implementation,
the trigonometric function domain range - are completely irrelevant.
You will have to match your code tightly against the actual FPU handbook anyway.
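
A tiny illustration of that platform dependence; the numbers are chosen only
to trigger cancellation, and the printed result depends on whether excess x87
precision is kept in registers:

  #include <stdio.h>

  int
  main (void)
  {
    volatile double one = 1.0;   /* defeat compile-time constant folding */
    double a = 1.0e16 + one;     /* 1.0 is below one ulp of a double at this magnitude */
    double b = 1.0e16;
    /* The subtraction cancels nearly all significant digits.  The exact
       answer is 1; strict double arithmetic (e.g. SSE2) prints 0, while
       x87 excess precision kept in registers may print 1.  */
    printf ("%g\n", a - b);
    return 0;
  }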


Making the code generated by GCC somewhat, but not 100%, compliant with some
idealistic standard will just increase the scope of the analysis you will
have to face. And in particular, changing behavior between releases will just make
it even worser.

Only the following options would make sense:

1. An option to declare 100% IEEE compatibility if possible at all on  
the particular arch,

   since it's a well known reference.

2. An option to declare 100% FPU architecture exposure.

3. A set of highly target dependent options to control some well defined
   features of a particular architecture.
   (Rounding mode control, use of MMX or SSE[1234567] for example...)

Any kind of abstraction between points 1. and 2., and I would see myself analyzing the
assembler output to see what the compiler actually did anyway, thus rendering the
reasons it got introduced futile. In fact this is nearly always the "modus operandi" anyway
if one is doing numerical computations. It's just about using the programming language
as a kind of "short cut" assembler for writing the algorithms down and then disassembling
the code to see what one actually got - still quicker and less error prone than using
the assembler directly. Just some kind of Formula Translation Language.
I simply don't see how much can be done on behalf of the compiler with regard
to this.

And last but not least: Most of this isn't really interesting at the
compilation unit level at all. This is the completely uninteresting scope.
If anything, one should discuss the pragma
directive level, since this is where fine control of numerical
behavior happens in the world out there. The ability to say, for example:

#pragma unroll 4 sturd 8

would really be of "infinitely" more value than some fancy -fblah-blah.



Re: Porposal: Floating-Point Options

2005-06-15 Thread Marcin Dalecki

On 2005-06-15, at 13:50, Scott Robert Ladd wrote:


Perhaps my understanding of math isn't as elite as yours, but I do  
know

that "worser" isn't a word. ;)


Please bear with me. English is my 3rd foreign language.


Only the following options would make sense:

1. An option to declare 100% IEEE compatibility if possible at all on
 the particular arch, since it's a well known reference.

2. An option to declare 100% FPU architecture exposure.

3. A set of highly target dependent options to control some well
defined features of a particular architecture. (Rounding mode
control, use of MMX or SSE[1234567] for example...)



I would replace 3 above with:

3. When compiled without either of the above options, adhere  
strictly to

the relevant language standard.

Given that Java, Fortran 95, C and C++ all have differences in their
definitions of floating-point, this will need to be (as it already is,
to some extent) language-specific.


You should always place Java last in discussions about numerical work.
It's largely irrelevant in this area due to the inherent inefficiency it bears.

Fortran first, due to the tons of well tested legacy code out there,
and C and C++ next, is indeed fine.


Any kind of abstraction between point 1. and 2. and I would see
myself analyzing the assembler output to see what the compiler
actually did anyway.



On further reflection, I don't see how a middle ground can even be
defined. The three states above seem to make sense: IEEE, hardware, or
language standard.



And last but not least: Most of this isn't really interesting at the
compilation unit level at all. This is the completely uninteresting
scope. If anything one should discuss about the pragma directive
level, since this is where fine control of numerical behavior happens
in the world out there. The ability to say for example:

#pragma unroll 4 sturd 8

would be really of "infinite" more value then some fancy -fblah-blah.


GCC seems to have a fancy for options, since that is what most people
ask for. I agree that a set of "tuning" pragmas would be a good way of
defining the compilation requirements for a given algorithm.


My understanding is that in the past pragmas were rejected in GCC in favor
of the __attribute__(()) syntax due to misguided opinions about
esthetics. Thus there are implementation problems with the approach outlined
here. However, I advise you to read for example the documentation for
Cray C to see how this kind of stuff is handled there. Taking a look at Sun's
C compiler will not hurt either, to see how an FPU which is quite unique gets
handled.
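
For contrast, a tiny example of the two spellings: the attribute form below
uses an attribute that really exists (noinline), while the pragma line is
hypothetical, in the style of the Cray/Sun directives mentioned:

  /* GCC's existing per-function annotation style: */
  static void cold_path (void) __attribute__ ((noinline));

  static void
  cold_path (void)
  {
    /* rarely executed code */
  }

  /* versus a (hypothetical) directive-style spelling: */
  /* #pragma optimize_for ("size") */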


Re: Reporting bugs: there is nothing to gain in frustrating reporters

2005-06-19 Thread Marcin Dalecki


On 2005-06-19, at 17:59, Vincent Lefevre wrote:


On 2005-06-19 09:57:33 -0400, Andrew Pinski wrote:


Also I think GCC is not the one who is defining it either. It is
glibc who is defining that so complain to them instead.



Thanks for the information (I'm a bit surprised because these are gcc
command-line options that are the first cause of these definitions).
Is there a way to know what gcc defines and what glibc defines?


For GCC, sure, yes. GLIBC: certainly NO. To be more precise:
it's only possible in a time-span which exceeds the MTBF of a developer.


Re: signed is undefined and has been since 1992 (in GCC)

2005-06-28 Thread Marcin Dalecki


On 2005-06-29, at 03:21, Diego Novillo wrote:


On Wed, Jun 29, 2005 at 03:13:45AM +0200, Gabriel Dos Reis wrote:


Robert Dewar <[EMAIL PROTECTED]> writes:
|  You did not read anything even vaguely saying that in what  
I wrote.


and you, did you?



Folks, can you take this offline?  It's getting rather tiresome.


The only thing this thread teaches me is the conviction that *every*
instruction set architecture which relies on compilers to make the most
out of it is severely ill-guided.


Re: [patch] Fix i386-mingw32 build failure

2005-08-10 Thread Marcin Dalecki


On 2005-08-10, at 19:05, Mark Mitchell wrote:


The even more correct solution is to not build anything with the  
compiler (including libgcc) until after it is installed.  Then, it  
will just look where it would normally look, and all will be well.


install host != build host

Most of the time, if you do deliver software...

You just don't want the fresh installation to involve trashing a working tool chain too.


However, I also recognize that we need to do something before 4.1  
to fix this problem and that we need something simple enough to be  
viable in that timeframe.  But, if Paolo can fix this without  
having to add #!, I think that would be great.




Re: 4.2 Project: "@file" support

2005-08-25 Thread Marcin Dalecki


On 2005-08-25, at 09:14, Christoph Hellwig wrote:


That's what I meant with my comment btw.  It's a horrible idea to
put in all the junk to support inferior OSes into gcc and all the
other programs, and with cygwin and djgpp there are already two nice
environments for that.


man xargs?


